
Overview

Project Scope

The scope of this project is to build a data warehouse on Google Cloud Platform that helps answer common business questions and powers dashboards. To do that, a conceptual data model and a data pipeline will be defined.

Architecture

Data are uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data are then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates data mart tables. An Airflow instance, deployed on a Google Compute Engine instance or run locally, orchestrates the pipeline. A minimal sketch of one load step and one ETL step follows the list below.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake; it scales transparently as the volume of raw files grows.
  • Google BigQuery: acts as the database engine for the data warehouse, data marts, and ETL processes. BigQuery is a serverless solution that can easily and effectively process petabyte-scale datasets.
  • Apache Airflow: orchestrates the workflow by issuing command-line calls to load data into BigQuery and SQL queries for the ETL process. Airflow does not process any data itself, which allows the architecture to scale.
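To make the flow concrete, here is a minimal, hedged sketch of one load step and one ETL step using the Google providers for Airflow. The dataset, table, and file names are assumptions based on the bucket layout shown later, and the project's actual DAG may use different operators.

from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG('gcs_to_bq_sketch', start_date=datetime(2021, 1, 1), schedule_interval=None, catchup=False) as dag:
    # Load a raw CSV from the data lake into a BigQuery staging table
    load_airports = GCSToBigQueryOperator(
        task_id='load_airports',
        bucket='cloud-data-lake-gcp',
        source_objects=['airports/airport-codes_csv.csv'],
        destination_project_dataset_table='cloud-data-lake.IMMIGRATION_DWH_STAGING.AIRPORTS',
        source_format='CSV',
        skip_leading_rows=1,
        write_disposition='WRITE_TRUNCATE',
        autodetect=True,
    )

    # Transform the staging table into a dimension table with a SQL job
    build_d_airport = BigQueryInsertJobOperator(
        task_id='build_d_airport',
        configuration={
            'query': {
                'query': 'SELECT * FROM `cloud-data-lake.IMMIGRATION_DWH_STAGING.AIRPORTS`',  # placeholder ETL SQL
                'destinationTable': {
                    'projectId': 'cloud-data-lake',
                    'datasetId': 'IMMIGRATION_DWH',
                    'tableId': 'D_AIRPORT',
                },
                'writeDisposition': 'WRITE_TRUNCATE',
                'useLegacySql': False,
            }
        },
    )

    load_airports >> build_d_airport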

Data Model

The database is designed following the star schema principle, with 1 fact table and 7 dimension tables. An example query that joins the fact table to a dimension follows the list.

image

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc.
  • D_TIME: contains dimensions for the date columns
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
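To illustrate how the star schema answers a business question, the hedged example below counts arrivals per visa type and destination state. The project id, dataset names, and column names (state_id, visa_type) are assumptions for illustration only; adjust them to the actual schema.

from google.cloud import bigquery

client = bigquery.Client(project='cloud-data-lake')  # assumed project id

sql = """
SELECT s.state_name,
       f.visa_type,
       COUNT(*) AS arrivals
FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA` AS f
JOIN `cloud-data-lake.IMMIGRATION_DWH.D_STATE` AS s
  ON f.state_id = s.state_id
GROUP BY s.state_name, f.visa_type
ORDER BY arrivals DESC
LIMIT 20
"""

for row in client.query(sql).result():
    print(row.state_name, row.visa_type, row.arrivals)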

Data pipeline

This project uses Airflow for orchestration.

image

A DummyOperator start_pipeline kicks off the pipeline, followed by 4 load operations. These operations load data from the GCS bucket into BigQuery tables. The immigration data is loaded as Parquet files while the others are CSV formatted. Row-count checks run after each load to BigQuery.

Next, the pipeline loads 3 master data objects from the I94 data dictionary. Then the F_IMMIGRATION_DATA table is created and checked to make sure it contains no duplicates. The other dimension tables are also created, and the pipeline finishes. A sketch of such a duplicate check appears below.
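A duplicate check of this kind can be expressed as a single boolean query. The sketch below uses BigQueryCheckOperator; the cicid key column is an assumption about the immigration schema.

from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

# Passes only if the row count equals the distinct-key count (i.e. no duplicates).
# Define this inside the DAG alongside the other tasks.
check_f_immigration_unique = BigQueryCheckOperator(
    task_id='check_f_immigration_unique',
    sql="""
        SELECT COUNT(*) = COUNT(DISTINCT cicid)
        FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA`
    """,
    use_legacy_sql=False,
)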

Scenarios

Data increases by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery can handle petabyte-scale data. Airflow is not a bottleneck since it only issues commands to other services.

Pipelines are run daily at 7 AM: how would the dashboards be updated, and would they still work?

Schedule the DAG to run daily at 7 AM, and set up DAG retries plus email/Slack notifications on failures, as in the sketch below. Dashboards that query the data mart tables will show the refreshed data after each run.
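A hedged sketch of that schedule, with retries and failure alerts, might look like the following; the DAG id, start date, and email address are placeholders:

from datetime import datetime, timedelta
from airflow import DAG

default_args = {
    'owner': 'airflow',
    'retries': 3,
    'retry_delay': timedelta(minutes=5),
    'email': ['data-team@example.com'],   # placeholder address
    'email_on_failure': True,
    # Slack alerts can be wired in via an on_failure_callback if preferred.
}

with DAG(
    'cloud_data_lake_pipeline',           # assumed DAG id
    default_args=default_args,
    schedule_interval='0 7 * * *',        # every day at 7 AM
    start_date=datetime(2021, 1, 1),
    catchup=False,
) as dag:
    pass  # tasks as defined in dags/cloud-data-lake-pipeline.py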

Make it available to 100+ people

BigQuery auto-scales, so it can easily handle 100+ people accessing the data. If more people or services need access, we can add steps to write to a NoSQL database like Datastore or Cassandra, or to Bigtable, which supports horizontal scaling for low-latency serving.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (new accounts come with $300 in free credits)
  • Navigate to IAM and create a service account
  • Grant the account the Project Owner role and create a key. This is convenient for this project, but not recommended for a production system. Keep your key somewhere safe (equivalent gcloud commands follow this list).
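If you prefer the command line, the standard gcloud equivalents look like this (cloud-data-lake and dwh-sa are placeholder project and account names):

gcloud iam service-accounts create dwh-sa --display-name="Data warehouse service account"
gcloud projects add-iam-policy-binding cloud-data-lake --member="serviceAccount:dwh-sa@cloud-data-lake.iam.gserviceaccount.com" --role="roles/owner"
gcloud iam service-accounts keys create key.json --iam-account=dwh-sa@cloud-data-lake.iam.gserviceaccount.com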

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}

Local setup

Clone the project, then set up the local environment by following the steps below:

Install Docker if it's not already installed. You can find the resources to do that here.

Install the Astronomer CLI following the instructions here.

Run the following command to bring up the Airflow instance:

astro d start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by typing the following address into a browser:

localhost:8080/admin/

If everything is set up correctly, you will see the following screen:

image

Navigate to Admin -> Connections and paste in the credentials for the following two connections: bigquery_default and google_cloud_default.

image

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and change the following parameters to match your own setup:

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'

You can then trigger the DAG and the pipeline will run.

The data warehouse

The final data warehouse looks like this: img
