
Overview

Project Scope

The scope of this project is to build a data warehouse on Google Cloud Platform that helps answer common business questions as well as power dashboards. To do that, a conceptual data model and a data pipeline will be defined.

Architecture

Data is uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data is then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates data mart tables. An Airflow instance can be deployed on a Google Compute Engine instance or locally to orchestrate the pipeline.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake; it scales to store virtually unlimited raw data.
  • Google BigQuery: acts as the database engine for the data warehouse, data marts, and ETL processes. BigQuery is a serverless solution that can easily and effectively process petabyte-scale datasets.
  • Apache Airflow: orchestrates the workflow by issuing commands to load data into BigQuery or SQL queries for the ETL process. Airflow does not process any data itself, which allows the architecture to scale.
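The heavy lifting happens inside BigQuery; the orchestration layer only submits load jobs and queries. The sketch below illustrates that flow with the google-cloud-bigquery Python client rather than the project's Airflow operators. The staging table name AIRPORT_CODES is illustrative, while the bucket, project, and dataset names match the setup described later in this document.

from google.cloud import bigquery

client = bigquery.Client(project='cloud-data-lake')  # uses GOOGLE_APPLICATION_CREDENTIALS

# 1. Load a raw CSV from the GCS data lake into a staging table.
load_job = client.load_table_from_uri(
    'gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv',
    'cloud-data-lake.IMMIGRATION_DWH_STAGING.AIRPORT_CODES',
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    ),
)
load_job.result()  # wait for the load job to finish

# 2. Run an ETL query that builds a warehouse table from the staging table.
#    BigQuery executes the query; the client only submits it.
client.query(
    """
    CREATE OR REPLACE TABLE `cloud-data-lake.IMMIGRATION_DWH.D_AIRPORT` AS
    SELECT * FROM `cloud-data-lake.IMMIGRATION_DWH_STAGING.AIRPORT_CODES`
    """
).result()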

Data Model

The database is designed following the star-schema principle, with one fact table and seven dimension tables.

image

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc.
  • D_TIME: contains date dimensions for the date columns
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
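As an example of the kind of business question this schema answers, the query below counts arrivals per country of origin and visa type by joining the fact table with D_COUNTRY. The column names country_id and visa_type are assumptions; the real columns are defined in the pipeline's SQL.

# Illustrative star-schema query; column names are assumptions.
arrivals_by_country = """
SELECT
  c.country_name,
  f.visa_type,
  COUNT(*) AS arrivals
FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA` AS f
JOIN `cloud-data-lake.IMMIGRATION_DWH.D_COUNTRY` AS c
  ON f.country_id = c.country_id
GROUP BY c.country_name, f.visa_type
ORDER BY arrivals DESC
"""

The query can be pasted into the BigQuery console or submitted with client.query(arrivals_by_country) using the client shown earlier.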

Data pipeline

This project uses Airflow for orchestration.

image

A DummyOperator, start_pipeline, kicks off the pipeline, followed by 4 load operations. These operations load data from the GCS bucket into BigQuery staging tables. The immigration_data is loaded as Parquet files while the others are CSV formatted. Row-count checks run after each load to BigQuery, as sketched below.
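Here is a hedged sketch of one load/check pair, assuming the Airflow 1.10-era contrib operators and an illustrative staging table name (IMMIGRATION_DATA); the connection ids match the bigquery_default and google_cloud_default connections configured during setup, and the project's real tasks live in dags/cloud-data-lake-pipeline.py.

from datetime import datetime
from airflow import DAG
from airflow.contrib.operators.gcs_to_bq import GoogleCloudStorageToBigQueryOperator
from airflow.contrib.operators.bigquery_check_operator import BigQueryCheckOperator

# Stand-in DAG for this excerpt; the real DAG is defined in the project.
with DAG('load_sketch', start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:

    # Load the Parquet immigration files from the data lake into a staging table.
    load_immigration_data = GoogleCloudStorageToBigQueryOperator(
        task_id='load_immigration_data',
        bucket='cloud-data-lake-gcp',
        source_objects=['immigration_data/part-*'],
        destination_project_dataset_table='cloud-data-lake.IMMIGRATION_DWH_STAGING.IMMIGRATION_DATA',
        source_format='PARQUET',
        write_disposition='WRITE_TRUNCATE',
        autodetect=True,  # Parquet carries its own schema; set for hook versions that require it
        google_cloud_storage_conn_id='google_cloud_default',
        bigquery_conn_id='bigquery_default',
    )

    # Fail the run if the staging table ended up empty.
    check_immigration_data = BigQueryCheckOperator(
        task_id='check_immigration_data',
        sql='SELECT COUNT(*) FROM `cloud-data-lake.IMMIGRATION_DWH_STAGING.IMMIGRATION_DATA`',
        use_legacy_sql=False,
        bigquery_conn_id='bigquery_default',
    )

    load_immigration_data >> check_immigration_data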

Next, the pipeline loads 3 master data objects from the I94 data dictionary. The F_IMMIGRATION_DATA table is then created and checked to make sure that there are no duplicates. The other dimension tables are also created and the pipeline finishes.
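The duplicate check can be expressed with a BigQueryCheckOperator, which fails the task when the query's first row contains a false value. This is a sketch assuming a hypothetical record key column named cicid; the real key is defined in the project's SQL, and the task would be attached to the pipeline's DAG alongside the table-creation tasks.

from airflow.contrib.operators.bigquery_check_operator import BigQueryCheckOperator

# Returns TRUE only when no key appears more than once in the fact table.
check_f_immigration_data = BigQueryCheckOperator(
    task_id='check_f_immigration_data',
    sql="""
    SELECT COUNT(*) = 0
    FROM (
      SELECT cicid
      FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA`
      GROUP BY cicid
      HAVING COUNT(*) > 1
    ) AS dup
    """,
    use_legacy_sql=False,
    bigquery_conn_id='bigquery_default',
)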

Scenarios

Data increases by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery handle petabyte-scale data. Airflow is not a bottleneck since it only issues commands to other services.

Pipelines are run daily at 7 AM. How would the dashboards be updated? Would they still work?

Schedule the DAG to run daily at 7 AM. Set up DAG retries and email/Slack notifications on failure. Dashboards built on top of the BigQuery data marts will reflect the new data once the daily run completes.
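A minimal sketch of that setup, assuming illustrative default_args (the dag_id and alert email are placeholders); Slack notifications can be added the same way through an on_failure_callback:

from datetime import datetime, timedelta
from airflow import DAG

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2021, 1, 1),
    'retries': 3,                         # retry failed tasks a few times
    'retry_delay': timedelta(minutes=5),
    'email_on_failure': True,             # requires SMTP settings in airflow.cfg
    'email': ['data-team@example.com'],   # placeholder address
}

dag = DAG(
    'cloud-data-lake-pipeline',           # dag_id is illustrative
    default_args=default_args,
    schedule_interval='0 7 * * *',        # every day at 7:00 AM
    catchup=False,
)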

Make it available to 100+ people

BigQuery auto-scales, so it can easily handle 100+ people accessing it. If more people or services need access to the data, we can add steps to write to a NoSQL database like Datastore or Cassandra, or to a horizontally scalable wide-column store like Bigtable.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (you have free credits worth $300)
  • Navigate to IAM and create a service account
  • Grant the account the Project Owner role. This is convenient for this project, but not recommended for a production system. Create a key for the service account and keep it somewhere safe.

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}

Local setup

Clone the project, then set up the local environment by following the steps below:

Install Docker if it's not already installed. You can find the resources to do that here.

Install the Astronomer CLI following the instructions here.

Run the following commands to bring up the Airflow instance:

astro d start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by typing the following address into a browser:

localhost:8080/admin/

If everything is set up correctly, you will see the following screen:

image

Navigate to Admin -> Connections and paste in the credentials for the following two connections: bigquery_default and google_cloud_default

image

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and change the following parameters to match your own setup:

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'

You can then trigger the DAG and the pipeline will run.

The data warehouse

The final data warehouse looks like this:

image
