Exploring the Top ML and DL GitHub Repositories

Overview

This repository contains my work for a project in which I collected data on the most popular machine learning and deep learning GitHub repositories, and then cleaned, visualized, and analyzed it.

I've written a corresponding article about this project, which you can find on Towards Data Science. The article was selected as an "Editors' Pick" and was also featured in the publication's "Hands-on Tutorials" section.

At a high level, my analysis is as follows:

  1. I collected data on the top machine learning and deep learning repositories and their respective owners from GitHub (a sketch of this step follows this list).
  2. I cleaned and prepared the data.
  3. I visualized what I thought were interesting patterns, trends, and findings within the data, and discussed each visualization in detail in the TDS article mentioned above.
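
To give a sense of the collection step, below is a minimal sketch of how top repositories can be fetched with PyGitHub. It is illustrative only; the real logic lives in src/github_scraper.py, and the query string and result count here are assumptions.

# illustrative sketch: fetch the most-starred repositories for a query
from github import Github

g = Github("<your PAT here>")
results = g.search_repositories(query="machine learning", sort="stars", order="desc")

for repo in results[:10]:
    print(repo.full_name, repo.stargazers_count, repo.owner.login)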

Tools used

Python NumPy pandas tqdm PyGitHub GeoPy Altair wordcloud docopt black

Replicating the Analysis

I've designed the analysis in this repository so that anyone is able to recreate the data collection, cleaning, and visualization steps in a fully automated manner. To do this, open up a terminal and follow the steps below:

Step 1: Clone this repository to your computer

# clone the repo
git clone https://github.com/nicovandenhooff/top-repo-analysis.git

# change working directory to the repos root directory
cd top-repo-analysis

Step 2: Create and activate the required virtual environment

# create the environment
conda env create -f environment.yaml

# activate the environment
conda activate top-repo-analysis

Step 3: Obtain a GitHub personal access token ("PAT") and add it to the credentials file

Please see GitHub's documentation on creating a personal access token for instructions on how to obtain one.

Once you have it, perform the following:

# open the credentials file
open src/credentials.json

This will open the credentials JSON file, which contains the following:

{
    "github_token": "<your PAT here>"
}

Replace <your PAT here> with your PAT.
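
The scripts can then read the token from this file. Below is a minimal sketch of that loading pattern; the exact code in the scripts may differ.

# illustrative sketch: load the PAT from the credentials file
import json

with open("src/credentials.json") as f:
    github_token = json.load(f)["github_token"]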

Step 4: Run the following command to delete the current data and visualizations in the repository

make clean

Step 5: Run the following command to recreate the analysis

make all

Please note that if you are recreating the analysis:

  • The last step takes several hours to run (approximately 6-8 hours), because the data collection process has to sleep periodically to respect the GitHub API rate limit; a sketch of this throttling pattern follows this list. In total, the data collection makes approximately 20,000 to 30,000 API requests.
  • When the data cleaning script data_cleaning.py runs, some errors may be printed to the screen by GeoPy if the Nominatim geolocation service is unable to find a valid location for a GitHub user. This does not cause the script to terminate; it is just ugly in the terminal. Unfortunately these error messages cannot be suppressed, so simply ignore them if they occur.
  • Getting the location data with GeoPy in the data cleaning script also takes about 30 minutes, as the Nominatim geolocation service limits requests to one per second.
  • I ran this analysis on December 30, 2021, and collected the data from GitHub on that date. If you run the analysis in the future, the data you collect will inherently differ slightly if the set of machine learning and deep learning repositories with the highest number of stars has changed since then, which will slightly change how the resulting visualizations look.
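
As a rough illustration of the throttling mentioned above, here is one way to sleep when the GitHub API rate limit runs low using PyGitHub. This is an assumed pattern, not the scraper's exact code.

# illustrative sketch: pause when few API requests remain (assumed pattern)
import time
from datetime import datetime
from github import Github

def wait_for_rate_limit(g, buffer=100):
    core = g.get_rate_limit().core
    if core.remaining < buffer:
        reset = core.reset.replace(tzinfo=None)  # normalize to naive UTC
        time.sleep(max((reset - datetime.utcnow()).total_seconds(), 0) + 5)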

Using the Scraper to Collect New Data

You can also use the scraping script in isolation to collect new data from GitHub if you desire.

If you'd like to do this, all you'll need to do is open up a terminal, follow steps 1 to 3 above, and then perform the following:

Step a) Run the scraping script with your desired options as follows

python src/github_scraper.py --queries=<queries> --path=<path>
  • Replace <queries> with your desired queries. Note that if you want multiple search queries, enclose them all in quotation marks and separate them with a single comma with NO SPACE after the comma, for example "Machine Learning,Deep Learning".
  • Replace <path> with the output path where you want the scraped data to be saved.

Please see the documentation in the header of the scraping script for additional options that are available.
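
The scripts use docopt (see the tools list above), so the header docstring doubles as the command-line specification. Below is an illustrative skeleton of that pattern, not the actual contents of src/github_scraper.py.

"""Scrape top repositories from GitHub.

Usage:
    github_scraper.py --queries=<queries> --path=<path>

Options:
    --queries=<queries>  Comma-separated search queries, e.g. "Machine Learning,Deep Learning"
    --path=<path>        Output path for the scraped data
"""
from docopt import docopt

if __name__ == "__main__":
    args = docopt(__doc__)
    queries = args["--queries"].split(",")
    print(queries, args["--path"])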

Step b) Run the data cleaning script to clean your newly scraped data

python src/data_cleaning.py --input_path=<path> --output_path=<output_path>
  • Replace <path> with the path where you saved the scraped data.
  • Replace <output_path> with the output path where you want the cleaned data to be saved.
  • As mentioned in the last section, some errors may be printed to the terminal by GeoPy during the data cleaning process; feel free to ignore these, as they do not affect the execution of the script.
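
For context, the cleaning step geocodes user locations with GeoPy's Nominatim service, which allows at most one request per second. Below is a minimal sketch of that pattern; the user agent string and query are assumptions.

# illustrative sketch: rate-limited geocoding with GeoPy's Nominatim service
from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter

geolocator = Nominatim(user_agent="top-repo-analysis")
geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)  # respect the 1 req/s limit

location = geocode("Vancouver, Canada")
if location is not None:  # Nominatim may not find a valid location
    print(location.latitude, location.longitude)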

Dependencies

Please see the environment file for a full list of dependencies.

License

The source code for this project is licensed under the MIT license.
