Exploring the Top ML and DL GitHub Repositories

Overview

This repository contains the work for my project, in which I collected data on the most popular machine learning and deep learning GitHub repositories in order to visualize and analyze it.

I've written a corresponding article about this project, which you can find on Towards Data Science. The article was selected as an "Editor's Pick" and was also featured in the publication's "Hands-on Tutorials" section.

At a high level, my analysis is as follows:

  1. I collected data on the top machine learning and deep learning repositories and their respective owners from GitHub.
  2. I cleaned and prepared the data.
  3. I visualized what I thought were interesting patterns, trends, and findings within the data, and discussed each visualization in detail in the TDS article linked above.

Tools used

Python NumPy pandas tqdm PyGitHub GeoPy Altair wordcloud docopt black

Replicating the Analysis

I've designed the analysis in this repository so that anyone can recreate the data collection, cleaning, and visualization steps in a fully automated manner. To do so, open a terminal and follow the steps below:

Step 1: Clone this repository to your computer

# clone the repo
git clone https://github.com/nicovandenhooff/top-repo-analysis.git

# change working directory to the repo's root directory
cd top-repo-analysis

Step 2: Create and activate the required virtual environment

# create the environment
conda env create -f environment.yaml

# activate the environment
conda activate top-repo-analysis

Step 3: Obtain a GitHub personal access token ("PAT") and add it to the credentials file

Please see GitHub's documentation for how to obtain a PAT.

Once you have it, perform the following:

# open the credentials file
open src/credentials.json

This will open the credentials JSON file, which contains the following:

{
    "github_token": "<your PAT here>"
}

Change <your PAT here> to your PAT.
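
For reference, here is roughly how a script can consume the token with PyGitHub (a minimal sketch assuming the credentials file lives at src/credentials.json as above; the actual code in github_scraper.py may differ):

import json

from github import Github  # PyGitHub

# read the PAT from the credentials file
with open("src/credentials.json") as f:
    token = json.load(f)["github_token"]

# authenticate with the GitHub API
github = Github(token)

# e.g. search for the most starred "machine learning" repositories
results = github.search_repositories(query="machine learning", sort="stars", order="desc")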

Step 4: Run the following command to delete the current data and visualizations in the repository

make clean

Step 5: Run the following command to recreate the analysis

make all

Please note that if you are recreating the analysis:

  • The last step will take several hours to run (approximately 6-8 hours), because the data collection process has to sleep in order to respect the GitHub API rate limit; in total, the data collection makes approximately 20,000 to 30,000 API requests (see the sketch after this list).
  • When the data cleaning script data_cleaning.py runs, some errors may be printed to the screen by GeoPy if the Nominatim geolocation service is unable to find a valid location for a GitHub user. These errors do not cause the script to terminate; they are just ugly in the terminal. Unfortunately they cannot be suppressed, so just ignore them if they occur.
  • Getting the location data with GeoPy in the data cleaning script also takes about 30 minutes, as the Nominatim geolocation service is limited to one API request per second.
  • I ran this analysis on December 30, 2021, and collected the data from GitHub on that date. If you run the analysis in the future, the data you collect will inherently be slightly different if the machine learning and deep learning repositories with the highest number of stars have changed since then. This will slightly change how the resulting visualizations look.
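
On the first point, a common way to respect the rate limit with PyGitHub is to check the remaining request budget and sleep until it resets. This is a sketch of the general approach, not necessarily the exact code in github_scraper.py:

import time
from datetime import datetime, timezone

from github import Github

def wait_for_rate_limit(github, buffer=100):
    """Sleep until the core rate limit resets if few requests remain."""
    core = github.get_rate_limit().core
    if core.remaining < buffer:
        reset = core.reset.replace(tzinfo=timezone.utc)  # reset time is in UTC
        seconds = (reset - datetime.now(timezone.utc)).total_seconds() + 1
        time.sleep(max(seconds, 0))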

Using the Scraper to Collect New Data

You can also use the scraping script in isolation to collect new data from GitHub if you desire.

If you'd like to do this, all you'll need to do is open up a terminal, follow steps 1 to 3 above, and then perform the following:

Step a) Run the scraping script with your desired options as follows

python src/github_scraper.py --queries=<queries> --path=<path>
  • Replace <queries> with your desired search queries. Note that if you want multiple search queries, enclose them in quotation marks and separate them with a single comma with NO space after the comma, for example: "Machine Learning,Deep Learning"
  • Replace <path> with the output path where you want the scraped data to be saved.

Please see the documentation in the header of the scraping script for additional options that are available.
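
Since the scripts use docopt (see the tools list above), the scraper's usage header will look something along these lines. This is a hypothetical sketch, so consult the actual header of src/github_scraper.py for the authoritative options:

"""Scrape data on top GitHub repositories for the given search queries.

Usage:
    github_scraper.py --queries=<queries> --path=<path>
"""
from docopt import docopt

if __name__ == "__main__":
    args = docopt(__doc__)                  # parse argv against the usage header
    queries = args["--queries"].split(",")  # e.g. "Machine Learning,Deep Learning"
    path = args["--path"]                   # output path for the scraped data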

Step b) Run the data cleaning script to clean your newly scraped data

python src/data_cleaning.py --input_path=<path> --output_path=<output_path>
  • Replace <path> with the path where you saved the scraped data.
  • Replace <output_path> with the output path where you want the cleaned data to be saved.
  • As mentioned in the previous section, some errors may be printed to the terminal by GeoPy during the data cleaning process; feel free to ignore these, as they do not affect the execution of the script.
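
For a sense of what the geocoding portion of the cleaning step looks like, the standard GeoPy pattern throttles Nominatim to one request per second and logs per-user failures without raising, which matches the behaviour described above. This is a hypothetical sketch using pandas and GeoPy; the actual logic in data_cleaning.py may differ:

import pandas as pd
from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter

# hypothetical input file: scraped user data with a free-text "location" column
users = pd.read_csv("data/raw/user_data.csv")

# Nominatim requires a descriptive user agent and at most one request per second
geolocator = Nominatim(user_agent="top-repo-analysis")
geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)

# RateLimiter logs geocoder errors and returns None instead of raising,
# so unresolvable locations do not terminate the run
users["geocoded_location"] = users["location"].apply(geocode)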

Dependencies

Please see the environment file for a full list of dependencies.

License

The source code for this project is licensed under the MIT license.
