I will implement Fastai in each project present in this repository.

Overview

DEEP LEARNING FOR CODERS WITH FASTAI AND PYTORCH

The repository contains the projects that I have worked on while reading the book Deep Learning for Coders with Fastai and PyTorch.

📚 NOTEBOOKS:

1. INTRODUCTION

  • The Introduction notebook is a comprehensive notebook, as it contains a collection of projects such as Cat and Dog Classification, Semantic Segmentation, Sentiment Classification, Tabular Classification and a Recommendation System.
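
As a hedged illustration of the kind of project covered there, a minimal fastai cat-vs-dog classifier might look like the sketch below; the Oxford-IIIT Pet dataset, the is_cat labeling rule and the resnet34 backbone are assumptions drawn from the standard fastai examples, not necessarily the exact code in the notebook.

```python
from fastai.vision.all import *

# Download the Oxford-IIIT Pet images; cat breeds have capitalized file names.
path = untar_data(URLs.PETS)/'images'

def is_cat(fname):
    # Labeling rule assumed from the standard fastai example.
    return fname[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=error_rate)  # cnn_learner in older fastai releases
learn.fine_tune(1)
```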

2. MODEL PRODUCTION

  • The BearDetector notebook contains all the dependencies for a complete Image Classification project.
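
Taking such a model to production usually comes down to exporting the trained Learner and loading it back for inference. A minimal sketch follows; it assumes a trained Learner named learn, and the export file name and test image path are made up for illustration.

```python
from fastai.vision.all import *

# Assuming `learn` is a trained fastai vision Learner (e.g., the bear classifier).
learn.export('bear_detector.pkl')            # assumed file name

# In the inference application: reload the Learner and predict on one image.
learn_inf = load_learner('bear_detector.pkl')
pred_class, pred_idx, probs = learn_inf.predict('grizzly.jpg')   # assumed image path
print(pred_class, probs[pred_idx].item())
```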

3. TRAINING A CLASSIFIER

  • The DigitClassifier notebook contains all the dependencies required for an Image Classification project built from scratch.
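
In the spirit of that notebook, the sketch below trains a single linear layer with a hand-written gradient-descent loop; the random tensors stand in for flattened 28x28 digit images, so the data itself is an assumption.

```python
import torch

# Stand-in data: 1,000 flattened "images" with binary labels (e.g., 3 vs. 7).
x_train = torch.randn(1000, 28*28)
y_train = (torch.rand(1000, 1) > 0.5).float()

weights = torch.randn(28*28, 1, requires_grad=True)
bias = torch.zeros(1, requires_grad=True)

def linear1(xb): return xb @ weights + bias

def mnist_loss(preds, targets):
    # Distance from the correct side of the sigmoid output.
    preds = preds.sigmoid()
    return torch.where(targets == 1, 1 - preds, preds).mean()

lr = 1.0
for epoch in range(10):
    loss = mnist_loss(linear1(x_train), y_train)
    loss.backward()
    with torch.no_grad():
        weights -= weights.grad * lr
        bias -= bias.grad * lr
        weights.grad.zero_(); bias.grad.zero_()
    print(epoch, loss.item())
```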

4. IMAGE CLASSIFICATION

  • The Image Classification notebook contains all the dependencies for Image Classification, such as getting image data ready for modeling (presizing and the data block summary) and fitting the model (learning rate finder, unfreezing, discriminative learning rates, setting the number of epochs and using deeper architectures). It also explains the cross-entropy loss function.
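
A hedged sketch of that workflow is shown below; the Oxford-IIIT Pet dataset, the regex labeling pattern and the specific learning rates are assumptions based on the usual fastai examples.

```python
from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'
# Presizing: crop large on the CPU (item_tfms), then augment and resize on the GPU (batch_tfms).
dls = ImageDataLoaders.from_name_re(
    path, get_image_files(path), pat=r'(.+)_\d+.jpg$',
    item_tfms=Resize(460), batch_tfms=aug_transforms(size=224))

learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.lr_find()                                   # plot loss vs. learning rate to pick a value
learn.fit_one_cycle(3, 3e-3)                      # train the new head with the body frozen
learn.unfreeze()
learn.fit_one_cycle(6, lr_max=slice(1e-6, 1e-4))  # discriminative learning rates: smaller for early layers
```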

5. MULTILABEL CLASSIFICATION AND REGRESSION

  • The Multilabel Classification notebook contains all the dependencies required to understand Multilabel Classification, including how to initialize the DataBlock and DataLoaders. The Regression notebook contains all the dependencies required to understand Image Regression.
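
A minimal multilabel DataBlock is sketched below; the PASCAL 2007 CSV layout, the 0.2 threshold and the resnet18 backbone are assumptions based on the common fastai example, not necessarily the notebook's exact choices.

```python
from functools import partial
import pandas as pd
from fastai.vision.all import *

path = untar_data(URLs.PASCAL_2007)
df = pd.read_csv(path/'train.csv')

dblock = DataBlock(
    blocks=(ImageBlock, MultiCategoryBlock),        # one image, possibly many labels
    splitter=ColSplitter('is_valid'),               # validation flag stored in the CSV
    get_x=lambda r: path/'train'/r['fname'],
    get_y=lambda r: r['labels'].split(' '),
    item_tfms=RandomResizedCrop(128, min_scale=0.35))
dls = dblock.dataloaders(df)

# accuracy_multi applies a sigmoid and a per-label threshold instead of an argmax.
learn = vision_learner(dls, resnet18, metrics=partial(accuracy_multi, thresh=0.2))
learn.fine_tune(3)
```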

6. ADVANCED CLASSIFICATION

  • The Imagenette Classification notebook contains all the dependencies required to train a state-of-the-art computer vision model, whether from scratch or using transfer learning. It contains explanations and implementations of Normalization, Progressive Resizing, Test Time Augmentation, Mixup Augmentation and Label Smoothing.
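
The sketch below combines several of those techniques in one Learner; the Imagenette-160 dataset, xresnet50 and the chosen hyperparameters are assumptions in line with the usual fastai recipe.

```python
from fastai.vision.all import *

path = untar_data(URLs.IMAGENETTE_160)
dls = ImageDataLoaders.from_folder(
    path, valid='val', item_tfms=Resize(160),
    batch_tfms=[*aug_transforms(size=128), Normalize.from_stats(*imagenet_stats)])  # normalization

learn = Learner(dls, xresnet50(n_out=dls.c),
                loss_func=LabelSmoothingCrossEntropy(),   # softer targets than one-hot labels
                cbs=MixUp(),                              # blend pairs of images and their labels
                metrics=accuracy)
learn.fit_one_cycle(5, 3e-3)

preds, targs = learn.tta()   # test-time augmentation: average predictions over augmented views
print(accuracy(preds, targs))
```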

7. COLLABORATIVE FILTERING

  • The Collaborative Filtering notebook contains all the dependencies required to build a Recommendation System. It presents how gradient descent can learn intrinsic factors or biases about items from a history of ratings, which in turn reveal structure in the data.
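
A short collaborative-filtering sketch is given below; the MovieLens 100k dataset, 50 latent factors and the (0, 5.5) rating range are assumptions taken from the standard fastai example.

```python
import pandas as pd
from fastai.collab import *

path = untar_data(URLs.ML_100k)
ratings = pd.read_csv(path/'u.data', delimiter='\t', header=None,
                      names=['user', 'movie', 'rating', 'timestamp'])

dls = CollabDataLoaders.from_df(ratings, item_name='movie', bs=64)

# Each user and movie gets a vector of latent factors plus a bias; gradient descent
# learns them so that the dot product (plus biases) approximates the observed rating.
learn = collab_learner(dls, n_factors=50, y_range=(0, 5.5))
learn.fit_one_cycle(5, 5e-3)
```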

8. TABULAR MODELING

  • The Tabular Model notebook contains all the dependencies required for Tabular Modeling. It presents the detailed explanations of two approaches to Tabular Modeling: Decision Tree Ensembles and Neural Networks.
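
The two approaches can be sketched side by side, as below; the Adult salary dataset and the specific column names are assumptions based on the standard fastai tabular example.

```python
import pandas as pd
from fastai.tabular.all import *
from sklearn.ensemble import RandomForestClassifier

path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')

splits = RandomSplitter(valid_pct=0.2)(range_of(df))
to = TabularPandas(df, procs=[Categorify, FillMissing, Normalize],
                   cat_names=['workclass', 'education', 'marital-status',
                              'occupation', 'relationship', 'race'],
                   cont_names=['age', 'fnlwgt', 'education-num'],
                   y_names='salary', splits=splits)

# Approach 1: a decision-tree ensemble on the preprocessed table.
rf = RandomForestClassifier(n_estimators=100)
rf.fit(to.train.xs, to.train.ys.values.ravel())

# Approach 2: a neural network with embeddings for the categorical columns.
dls = to.dataloaders(bs=64)
learn = tabular_learner(dls, metrics=accuracy)
learn.fit_one_cycle(3)
```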

9. NATURAL LANGUAGE PROCESSING

  • The NLP notebook contains all the dependencies required to build a Language Model that can generate text and a Classifier Model that determines whether a review is positive or negative. It presents a state-of-the-art classifier built by fine-tuning a pretrained language model on the corpus of the task; the fine-tuned encoder is then reused for classification.
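
That ULMFiT-style recipe can be sketched as follows; the IMDb dataset and the number of epochs are assumptions, and a real run would fine-tune the language model for longer and unfreeze the classifier gradually.

```python
from fastai.text.all import *

path = untar_data(URLs.IMDB)

# 1. Fine-tune a pretrained AWD-LSTM language model on the review text.
dls_lm = TextDataLoaders.from_folder(path, is_lm=True, valid_pct=0.1)
lm_learn = language_model_learner(dls_lm, AWD_LSTM, metrics=Perplexity())
lm_learn.fine_tune(1, 1e-2)
lm_learn.save_encoder('finetuned_encoder')       # keep everything except the LM head

# 2. Reuse the fine-tuned encoder inside a sentiment classifier.
dls_clas = TextDataLoaders.from_folder(path, valid='test', text_vocab=dls_lm.vocab)
clas_learn = text_classifier_learner(dls_clas, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
clas_learn.load_encoder('finetuned_encoder')
clas_learn.fine_tune(2, 1e-2)
```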

10. DATA MUNGING

  • The DataMunging notebook contains all the dependencies required to implement the mid-level API of Fast.ai in Natural Language Processing and Computer Vision, which provides greater flexibility for applying transformations to data items.
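
A tiny example of the mid-level API is sketched below: a custom Transform with paired encodes/decodes methods, applied lazily through TfmdLists. The NormalizeFloats class and the toy numbers are made up for illustration.

```python
from fastai.data.all import *

class NormalizeFloats(Transform):
    "Scale numbers into [0, 1] and decode them back for display."
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def encodes(self, x: float): return (x - self.lo) / (self.hi - self.lo)
    def decodes(self, x: float): return x * (self.hi - self.lo) + self.lo

items = [3.0, 7.5, 12.0, 18.0]
tls = TfmdLists(items, [NormalizeFloats(0.0, 20.0)])
print(tls[2])               # encoded value: 0.6
print(tls.decode(tls[2]))   # round-trips back to the original 12.0
```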

11. LANGUAGE MODEL FROM SCRATCH

  • The LanguageModel notebook contains all the dependencies found inside the AWD-LSTM architecture for Text Classification. It presents the implementation of a Language Model using a simple Linear Model, a Recurrent Neural Network, Long Short-Term Memory, Dropout Regularization and Activation Regularization.
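
A compact LSTM language model in that spirit is sketched below in plain PyTorch; the vocabulary size, embedding size and dropout probability are arbitrary assumptions.

```python
import torch
from torch import nn

class SimpleLSTMLM(nn.Module):
    def __init__(self, vocab_sz, emb_sz=64, n_hidden=128, n_layers=2, p=0.4):
        super().__init__()
        self.emb = nn.Embedding(vocab_sz, emb_sz)
        self.lstm = nn.LSTM(emb_sz, n_hidden, n_layers, batch_first=True, dropout=p)
        self.drop = nn.Dropout(p)                  # dropout regularization on the outputs
        self.head = nn.Linear(n_hidden, vocab_sz)  # predict the next token at every position

    def forward(self, x, state=None):
        out, state = self.lstm(self.emb(x), state)
        return self.head(self.drop(out)), state

model = SimpleLSTMLM(vocab_sz=100)
tokens = torch.randint(0, 100, (8, 16))            # a batch of 8 sequences of 16 token ids
logits, _ = model(tokens)
print(logits.shape)                                # torch.Size([8, 16, 100])
```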

12. CONVOLUTIONAL NEURAL NETWORK

  • The CNN notebook contains all the dependencies required to understand Convolutional Neural Networks. Convolutions are just a type of matrix multiplication with two constraints on the weight matrix: some elements are always zero and some elements are tied or forced to always have the same value.
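
The sketch below applies a 3x3 edge-detecting kernel with F.conv2d; the random tensor stands in for a 28x28 grayscale image.

```python
import torch
import torch.nn.functional as F

# A top-edge kernel: negative weights above, positive weights below.
top_edge = torch.tensor([[-1., -1., -1.],
                         [ 0.,  0.,  0.],
                         [ 1.,  1.,  1.]])

img = torch.rand(1, 1, 28, 28)                        # stand-in for a grayscale image
out = F.conv2d(img, top_edge.view(1, 1, 3, 3), padding=1)
print(out.shape)                                      # torch.Size([1, 1, 28, 28])

# Written as a matrix multiplication, most of the weights would be zero and the nine
# kernel values would be tied (shared) across every spatial position.
```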

13. RESIDUAL NETWORKS

  • The ResNets notebook contains all the dependencies required to understand the implementation of skip connections, which allow deeper models to be trained. ResNet is the pretrained architecture used for Transfer Learning in the earlier notebooks.
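
A minimal residual block is sketched below; stride 1 and equal channel counts are assumed so the identity path needs no downsampling.

```python
import torch
from torch import nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(ch)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(ch)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)        # the skip connection: add the input back in

x = torch.randn(2, 32, 16, 16)
print(ResBlock(32)(x).shape)          # torch.Size([2, 32, 16, 16])
```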

14. ARCHITECTURE DETAILS

  • The Architecture Details notebook contains all the dependencies required to create complete state-of-the-art computer vision models. It presents some aspects of natural language processing as well.

15. TRAINING PROCESS

  • The Training notebook contains all the dependencies required to create a training loop and explores variants of Stochastic Gradient Descent.
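
One such variant, SGD with momentum, is sketched below as a hand-written optimizer driving a tiny training loop; the synthetic regression data and hyperparameters are made up for illustration.

```python
import torch
from torch import nn
import torch.nn.functional as F

class SGDMomentum:
    "Plain SGD with a running average of gradients (momentum), written by hand."
    def __init__(self, params, lr, mom=0.9):
        self.params, self.lr, self.mom = list(params), lr, mom
        self.grad_avg = [torch.zeros_like(p) for p in self.params]

    def step(self):
        with torch.no_grad():
            for p, avg in zip(self.params, self.grad_avg):
                avg.mul_(self.mom).add_(p.grad)     # update the running gradient average
                p -= self.lr * avg

    def zero_grad(self):
        for p in self.params: p.grad = None

# A tiny synthetic regression problem to exercise the loop.
x = torch.randn(256, 10)
y = x @ torch.randn(10, 1) + 0.1 * torch.randn(256, 1)
model = nn.Linear(10, 1)
opt = SGDMomentum(model.parameters(), lr=0.1)

for epoch in range(20):
    loss = F.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    opt.zero_grad()
print(loss.item())
```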

16. NEURAL NETWORK FOUNDATIONS

  • The Neural Foundations notebook contains all the dependencies required to understand the foundations of deep learning, beginning with matrix multiplication and moving on to implementing the forward and backward passes of a neural net from scratch.
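
The first of those steps, matrix multiplication written by hand, can be sketched as follows and checked against PyTorch's built-in operator.

```python
import torch

def matmul(a, b):
    "Matrix multiplication using an explicit loop over rows plus broadcasting."
    ar, ac = a.shape
    br, bc = b.shape
    assert ac == br, "inner dimensions must match"
    c = torch.zeros(ar, bc)
    for i in range(ar):
        # Broadcasting over the shared dimension replaces the two innermost loops.
        c[i] = (a[i].unsqueeze(-1) * b).sum(dim=0)
    return c

a = torch.randn(4, 3)
b = torch.randn(3, 5)
print(torch.allclose(matmul(a, b), a @ b, atol=1e-6))   # True
```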

17. CNN INTERPRETATION WITH CAM

  • The CNN Interpretation notebook presents the implementation of Class Activation Maps in model interpretation. Class activation maps give insights into why a model predicted a certain result by showing the areas of images that were most responsible for a given prediction.
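
A hedged sketch of that idea follows. It assumes a trained fastai vision Learner named learn with the usual body (learn.model[0]) plus head (learn.model[1]) layout, a DataLoaders object dls, and a made-up test image path; the activations of the last convolutional stage are weighted by the classifier weights of the predicted class.

```python
import torch

class HookActs:
    "Store the output of a module during the forward pass."
    def __init__(self, module):
        self.hook = module.register_forward_hook(self.store)
    def store(self, module, inp, outp): self.acts = outp.detach()
    def remove(self): self.hook.remove()

x, = dls.test_dl(['grizzly.jpg']).one_batch()     # assumed test image
hook = HookActs(learn.model[0])                   # hook the convolutional body
with torch.no_grad():
    preds = learn.model.eval()(x)
cls = preds.argmax(dim=1).item()

w = learn.model[1][-1].weight                     # assumes the last head layer is nn.Linear
cam = torch.einsum('c,chw->hw', w[cls], hook.acts[0])
hook.remove()
print(cam.shape)                                  # a spatial map of evidence for class `cls`
```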

18. FASTAI LEARNER FROM SCRATCH

  • The Fastai Learner notebook contains all the dependencies required to understand the key concepts of Fastai by building a Learner from scratch.

19. CHEST X-RAYS CLASSIFICATION

20. TRANSFORMERS MODEL

Owner
Thinam Tamang