Deep Learning (with PyTorch)

Overview


This notebook repository now has a companion website, where all the course material can be found in video and textual format.

🇬🇧   🇨🇳   🇰🇷   🇪🇸   🇮🇹   🇹🇷   🇯🇵   🇸🇦   🇫🇷   🇮🇷   🇷🇺   🇻🇳   🇷🇸   🇵🇹   🇭🇺

Getting started

To follow the exercises, you will need a laptop with Miniconda (a minimal version of Anaconda) and several Python packages installed. The following instructions work as-is for Mac and Ubuntu Linux users; Windows users need to install and work in the Git BASH terminal.

Download and install Miniconda

Please go to the Anaconda website. Download and install the latest Miniconda version for Python 3.7 for your operating system.

wget <http:// link to miniconda>
sh <miniconda*.sh>
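
For example, on a Linux x86_64 machine the two placeholder commands above might look as follows; the installer URL and file name are assumptions based on the current Miniconda naming scheme, so adjust them for your operating system:

# download the Linux x86_64 installer (pick the one matching your OS on the Miniconda site)
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
# run the installer and follow the prompts
sh Miniconda3-latest-Linux-x86_64.sh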

Check out the git repository with the exercises

Once Miniconda is ready, check out the course repository and proceed with setting up the environment:

git clone https://github.com/Atcold/pytorch-Deep-Learning

Create isolated Miniconda environment

Change directory (cd) into the course folder, then type:

# cd pytorch-Deep-Learning
conda env create -f environment.yml
conda activate pDL
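
If the activation step fails on an older conda installation, source activate pDL is the legacy equivalent. As an optional sanity check that the environment is usable (the exact version printed depends on what environment.yml pins):

# confirm the environment's Python is the one now on the PATH
which python
# confirm PyTorch can be imported from this environment
python -c "import torch; print(torch.__version__)"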

Start Jupyter Notebook or JupyterLab

Start from terminal as usual:

jupyter lab

Or, for the classic interface:

jupyter notebook

Notebooks visualisation

Jupyter Notebooks are used throughout these lectures for interactive data exploration and visualisation.

We use dark styles for both GitHub and Jupyter Notebook. You should try to do the same, or they will look ugly. JupyterLab has a built-in selectable dark theme, so you only need to install something if you want to use the classic notebook interface. To see the content appropriately in the classic interface, install the following:
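
A minimal sketch of one way to get a dark theme in the classic notebook, assuming the jupyterthemes package (the specific theme name is only a suggestion):

# install jupyterthemes into the active environment
pip install jupyterthemes
# switch the classic notebook to a dark theme
jt -t chesterish
# revert to the default theme at any time
jt -r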

Comments
  • Chapter 5-2 docs

    Optimization techniques II

    We discuss adaptive methods for SGD such as RMSprop and ADAM. We also talk about normalization layers and their effects on the neural network training process. Finally, we discuss a real-world example of neural nets being used in industry to make MRI scans faster and more efficient.

    Please let me know if any changes need to be made before merging.

    opened by guidopetri 16
  • Updates to current packages

    This:

    • Moves PyTorch from 0.4 to 1.1 (one tiny code change)
    • Moves Python from 3.6 to 3.7 (no changes to code, just env)
    • Moves 1-2 requirements out of notebooks and into environment (potential nasty scipy pip install from librosa avoided!)
    • Uses conda kernels so the correct environment kernel is available (all notebooks rerun to pick up proper kernel)
    • Adds JupyterLab (not required, but nice). The interactive backend in the final notebook still works best in the classic interface. Try out the built-in dark mode!

    All notebooks seem to run (except for the minor JupyterLab issue noted above).

    opened by henryiii 16
  • [FR & EN] YouTube subtitles

    Hi Alf :wave:,

    As indicated in my last email, I can't wait for Yann's return without incurring a big delay on my side, so here are the subtitle files:

    • For English, these add the Unicode characters. In practice:
    1. Files not modified during this Unicode review: practicum 1 (didn't need Unicode), practicum 4 (the file contains blocks of 3 instead of the 2 used elsewhere), and lecture 12 (the only file I didn't translate into French)

    2. Finished files (full English review + Unicode): lectures 6 & 9

    3. Mostly clean files (partial English review + Unicode): lectures 1-3, 10, 11 + practicums 1-3, 7-8, 10

    4. Not-yet-clean files (Unicode added, no English review): lectures 5-9, 12-15 + practicums 5-6, 9, 11-15

    • For French, these are all the subtitles (except for lecture 12, where I had huge problems understanding Mike Lewis's accent, so I preferred to put nothing rather than translate it badly).

    I also added a disclaimer for the V2 of the French translation of the website, which should arrive this month. It should be my next and last PR closing the French translation work :boom:

    Loïck

    opened by lbourdois 13
  • Broken image links in 3.3. Properties of natural signals

    The following image links are broken:

    • [x] Figure 2(a)
    • [x] Figure 2(b)
    • [x] Figure 3(a)
    • [x] Figure 3(b)

    See https://atcold.github.io/pytorch-Deep-Learning/en/week03/03-3/

    I think the images were originally obtained from this presentation: 02 - CNN.pdf

    See pages 10-11


    Also, small suggestions:

    • [x] Change Figure 4 to include R^7 and R^2 as in Slide 20. This would better match the text for Figure 4.

    • [x] Include a figure (4b maybe?) matching Slide 21 to show what padding is doing

    opened by feedthebeat90 11
  • Portuguese translation

    Hi @Atcold! I would like to know how and where I should commit the markdown files in Portuguese. I recall that you discussed something about this with @ebetica.

    opened by ricardobarroslourenco 11
  • [ZH] 13-3 Inline latex broken

    Hi @JonathanSum! Just for your information, some inline LaTeX seems to be broken in lecture 13-3:

    [Screenshot of the broken rendering]

    The rest of the lectures I've checked seem to be fine.

    opened by xcastilla 9
  • Reorganize the website structure

    This PR reorganizes the website structure, so we now have:

    en/
      index.md
      about.md
      week01/
      week02/
      ...
    zh/
      index.md
      about.md
      week01/
      week02/
      ...
    ...
    

    Hopefully it's less messy and easier to work with.

    After this is merged, I will pull the images out into a global directory as well.

    Also fixes some broken links in zh/index.md

    opened by ebetica 9
  • Problem visualizing spanish translation on github.io

    I found a rendering error on the github.io page for the file /docs/es/week02/02-1.md.

    The English version of the file appears before some parts, and the layout of the Spanish parts that follow the English ones gets a bit messed up.

    opened by mt0rm0 8
  • [EN] Fix timers

    A PR that fixes the timers of the sbv files that I couldn't correct in PR #660 to avoid conflicts.

    I also took the opportunity to correct the few errors I caught while translating lecture 10.

    I also noticed that the sbv files for the practicums of weeks 14 and 15 were missing.

    opened by lbourdois 8
  • [ZH] translation of 06-2 and 06.md

    I have translated the first 50% of the RNN lecture (06-2) into Chinese.

    I passed the deeplearning.ai course, and I also wrote a few notebooks to help students with the Coursera TensorFlow time-series seq2seq notebook.

    opened by JonathanSum 8
  • Vanishing gradient notebook

    Poornima and I have compared an LSTM and an RNN and visualized the gradients with respect to the input. We see that the gradients for the RNN are much smaller than those for the LSTM.

    We were able to train on MNIST with a long input sequence using an LSTM, but failed to do so with an RNN.

    Hope this is useful. If we need to make any changes, please let us know!

    opened by karanchahal 8
  • Software version update for 2023

    Hi there,

    I hope these tips can help you: use Docker, with torchtext version 0.9.0 and PyTorch version 1.8.0.

    Please note that PyTorch 1.8 may not have good support for CUDA versions newer than 11. If you are using a newer version of CUDA, you may want to consider using the CPU instead.

    opened by wenxin-bupt 0
  • <Fix> evaluation dataset, printed samples

    Bunch of minor "theoretical" changes in the evaluation function:

    1. test_data_gen was used as the data generator during evaluation, instead of data_generator, thereby evaluating the net on the test set used for training (not an actual issue here, given that the sequences are randomized and not sampled from existing datasets, but in principle it would lead to a data leak in realistic scenarios);
    2. the correct sequences printed were a sample (drawn with replacement) of the first 10 evaluated, instead of 10 sampled from the whole set of correct ones;
    3. the condition for printing the incorrectly classified sequences would declare the absence of misclassifications whenever verbose==False, regardless of whether any were actually present;
    opened by hypothe 1
  • Added controller trainer and improved truck class

    Added

    • New truck methods for randomizing the state within constraints
    • New truck methods for checking whether the truck is at the dock or offscreen
    • Training script for optimizing the controller

    Note

    I currently have not successfully trained the controller to convergence. I have based the training on this. On the website, they mention that the controller is hard to train; I have tried training it on the website with no success, so it seems that even their lessons are difficult to get to converge. However, the code for training should be very similar to the code on the website. You may also alter the number of lessons, the maximum time steps, the learning rate, etc. to see if the model converges. I have been trying for over a week and have not succeeded yet.

    opened by dafaronbi 1
  • fix chinese version of 12-3

    I found that the Chinese version was basically machine translated, which broke the LaTeX syntax. There are also a lot of unreasonable translations. This PR is mainly about fixing the broken LaTeX; I also did my best to fix some of the translations that made no sense.

    opened by vipcxj 0
  • Russian translation (dictionary)

    I would question some of the translations in the Russian dictionary: I graduated this year, and we didn't really translate everything. For example, it is more understandable to say "one-hot" in Russian as-is, rather than "унитарный код". Honestly, I've never heard anyone call it "унитарный код".

    So I guess there is a choice between being academically strict or being understood.

    opened by xufana 4
  • Use conda instead of source activate

    I think source activate is a few years old now and isn't supported anymore. https://stackoverflow.com/questions/49600611/python-anaconda-should-i-use-conda-activate-or-source-activate-in-linux

    opened by ebetica 0
Releases(dlsp19)
  • dlsp19(Jan 30, 2020)

    These are the notes for the Spring 2019 Deep Learning course at NYU. This course concerns the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional nets and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition.

    This is the initial draft of the course notes; they are based on a course developed for the African Masters of Machine Intelligence (AMMI). You can access that version here

    Source code(tar.gz)
    Source code(zip)
  • aims-fl18(Jan 30, 2020)

    The African Masters of Machine Intelligence (AMMI) is Africa's flagship program in machine intelligence led by The African Institute for Mathematical Sciences (AIMS). These lessons, developed during the course of several years while I've been teaching at Purdue and NYU, are here proposed for the AMMI (AIMS).

    Prior to this course delivered for AMMI (AIMS), an earlier version of this was delivered and video-recorded for the Computational and Data Science for High Energy Physics (CoDaS-HEP) summer school at Princeton University. Please refer to this version release here.

    Source code(tar.gz)
    Source code(zip)
  • v1.0.0(Nov 5, 2018)

    Click CoDaS-HEP_2018 to jump to this release.

    These lessons, developed during the course of several years while I've been teaching at Purdue and NYU, are here proposed for the Computational and Data Science for High Energy Physics (CoDaS-HEP) summer school at Princeton University. The whole course has been recorded and the playlist is made available here. Check the slides for drawings of better visual quality.

    Source code(tar.gz)
    Source code(zip)
Owner
Alfredo Canziani
Musician, math lover, cook, dancer, 🏳️‍🌈, and assistant professor of Computer Science at New York University