TensorFlow Tutorials with YouTube Videos

Overview

TensorFlow Tutorials

Original repository on GitHub

Original author is Magnus Erik Hvass Pedersen

Introduction

  • These tutorials are intended for beginners in Deep Learning and TensorFlow.
  • Each tutorial covers a single topic.
  • The source-code is well-documented.
  • There is a YouTube video for each tutorial.

Tutorials for TensorFlow 2

The following tutorials have been updated and work with TensorFlow 2 (some of them run in "v.1 compatibility mode"; see the sketch after this list for what that means).

  1. Simple Linear Model (Notebook) (Google Colab)

  2. Convolutional Neural Network (Notebook) (Google Colab)

  3-C. Keras API (Notebook) (Google Colab)

  10. Fine-Tuning (Notebook) (Google Colab)

  13-B. Visual Analysis for MNIST (Notebook) (Google Colab)

  16. Reinforcement Learning (Notebook) (Google Colab)

  19. Hyper-Parameter Optimization (Notebook) (Google Colab)

  20. Natural Language Processing (Notebook) (Google Colab)

  21. Machine Translation (Notebook) (Google Colab)

  22. Image Captioning (Notebook) (Google Colab)

  23. Time-Series Prediction (Notebook) (Google Colab)
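
As a rough sketch of what "v.1 compatibility mode" means: instead of TensorFlow 2's eager execution, some Notebooks build a graph and run it in a session through the tf.compat.v1 API. The following is only an illustration and is not taken from any particular tutorial:

# Illustration only: TensorFlow 1.x style code running on TensorFlow 2
# through the compatibility API.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()   # turn off eager execution and other TF2 behaviour

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
weights = tf.Variable(tf.zeros([784, 10]))
logits = tf.matmul(x, weights)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())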

Tutorials for TensorFlow 1

The following tutorials only work with the older TensorFlow 1 API, so you need to install an older version of TensorFlow to run them (see the example after this list). It would take too much time and effort to convert these tutorials to TensorFlow 2.

  3. Pretty Tensor (Notebook) (Google Colab)

  3-B. Layers API (Notebook) (Google Colab)

  4. Save & Restore (Notebook) (Google Colab)

  5. Ensemble Learning (Notebook) (Google Colab)

  6. CIFAR-10 (Notebook) (Google Colab)

  7. Inception Model (Notebook) (Google Colab)

  8. Transfer Learning (Notebook) (Google Colab)

  9. Video Data (Notebook) (Google Colab)

  11. Adversarial Examples (Notebook) (Google Colab)

  12. Adversarial Noise for MNIST (Notebook) (Google Colab)

  13. Visual Analysis (Notebook) (Google Colab)

  14. DeepDream (Notebook) (Google Colab)

  15. Style Transfer (Notebook) (Google Colab)

  17. Estimator API (Notebook) (Google Colab)

  18. TFRecords & Dataset API (Notebook) (Google Colab)
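
If you want to run these older tutorials, one option is to create a separate conda environment with the last TensorFlow 1.x release. This is only an example; TensorFlow 1.15 requires Python 3.7 or older, and the exact versions you need may differ:

conda create --name tf1 python=3.7
source activate tf1
pip install "tensorflow>=1.15,<2.0"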

Videos

These tutorials are also available as YouTube videos.

Translations

These tutorials have been translated into the following languages:

New Translations

You can help by translating the remaining tutorials, by reviewing the ones that have already been translated, or by translating the tutorials into other languages.

Translating all the tutorials is a very big job, so you should just start with Tutorials #01, #02 and #03-C, which are the most important for beginners.

New Videos

You are also very welcome to record your own YouTube videos in other languages. It is strongly recommended that you get a decent microphone, because good sound quality is very important. I used vokoscreen for recording the videos and the free DaVinci Resolve for editing them.

Forks

See the selected list of forks for community modifications to these tutorials.

Installation

There are different ways of installing and running TensorFlow. This section describes how I did it for these tutorials. You may want to do it differently and you can search the internet for instructions.

If you are new to using Python and Linux, then this may be challenging to get working, and you may need to do internet searches for error messages, etc. It will get easier with practice. You can also run the tutorials without installing anything by using Google Colab; see further below.

Some of the Python Notebooks use source-code located in different files to allow for easy re-use across multiple tutorials. It is therefore recommended that you download the whole repository from GitHub, instead of just downloading the individual Python Notebooks.

Git

The easiest way to download and install these tutorials is by using git from the command-line:

git clone https://github.com/Hvass-Labs/TensorFlow-Tutorials.git

This will create the directory TensorFlow-Tutorials and download all the files to it.

This also makes it easy to update the tutorials, simply by executing this command inside that directory:

git pull

Download Zip-File

You can also download the contents of the GitHub repository as a Zip-file and extract it manually.

Environment

I use Anaconda because it comes with many Python packages already installed and it is easy to work with. After installing Anaconda, you should create a conda environment so you do not destroy your main installation in case you make a mistake somewhere:

conda create --name tf python=3

When Python is updated to a new version, it takes a while before TensorFlow supports it. So if the TensorFlow installation fails, you may have to specify an older Python version for your new environment, such as:

conda create --name tf python=3.6

Now you can switch to the new environment by running the following (on Linux; newer versions of conda use conda activate tf instead):

source activate tf

Required Packages

The tutorials require several Python packages to be installed. The packages are listed in requirements.txt.

To install the required Python packages and dependencies, first activate the conda environment as described above and then run the following command in a terminal:

pip install -r requirements.txt

Starting with version 2.1, TensorFlow includes both the CPU and GPU versions and will automatically use the GPU if you have one. But this requires the installation of various NVIDIA drivers, which is a bit complicated and is not described here.
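
If you want to check whether TensorFlow actually detects your GPU, you can run a quick sanity check like the following (this is not part of the tutorials; an empty list means only the CPU will be used):

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))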

Python Version 3.5 or Later

These tutorials were developed on Linux using Python 3.5 / 3.6 (the Anaconda distribution) and PyCharm.

There are reports that Python 2.7 gives error messages with these tutorials. Please make sure you are using Python 3.5 or later!
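
A quick way to check which Python version your environment is actually using:

import sys
print(sys.version)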

How To Run

If you have followed the above installation instructions, you should now be able to run the tutorials in the Python Notebooks:

cd ~/development/TensorFlow-Tutorials/  # Your installation directory.
jupyter notebook

This should start a web-browser that shows the list of tutorials. Click on a tutorial to load it.

Run in Google Colab

If you do not want to install anything on your own computer, then the Notebooks can be viewed, edited and run entirely on the internet by using Google Colab. There is a YouTube video explaining how to do this. Click the "Google Colab" link next to each tutorial listed above. You can view a Notebook on Colab, but in order to run it you need to log in with your Google account. Then execute the following commands at the top of the Notebook, which clone the contents of this repository to your work directory on Colab.

# Clone the repository from GitHub to Google Colab's temporary drive.
import os
work_dir = "/content/TensorFlow-Tutorials/"
if not os.path.exists(work_dir):
    !git clone https://github.com/Hvass-Labs/TensorFlow-Tutorials.git
os.chdir(work_dir)

All the required packages should already be installed on Colab; otherwise you can run the following command:

!pip install -r requirements.txt

Older Versions

Sometimes the source-code has changed from that shown in the YouTube videos. This may be due to bug-fixes, improvements, or because code-sections are moved to separate files for easy re-use.

If you want to see the exact versions of the source-code that were used in the YouTube videos, then you can browse the history of commits to the GitHub repository.

License (MIT)

These tutorials and source-code are published under the MIT License which allows very broad use for both academic and commercial purposes.

A few of the images used for demonstration purposes may be under copyright. They are included here under "fair use".

You are very welcome to modify these tutorials and use them in your own projects. Please keep a link to the original repository.
