A common PyTorch framework to accelerate network implementation, training and validation

Overview

pytorch-framework

A common PyTorch framework to accelerate network implementation, training and validation.

This framework is inspired by works from MMLab, which modularize the data, network, loss, metric, etc., making the framework flexible, easy to modify and easy to extend.

How to use

# install necessary libs
pip install -r requirements.txt

The framework contains six subfolders (an example layout is shown after this list):

  • networks: all networks should be implemented under the networks folder with the filename {NAME}_network.py.
  • datasets: all datasets should be implemented under the datasets folder with the filename {NAME}_dataset.py.
  • losses: all losses should be implemented under the losses folder with the filename {NAME}_loss.py.
  • metrics: all metrics should be implemented under the metrics folder with the filename {NAME}_metric.py.
  • models: all models should be implemented under the models folder with the filename {NAME}_model.py.
  • utils: all util functions should be implemented under the utils folder with the filename {NAME}_util.py.
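
For example, a classification setup following these naming conventions might be laid out as below (the file names are illustrative, not a listing of the repository):

networks/vgg_network.py          # defines the VGG16 network
datasets/imagenet_dataset.py     # defines the ImageNet dataset
losses/ce_loss.py                # defines the CrossEntropyLoss wrapper
metrics/accuracy_metric.py       # defines the calculate_accuracy metric
models/classifier_model.py       # defines the ClassifierModel
utils/misc_util.py               # shared helper functions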

The training and validation procedure can be defined in the specified .yaml file.

# training 
CUDA_VISIBLE_DEVICES=gpu_ids python train.py --opt options/train.yaml

# validation/test
CUDA_VISIBLE_DEVICES=gpu_ids python test.py --opt options/test.yaml

In the .yaml file for training, you can define everything related to training, such as the experiment name, model, dataset, network, loss, optimizer, metrics and other hyper-parameters. Here is an example for training VGG16 on image classification:

# general setting
name: vgg_train
backend: dp # DataParallel
type: ClassifierModel
num_gpu: auto

# path to resume network
path:
  resume_state: ~

# datasets
datasets:
  train_dataset:
    name: TrainDataset
    type: ImageNet
    data_root: ../data/train_data
  val_dataset:
    name: ValDataset
    type: ImageNet
    data_root: ../data/val_data
  # setting for train dataset
  batch_size: 8

# network setting
networks:
  classifier:
    type: VGG16
    num_classes: 1000

# training setting
train:
  total_iter: 10000
  optims:
    classifier:
      type: Adam
      lr: 1.0e-4
  schedulers:
    classifier:
      type: none
  losses:
    ce_loss:
      type: CrossEntropyLoss

# validation setting
val:
  val_freq: 10000

# log setting
logger:
  print_freq: 100
  save_checkpoint_freq: 10000

In the .yaml file for validation, you can define everything related to validation, such as the model, dataset and metrics. Here is an example:

# general setting
name: test
backend: dp # DataParallel
type: ClassifierModel
num_gpu: auto
manual_seed: 1234

# path
path:
  resume_state: experiments/train/models/final.pth
  resume: false

# datasets
datasets:
  val_dataset:
    name: ValDataset
    type: ImageNet
    data_root: ../data/test_data

# network setting
networks:
  classifier:
    type: VGG
    num_classes: 1000

# validation setting
val:
  metrics:
    accuracy:
      type: calculate_accuracy

Framework Details

The core of the framework is the BaseModel in base_model.py. The BaseModel controls the whole training/validation procedure, from initialization, through the training/validation iterations, to results saving.

  • Initialization: during model initialization, the BaseModel reads the configuration from the .yaml file and constructs the corresponding networks, datasets, losses, optimizers, metrics, etc.
  • Training/Validation: for the training/validation procedure, refer to the training process in train.py and the validation process in test.py (a minimal sketch of this cycle follows the list).
  • Results saving: during training, the model automatically saves the state_dict of the networks, the optimizers and other hyper-parameters.
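
Below is a minimal, self-contained sketch (plain PyTorch, not the framework's own code) of the initialization, iteration and saving cycle that the BaseModel orchestrates:

import torch
import torch.nn as nn

# Initialization: build a network, loss and optimizer (hard-coded here;
# in the framework these come from the .yaml configuration).
network = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(network.parameters(), lr=1.0e-4)

total_iter, save_checkpoint_freq = 100, 50
for current_iter in range(1, total_iter + 1):
    # Training iteration: forward, loss, backward, optimizer step
    # (a dummy random batch stands in for a real dataset).
    inputs = torch.randn(8, 10)
    labels = torch.randint(0, 2, (8,))
    optimizer.zero_grad()
    loss = criterion(network(inputs), labels)
    loss.backward()
    optimizer.step()

    # Results saving: state_dict of the network, the optimizer and other
    # hyper-parameters, saved at a fixed frequency.
    if current_iter % save_checkpoint_freq == 0:
        torch.save({'network': network.state_dict(),
                    'optimizer': optimizer.state_dict(),
                    'iter': current_iter},
                   f'checkpoint_{current_iter}.pth')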

The configuration of the framework is handled by Register in registry.py. A Register holds an object map (key-value pairs): the key is the name of the object and the value is the class of the object. There are 4 registers in total, for networks, datasets, losses and metrics. Here is an example of registering a new network:

import torch
import torch.nn as nn

from utils.registry import NETWORK_REGISTRY

@NETWORK_REGISTRY.register()
class MyNet(nn.Module):
  ...
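
For intuition, such a register can be implemented as a thin wrapper around a dict; the sketch below is illustrative and not the actual code in registry.py (in particular, the get method is an assumption):

# Illustrative sketch of a name -> class register; not the actual registry.py.
class Register:
    def __init__(self, name):
        self.name = name
        self._obj_map = {}  # key: object name, value: object class

    def register(self):
        def decorator(cls):
            self._obj_map[cls.__name__] = cls  # e.g. 'MyNet' -> MyNet
            return cls
        return decorator

    def get(self, name):
        # look up the class by the name given in the .yaml file (e.g. type: VGG16)
        return self._obj_map[name]

NETWORK_REGISTRY = Register('network')

Datasets, losses and metrics are registered in the same way through their own registers.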