
Hammer: An Efficient Library for Training Deep Models

This repository provides an efficient PyTorch-based library for training deep models.

Installation

Make sure your Python version is >= 3.7, your CUDA version is >= 11.1, and your cuDNN version is >= 7.6.5.

  1. Create a conda virtual environment and install the package requirements:

    conda create -n <ENV_NAME> python=3.7  # create virtual environment with Python 3.7
    conda activate <ENV_NAME>
    pip install -r requirements/minimal.txt -f https://download.pytorch.org/whl/cu111/torch_stable.html
  2. To use the video visualizer (optional), please also install ffmpeg.

    • Ubuntu: sudo apt-get install ffmpeg.
    • macOS: brew install ffmpeg.
  3. To reduce the memory footprint (optional), you can switch from the default memory allocator to either jemalloc (recommended) or tcmalloc.

    • jemalloc (recommended):
      • Ubuntu: sudo apt-get install libjemalloc
    • tcmalloc:
      • Ubuntu: sudo apt-get install google-perftools
  4. (Optional) To speed up data loading on NVIDIA GPUs, you can install DALI together with dill for pickling Python objects. CuPy can also be installed if some customized operations require it:

    pip install --extra-index-url https://developer.download.nvidia.com/compute/redist --upgrade nvidia-dali-<CUDA_VERSION>
    pip install dill
    pip install cupy  # optional, installation can be slow

    For example, on CUDA 11.1, DALI can be installed via:

    pip install --extra-index-url https://developer.download.nvidia.com/compute/redist --upgrade nvidia-dali-cuda110  # CUDA 11.1 compatible
    pip install dill
    pip install cupy  # optional, installation can be slow
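
After finishing the steps above, a quick sanity check can confirm that the environment is set up as expected. The snippet below is only a rough sketch (not part of the library itself): it prints the PyTorch version, checks CUDA availability, and probes the optional Python packages from step 4 (they are reported as missing, not errors, if they were not installed).

# Minimal post-installation sanity check (not part of the library itself).
import importlib

import torch

print(f'PyTorch version: {torch.__version__}')    # expected to be a CUDA 11.1 build
print(f'CUDA available: {torch.cuda.is_available()}')

# Optional extras from step 4; reported as "not installed" if skipped.
for module_name in ('nvidia.dali', 'dill', 'cupy'):
    try:
        importlib.import_module(module_name)
        print(f'{module_name}: importable')
    except ImportError:
        print(f'{module_name}: not installed (optional)')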

Quick Demo

Train StyleGAN2 on FFHQ in Resolution of 256x256

In your Terminal, run:

./scripts/training_demos/stylegan2_ffhq256.sh <NUM_GPUS> <PATH_TO_DATA> [OPTIONS]

where

  • <NUM_GPUS> refers to the number of GPUs. Setting it as 1 helps launch a training job on single-GPU platforms.

  • <PATH_TO_DATA> refers to the path to the FFHQ dataset (in resolution 256x256) in zip format. If running on local machines, a soft link of the data will be created under the data folder of the working directory to save disk space.

  • [OPTIONS] refers to any additional options to pass. Detailed instructions on the available options can be shown via ./scripts/training_demos/stylegan2_ffhq256.sh --help.

This demo script uses stylegan2_ffhq256 as the default value of job_name, which is used to identify the experiment. Concretely, a directory named after job_name will be created under the root working directory (which is set as work_dirs/ by default). To prevent overwriting previous experiments, an exception will be raised to interrupt the training if the job_name directory already exists. To change the job name, please use the --job_name option.
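
To illustrate this layout, here is a small sketch (assuming the default work_dirs/ root and the default job name mentioned above) that checks whether a job directory is already taken before launching a run:

import os

work_dir = 'work_dirs'            # default root working directory
job_name = 'stylegan2_ffhq256'    # default job name of this demo

job_dir = os.path.join(work_dir, job_name)
if os.path.exists(job_dir):
    # Training refuses to overwrite an existing experiment, so a different
    # name has to be passed via --job_name in this case.
    print(f'{job_dir} already exists; choose a different --job_name.')
else:
    print(f'{job_dir} is free to use.')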

More Demos

Please find more training demos under ./scripts/training_demos/.

Inspect Training Results

Besides using TensorBoard to track the training process, the raw results (e.g., training losses and running time) are saved in JSON format. They can be easily inspected with the following script:

import json

file_name = '<PATH_TO_WORK_DIR>/log.json'  # e.g., work_dirs/stylegan2_ffhq256/log.json

data_entries = []
with open(file_name, 'r') as f:
    for line in f:
        data_entry = json.loads(line)
        data_entries.append(data_entry)

# An example of data entry
# {"Loss/D Fake": 0.4833524551040682, "Loss/D Real": 0.4966000154727226, "Loss/G": 1.1439273656869773, "Learning Rate/Discriminator": 0.002352941082790494, "Learning Rate/Generator": 0.0020000000949949026, "data time": 0.0036810599267482758, "iter time": 0.24490128830075264, "run time": 66108.140625}

Convert Pre-trained Models

See Model Conversion for details.

Prepare Datasets

See Dataset Preparation for details.

Develop

See Contributing Guide for details.

License

The project is under the MIT License.

Acknowledgement

This repository originates from GenForce, with all modules carefully optimized to make it more flexible and robust for distributed training. On top of GenForce, which only provides StyleGAN training, this repository also supports training StyleGAN2 and StyleGAN3, both of which are fully reproduced. Any new method is welcome to be merged into this repository! Please refer to the Develop section.

Contributors

The main contributors are listed as follows.

Member Contribution
Yujun Shen Refactor and optimize the entire codebase and reproduce state-of-the-art approaches.
Zhiyi Zhang Contribute to a number of sub-modules and functions, especially dataset related.
Dingdong Yang Contribute to DALI data loading acceleration.
Yinghao Xu Originally contribute to runner and loss functions in GenForce.
Ceyuan Yang Originally contribute to data loader in GenForce.
Jiapeng Zhu Originally contribute to evaluation metrics in GenForce.

BibTex

We open-source this library to the community to facilitate research. If you like our work and use the codebase in your projects, please cite our work as follows.

@misc{hammer2022,
  title =        {Hammer: An Efficient Toolkit for Training Deep Models.},
  author =       {Shen, Yujun and Zhang, Zhiyi and Yang, Dingdong and Xu, Yinghao and Yang, Ceyuan and Zhu, Jiapeng},
  howpublished = {\url{https://github.com/bytedance/Hammer}},
  year =         {2022}
}