
Overview

AnimeGANv2

「Open Source」. The improved version of AnimeGAN.
「Project Page」 | Landscape photos/videos to anime

News
(2020.12.25) AnimeGANv3 will be released along with its paper in the spring of 2021.
(2021.02.21) The PyTorch version of AnimeGANv2 has been released; thanks to @bryandlee for his contribution.

Focus:

Anime style       Film                              Picture Number   Quality   Download Style Dataset
Miyazaki Hayao    The Wind Rises                    1752             1080p     Link
Makoto Shinkai    Your Name & Weathering with You   1445             BD
Kon Satoshi       Paprika                           1284             BDRip

     Different styles of training have different loss weights!

News:

The improvement directions of AnimeGANv2 mainly include the following 4 points:  
  • 1. Solve the problem of high-frequency artifacts in the generated image.

  • 2. Easy to train, and directly reproduces the effects shown in the paper.

  • 3. Further reduce the number of parameters of the generator network (generator size: 8.17 MB); the lite version has an even smaller generator model.

  • 4. Use new, high-quality style data taken from BD movies wherever possible.

The original AnimeGAN can be accessed from here.


Requirements

  • python 3.6
  • tensorflow-gpu
    • tensorflow-gpu 1.8.0 (ubuntu, GPU 1080Ti or Titan xp, cuda 9.0, cudnn 7.1.3)
    • tensorflow-gpu 1.15.0 (ubuntu, GPU 2080Ti, cuda 10.0.130, cudnn 7.6.0)
  • opencv
  • tqdm
  • numpy
  • glob
  • argparse

Usage

1. Download vgg19

vgg19.npy

2. Download Train/Val Photo dataset

Link

3. Do edge_smooth

python edge_smooth.py --dataset Hayao --img_size 256
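
For context, edge smoothing in this line of work (following CartoonGAN) usually means detecting edges, dilating them into a band, and Gaussian-blurring only that band, so the model learns to produce clean anime-style edges. A minimal OpenCV sketch of that idea, with illustrative kernel sizes and thresholds rather than the exact values used by edge_smooth.py:

    import cv2
    import numpy as np

    def smooth_edges(img_bgr, kernel_size=5, canny_low=100, canny_high=200):
        # Detect edges, dilate them into a band, and blur only that band.
        # Kernel sizes and Canny thresholds are illustrative, not the repo's exact values.
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, canny_low, canny_high)
        band = cv2.dilate(edges, np.ones((kernel_size, kernel_size), np.uint8))
        blurred = cv2.GaussianBlur(img_bgr, (kernel_size, kernel_size), 0)
        result = img_bgr.copy()
        result[band > 0] = blurred[band > 0]
        return result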

4. Calculate the three-channel (BGR) color difference

python data_mean.py --dataset Hayao
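
The resulting --data_mean values used in the next step (e.g. [13.1360, -8.6698, -4.4661] for Hayao) describe how far each BGR channel of the style dataset deviates from its overall brightness; note that the three offsets sum to roughly zero. A rough sketch of that computation, which may differ in detail from data_mean.py:

    import glob
    import cv2
    import numpy as np

    def bgr_mean_offset(style_dir):
        # Average BGR value of every style image, then subtract the overall mean
        # so the result expresses each channel's deviation from average brightness.
        per_image = []
        for path in glob.glob(style_dir + "/*.jpg"):
            img = cv2.imread(path)                      # loaded as BGR, uint8
            per_image.append(img.reshape(-1, 3).mean(axis=0))
        channel_mean = np.mean(per_image, axis=0)       # dataset-wide B, G, R means
        return channel_mean - channel_mean.mean()       # per-channel offsets, sum ~0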

5. Train

python main.py --phase train --dataset Hayao --data_mean [13.1360,-8.6698,-4.4661] --epoch 101 --init_epoch 10
For the light version: python main.py --phase train --dataset Hayao --data_mean [13.1360,-8.6698,-4.4661] --light --epoch 101 --init_epoch 10

6. Extract the weights of the generator

python get_generator_ckpt.py --checkpoint_dir ../checkpoint/AnimeGAN_Hayao_lsgan_300_300_1_2_10_1 --style_name Hayao
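
Conceptually, this step rebuilds the generator graph, restores its variables from the full training checkpoint, and re-saves them on their own as a much smaller inference checkpoint. A hedged TF1 sketch of that idea (the variable scope name and output path here are assumptions, not necessarily what get_generator_ckpt.py uses):

    import tensorflow as tf

    # Assumes the generator graph has already been rebuilt under the "generator" scope.
    gen_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='generator')
    saver = tf.train.Saver(var_list=gen_vars)

    with tf.Session() as sess:
        # Restore only the generator variables from the full training checkpoint...
        ckpt = tf.train.latest_checkpoint('../checkpoint/AnimeGAN_Hayao_lsgan_300_300_1_2_10_1')
        saver.restore(sess, ckpt)
        # ...and write them out as a standalone, lightweight checkpoint for inference.
        saver.save(sess, 'checkpoint/generator_Hayao_weight/Hayao')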

7. Inference

python test.py --checkpoint_dir checkpoint/generator_Hayao_weight --test_dir dataset/test/HR_photo --style_name Hayao/HR_photo
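
Test photos are fed to the generator after the same kind of preprocessing used in training: pixel values scaled to [-1, 1] and, typically, height and width snapped to multiples of 32 so the down- and upsampling stages line up. A hedged sketch of that preprocessing (test.py's exact resize policy and channel order may differ):

    import cv2
    import numpy as np

    def preprocess(path, multiple=32):
        # Load a photo, snap H and W down to multiples of `multiple`, scale to [-1, 1].
        img = cv2.imread(path)[:, :, ::-1]              # BGR -> RGB
        h = (img.shape[0] // multiple) * multiple
        w = (img.shape[1] // multiple) * multiple
        img = cv2.resize(img, (w, h))                   # cv2.resize takes (width, height)
        return np.expand_dims(img.astype(np.float32) / 127.5 - 1.0, axis=0)  # 1xHxWx3 batch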

8. Convert video to anime

python video2anime.py --video video/input/お花見.mp4 --checkpoint_dir checkpoint/generator_Paprika_weight
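
Video conversion is essentially the image pipeline applied frame by frame and re-encoded at the original frame rate. A minimal OpenCV loop of that general shape, where convert_frame is a hypothetical stand-in for running the generator on a single frame:

    import cv2

    def video_to_anime(src_path, dst_path, convert_frame):
        # Read src_path frame by frame, stylize each frame, write the result to dst_path.
        cap = cv2.VideoCapture(src_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            out.write(convert_frame(frame))   # BGR frame in, stylized BGR frame out, same size
        cap.release()
        out.release()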


Results


😍 Photo to Paprika Style

😍 Photo to Hayao Style

😍 Photo to Shinkai Style

License

This repo is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, and scientific publications. Permission is granted to use AnimeGANv2 provided that you agree to the license terms. For commercial use, please contact us by email to obtain an authorization letter.

Author

Xin Chen

Comments
  • Training code coming soon?


    Hello Tachibana-san, great work and congratulations on AnimeGANv2. I had good success converting the models and running them on Android. However, latency is still an issue: it takes about 500 ms to run a 128x128 image patch using TensorFlow on Android (I tried TFLite, but strangely it increases the inference time). I want to modify the network architecture and optimize its performance further to make it a real-time application (under 100 ms). So, to cut a long story short, are you planning to release the training code in the near future? :)

    Thank you.

    opened by maderix 10
  • Strange G_vgg loss curve


    Hello, thank you for posting this great work!

    I have retrained the model with a customized dataset; the results look great, but the loss curves seem strange to me.

    The adversarial loss seems OK: I set the weights for D and G to 200 and 300, respectively, and the losses are approaching equilibrium.

    However, the G_vgg loss, which consists of c_loss, s_loss, color_loss, and tv_loss, reaches its minimum at around epoch 30 and then starts increasing. Looking at each individual loss within G_vgg_loss, only s_loss keeps decreasing over time; all the others start increasing after epoch 30.

    Interestingly, the validation samples from epoch 100 are clearly better than the ones from epoch 30. Has anyone experienced the same?

    opened by HaozhouPang 4
  • Run on CPU instead of GPU


    Hi, I'm trying to run this project, but I have a little problem: when I start the train phase, the code runs only on the CPU, even though I installed cudatoolkit and tensorflow-gpu. Can you help me?

    2020-12-06 18:00:45.278352: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2020-12-06 18:00:45.302721: E tensorflow/stream_executor/cuda/cuda_driver.cc:397] failed call to cuInit: CUDA_ERROR_NO_DEVICE                                                              
    2020-12-06 18:00:45.302766: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:158] retrieving CUDA diagnostic information for host: host-name
    2020-12-06 18:00:45.302775: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:165] hostname: host-name
    2020-12-06 18:00:45.302816: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:189] libcuda reported version is: 450.66.0
    2020-12-06 18:00:45.302853: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:193] kernel reported version is: 450.66.0
    2020-12-06 18:00:45.302863: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:300] kernel version seems to match DSO: 450.66.0
    
    npy file loaded -------  vgg19_weight/vgg19.npy
    ##### Information #####
    # gan type :  lsgan
    # light :  False
    # dataset :  Hayao
    # max dataset number :  6656
    # batch_size :  12
    # epoch :  101
    # init_epoch :  10
    # training image size [H, W] :  [256, 256]
    # g_adv_weight,d_adv_weight,con_weight,sty_weight,color_weight,tv_weight :  300.0 300.0 1.5 2.5 10.0 1.0
    # init_lr,g_lr,d_lr :  0.0002 2e-05 4e-05
    # training_rate G -- D: 1 : 1
    build model finished: 0.138872s
    build model finished: 0.130571s
    build model finished: 0.120662s
    build model finished: 0.127711s
    build model finished: 0.123440s
    G:
    ---------
    Variables: name (type shape) [size]
    ---------
    generator/G_MODEL/A/Conv/weights:0 (float32_ref 7x7x3x32) [4704, bytes: 18816]
    generator/G_MODEL/A/LayerNorm/beta:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/A/LayerNorm/gamma:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/A/Conv_1/weights:0 (float32_ref 3x3x32x64) [18432, bytes: 73728]
    generator/G_MODEL/A/LayerNorm_1/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/A/LayerNorm_1/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/A/Conv_2/weights:0 (float32_ref 3x3x64x64) [36864, bytes: 147456]
    generator/G_MODEL/A/LayerNorm_2/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/A/LayerNorm_2/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/B/Conv/weights:0 (float32_ref 3x3x64x128) [73728, bytes: 294912]
    generator/G_MODEL/B/LayerNorm/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/B/LayerNorm/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/B/Conv_1/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/B/LayerNorm_1/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/B/LayerNorm_1/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/Conv/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/C/LayerNorm/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/LayerNorm/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/r1/Conv/weights:0 (float32_ref 1x1x128x256) [32768, bytes: 131072]
    generator/G_MODEL/C/r1/LayerNorm/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/LayerNorm/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/r1/w:0 (float32_ref 3x3x256x1) [2304, bytes: 9216]
    generator/G_MODEL/C/r1/r1/bias:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/1/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/1/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/Conv_1/weights:0 (float32_ref 1x1x256x256) [65536, bytes: 262144]
    generator/G_MODEL/C/r1/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r2/Conv/weights:0 (float32_ref 1x1x256x512) [131072, bytes: 524288]
    generator/G_MODEL/C/r2/LayerNorm/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/LayerNorm/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/r2/w:0 (float32_ref 3x3x512x1) [4608, bytes: 18432]
    generator/G_MODEL/C/r2/r2/bias:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/1/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/1/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/Conv_1/weights:0 (float32_ref 1x1x512x256) [131072, bytes: 524288]
    generator/G_MODEL/C/r2/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r2/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r3/Conv/weights:0 (float32_ref 1x1x256x512) [131072, bytes: 524288]
    generator/G_MODEL/C/r3/LayerNorm/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/LayerNorm/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/r3/w:0 (float32_ref 3x3x512x1) [4608, bytes: 18432]
    generator/G_MODEL/C/r3/r3/bias:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/1/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/1/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/Conv_1/weights:0 (float32_ref 1x1x512x256) [131072, bytes: 524288]
    generator/G_MODEL/C/r3/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r3/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r4/Conv/weights:0 (float32_ref 1x1x256x512) [131072, bytes: 524288]
    generator/G_MODEL/C/r4/LayerNorm/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/LayerNorm/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/r4/w:0 (float32_ref 3x3x512x1) [4608, bytes: 18432]
    generator/G_MODEL/C/r4/r4/bias:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/1/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/1/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/Conv_1/weights:0 (float32_ref 1x1x512x256) [131072, bytes: 524288]
    generator/G_MODEL/C/r4/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r4/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/Conv_1/weights:0 (float32_ref 3x3x256x128) [294912, bytes: 1179648]
    generator/G_MODEL/C/LayerNorm_1/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/LayerNorm_1/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/Conv/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/D/LayerNorm/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/LayerNorm/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/Conv_1/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/D/LayerNorm_1/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/LayerNorm_1/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/E/Conv/weights:0 (float32_ref 3x3x128x64) [73728, bytes: 294912]
    generator/G_MODEL/E/LayerNorm/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/LayerNorm/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/Conv_1/weights:0 (float32_ref 3x3x64x64) [36864, bytes: 147456]
    generator/G_MODEL/E/LayerNorm_1/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/LayerNorm_1/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/Conv_2/weights:0 (float32_ref 7x7x64x32) [100352, bytes: 401408]
    generator/G_MODEL/E/LayerNorm_2/beta:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/E/LayerNorm_2/gamma:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/out_layer/Conv/weights:0 (float32_ref 1x1x32x3) [96, bytes: 384]
    Total size of variables: 2143552
    Total bytes of variables: 8574208
     [*] Reading checkpoints...
     [*] Failed to find a checkpoint
     [!] Load failed...
    Epoch:   0 Step:     0 /   554  time: 80.342747 s init_v_loss: 592.22143555  mean_v_loss: 592.22143555
    
    opened by amirzenoozi 3
  • Add ⭐️Weights and Biases⭐️ logging


    Hey @TachibanaYoshino 👋, this PR aims to add basic Weights & Biases metric logging by appending to the existing codebase with minimal changes.

    The changes can be summarized as follows:

    Add three extra arguments, --use_wandb, --wandb_project, and --wandb_entity, which specify whether to use wandb, the name of the project to use ("AnimeGANv2" by default), and the name of the entity to use.

    opened by SauravMaheshkar 2
  • Can I train a model by using multiple GPUs?


    Thank you for your awesome project. I think if I could use multiple GPUs for training, it would make things more efficient. Hope to get some advice from you. Thanks.

    opened by MorningStarJ 2
  • typo on the folders


    Hello author! Your folder naming has a typo. It's kinda annoying to rename it again and again because I'm using Google Colab, and I tried to use the Shinkai style.

    opened by IchimakiKasura 2
  • issue saving checkpoints of model


    Hello! When I try to train the model, I get the following error when the code tries to save the checkpoint:

    Traceback (most recent call last):
      File "main.py", line 115, in <module>
        main()
      File "main.py", line 107, in main
        gan.train()
      File "/content/drive/.shortcut-targets-by-id/1X8hfrOWE2KxmaJG4LFKH9ydVQ4BA7oyZ/cs7643-final-project/AnimeGANv2.py", line 302, in train
        self.save(self.checkpoint_dir, epoch)
      File "/content/drive/.shortcut-targets-by-id/1X8hfrOWE2KxmaJG4LFKH9ydVQ4BA7oyZ/cs7643-final-project/AnimeGANv2.py", line 341, in save
        self.saver.save(self.sess, os.path.join(checkpoint_dir, self.model_name + '.model'), global_step=step)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/training/saver.py", line 1186, in save
        save_relative_paths=self._save_relative_paths)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/training/checkpoint_management.py", line 231, in update_checkpoint_state_internal
        last_preserved_timestamp=last_preserved_timestamp)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/training/checkpoint_management.py", line 110, in generate_checkpoint_state_proto
        model_checkpoint_path = os.path.relpath(model_checkpoint_path, save_dir)
      File "/usr/lib/python3.7/posixpath.py", line 475, in relpath
        start_list = [x for x in abspath(start).split(sep) if x]
      File "/usr/lib/python3.7/posixpath.py", line 383, in abspath
        cwd = os.getcwd()
    FileNotFoundError: [Errno 2] No such file or directory
    

    I mounted my Google Drive into Colab and am using Colab to train the model. When I check my checkpoint folder, I have two files there, but it appears that I am missing the checkpoint binary file and the .meta file. Any idea why this could be happening?

    opened by wooae 1
  • Cannot understand rgb2yuv function code


    def rgb2yuv(rgb):
        """
        Convert RGB image into YUV https://en.wikipedia.org/wiki/YUV
        """
        rgb = (rgb + 1.0)/2.0
        return tf.image.rgb_to_yuv(rgb)
    

    tf.image.rgb_to_yuv(rgb) performs the rgb_to_yuv op, so I can't understand what this line of code means: "rgb = (rgb + 1.0)/2.0"

    opened by wan-h 1
  • What contributed to the better performance of the model compared to your earlier model?


    Hi, thanks for sharing your work. What in your opinion was the key to achieving better performance compared to your earlier model (v1) and/or other models?

    I've roughly seen the code of this repo but I can't figure it out.

    opened by xiankgx 1
  • How to use 512 x 512 or higher-definition pictures for training


    I want to use 512 x 512 or higher-resolution images; my plan is as follows:

    1. ffmpeg extracts 1080 x 1080 pictures, then scale them to 512 x 512
    2. python edge_smooth.py --dataset xxxx --img_size 512
    3. python train.py --dataset xxxx --epoch 101 --init_epoch 10

    But I see that the pictures in train_photo under the dataset will also be used for training, so do the pictures in train_photo need to be updated to 512 x 512 as well?
    opened by mjgaga 0
  • Could you share how you achieved the improvements that you mentioned in the readme?


    Hi, could you share how you achieved these three improvements that you mentioned in the readme?


    1. Solve the problem of high-frequency artifacts in the generated image.

    2. Easy to train, and directly reproduces the effects shown in the paper.

    3. Further reduce the number of parameters of the generator network (generator size: 8.17 MB); the lite version has an even smaller generator model.


    opened by kasim0226 0
  • How to train face model?


    Is the training method the same as for training on landscape photos, with the dataset being the only difference? As long as the human face data is aligned, plus aligned anime face data, is that right?

    opened by baixinping618 0