[Open Source]. The improved version of AnimeGAN. Landscape photos/videos to anime

Overview

AnimeGANv2

「Open Source」. The improved version of AnimeGAN.
「Project Page」 | Landscape photos/videos to anime

News
(2020.12.25) AnimeGANv3 will be released along with its paper in the spring of 2021.
(2021.02.21) The PyTorch version of AnimeGANv2 has been released; thanks to @bryandlee for his contribution.

Focus:

| Anime style    | Film                            | Picture Number | Quality | Download Style Dataset |
|----------------|---------------------------------|----------------|---------|------------------------|
| Miyazaki Hayao | The Wind Rises                  | 1752           | 1080p   | Link                   |
| Makoto Shinkai | Your Name & Weathering with You | 1445           | BD      |                        |
| Kon Satoshi    | Paprika                         | 1284           | BDRip   |                        |

     Different styles use different loss weights during training!

Improvements:

The improvement directions of AnimeGANv2 mainly include the following 4 points:  
  • 1. Solves the problem of high-frequency artifacts in the generated images.

  • 2. It is easy to train and directly achieves the effects shown in the paper.

  • 3. Further reduces the number of parameters of the generator network (generator size: 8.17 MB); the lite version has an even smaller generator model.

  • 4. Uses new, high-quality style data, taken from BD movies as much as possible.

          AnimeGAN, the original version, can be accessed from here.


Requirements

  • python 3.6
  • tensorflow-gpu
    • tensorflow-gpu 1.8.0 (ubuntu, GPU 1080Ti or Titan xp, cuda 9.0, cudnn 7.1.3)
    • tensorflow-gpu 1.15.0 (ubuntu, GPU 2080Ti, cuda 10.0.130, cudnn 7.6.0)
  • opencv
  • tqdm
  • numpy
  • glob
  • argparse
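
The Python dependencies can typically be installed with pip (glob and argparse ship with the Python standard library, so they need no separate install); the exact tensorflow-gpu pin depends on your CUDA setup, e.g.:

pip install tensorflow-gpu==1.15.0 opencv-python tqdm numpy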

Usage

1. Download vgg19

vgg19.npy

2. Download Train/Val Photo dataset

Link

3. Do edge_smooth

python edge_smooth.py --dataset Hayao --img_size 256
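
For intuition, edge smoothing in the AnimeGAN line of work (following CartoonGAN) blurs a band around the detected edges of each style image, giving the discriminator a "non-anime-edge" counterexample. Below is a minimal sketch of the idea, assuming Canny edges plus dilation and Gaussian blur; the actual edge_smooth.py may differ in kernel sizes and details:

    import cv2
    import numpy as np

    def edge_smooth(img_path, size=256):
        """Blur a dilated edge region of a style image (sketch of the idea)."""
        img = cv2.resize(cv2.imread(img_path), (size, size))
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)                        # detect edges
        mask = cv2.dilate(edges, np.ones((5, 5), np.uint8)) > 0  # widen edge band
        out = img.copy()
        out[mask] = cv2.GaussianBlur(img, (5, 5), 0)[mask]       # blur only near edges
        return out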

4. Calculate the three-channel (BGR) color difference

python data_mean.py --dataset Hayao
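
As a rough sketch of what this step computes (inferred from how --data_mean is consumed in step 5, so treat it as an assumption): the per-channel BGR means of the style dataset, expressed as offsets from their common mean, which is why the three printed values sum to approximately zero:

    import cv2
    import numpy as np
    from glob import glob

    def channel_mean_diff(image_dir):
        """Per-channel (BGR) mean minus the overall mean -- a sketch, not data_mean.py."""
        means = np.zeros(3)
        paths = glob(image_dir + '/*.jpg')
        for p in paths:
            means += cv2.imread(p).mean(axis=(0, 1))  # per-channel mean of one image
        means /= len(paths)
        return means - means.mean()  # e.g. [13.1360, -8.6698, -4.4661] sums to ~0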

5. Train

python main.py --phase train --dataset Hayao --data_mean [13.1360,-8.6698,-4.4661] --epoch 101 --init_epoch 10
For the light version: python main.py --phase train --dataset Hayao --data_mean [13.1360,-8.6698,-4.4661] --light --epoch 101 --init_epoch 10

6. Extract the weights of the generator

python get_generator_ckpt.py --checkpoint_dir ../checkpoint/AnimeGAN_Hayao_lsgan_300_300_1_2_10_1 --style_name Hayao

7. Inference

python test.py --checkpoint_dir checkpoint/generator_Hayao_weight --test_dir dataset/test/HR_photo --style_name Hayao/HR_photo
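
One practical detail: since the generator downsamples and then upsamples its feature maps, test images are usually resized so that height and width are multiples of 32, and pixel values are mapped to [-1, 1] before being fed to the network. A sketch of that preprocessing (the function name here is illustrative, not the repo's exact helper):

    import cv2
    import numpy as np

    def preprocess(img_path):
        """Resize to multiples of 32 and scale pixels to [-1, 1] (illustrative)."""
        img = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
        h, w = img.shape[:2]
        img = cv2.resize(img, (w // 32 * 32, h // 32 * 32))  # generator-friendly size
        return img.astype(np.float32) / 127.5 - 1.0          # [0, 255] -> [-1, 1]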

8. Convert video to anime

python video2anime.py --video video/input/お花見.mp4 --checkpoint_dir checkpoint/generator_Paprika_weight
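
Conceptually, video conversion is just frame-by-frame inference. Here is a minimal sketch of the loop, assuming a run_generator(frame) callable that wraps the loaded checkpoint and returns a stylized uint8 BGR frame of the same size (a placeholder, not an actual function in the repo):

    import cv2

    def video_to_anime(in_path, out_path, run_generator):
        """Read a video, stylize each frame, and write the result (sketch)."""
        cap = cv2.VideoCapture(in_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
        writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, size)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            writer.write(run_generator(frame))  # stylized frame, same size as input
        cap.release()
        writer.release()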


Results


😍 Photo to Paprika Style

😍 Photo to Hayao Style

😍 Photo to Shinkai Style

License

This repo is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, and scientific publications. Permission is granted to use AnimeGANv2 provided that you agree to my license terms. For commercial use, please contact us via email so we can help you obtain an authorization letter.

Author

Xin Chen

Comments
  • Training code coming soon?

    Hello Tachibana-san, great work and congratulations on AnimeGANv2. I had good success converting the models and running them on Android. However, latency is still an issue: it takes about 500 ms to run a 128x128 image patch using TensorFlow Android (I tried TFLite, but it strangely increases the inference time). I want to modify the network architecture and optimize its performance further to make it a real-time application (under 100 ms). So, to cut a long story short, are you planning to release the training code in the near future? :)

    Thank you.
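
    A TF1-style frozen-graph conversion of the kind described above might look like the sketch below; the graph file and tensor names are assumptions, not the repo's actual node names:

        import tensorflow as tf

        # Hypothetical file and node names -- inspect the frozen graph for the real ones.
        converter = tf.lite.TFLiteConverter.from_frozen_graph(
            'generator_frozen.pb',
            input_arrays=['test'],
            output_arrays=['generator/G_MODEL/out_layer/Conv/Conv2D'])
        open('animeganv2.tflite', 'wb').write(converter.convert())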

    opened by maderix 10
  • Strange G_vgg loss curve

    Hello, thank you for posting this great work!

    I have retrained the model with a customized dataset; the results look great, but the loss curves seem strange to me.

    The adversarial loss seems OK: I set the weights for D and G to 200 and 300, respectively, and the losses are approaching equilibrium.

    However, the G_vgg loss, which consists of c_loss, s_loss, color_loss, and tv_loss, reaches its minimum at around epoch 30 and then starts increasing. I looked at each individual loss within G_vgg_loss; only the s_loss keeps decreasing over time, and all the others start increasing after epoch 30.

    Interestingly, the validation samples from epoch 100 are noticeably better than the ones from epoch 30. Does anyone experience the same?

    opened by HaozhouPang 4
  • Run on CPU instead of GPU

    Hi, I am trying to run this project but have a small problem: when I start the training phase, the code runs only on the CPU, even though I installed cudatoolkit and tensorflow-gpu. Can you help me?

    2020-12-06 18:00:45.278352: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2020-12-06 18:00:45.302721: E tensorflow/stream_executor/cuda/cuda_driver.cc:397] failed call to cuInit: CUDA_ERROR_NO_DEVICE                                                              
    2020-12-06 18:00:45.302766: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:158] retrieving CUDA diagnostic information for host: host-name
    2020-12-06 18:00:45.302775: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:165] hostname: host-name
    2020-12-06 18:00:45.302816: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:189] libcuda reported version is: 450.66.0
    2020-12-06 18:00:45.302853: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:193] kernel reported version is: 450.66.0
    2020-12-06 18:00:45.302863: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:300] kernel version seems to match DSO: 450.66.0
    
    npy file loaded -------  vgg19_weight/vgg19.npy
    ##### Information #####
    # gan type :  lsgan
    # light :  False
    # dataset :  Hayao
    # max dataset number :  6656
    # batch_size :  12
    # epoch :  101
    # init_epoch :  10
    # training image size [H, W] :  [256, 256]
    # g_adv_weight,d_adv_weight,con_weight,sty_weight,color_weight,tv_weight :  300.0 300.0 1.5 2.5 10.0 1.0
    # init_lr,g_lr,d_lr :  0.0002 2e-05 4e-05
    # training_rate G -- D: 1 : 1
    build model finished: 0.138872s
    build model finished: 0.130571s
    build model finished: 0.120662s
    build model finished: 0.127711s
    build model finished: 0.123440s
    G:
    ---------
    Variables: name (type shape) [size]
    ---------
    generator/G_MODEL/A/Conv/weights:0 (float32_ref 7x7x3x32) [4704, bytes: 18816]
    generator/G_MODEL/A/LayerNorm/beta:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/A/LayerNorm/gamma:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/A/Conv_1/weights:0 (float32_ref 3x3x32x64) [18432, bytes: 73728]
    generator/G_MODEL/A/LayerNorm_1/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/A/LayerNorm_1/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/A/Conv_2/weights:0 (float32_ref 3x3x64x64) [36864, bytes: 147456]
    generator/G_MODEL/A/LayerNorm_2/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/A/LayerNorm_2/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/B/Conv/weights:0 (float32_ref 3x3x64x128) [73728, bytes: 294912]
    generator/G_MODEL/B/LayerNorm/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/B/LayerNorm/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/B/Conv_1/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/B/LayerNorm_1/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/B/LayerNorm_1/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/Conv/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/C/LayerNorm/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/LayerNorm/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/r1/Conv/weights:0 (float32_ref 1x1x128x256) [32768, bytes: 131072]
    generator/G_MODEL/C/r1/LayerNorm/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/LayerNorm/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/r1/w:0 (float32_ref 3x3x256x1) [2304, bytes: 9216]
    generator/G_MODEL/C/r1/r1/bias:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/1/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/1/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/Conv_1/weights:0 (float32_ref 1x1x256x256) [65536, bytes: 262144]
    generator/G_MODEL/C/r1/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r2/Conv/weights:0 (float32_ref 1x1x256x512) [131072, bytes: 524288]
    generator/G_MODEL/C/r2/LayerNorm/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/LayerNorm/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/r2/w:0 (float32_ref 3x3x512x1) [4608, bytes: 18432]
    generator/G_MODEL/C/r2/r2/bias:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/1/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/1/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/Conv_1/weights:0 (float32_ref 1x1x512x256) [131072, bytes: 524288]
    generator/G_MODEL/C/r2/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r2/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r3/Conv/weights:0 (float32_ref 1x1x256x512) [131072, bytes: 524288]
    generator/G_MODEL/C/r3/LayerNorm/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/LayerNorm/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/r3/w:0 (float32_ref 3x3x512x1) [4608, bytes: 18432]
    generator/G_MODEL/C/r3/r3/bias:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/1/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/1/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/Conv_1/weights:0 (float32_ref 1x1x512x256) [131072, bytes: 524288]
    generator/G_MODEL/C/r3/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r3/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r4/Conv/weights:0 (float32_ref 1x1x256x512) [131072, bytes: 524288]
    generator/G_MODEL/C/r4/LayerNorm/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/LayerNorm/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/r4/w:0 (float32_ref 3x3x512x1) [4608, bytes: 18432]
    generator/G_MODEL/C/r4/r4/bias:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/1/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/1/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/Conv_1/weights:0 (float32_ref 1x1x512x256) [131072, bytes: 524288]
    generator/G_MODEL/C/r4/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r4/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/Conv_1/weights:0 (float32_ref 3x3x256x128) [294912, bytes: 1179648]
    generator/G_MODEL/C/LayerNorm_1/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/LayerNorm_1/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/Conv/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/D/LayerNorm/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/LayerNorm/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/Conv_1/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/D/LayerNorm_1/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/LayerNorm_1/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/E/Conv/weights:0 (float32_ref 3x3x128x64) [73728, bytes: 294912]
    generator/G_MODEL/E/LayerNorm/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/LayerNorm/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/Conv_1/weights:0 (float32_ref 3x3x64x64) [36864, bytes: 147456]
    generator/G_MODEL/E/LayerNorm_1/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/LayerNorm_1/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/Conv_2/weights:0 (float32_ref 7x7x64x32) [100352, bytes: 401408]
    generator/G_MODEL/E/LayerNorm_2/beta:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/E/LayerNorm_2/gamma:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/out_layer/Conv/weights:0 (float32_ref 1x1x32x3) [96, bytes: 384]
    Total size of variables: 2143552
    Total bytes of variables: 8574208
     [*] Reading checkpoints...
     [*] Failed to find a checkpoint
     [!] Load failed...
    Epoch:   0 Step:     0 /   554  time: 80.342747 s init_v_loss: 592.22143555  mean_v_loss: 592.22143555
    
    opened by amirzenoozi 3
  • Add ⭐️Weights and Biases⭐️ logging

    Hey @TachibanaYoshino 👋, this PR aims to add basic Weights and Biases metric logging by appending to the existing codebase with minimal changes.

    The changes can be summarized as follows:

    Adds three extra arguments, namely --use_wandb, --wandb_project, and --wandb_entity, which specify whether to use wandb, the name of the project to log to ("AnimeGANv2" by default), and the name of the entity to be used.
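
    Wiring such flags into an argparse-based training script might look roughly like this (a sketch, not the PR's actual diff):

        import argparse
        import wandb

        parser = argparse.ArgumentParser()
        parser.add_argument('--use_wandb', action='store_true', help='enable W&B logging')
        parser.add_argument('--wandb_project', type=str, default='AnimeGANv2')
        parser.add_argument('--wandb_entity', type=str, default=None)
        args = parser.parse_args()

        if args.use_wandb:
            wandb.init(project=args.wandb_project, entity=args.wandb_entity)
            # then, inside the training loop, e.g.:
            # wandb.log({'g_loss': g_loss, 'd_loss': d_loss})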

    opened by SauravMaheshkar 2
  • Can I train a model by using multiple GPUs?

    Thank you for your awesome project. I think that if I could train using multiple GPUs, it would make things more efficient. I hope to get some advice from you. Thanks.

    opened by MorningStarJ 2
  • Typo in the folder names

    Hello author! Your folder naming has a typo. It's kind of annoying to rename it again and again because I'm using Google Colab; I noticed it when I tried to use the Shinkai style.

    opened by IchimakiKasura 2
  • issue saving checkpoints of model

    Hello! When I try to train the model, I get the following error when the code tries to save the checkpoint:

    Traceback (most recent call last):
      File "main.py", line 115, in <module>
        main()
      File "main.py", line 107, in main
        gan.train()
      File "/content/drive/.shortcut-targets-by-id/1X8hfrOWE2KxmaJG4LFKH9ydVQ4BA7oyZ/cs7643-final-project/AnimeGANv2.py", line 302, in train
        self.save(self.checkpoint_dir, epoch)
      File "/content/drive/.shortcut-targets-by-id/1X8hfrOWE2KxmaJG4LFKH9ydVQ4BA7oyZ/cs7643-final-project/AnimeGANv2.py", line 341, in save
        self.saver.save(self.sess, os.path.join(checkpoint_dir, self.model_name + '.model'), global_step=step)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/training/saver.py", line 1186, in save
        save_relative_paths=self._save_relative_paths)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/training/checkpoint_management.py", line 231, in update_checkpoint_state_internal
        last_preserved_timestamp=last_preserved_timestamp)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/training/checkpoint_management.py", line 110, in generate_checkpoint_state_proto
        model_checkpoint_path = os.path.relpath(model_checkpoint_path, save_dir)
      File "/usr/lib/python3.7/posixpath.py", line 475, in relpath
        start_list = [x for x in abspath(start).split(sep) if x]
      File "/usr/lib/python3.7/posixpath.py", line 383, in abspath
        cwd = os.getcwd()
    FileNotFoundError: [Errno 2] No such file or directory
    

    I mounted my Google Drive into Colab and am using Colab to train the model. When I check my checkpoint folder, I have two files there, but it appears that I am missing the checkpoint binary file and the .meta file. Any idea why this could be happening?

    opened by wooae 1
  • Cannot understand rgb2yuv function code

    def rgb2yuv(rgb):
        """
        Convert RGB image into YUV https://en.wikipedia.org/wiki/YUV
        """
        # Rescale from [-1, 1] (the generator's tanh output range) to [0, 1],
        # since tf.image.rgb_to_yuv expects pixel values in [0, 1].
        rgb = (rgb + 1.0) / 2.0
        return tf.image.rgb_to_yuv(rgb)
    

    tf.image.rgb_to_yuv(rgb) already performs the rgb_to_yuv op, so I can't understand what this line of code means: "rgb = (rgb + 1.0)/2.0"

    opened by wan-h 1
  • What attributed to the better performance of the model compared to your earlier model?

    Hi, thanks for sharing your work. In your opinion, what was the key to achieving better performance compared to your earlier model (v1) and/or other models?

    I've skimmed the code of this repo, but I can't figure it out.

    opened by xiankgx 1
  • How to use 512 x 512 or higher-definition pictures for training

    I want to use 512 x 512 or higher-resolution images; my plan is as follows (see the ffmpeg sketch after this list):

    1. ffmpeg extracts 1080 x 1080 pictures, then scales them to 512 x 512
    2. python edge_smooth.py --dataset xxxx --img_size 512
    3. python train.py --dataset xxxx --epoch 101 --init_epoch 10

    But I see that the pictures in train_photo under the dataset will also be used for training, so do the pictures in train_photo also need to be updated to 512 x 512?
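
    For step 1, a single ffmpeg invocation can extract, crop, and scale the frames in one pass; this command is illustrative (it assumes a centered 1080 x 1080 crop and a sampling rate of 2 frames per second):

        ffmpeg -i input.mp4 -vf "fps=2,crop=1080:1080,scale=512:512" frames/%05d.png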
    opened by mjgaga 0
  • Could you share how you get the improvements that you mentioned in the readme?

    Hi, could you share how you achieved these 3 improvements that you mentioned in the readme?


    1. Solves the problem of high-frequency artifacts in the generated images.

    2. It is easy to train and directly achieves the effects shown in the paper.

    3. Further reduces the number of parameters of the generator network (generator size: 8.17 MB); the lite version has an even smaller generator model.


    opened by kasim0226 0
  • How to train face model?

    Is the training method the same as for training on landscape photos, with the only difference being the dataset? As long as the human face data is aligned, and the anime face data is aligned as well, is that right?

    opened by baixinping618 0