[Open Source] The improved version of AnimeGAN. Landscape photos/videos to anime

Overview

AnimeGANv2

「Open Source」 The improved version of AnimeGAN.
「Project Page」 | Landscape photos/videos to anime

News
(2020.12.25) AnimeGANv3 will be released along with its paper in the spring of 2021.
(2021.02.21) The PyTorch version of AnimeGANv2 has been released. Thanks to @bryandlee for his contribution.

Focus:

| Anime style    | Film                            | Picture Number | Quality | Download Style Dataset |
| -------------- | ------------------------------- | -------------- | ------- | ---------------------- |
| Miyazaki Hayao | The Wind Rises                  | 1752           | 1080p   | Link                   |
| Makoto Shinkai | Your Name & Weathering with you | 1445           | BD      |                        |
| Kon Satoshi    | Paprika                         | 1284           | BDRip   |                        |

     Training on different styles uses different loss weights!

Improvements:

The improvement directions of AnimeGANv2 mainly include the following 4 points:
  • 1. Solves the problem of high-frequency artifacts in the generated images.

  • 2. Easy to train, and directly achieves the results in the paper.

  • 3. Further reduces the number of parameters of the generator network (generator size: 8.17 MB); the lite version has an even smaller generator model.

  • 4. Uses new high-quality style data, taken from BD (Blu-ray) movies as much as possible.

          AnimeGAN can be accessed from here.


Requirements

  • python 3.6
  • tensorflow-gpu
    • tensorflow-gpu 1.8.0 (ubuntu, GPU 1080Ti or Titan xp, cuda 9.0, cudnn 7.1.3)
    • tensorflow-gpu 1.15.0 (ubuntu, GPU 2080Ti, cuda 10.0.130, cudnn 7.6.0)
  • opencv
  • tqdm
  • numpy
  • glob
  • argparse

Usage

1. Download vgg19

vgg19.npy

2. Download Train/Val Photo dataset

Link

3. Do edge_smooth

python edge_smooth.py --dataset Hayao --img_size 256
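For intuition, here is a minimal sketch of the CartoonGAN-style edge smoothing this step performs (the repo's edge_smooth.py is the authoritative version; the Canny thresholds and kernel sizes here are illustrative assumptions): detect edges, dilate them, and Gaussian-blur only the edge neighbourhoods of the style images.

import cv2
import numpy as np

def edge_smooth(img_path, img_size=256):
    # resize the style image to the training resolution
    img = cv2.resize(cv2.imread(img_path), (img_size, img_size))
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # binary edge map, then widen the edge regions
    edges = cv2.Canny(gray, 100, 200)
    dilated = cv2.dilate(edges, np.ones((5, 5), np.uint8))
    # blur only the pixels near edges, leave the rest untouched
    blurred = cv2.GaussianBlur(img, (5, 5), 0)
    result = img.copy()
    result[dilated != 0] = blurred[dilated != 0]
    return result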

4. Calculate the three-channel (BGR) color difference

python data_mean.py --dataset Hayao
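To see what the --data_mean numbers used in the next step are, here is a rough sketch of the computation data_mean.py presumably performs (an assumption, consistent with the three reported values summing to roughly zero): average each BGR channel over the style dataset and report its offset from the overall mean.

import glob
import cv2
import numpy as np

def channel_mean_diff(dataset_dir):
    # accumulate the per-channel (B, G, R) means over the whole dataset
    sums, n = np.zeros(3), 0
    for path in glob.glob(dataset_dir + '/*.jpg'):
        img = cv2.imread(path).astype(np.float64)
        sums += img.reshape(-1, 3).mean(axis=0)
        n += 1
    mean_bgr = sums / n
    # offset of each channel from the global mean; offsets sum to ~0
    return mean_bgr - mean_bgr.mean()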

5. Train

python main.py --phase train --dataset Hayao --data_mean [13.1360,-8.6698,-4.4661] --epoch 101 --init_epoch 10
For the light version: python main.py --phase train --dataset Hayao --data_mean [13.1360,-8.6698,-4.4661] --light --epoch 101 --init_epoch 10

6. Extract the weights of the generator

python get_generator_ckpt.py --checkpoint_dir ../checkpoint/AnimeGAN_Hayao_lsgan_300_300_1_2_10_1 --style_name Hayao
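In spirit, this step restores the full training checkpoint and re-saves only the generator variables. A hedged TF1-style sketch (get_generator_ckpt.py is the authoritative version; the 'generator' scope name matches the variable listing in the comments below, and the output path is illustrative):

import tensorflow as tf

ckpt = tf.train.latest_checkpoint('../checkpoint/AnimeGAN_Hayao_lsgan_300_300_1_2_10_1')
with tf.Session() as sess:
    # rebuild the training graph and restore all weights
    tf.train.import_meta_graph(ckpt + '.meta').restore(sess, ckpt)
    # keep only the variables under the generator scope
    gen_vars = [v for v in tf.global_variables() if v.name.startswith('generator')]
    tf.train.Saver(var_list=gen_vars).save(sess, 'checkpoint/generator_Hayao_weight/Hayao.model')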

7. Inference

python test.py --checkpoint_dir checkpoint/generator_Hayao_weight --test_dir dataset/test/HR_photo --style_name Hayao/HR_photo

8. Convert video to anime

python video2anime.py --video video/input/お花見.mp4 --checkpoint_dir checkpoint/generator_Paprika_weight
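Under the hood this is a frame-by-frame conversion. A minimal sketch of the pipeline, independent of the repo's video2anime.py (stylize is a hypothetical stand-in for the generator inference call):

import cv2

def video_to_anime(src, dst, stylize):
    cap = cv2.VideoCapture(src)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(dst, cv2.VideoWriter_fourcc(*'mp4v'), fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(stylize(frame))  # stylize: BGR frame -> BGR frame
    cap.release()
    out.release()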


Results


😍 Photo to Paprika Style

😍 Photo to Hayao Style

😍 Photo to Shinkai Style

License

This repo is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, and scientific publications. Permission is granted to use AnimeGANv2 provided that you agree to my license terms. For commercial use, please contact us via email so we can help you obtain an authorization letter.

Author

Xin Chen

Comments
  • Training code coming soon?


Hello Tachibana-san, great work and congratulations on AnimeGANv2. I had good success converting the models and running them on Android. However, latency is still an issue: it takes about 500 ms to run a 128x128 image patch using TensorFlow Android (I tried TFLite, but strangely it increases the inference time). I want to modify the network architecture and optimize its performance further to make this a real-time application (under 100 ms). So, to cut a long story short, are you planning to release the training code in the near future? :)

    Thank you.

    opened by maderix 10
  • Strange G_vgg loss curve


    Hello, thank you for posting this great work!

I have retrained the model on a customized dataset; the results look great, but the loss curves seem strange to me.


The adversarial loss seems OK. I set the weights for D and G to 200 and 300, respectively, and the losses approach equilibrium.

However, the G_vgg loss, which consists of c_loss, s_loss, color_loss, and tv_loss, bottoms out at around epoch 30 and then starts increasing. Looking at each individual component of G_vgg_loss, only the s_loss keeps decreasing over time; all the others start increasing after epoch 30.

Interestingly, the validation samples from epoch 100 are clearly better than the ones from epoch 30. Has anyone experienced the same?

    opened by HaozhouPang 4
• Run on CPU instead of GPU


Hi, I'm trying to run this project but have a small problem: when I start the training phase, the code runs only on the CPU, even though I installed cudatoolkit and tensorflow-gpu. Can you help me?

    2020-12-06 18:00:45.278352: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2020-12-06 18:00:45.302721: E tensorflow/stream_executor/cuda/cuda_driver.cc:397] failed call to cuInit: CUDA_ERROR_NO_DEVICE                                                              
    2020-12-06 18:00:45.302766: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:158] retrieving CUDA diagnostic information for host: host-name
    2020-12-06 18:00:45.302775: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:165] hostname: host-name
    2020-12-06 18:00:45.302816: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:189] libcuda reported version is: 450.66.0
    2020-12-06 18:00:45.302853: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:193] kernel reported version is: 450.66.0
    2020-12-06 18:00:45.302863: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:300] kernel version seems to match DSO: 450.66.0
    
    npy file loaded -------  vgg19_weight/vgg19.npy
    ##### Information #####
    # gan type :  lsgan
    # light :  False
    # dataset :  Hayao
    # max dataset number :  6656
    # batch_size :  12
    # epoch :  101
    # init_epoch :  10
    # training image size [H, W] :  [256, 256]
    # g_adv_weight,d_adv_weight,con_weight,sty_weight,color_weight,tv_weight :  300.0 300.0 1.5 2.5 10.0 1.0
    # init_lr,g_lr,d_lr :  0.0002 2e-05 4e-05
    # training_rate G -- D: 1 : 1
    build model finished: 0.138872s
    build model finished: 0.130571s
    build model finished: 0.120662s
    build model finished: 0.127711s
    build model finished: 0.123440s
    G:
    ---------
    Variables: name (type shape) [size]
    ---------
    generator/G_MODEL/A/Conv/weights:0 (float32_ref 7x7x3x32) [4704, bytes: 18816]
    generator/G_MODEL/A/LayerNorm/beta:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/A/LayerNorm/gamma:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/A/Conv_1/weights:0 (float32_ref 3x3x32x64) [18432, bytes: 73728]
    generator/G_MODEL/A/LayerNorm_1/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/A/LayerNorm_1/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/A/Conv_2/weights:0 (float32_ref 3x3x64x64) [36864, bytes: 147456]
    generator/G_MODEL/A/LayerNorm_2/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/A/LayerNorm_2/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/B/Conv/weights:0 (float32_ref 3x3x64x128) [73728, bytes: 294912]
    generator/G_MODEL/B/LayerNorm/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/B/LayerNorm/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/B/Conv_1/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/B/LayerNorm_1/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/B/LayerNorm_1/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/Conv/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/C/LayerNorm/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/LayerNorm/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/r1/Conv/weights:0 (float32_ref 1x1x128x256) [32768, bytes: 131072]
    generator/G_MODEL/C/r1/LayerNorm/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/LayerNorm/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/r1/w:0 (float32_ref 3x3x256x1) [2304, bytes: 9216]
    generator/G_MODEL/C/r1/r1/bias:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/1/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/1/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/Conv_1/weights:0 (float32_ref 1x1x256x256) [65536, bytes: 262144]
    generator/G_MODEL/C/r1/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r2/Conv/weights:0 (float32_ref 1x1x256x512) [131072, bytes: 524288]
    generator/G_MODEL/C/r2/LayerNorm/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/LayerNorm/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/r2/w:0 (float32_ref 3x3x512x1) [4608, bytes: 18432]
    generator/G_MODEL/C/r2/r2/bias:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/1/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/1/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/Conv_1/weights:0 (float32_ref 1x1x512x256) [131072, bytes: 524288]
    generator/G_MODEL/C/r2/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r2/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r3/Conv/weights:0 (float32_ref 1x1x256x512) [131072, bytes: 524288]
    generator/G_MODEL/C/r3/LayerNorm/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/LayerNorm/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/r3/w:0 (float32_ref 3x3x512x1) [4608, bytes: 18432]
    generator/G_MODEL/C/r3/r3/bias:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/1/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/1/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/Conv_1/weights:0 (float32_ref 1x1x512x256) [131072, bytes: 524288]
    generator/G_MODEL/C/r3/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r3/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r4/Conv/weights:0 (float32_ref 1x1x256x512) [131072, bytes: 524288]
    generator/G_MODEL/C/r4/LayerNorm/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/LayerNorm/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/r4/w:0 (float32_ref 3x3x512x1) [4608, bytes: 18432]
    generator/G_MODEL/C/r4/r4/bias:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/1/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/1/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/Conv_1/weights:0 (float32_ref 1x1x512x256) [131072, bytes: 524288]
    generator/G_MODEL/C/r4/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r4/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/Conv_1/weights:0 (float32_ref 3x3x256x128) [294912, bytes: 1179648]
    generator/G_MODEL/C/LayerNorm_1/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/LayerNorm_1/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/Conv/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/D/LayerNorm/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/LayerNorm/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/Conv_1/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/D/LayerNorm_1/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/LayerNorm_1/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/E/Conv/weights:0 (float32_ref 3x3x128x64) [73728, bytes: 294912]
    generator/G_MODEL/E/LayerNorm/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/LayerNorm/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/Conv_1/weights:0 (float32_ref 3x3x64x64) [36864, bytes: 147456]
    generator/G_MODEL/E/LayerNorm_1/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/LayerNorm_1/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/Conv_2/weights:0 (float32_ref 7x7x64x32) [100352, bytes: 401408]
    generator/G_MODEL/E/LayerNorm_2/beta:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/E/LayerNorm_2/gamma:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/out_layer/Conv/weights:0 (float32_ref 1x1x32x3) [96, bytes: 384]
    Total size of variables: 2143552
    Total bytes of variables: 8574208
     [*] Reading checkpoints...
     [*] Failed to find a checkpoint
     [!] Load failed...
    Epoch:   0 Step:     0 /   554  time: 80.342747 s init_v_loss: 592.22143555  mean_v_loss: 592.22143555
    
    opened by amirzenoozi 3
  • Add ⭐️Weights and Biases⭐️ logging


Hey @TachibanaYoshino 👋, this PR aims to add basic Weights and Biases metric logging by appending to the existing codebase with minimal changes.

The changes can be summarized as follows:

Adds 3 extra arguments, namely --use_wandb, --wandb_project and --wandb_entity, which specify whether to use wandb, the name of the project to be used ("AnimeGANv2" by default), and the name of the entity to be used, as sketched below.
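    A sketch of how the three flags described above could be wired up (argument names are from the PR description; the defaults and the wandb.init call are assumptions, not the PR's actual diff):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--use_wandb', action='store_true', help='enable W&B metric logging')
    parser.add_argument('--wandb_project', type=str, default='AnimeGANv2', help='W&B project name')
    parser.add_argument('--wandb_entity', type=str, default=None, help='W&B entity (user or team)')
    args = parser.parse_args()

    if args.use_wandb:
        import wandb  # only imported when logging is requested
        wandb.init(project=args.wandb_project, entity=args.wandb_entity)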

    opened by SauravMaheshkar 2
  • Can I train a model by using multiple GPUs?


Thank you for your awesome project. I think training with multiple GPUs would make things more efficient. I hope to get some advice from you. Thanks.

    opened by MorningStarJ 2
  • typo on the folders


Hello Author! Your folder naming has a typo. It's kind of annoying to rename it again and again, since I'm using Google Colab and tried to use Shinkai.

    opened by IchimakiKasura 2
  • issue saving checkpoints of model


Hello! When I try to train the model, I get the following error when the code tries to save the checkpoint:

    Traceback (most recent call last):
      File "main.py", line 115, in <module>
        main()
      File "main.py", line 107, in main
        gan.train()
      File "/content/drive/.shortcut-targets-by-id/1X8hfrOWE2KxmaJG4LFKH9ydVQ4BA7oyZ/cs7643-final-project/AnimeGANv2.py", line 302, in train
        self.save(self.checkpoint_dir, epoch)
      File "/content/drive/.shortcut-targets-by-id/1X8hfrOWE2KxmaJG4LFKH9ydVQ4BA7oyZ/cs7643-final-project/AnimeGANv2.py", line 341, in save
        self.saver.save(self.sess, os.path.join(checkpoint_dir, self.model_name + '.model'), global_step=step)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/training/saver.py", line 1186, in save
        save_relative_paths=self._save_relative_paths)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/training/checkpoint_management.py", line 231, in update_checkpoint_state_internal
        last_preserved_timestamp=last_preserved_timestamp)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/training/checkpoint_management.py", line 110, in generate_checkpoint_state_proto
        model_checkpoint_path = os.path.relpath(model_checkpoint_path, save_dir)
      File "/usr/lib/python3.7/posixpath.py", line 475, in relpath
        start_list = [x for x in abspath(start).split(sep) if x]
      File "/usr/lib/python3.7/posixpath.py", line 383, in abspath
        cwd = os.getcwd()
    FileNotFoundError: [Errno 2] No such file or directory
    

I mounted my Google Drive into Colab and am using Colab to train the model. When I check my checkpoint folder, I have two files there, but it appears that I am missing the checkpoint binary file and the .meta file. Any idea why this could be happening?

    opened by wooae 1
  • Cannot understand rgb2yuv function code


    import tensorflow as tf

    def rgb2yuv(rgb):
        """
        Convert RGB image into YUV https://en.wikipedia.org/wiki/YUV
        """
        rgb = (rgb + 1.0) / 2.0
        return tf.image.rgb_to_yuv(rgb)
    

    tf.image.rgb_to_yuv(rgb) performs the rgb_to_yuv op, so I can't understand what this line of code means: "rgb = (rgb + 1.0)/2.0"
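    (A likely explanation, inferred rather than confirmed in this thread: the generator's tanh output lies in [-1, 1], while tf.image.rgb_to_yuv is only well defined for RGB values in [0, 1], so the line rescales the range first. A quick check of the mapping:)

    import numpy as np

    x = np.array([-1.0, 0.0, 1.0])  # tanh-range values
    print((x + 1.0) / 2.0)          # -> [0.  0.5 1. ], the [0, 1] range rgb_to_yuv expects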

    opened by wan-h 1
• What contributed to the better performance of the model compared to your earlier model?


Hi, thanks for sharing your work. What, in your opinion, was the key to achieving better performance compared to your earlier model (v1) and/or other models?

I've skimmed the code of this repo, but I can't figure it out.

    opened by xiankgx 1
  • How to use 512 x 512 or higher-definition pictures for training


I want to use 512 x 512 or higher-resolution images; my plan is as follows:

    1. ffmpeg extracts 1080 x 1080 pictures, then scale them to 512 x 512
    2. python edge_smooth.py --dataset xxxx --img_size 512
    3. python train.py --dataset xxxx --epoch 101 --init_epoch 10

    But I see that the pictures in train_photo under the dataset will also be used for training, so do the pictures in train_photo need to be updated to 512 x 512?
    opened by mjgaga 0
  • Could you share how you get the improvements that you mentioned in the readme?


Hi, could you share how you achieved these 3 improvements that you mentioned in the readme?


    1. Solves the problem of high-frequency artifacts in the generated images.

    2. Easy to train, and directly achieves the results in the paper.

    3. Further reduces the number of parameters of the generator network (generator size: 8.17 MB); the lite version has an even smaller generator model.


    opened by kasim0226 0
  • How to train face model?


Is the training method the same as for training on landscape photos, with the only difference being the dataset? As long as the human face data is aligned, plus aligned anime face data, is that right?

    opened by baixinping618 0