SwinIR: Image Restoration Using Swin Transformer

Overview


This repository is the official PyTorch implementation of SwinIR: Image Restoration Using Swin Transformer (arxiv, supp). SwinIR achieves state-of-the-art performance in

  • bicubic/lightweight/real-world image SR
  • grayscale/color image denoising
  • JPEG compression artifact reduction

🚀 🚀 🚀 News:


Image restoration is a long-standing low-level vision problem that aims to restore high-quality images from low-quality images (e.g., downscaled, noisy and compressed images). While state-of-the-art image restoration methods are based on convolutional neural networks, few attempts have been made with Transformers, which show impressive performance on high-level vision tasks. In this paper, we propose a strong baseline model, SwinIR, for image restoration based on the Swin Transformer. SwinIR consists of three parts: shallow feature extraction, deep feature extraction and high-quality image reconstruction. In particular, the deep feature extraction module is composed of several residual Swin Transformer blocks (RSTB), each of which has several Swin Transformer layers together with a residual connection. We conduct experiments on three representative tasks: image super-resolution (including classical, lightweight and real-world image super-resolution), image denoising (including grayscale and color image denoising) and JPEG compression artifact reduction. Experimental results demonstrate that SwinIR outperforms state-of-the-art methods on different tasks by up to 0.14~0.45 dB, while the total number of parameters can be reduced by up to 67%.
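
To make the three-part design concrete, here is a deliberately simplified PyTorch sketch of the layout described above. It is not the official model: the residual Swin Transformer blocks (RSTB) are replaced by a plain convolutional residual block, a hypothetical stand-in, so only the shallow-feature / deep-feature / reconstruction split and the residual connections are illustrated.

# Simplified sketch of the SwinIR three-stage layout (NOT the official network).
# A convolutional residual block stands in for the real RSTB, which contains
# Swin Transformer layers plus a residual connection.
import torch
import torch.nn as nn

class ResidualBlockStandIn(nn.Module):
    """Hypothetical stand-in for a residual Swin Transformer block (RSTB)."""
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # per-block residual connection

class SwinIRSketch(nn.Module):
    def __init__(self, in_chans=3, dim=60, num_blocks=4, upscale=2):
        super().__init__()
        # 1) shallow feature extraction: a single 3x3 convolution
        self.shallow = nn.Conv2d(in_chans, dim, 3, padding=1)
        # 2) deep feature extraction: stacked residual blocks plus a conv,
        #    wrapped in one long residual connection (see forward)
        self.deep = nn.Sequential(
            *[ResidualBlockStandIn(dim) for _ in range(num_blocks)],
            nn.Conv2d(dim, dim, 3, padding=1),
        )
        # 3) high-quality image reconstruction: pixel-shuffle upsampling
        self.reconstruct = nn.Sequential(
            nn.Conv2d(dim, dim * upscale ** 2, 3, padding=1),
            nn.PixelShuffle(upscale),
            nn.Conv2d(dim, in_chans, 3, padding=1),
        )

    def forward(self, x):
        feat = self.shallow(x)
        feat = feat + self.deep(feat)      # long residual over the deep stage
        return self.reconstruct(feat)

if __name__ == "__main__":
    lr = torch.randn(1, 3, 48, 48)
    print(SwinIRSketch(upscale=2)(lr).shape)  # -> torch.Size([1, 3, 96, 96])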

Contents

  1. Training
  2. Testing
  3. Results
  4. Citation
  5. License and Acknowledgement

Training

The training and testing sets used can be downloaded as follows:

| Task | Training Set | Testing Set |
|------|--------------|-------------|
| classical/lightweight image SR | DIV2K (800 training images) or DIV2K + Flickr2K (2650 images) | Set5 + Set14 + BSD100 + Urban100 + Manga109 (download all) |
| real-world image SR | SwinIR-M (middle size): DIV2K (800 training images) + Flickr2K (2650 images) + OST (10324 images: sky, water, grass, mountain, building, plant, animal) <br> SwinIR-L (large size): DIV2K + Flickr2K + OST + WED (4744 images) + FFHQ (first 2000 images, face) + Manga109 (manga) + SCUT-CTW1500 (first 100 training images, texts) <br> *We use the degradation model proposed in BSRGAN, ICCV2021 | RealSRSet |
| color/grayscale image denoising | DIV2K (800 training images) + Flickr2K (2650 images) + BSD500 (400 training & testing images) + WED (4744 images) | grayscale: Set12 + BSD68 + Urban100 <br> color: CBSD68 + Kodak24 + McMaster + Urban100 (download all) |
| JPEG compression artifact reduction | DIV2K (800 training images) + Flickr2K (2650 images) + BSD500 (400 training & testing images) + WED (4744 images) | grayscale: Classic5 + LIVE1 (download all) |

The training code will be put in KAIR.
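
As a rough guide, classical-SR training in KAIR is launched with a JSON option file. This is a hedged example: the script and option-file names below are the ones commonly used for SwinIR in KAIR and may differ in the current version of that repository, so verify them there before running.

# hedged example: verify the script name and option file in the KAIR repository
python main_train_psnr.py --opt options/swinir/train_swinir_sr_classical.json

# distributed training on multiple GPUs (also to be verified against KAIR)
python -m torch.distributed.launch --nproc_per_node=8 --master_port=1234 main_train_psnr.py --opt options/swinir/train_swinir_sr_classical.json --dist True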

Testing (without preparing datasets)

For your convenience, we provide some example datasets (~20Mb) in /testsets. If you just want the code, downloading models/network_swinir.py, utils/util_calculate_psnr_ssim.py and main_test_swinir.py is enough. Download the pretrained models and put them in model_zoo/swinir, then run the following commands. All visual results of SwinIR can be downloaded here.
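
For example, a single pretrained model can be fetched from the GitHub release page. This is a hedged example: the v0.0 release tag is an assumption based on the repository's releases page, so adjust the URL if the checkpoint lives under a different tag.

# hedged example: download one pretrained model into model_zoo/swinir
mkdir -p model_zoo/swinir
wget -P model_zoo/swinir https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/001_classicalSR_DIV2K_s48w8_SwinIR-M_x2.pth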

# 001 Classical Image Super-Resolution (middle size)
# Note that --training_patch_size is just used to differentiate two different settings in Table 2 of the paper. Images are NOT tested patch by patch.
# (setting1: when model is trained on DIV2K and with training_patch_size=48)
python main_test_swinir.py --task classical_sr --scale 2 --training_patch_size 48 --model_path model_zoo/swinir/001_classicalSR_DIV2K_s48w8_SwinIR-M_x2.pth --folder_lq testsets/Set5/LR_bicubic/X2 --folder_gt testsets/Set5/HR
python main_test_swinir.py --task classical_sr --scale 3 --training_patch_size 48 --model_path model_zoo/swinir/001_classicalSR_DIV2K_s48w8_SwinIR-M_x3.pth --folder_lq testsets/Set5/LR_bicubic/X3 --folder_gt testsets/Set5/HR
python main_test_swinir.py --task classical_sr --scale 4 --training_patch_size 48 --model_path model_zoo/swinir/001_classicalSR_DIV2K_s48w8_SwinIR-M_x4.pth --folder_lq testsets/Set5/LR_bicubic/X4 --folder_gt testsets/Set5/HR
python main_test_swinir.py --task classical_sr --scale 8 --training_patch_size 48 --model_path model_zoo/swinir/001_classicalSR_DIV2K_s48w8_SwinIR-M_x8.pth --folder_lq testsets/Set5/LR_bicubic/X8 --folder_gt testsets/Set5/HR

# (setting2: when model is trained on DIV2K+Flickr2K and with training_patch_size=64)
python main_test_swinir.py --task classical_sr --scale 2 --training_patch_size 64 --model_path model_zoo/swinir/001_classicalSR_DF2K_s64w8_SwinIR-M_x2.pth --folder_lq testsets/Set5/LR_bicubic/X2 --folder_gt testsets/Set5/HR
python main_test_swinir.py --task classical_sr --scale 3 --training_patch_size 64 --model_path model_zoo/swinir/001_classicalSR_DF2K_s64w8_SwinIR-M_x3.pth --folder_lq testsets/Set5/LR_bicubic/X3 --folder_gt testsets/Set5/HR
python main_test_swinir.py --task classical_sr --scale 4 --training_patch_size 64 --model_path model_zoo/swinir/001_classicalSR_DF2K_s64w8_SwinIR-M_x4.pth --folder_lq testsets/Set5/LR_bicubic/X4 --folder_gt testsets/Set5/HR
python main_test_swinir.py --task classical_sr --scale 8 --training_patch_size 64 --model_path model_zoo/swinir/001_classicalSR_DF2K_s64w8_SwinIR-M_x8.pth --folder_lq testsets/Set5/LR_bicubic/X8 --folder_gt testsets/Set5/HR


# 002 Lightweight Image Super-Resolution (small size)
python main_test_swinir.py --task lightweight_sr --scale 2 --model_path model_zoo/swinir/002_lightweightSR_DIV2K_s64w8_SwinIR-S_x2.pth --folder_lq testsets/Set5/LR_bicubic/X2 --folder_gt testsets/Set5/HR
python main_test_swinir.py --task lightweight_sr --scale 3 --model_path model_zoo/swinir/002_lightweightSR_DIV2K_s64w8_SwinIR-S_x3.pth --folder_lq testsets/Set5/LR_bicubic/X3 --folder_gt testsets/Set5/HR
python main_test_swinir.py --task lightweight_sr --scale 4 --model_path model_zoo/swinir/002_lightweightSR_DIV2K_s64w8_SwinIR-S_x4.pth --folder_lq testsets/Set5/LR_bicubic/X4 --folder_gt testsets/Set5/HR


# 003 Real-World Image Super-Resolution
# (middle size)
python main_test_swinir.py --task real_sr --scale 4 --model_path model_zoo/swinir/003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth --folder_lq testsets/RealSRSet+5images

# (larger size + trained on more datasets)
# python main_test_swinir.py --task real_sr --scale 4 --large_model --model_path model_zoo/swinir/003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR-L_x4_GAN.pth --folder_lq testsets/RealSRSet+5images


# 004 Grayscale Image Denoising (middle size)
python main_test_swinir.py --task gray_dn --noise 15 --model_path model_zoo/swinir/004_grayDN_DFWB_s128w8_SwinIR-M_noise15.pth --folder_gt testsets/Set12
python main_test_swinir.py --task gray_dn --noise 25 --model_path model_zoo/swinir/004_grayDN_DFWB_s128w8_SwinIR-M_noise25.pth --folder_gt testsets/Set12
python main_test_swinir.py --task gray_dn --noise 50 --model_path model_zoo/swinir/004_grayDN_DFWB_s128w8_SwinIR-M_noise50.pth --folder_gt testsets/Set12


# 005 Color Image Denoising (middle size)
python main_test_swinir.py --task color_dn --noise 15 --model_path model_zoo/swinir/005_colorDN_DFWB_s128w8_SwinIR-M_noise15.pth --folder_gt testsets/McMaster
python main_test_swinir.py --task color_dn --noise 25 --model_path model_zoo/swinir/005_colorDN_DFWB_s128w8_SwinIR-M_noise25.pth --folder_gt testsets/McMaster
python main_test_swinir.py --task color_dn --noise 50 --model_path model_zoo/swinir/005_colorDN_DFWB_s128w8_SwinIR-M_noise50.pth --folder_gt testsets/McMaster


# 006 JPEG Compression Artifact Reduction (middle size, using window_size=7 because JPEG encoding uses 8x8 blocks)
python main_test_swinir.py --task jpeg_car --jpeg 10 --model_path model_zoo/swinir/006_CAR_DFWB_s126w7_SwinIR-M_jpeg10.pth --folder_gt testsets/classic5
python main_test_swinir.py --task jpeg_car --jpeg 20 --model_path model_zoo/swinir/006_CAR_DFWB_s126w7_SwinIR-M_jpeg20.pth --folder_gt testsets/classic5
python main_test_swinir.py --task jpeg_car --jpeg 30 --model_path model_zoo/swinir/006_CAR_DFWB_s126w7_SwinIR-M_jpeg30.pth --folder_gt testsets/classic5
python main_test_swinir.py --task jpeg_car --jpeg 40 --model_path model_zoo/swinir/006_CAR_DFWB_s126w7_SwinIR-M_jpeg40.pth --folder_gt testsets/classic5

*Large size real-world image SR model (003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR-L_x4_GAN.pth) will be released later.
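
If you prefer to call the network from Python rather than through main_test_swinir.py, the sketch below shows the general shape of single-image inference with a downloaded checkpoint. It is a hedged example: the constructor arguments mirror the SwinIR settings used in the repository's configs (embed_dim=180, six RSTBs with six heads each, window_size=8, pixel-shuffle upsampler), the 'params' key follows the checkpoint-loading pattern used by the test script, and the input/output paths are placeholders; check models/network_swinir.py for the exact signature. The mirror padding to a multiple of window_size is also why images of arbitrary size can be tested without patch-by-patch cropping.

# Hedged sketch of single-image inference with a classical x4 SwinIR checkpoint.
# Constructor arguments and the 'params' key should be verified against
# models/network_swinir.py and main_test_swinir.py; file paths are placeholders.
import cv2
import numpy as np
import torch
from models.network_swinir import SwinIR

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
scale, window_size = 4, 8

model = SwinIR(upscale=scale, in_chans=3, img_size=64, window_size=window_size,
               img_range=1., depths=[6, 6, 6, 6, 6, 6], embed_dim=180,
               num_heads=[6, 6, 6, 6, 6, 6], mlp_ratio=2,
               upsampler='pixelshuffle', resi_connection='1conv')
state = torch.load('model_zoo/swinir/001_classicalSR_DIV2K_s48w8_SwinIR-M_x4.pth',
                   map_location='cpu')
model.load_state_dict(state['params'] if 'params' in state else state, strict=True)
model.eval().to(device)

# read a low-resolution image: BGR -> RGB, HWC -> CHW, [0, 255] -> [0, 1]
img = cv2.imread('path/to/your_lr_image.png').astype(np.float32) / 255.
img = torch.from_numpy(img[:, :, ::-1].transpose(2, 0, 1).copy()).unsqueeze(0).to(device)

# pad the input to a multiple of window_size by mirroring the borders,
# then crop the output back to scale * the original size
_, _, h, w = img.shape
h_pad = (h // window_size + 1) * window_size - h
w_pad = (w // window_size + 1) * window_size - w
img = torch.cat([img, torch.flip(img, [2])], 2)[:, :, :h + h_pad, :]
img = torch.cat([img, torch.flip(img, [3])], 3)[:, :, :w + w_pad, :]

with torch.no_grad():
    out = model(img)[..., :h * scale, :w * scale]

out = out.squeeze(0).clamp_(0, 1).cpu().numpy().transpose(1, 2, 0)[:, :, ::-1]
cv2.imwrite('result_SwinIR_x4.png', (out * 255.).round().astype(np.uint8))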


Results

We achieved state-of-the-art performance on classical/lightweight/real-world image SR, grayscale/color image denoising and JPEG compression artifact reduction. Detailed results can be found in the paper.
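
The PSNR numbers behind these comparisons are computed with utils/util_calculate_psnr_ssim.py. As a standalone illustration of where the dB values come from (not the repository's own implementation; the helper below is just the generic definition of PSNR for 8-bit images), PSNR is a log-scaled mean squared error:

# Generic PSNR in dB for 8-bit images; illustrative only, not the repo's utility code.
import numpy as np

def psnr(img1: np.ndarray, img2: np.ndarray, crop_border: int = 0) -> float:
    """Peak signal-to-noise ratio between two images of identical shape."""
    if crop_border > 0:  # SR results are often evaluated with the border cropped
        img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]
        img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)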

Classical Image Super-Resolution

Lightweight Image Super-Resolution

Real-World Image Super-Resolution

Grayscale Image Denoising

Color Image Denoising

JPEG Compression Artifact Reduction

Citation

@article{liang2021swinir,
    title={SwinIR: Image Restoration Using Swin Transformer},
    author={Liang, Jingyun and Cao, Jiezhang and Sun, Guolei and Zhang, Kai and Van Gool, Luc and Timofte, Radu},
    journal={arXiv preprint arXiv:2108.10257}, 
    year={2021}
}

License and Acknowledgement

This project is released under the Apache 2.0 license. The code is heavily based on Swin Transformer. We also refer to code in KAIR and BasicSR. Please also follow their licenses. Thanks for their awesome work.

Comments
  • Why can SwinIR be tested directly (not patch by patch) on images of arbitrary size?

    To my knowledge, the input to a transformer must have a fixed resolution, so at test time an overlapping-patch method is often used. In your code, which method do you use, and what is the idea behind it? It seems that images of any resolution can be fed into SwinIR; how is that done? Looking forward to your reply, thank you!

    solved ✅ 
    opened by jiaaihhy 17
  • One question

    Hi, this is great work. I want to use this network for single-image deraining. What parts of the code should I modify, or do you have any suggestions? Thanks!

    opened by sherlybe 16
  • About test

    In the SwinIR model there is an img_size parameter, e.g. 128, so in the Swin layers input_resolution=(128, 128). When I test with an input image that is not (128, 128), the attention computation has a branch: if self.input_resolution == x_size, else attn_windows = self.attn(x_windows, mask=self.calculate_mask(x_size).to(x.device)). I would like to ask: when the image size is not equal to self.input_resolution=(128, 128), what is the mask that gets passed in?

    opened by jiaaihhy 15
  • About training

    Thank you for your work. I tried to train SwinIR, and while training the small SwinIR goes smoothly, the loss often suddenly doubles when dim is changed to 180. Because of memory limits, my batch_size=16 and lr=1e-4. Are there any special tricks to make the training stable?

    solved ✅ 
    opened by shengkelong 15
  • The input size during test

    Hi, Jingyun, nice work! I just wonder why SwinIR function needs to set the 'img_size'. It is somehow kind of inconvenient, especially for test, since we usually want to test on different sizes of images, right? Is there any particular reason for this? Since Swin Transformer does not need this because they use padding operations. Besides, are there any requirements of the input size, i.e., must be the multiple of a number, or something else? Thanks.

    opened by IceClear 13
  • Training time of SwinIR; Impact of learning rate (fix the lr to 1e-5 for x4 fine-tuning is slightly better)

    In my opinion, the transformer costs a lot of memory, and the paper points out that although SwinIR has fewer parameters, it is much slower than RCAN. So I'm curious about the training cost, thank you.

    good first issue solved ✅ 
    opened by shengkelong 11
  • Comparison with IPT

    Hi,

    Thanks for sharing this interesting work. Table 6 (CBSD68, sigma=50) shows that IPT achieves 28.39 PSNR. However, the original IPT paper shows that it achieves 29.88 (their Table 2). Is there any difference between these two settings?

    good first issue solved ✅ 
    opened by yifanjiang19 9
  • Questions about training efficiency

    Thanks for releasing the code of SwinIR, which is really great work for low-level vision tasks.

    However, when I train the SwinIR model following the guidance provided in the repo, I find the training efficiency is relatively low.

    Specifically, the GPU utilization rate stays at 0 for a while from time to time (it runs for 14 seconds and then idles for 14 seconds). When the GPU utilization is 0, the CPU utilization is also 0. Note that I use DDP training on 8 TITAN-RTX GPUs with the default batch_size. I train the classical SR task on DIV2K at the X2 scale. After half a day of training, the epoch, iteration and PSNR on Set5 are about 1500, 42000 and 35.73 dB, respectively. So it will take about 5 days to finish the 500k iterations, far exceeding the 2 days reported in the README.

    Could you please help me figure out the reason for the low training efficiency?

    opened by XiaoqiangZhou 8
  • RuntimeError when using another dataset?

    Hi. I want to train the model with my own dataset. However, it keeps reporting: RuntimeError: stack expects each tensor to be equal size, but got [3, 256, 256] at entry 0 and [3, 256, 252] at entry 1. Do I have a wrong setting? Thanks.

    The relevant part of the json:

      "datasets": {
        "train": {
          "name": "train_dataset"               // just name
          , "dataset_type": "sr"                // "dncnn" | "dnpatch" | "fdncnn" | "ffdnet" | "sr" | "srmd" | "dpsr" | "plain" | "plainpatch" | "jpeg"
          , "dataroot_H": "HR"                  // path of H training dataset. DIV2K (800 training images)
          , "dataroot_L": "LR"                  // path of L training dataset
          , "H_size": 256                       // 96/144|192/384 | 128/192/256/512. LR patch size is set to 48 or 64 when compared with RCAN or RRDB.
          , "dataloader_shuffle": true
          , "dataloader_num_workers": 16
          , "dataloader_batch_size": 8          // batch size 1 | 16 | 32 | 48 | 64 | 128. Total batch size = 4x8 = 32 in SwinIR
        }
        , "test": {
          "name": "test_dataset"                // just name
          , "dataset_type": "sr"                // "dncnn" | "dnpatch" | "fdncnn" | "ffdnet" | "sr" | "srmd" | "dpsr" | "plain" | "plainpatch" | "jpeg"
          , "dataroot_H": "testsets/Set5/HR"    // path of H testing dataset
          , "dataroot_L": "testsets/Set5/LR_bicubic/X4"  // path of L testing dataset
        }
      }

      , "netG": {
        "net_type": "swinir"
        , "upscale": 4                          // 2 | 3 | 4 | 8
        , "in_chans": 3
        , "img_size": 64                        // For fair comparison, LR patch size is set to 48 or 64 when compared with RCAN or RRDB.
        , "window_size": 8
        , "img_range": 1.0
        , "depths": [6, 6, 6, 6, 6, 6]
        , "embed_dim": 180
        , "num_heads": [6, 6, 6, 6, 6, 6]
        , "mlp_ratio": 2
        , "upsampler": "pixelshuffle"           // "pixelshuffle" | "pixelshuffledirect" | "nearest+conv" | null
        , "resi_connection": "1conv"            // "1conv" | "3conv"
        , "init_type": "default"
      }

    opened by hcleung3325 7
  • Trained model from KAIR (40000_optimizerG.pth) gives error on testing

    Instead of using pre-trained models, I trained the KAIR code and used the generated model for testing.

    The KAIR training code produced 3 models:
    40000_optimizerG.pth
    40000_G.pth
    40000_E.pth
    

    Using these models, for testing the code in this repository, I am getting error:

    (pytorch-gpu) C:\Users\Downloads\SwinIR-main>python main_test_swinir.py --task classical_sr --scale 2 --training_patch_size 48 --folder_lq testsets/Set5/LR_bicubic/X2 --folder_gt testsets/Set5/HR
    loading model from model_zoo/swinir/40000_optimizerG.pth
    Traceback (most recent call last):
      File "C:\Users\Downloads\SwinIR-main\main_test_swinir.py", line 253, in <module>
        main()
      File "C:\Users\Downloads\SwinIR-main\main_test_swinir.py", line 42, in main
        model = define_model(args)
      File "C:\Users\Downloads\SwinIR-main\main_test_swinir.py", line 174, in define_model
        model.load_state_dict(pretrained_model[param_key_g] if param_key_g in pretrained_model.keys() else pretrained_model, strict=True)
      File "C:\Users\anaconda3\envs\pytorch-gpu\lib\site-packages\torch\nn\modules\module.py", line 1406, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for SwinIR:
            Missing key(s) in state_dict: "conv_first.weight", "conv_first.bias", "patch_embed.norm.weight", "patch_embed.norm.bias", "layers.0.residual_group.blocks.0.norm1.weight", "layers.0.residual_group.blocks.0.norm1.bias", "layers.0.residual_group.blocks.0.attn.relative_position_bias_table",
    

    40000_G.pth and 40000_E.pth test fine.

    opened by paragon1234 6
  • Can the GAN model be fine-tuned on my own dataset?

    When I try to set the pretrained model (003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth) path in KAIR's train file:

      , "path": {
        "root": "superresolution"        // "denoising" | "superresolution" | "dejpeg"
        , "pretrained_netG": null        // path of pretrained model
        , "pretrained_netD": null        // path of pretrained model
        , "pretrained_netE": null        // path of pretrained model
      }

    it starts to train from scratch anyway. And when I copy the model from model_zoo straight into /superresolution/swinir_sr_realworld_x4_gan/models/, that does not work either.

    opened by betterftr 6
  • How to fix the error when loading the trained xxxx_E.pth: tensor sizes don't match?

    I fine-tuned the classical SwinIR model, then loaded xxxx_E.pth for testing. I ran into this problem:

    RuntimeError: Error(s) in loading state_dict for SwinIR: size mismatch for layers.0.residual_group.blocks.1.attn_mask: copying a param with shape torch.Size([4, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 64, 64]).

    How can I fix it?

    opened by kelvennn 0
  • PlayTorch Demo not working (playtorch 0.1.2, iOS 16.1)

    Tried to open the PlayTorch demo on my iPhone XR (iOS 16.1) and got a "ViewPropTypes has been removed from React Native" error. PlayTorch v. 0.1.2.

    opened by Stl5n0 0
  • All the datasets for denoising training are in different formats

    Hi, I find that the training-set images come in .png, .jpg and .bmp formats, and I see this may be related to the window size, so can you tell me whether we should keep the window size at 8 or 7? And can we just put them all into one single folder?

    opened by fisher75 0
  • TensorRT models?

    Thank you, devs, for a great project. Unfortunately I had no success converting to ONNX or using NVIDIA's trtexec converter. @JingyunLiang, could you please share a TensorRT model (realSR large?), or could someone else who has succeeded with the conversion share theirs? Thank you very much in advance.

    opened by zelenooki87 0