[NeurIPS'21 Spotlight] PyTorch code for our paper "Aligned Structured Sparsity Learning for Efficient Image Super-Resolution"

Overview

ASSL

This repository is for a new network pruning method (Aligned Structured Sparsity Learning, ASSL) for efficient single image super-resolution (SR), introduced in our NeurIPS 2021 Spotlight paper:

Aligned Structured Sparsity Learning for Efficient Image Super-Resolution [Camera Ready]
Yulun Zhang*, Huan Wang*, Can Qin, and Yun Fu (*Contributed Equally)
Northeastern University, Boston, MA, USA

Stay tuned!


Comments
  • Could you share the code with me?

@MingSun-Tse Thanks for your excellent work. I read the paper, and I want to learn the details. Could you share the code with me? Thank you very much!!

    opened by ciwei123 3
  • Why simply use the first constrained layer as pruning template for all constrained layers?

From the training results, the hard masks of the constrained layers are not exactly aligned with each other (see the illustrative sketch after this Comments section). https://github.com/MingSun-Tse/ASSL/blob/a564556c8b578c2ee86d135044f088bfeaafc707/src/pruner/utils.py#L71

    opened by yumath 2
  • Questions about implementation detail

Hello, I have some questions about the implementation details.

The HR-LR data pairs are generated with the down-sampling code provided in BasicSR. The training set is DF2K (900 DIV2K + 2650 Flickr2K images), and the test set is Set5.

I ran this command to prune the EDSR_16_256 model down to EDSR_16_48. Compared with the officially provided command, only the pruning ratio and the save path are modified.

    Prune from 256 to 48, pr=0.8125, x2, ASSL

python main.py --model LEDSR --scale 2 --patch_size 96 --ext sep --dir_data /home/notebook/data/group_cpfs/wurongyuan/data/data \
    --data_train DF2K --data_test DF2K --data_range 1-3550/3551-3555 --chop --save_results --n_resblocks 16 --n_feats 256 \
    --method ASSL --wn --stage_pr [0-1000:0.8125] --skip_layers *mean*,*tail* \
    --same_pruned_wg_layers model.head.0,model.body.16,*body.2 --reg_upper_limit 0.5 --reg_granularity_prune 0.0001 \
    --update_reg_interval 20 --stabilize_reg_interval 43150 --pre_train pretrained_models/LEDSR_F256R16BIX2_DF2K_M311.pt \
    --same_pruned_wg_criterion reg --save main/SR/LEDSR_F256R16BIX2_DF2K_ASSL_0.8125_RGP0.0001_RUL0.5_Pretrain_06011101

    Results

    model_just_finished_prune ---> 33.739dB
    fine-tuning after one epoch ---> 37.781dB
    fine-tuning after 756 epochs ---> 37.940dB

The result I obtained with the officially provided code (37.940dB) still shows a gap from the result reported in the paper (38.12dB). I may have overlooked some details.

I also compared with the L1-norm method provided in the code. Prune from 256 to 48, pr=0.8125, x2, L1

python main.py --model LEDSR --scale 2 --patch_size 96 --ext sep --dir_data /home/notebook/data/group_cpfs/wurongyuan/data/data \
    --data_train DF2K --data_test DF2K --data_range 1-3550/3551-3555 --chop --save_results --n_resblocks 16 --n_feats 256 \
    --method L1 --wn --stage_pr [0-1000:0.8125] --skip_layers *mean*,*tail* \
    --same_pruned_wg_layers model.head.0,model.body.16,*body.2 --reg_upper_limit 0.5 --reg_granularity_prune 0.0001 \
    --update_reg_interval 20 --stabilize_reg_interval 43150 --pre_train pretrained_models/LEDSR_F256R16BIX2_DF2K_M311.pt \
    --same_pruned_wg_criterion reg --save main/SR/LEDSR_F256R16BIX2_DF2K_L1_0.8125_06011101

    Results

    model_just_finished_prune ---> 13.427dB
    fine-tuning after one epoch ---> 33.202dB
    fine-tuning after 756 epochs ---> 37.933dB

The difference between the results of the L1-norm method and those of ASSL seems negligible at this pruning ratio (256 -> 48).

    Is there something I missed? Looking forward to your reply! >-<

    opened by wurongyuan 2
  • Questions on Data Preparation

Hello, and thanks for your amazing work! When trying to reproduce the paper results, I ran into some trouble binarizing the DF2K data:

    data/DF2K/bin/DF2K_train_LR_bicubic/X4/3548x4.pt does not exist. Now making binary...
    Direct pt file without name or image
    data/DF2K/bin/DF2K_train_LR_bicubic/X4/3549x4.pt does not exist. Now making binary...
    Direct pt file without name or image
    data/DF2K/bin/DF2K_train_LR_bicubic/X4/3550x4.pt does not exist. Now making binary...
    Direct pt file without name or image
    data/DF2K/bin/DF2K_train_HR/3551.pt does not exist. Now making binary...
    Traceback (most recent call last):
    ...
    FileNotFoundError: No such file: '/home/nfs_data/shixiangsheng/projects/ModelCompression/Prune/ASSL/src/data/DF2K/DF2K_train_HR/3551.png'
    

I created directories like this:

    data
    |__ DF2K
        |__ DF2K_train_HR
        |__ DF2K_train_LR_bicubic

I put '0001.png' - '0900.png' from ./data/DIV2K/DIV2K_train_HR and '000001.png' - '002650.png' (renamed to '0901.png' - '3550.png') from ./data/Flickr2K/Flickr2K_HR into ./DF2K/DF2K_train_HR. For the downsampled images, I created folders 'X2', 'X3', and 'X4' under ./DF2K/DF2K_train_LR_bicubic and copied the corresponding images from DIV2K_train_LR_bicubic and Flickr2K_LR_bicubic (renamed to '0001x_.png' through '3550x_.png'). A sketch of this merge step is given after this Comments section.

    The first and second stages of binarization (the HR images and the X4 LR images) seemed fine, but then the above error emerged. This is odd, since there are 900 + 2650 training images in total, and I have no idea why it went back to binarizing HR images after the X4 LR images. I'm new to SR and have tried to look up DF2K data preparation in other SR repos, but in vain. I wonder how you actually binarize the DF2K images. Thanks for your help in advance XD

    opened by YouCaiJun98 0
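
On the pruning-template question above: the constrained layers are those whose outputs meet at the same residual additions, so after pruning they must keep identical output-channel indices, and one layer's kept-index set serves as the template for all of them. Below is a minimal, hypothetical PyTorch sketch of that idea. The names (shared_prune_indices, constrained_convs) and the plain L1-norm ranking are illustrative assumptions only; the commands above instead use --same_pruned_wg_criterion reg, i.e. a regularization-based criterion rather than a plain L1-norm one.

    import torch
    import torch.nn as nn

    def shared_prune_indices(constrained_convs, prune_ratio):
        """Toy illustration: rank the filters of the FIRST constrained conv by
        L1-norm and reuse its pruned/kept index sets as the template for every
        constrained conv, so the surviving channels stay aligned across the
        residual additions."""
        first = constrained_convs[0]
        n_out = first.weight.shape[0]
        n_prune = int(n_out * prune_ratio)
        scores = first.weight.detach().abs().sum(dim=(1, 2, 3))  # per-filter L1-norm
        order = torch.argsort(scores)                            # ascending by norm
        pruned = order[:n_prune]                                 # smallest-norm filters
        kept = order[n_prune:].sort()[0]                         # kept indices, ascending
        return pruned, kept

    # Hypothetical usage: three convs whose outputs are added on a residual path.
    convs = [nn.Conv2d(64, 256, 3, padding=1) for _ in range(3)]
    pruned, kept = shared_prune_indices(convs, prune_ratio=0.8125)
    slimmed = [nn.Conv2d(64, kept.numel(), 3, padding=1) for _ in convs]
    for old, new in zip(convs, slimmed):
        # The SAME kept set is applied to every constrained layer.
        new.weight.data.copy_(old.weight.data[kept])
        new.bias.data.copy_(old.bias.data[kept])

With prune_ratio=0.8125 and 256 filters, every constrained layer keeps the same 48 channels, which is what keeps the skip connections dimensionally and index-wise consistent.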
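
On the DF2K preparation question above, here is a small merge sketch following the layout described in that comment. The paths (data/DIV2K, data/Flickr2K, data/DF2K) and the file-name patterns are assumptions taken from the comment rather than anything defined by this repository, so adjust them to your setup.

    import shutil
    from pathlib import Path

    # Hypothetical locations, mirroring the directory layout in the comment above.
    DIV2K = Path("data/DIV2K")
    FLICKR2K = Path("data/Flickr2K")
    DF2K = Path("data/DF2K")

    def merge_hr():
        out = DF2K / "DF2K_train_HR"
        out.mkdir(parents=True, exist_ok=True)
        # DIV2K HR images keep their names: 0001.png - 0900.png
        for src in sorted((DIV2K / "DIV2K_train_HR").glob("*.png")):
            shutil.copy(src, out / src.name)
        # Flickr2K HR images 000001.png - 002650.png become 0901.png - 3550.png
        for src in sorted((FLICKR2K / "Flickr2K_HR").glob("*.png")):
            shutil.copy(src, out / f"{int(src.stem) + 900:04d}.png")

    def merge_lr(scale):
        out = DF2K / "DF2K_train_LR_bicubic" / f"X{scale}"
        out.mkdir(parents=True, exist_ok=True)
        for src in sorted((DIV2K / "DIV2K_train_LR_bicubic" / f"X{scale}").glob("*.png")):
            shutil.copy(src, out / src.name)                      # e.g. 0001x2.png
        for src in sorted((FLICKR2K / "Flickr2K_LR_bicubic" / f"X{scale}").glob("*.png")):
            new_id = int(src.stem.split("x")[0]) + 900            # 000001x2 -> 0901x2
            shutil.copy(src, out / f"{new_id:04d}x{scale}.png")

    if __name__ == "__main__":
        merge_hr()
        for s in (2, 3, 4):
            merge_lr(s)

Note that a --data_range setting of 1-3550/3551-3555, as in the commands above, expects test images numbered 3551-3555 in addition to the 3550 training images, which appears consistent with the traceback above stopping at 3551.png.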
Releases(v0.1)
Owner
Huan Wang
B.E. and M.S. graduate from Zhejiang University, China. Now Ph.D. candidate at Northeastern, USA. I work on interpretable model compression and daydreaming.