
Overview

es-imagenet-master


Code for generating the ES-ImageNet dataset, together with the corresponding training code.

dataset generator

  • Code for the ODG algorithm that converts static images into event streams.
  • The variables to modify are datapath (the storage path for the converted data, which must be created before running the conversion) and root_Path (the root directory of the original training set); a hypothetical configuration is sketched after the file table below.
| File name | Function |
| --- | --- |
| traconvert.py | Converts the ILSVRC 2012 training set into event streams using ODG. |
| trainlabel_dir.txt | Stores the mapping between the original ImageNet class names and their labels. |
| trainlabel.txt | Generated during conversion; stores the training-set labels. |
| valconvert.py | Conversion code for the test set. |
| valorigin.txt | Original test labels; must be placed in the same folder as valconvert.py. |
| vallabel.txt | Generated during conversion; stores the test-set labels. |
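
For illustration, the two variables might be set as follows near the top of traconvert.py before running the conversion. The paths below are hypothetical examples, and the exact assignment lines in the script may differ; only the variable names follow the README.

root_Path = '/data/ILSVRC2012/train'   # root directory of the original ImageNet training set (hypothetical path)
datapath  = '/data/ES-ImageNet/train'  # output directory for the converted event streams (hypothetical path)
# The output directory must exist before the conversion starts, e.g.:
#   mkdir -p /data/ES-ImageNet/train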

dataset usage

  • The dataset-loading code is in ./datasets
  • Some training examples for ES-ImageNet are provided in ./example. The following PyTorch example shows a simple way to use the dataset:
from __future__ import print_function
import sys
sys.path.append("..")
from datasets.es_imagenet_new import ESImagenet_Dataset
import torch.nn as nn
import torch

data_path = None  # TODO: modify to the root path of the converted ES-ImageNet data
train_dataset = ESImagenet_Dataset(mode='train',data_set_path=data_path)
test_dataset = ESImagenet_Dataset(mode='test',data_set_path=data_path)

batch_size = 32  # example value; adjust to the available GPU memory

train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
test_sampler  = torch.utils.data.distributed.DistributedSampler(test_dataset)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=False, num_workers=1, pin_memory=True, drop_last=True, sampler=train_sampler)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False, num_workers=1, pin_memory=True)

for batch_idx, (inputs, targets) in enumerate(train_loader):
  pass
  # inputs shape: [batch_size, time, channel, width, height]
  
for batch_idx, (inputs, targets) in enumerate(test_loader):
  pass
  # inputs shape: [batch_size, time, channel, width, height]
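
Note that DistributedSampler assumes torch.distributed has already been initialized. A minimal sketch of that initialization is shown below, assuming the script is launched with torch.distributed.launch; the environment-variable handling is the usual PyTorch convention and is not taken from this repository.

import os
import torch
import torch.distributed as dist

# torch.distributed.launch provides the local rank either via the LOCAL_RANK
# environment variable (with --use_env) or as a --local_rank argument;
# the environment-variable form is shown here.
local_rank = int(os.environ.get('LOCAL_RANK', 0))
torch.cuda.set_device(local_rank)
dist.init_process_group(backend='nccl')  # NCCL backend for multi-GPU training

# With the process group initialized, the DistributedSampler shards the data
# across processes; call train_sampler.set_epoch(epoch) at the start of each
# epoch so that shuffling differs between epochs.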

training example and benchmarks

Requirements

  • Python >= 3.5
  • PyTorch >= 1.7
  • CUDA >= 10.0
  • TensorBoardX (optional)

Train the baseline models

$ cd example

$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 example_ES_res18.py #LIAF/LIF-ResNet-18
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 example_ES_res34.py #LIAF/LIF-ResNet-34
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 compare_ES_3DCNN34.py #3DCNN-ResNet-34
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 compare_ES_3DCNN18.py #3DCNN-ResNet-18
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 compare_ES_2DCNN34.py #2DCNN-ResNet-34 
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 compare_ES_2DCNN18.py #2DCNN-ResNet-18
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 compare_CONVLSTM.py #ConvLSTM (not used in paper)
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 example_ES_res50.py #LIAF/LIF-ResNet-50 (not used in paper)

**Note:** To select LIF mode, edit the config files under /LIAFnet and change self.actFun = torch.nn.LeakyReLU(0.2, inplace=False) to self.actFun = LIAF.LIFactFun.apply.
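
Concretely, the switch amounts to changing a single assignment in the relevant config class; only the two alternative lines are shown here, and the surrounding code in /LIAFnet is omitted.

# LIAF mode (analog activation, as set in the example configs):
self.actFun = torch.nn.LeakyReLU(0.2, inplace=False)

# LIF mode (spiking activation):
self.actFun = LIAF.LIFactFun.apply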

baseline / Benchmark

| Network | Layer type | Test Acc /% | # of Params | FP32+ /GFLOPs | FP32× /GFLOPs |
| --- | --- | --- | --- | --- | --- |
| ResNet18 | 2D-CNN | 41.030 | 11.68M | 1.575 | 1.770 |
| ResNet18 | 3D-CNN | 38.050 | 28.56M | 12.082 | 12.493 |
| ResNet18 | LIF | 39.894 | 11.69M | 12.668 | 0.269 |
| ResNet18 | LIAF | 42.544 | 11.69M | 12.668 | 14.159 |
| ResNet34 | 2D-CNN | 42.736 | 21.79M | 3.211 | 3.611 |
| ResNet34 | 3D-CNN | 39.410 | 48.22M | 20.671 | 21.411 |
| ResNet34 | LIF | 43.424 | 21.80M | 25.783 | 0.288 |
| ResNet18+imagenet-pretrain (a) | LIF | 43.74 | 11.69M | 12.668 | 0.269 |
| ResNet34 | LIAF | 47.466 | 21.80M | 25.783 | 28.901 |
| ResNet18+self-pretrain | LIAF | 50.54 | 11.69M | 12.668 | 14.159 |
| ResNet18+imagenet-pretrain (b) | LIAF | 52.25 | 11.69M | 12.668 | 14.159 |
| ResNet34+imagenet-pretrain (c) | LIAF | 51.83 | 21.80M | 25.783 | 28.901 |

Note: models (a), (b) and (c) are stored in ./pretrained_model

Download

  • The ES-ImageNet dataset (100 GB) used in this study can be downloaded from Tsinghua Cloud or OpenI.

  • The converted event-frame version (40 GB) is available on Tsinghua Cloud.

Citation

If you use this dataset for research, please cite it. Here is an example BibTeX entry:

@misc{lin2021esimagenet,
    title={ES-ImageNet: A Million Event-Stream Classification Dataset for Spiking Neural Networks},
    author={Yihan Lin and Wei Ding and Shaohua Qiang and Lei Deng and Guoqi Li},
    year={2021},
    eprint={2110.12211},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}