Overview

ISC-Track1-Submission

The code and related files to reproduce the results for the Image Similarity Challenge, Track 1.

Required dependencies

To begin with, you should install the following packages with the specified versions in a Python (Anaconda) environment. Other versions may work, but please do NOT try them; for instance, CUDA 11.0 has bugs that lead to very poor results. The hardware used is Nvidia Tesla V100 GPUs with Intel CPUs. Other hardware, such as the A100, may also work, but please do NOT try it: stability is not guaranteed, and we observed instability on the Ampere architecture. Please do NOT use AMD CPUs such as EPYC; we observed instability on a DGX server. A version sanity-check sketch is given after the list.

  • python 3.7.10
  • pytorch 1.7.1 with cuda 10.1
  • faiss-gpu 1.7.1 with cuda 10.1
  • h5py 3.4.0
  • pandas 1.3.3
  • sklearn 1.0 (the scikit-learn package)
  • skimage 0.18.3 (the scikit-image package)
  • PIL 8.3.2 (the Pillow package)
  • cv2 4.5.3.56 (the opencv-python package)
  • numpy 1.16.0
  • torchvision 0.8.2 with cuda 10.1
  • augly 0.1.4
  • selectivesearch 0.4
  • face-recognition 1.3.0 (with dlib of gpu-version)
  • tqdm 4.62.3
  • requests 2.26.0
  • seaborn 0.11.2
  • mkl 2.4.0
  • loguru 0.5.3

Note: some minor packages may be missing from this list; install them directly with pip when an import error occurs.
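
As a quick sanity check, the snippet below (a minimal sketch, not part of the original release) prints the installed versions so you can compare them against the list above:

    # Minimal version sanity check (not part of the original release).
    # Compare the printed versions against the dependency list above.
    import importlib

    PACKAGES = [
        "torch", "torchvision", "faiss", "h5py", "pandas", "sklearn",
        "skimage", "PIL", "cv2", "numpy", "augly", "tqdm", "requests",
        "seaborn", "loguru",
    ]

    for name in PACKAGES:
        try:
            module = importlib.import_module(name)
            print(f"{name}: {getattr(module, '__version__', 'unknown')}")
        except ImportError:
            print(f"{name}: NOT INSTALLED")

    import torch
    print("torch built with CUDA:", torch.version.cuda)  # should report 10.1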

Pre-trained models

We use three pre-trained models, all pre-trained on ImageNet in an unsupervised manner. For convenience, we first provide the pre-trained models directly; the pre-training code is given afterwards.

The three backbones are ResNet-50, ResNet-152, and ResNet-50-IBN.

For ResNet-50, we do not pre-train it ourselves. It is downloaded directly from here; it is supplied by Facebook Research as part of the Barlow Twins project. Rename the downloaded file to resnet50_bar.pth.
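
For reference, here is a minimal loading sketch. It assumes the file is a plain backbone state_dict, as the Barlow Twins release provides; the repo's own loading code in the Training folder is authoritative:

    # Hedged sketch: load resnet50_bar.pth into a torchvision ResNet-50.
    # Assumes the checkpoint is a plain backbone state_dict, as released by
    # the Barlow Twins project; the classification head may be absent.
    import torch
    import torchvision.models as models

    model = models.resnet50()
    state_dict = torch.load("resnet50_bar.pth", map_location="cpu")
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print("missing keys:", missing)        # e.g. fc.* if the head was stripped
    print("unexpected keys:", unexpected)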

For ResNet-152 and ResNet-50-IBN, we use the official code of Momentum2-teacher, changing only the backbone. Pre-training takes about 2 weeks for ResNet-152 and 1 week for ResNet-50-IBN on 8 V100 GPUs. For convenience, we supply the complete pre-training code in the Pretrain folder; the related readme file is also given in that folder.

It should be noted that the pre-training process plays a very important role in our algorithm. Therefore, if you want to reproduce the pre-trained results, please do NOT change the number of GPUs, the batch size, or other related hyper-parameters.

Training

For training, we generate 11 datasets, and for each dataset we train 3 models with different backbones, for 33 models in total (see the sketch below). Each training run takes about one day or less on 4 V100 GPUs (larger backbones take longer, smaller ones less). The complete training code, including how to generate the training datasets and links to the generated datasets, is given in the Training folder. For more details, please refer to the readme file in that folder.
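
To make the scale concrete, the following sketch enumerates the training grid; dataset and backbone names are hypothetical placeholders, and the actual generation and training scripts live in the Training folder:

    # Illustrative only: the 11-dataset x 3-backbone training grid (33 runs).
    # Names are hypothetical placeholders, not the repo's real identifiers.
    datasets = [f"dataset_{i:02d}" for i in range(1, 12)]   # 11 generated datasets
    backbones = ["resnet50", "resnet152", "resnet50_ibn"]   # 3 backbones per dataset
    jobs = [(d, b) for d in datasets for b in backbones]
    print(len(jobs))  # 33 training runs, each about a day on 4 V100 GPUs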

Test

To test the performance of the trained models, we perform multi-scale, multi-model, and multi-part testing and ensemble all the scores to obtain the final score (a sketch of the ensembling idea follows below). For efficiency, we suggest using 33 V100 GPUs; extracting the features of all query images then takes about 3 hours. Extracting and storing the features of the training and reference images also takes a long time, so please be patient and prepare enough storage. All the information needed to generate our final results is given in the Test folder; please reproduce the results according to the readme file in that folder.
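
To illustrate the ensembling idea only (this is not the repo's exact pipeline, which is in the Test folder), the sketch below averages cosine-similarity matrices across configurations; shapes and feature sources are illustrative assumptions using random stand-in features:

    # Hedged sketch of score ensembling across model/scale/part configurations.
    # Not the repo's exact pipeline (see the Test folder); random stand-in
    # features are used so the example is self-contained.
    import numpy as np

    rng = np.random.default_rng(0)

    def cosine_scores(query_feats, ref_feats):
        """Cosine similarity between two sets of L2-normalized features."""
        q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
        r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
        return q @ r.T  # shape: (num_queries, num_references)

    # One (query, reference) feature pair per configuration; in the real
    # pipeline these come from the trained backbones at multiple scales.
    configs = [(rng.normal(size=(5, 256)), rng.normal(size=(8, 256)))
               for _ in range(3)]

    # Ensemble by averaging the per-configuration similarity matrices.
    final_scores = np.mean([cosine_scores(q, r) for q, r in configs], axis=0)
    print(final_scores.shape)  # (5, 8)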

Owner
Wenhao Wang
I am a student at Beihang University. My research interests include person re-identification, unsupervised domain adaptation, and domain generalization.