Official PyTorch implementation of Learning Intra-Batch Connections for Deep Metric Learning, published at the International Conference on Machine Learning (ICML 2021)

Overview

About

This repository contains the official PyTorch implementation of Learning Intra-Batch Connections for Deep Metric Learning. The config files contain the same parameters as used in the paper.

We use torch 1.7.1 and torchvision 0.6.0. While training and inference should also work with newer versions of these libraries, be aware that networks trained and tested with different versions may diverge or reach lower results. We provide an environment.yml file to create a corresponding conda environment.
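
If you want to double-check that your installation matches these versions before training, a small check like the following can be used (this script is not part of the repository):

    # Sanity check that the installed versions match the ones used in the paper.
    # Not part of the repository; expected versions are taken from the text above.
    import torch
    import torchvision

    expected = {"torch": "1.7.1", "torchvision": "0.6.0"}
    for name, module in [("torch", torch), ("torchvision", torchvision)]:
        installed = module.__version__
        status = "OK" if installed.startswith(expected[name]) else "differs"
        print(f"{name}: installed {installed}, expected {expected[name]} -> {status}")
    print("CUDA available:", torch.cuda.is_available())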

We also support mixed-precision training via Nvidia Apex and describe how to use it in the Usage section.
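
For reference, this is roughly what Apex mixed-precision training looks like in a generic PyTorch loop; it is only a sketch with placeholder model, optimizer and data, not the training code of this repository:

    # Generic Apex mixed-precision sketch; model, optimizer and data are placeholders.
    import torch
    from apex import amp

    model = torch.nn.Linear(128, 64).cuda()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # opt_level "O1" enables automatic mixed precision
    model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

    x = torch.randn(32, 128).cuda()
    loss = model(x).pow(2).mean()

    # Apex scales the loss to avoid underflow in half precision
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()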

As in the paper, we support training on four datasets: CUB-200-2011, Cars196, Stanford Online Products, and In-Shop.

The majority of experiments are done using ResNet50. We also provide support for the entire ResNet and DenseNet families as well as BN-Inception.
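
For illustration only, a torchvision ResNet50 can be turned into an embedding backbone roughly as follows; the repository defines its own model code, and the embedding size of 512 is just an example:

    # Illustrative embedding backbone; not the repository's model definition.
    import torch
    import torch.nn as nn
    import torchvision

    backbone = torchvision.models.resnet50(pretrained=True)
    in_features = backbone.fc.in_features        # 2048 for ResNet50
    backbone.fc = nn.Linear(in_features, 512)    # replace the classifier with an embedding head

    images = torch.randn(8, 3, 224, 224)         # dummy batch
    embeddings = backbone(images)                # shape: (8, 512)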

Set up

  1. Clone and enter this repository:

     git clone https://github.com/dvl-tum/intra_batch.git
    
     cd intra_batch
    
  2. Create an Anaconda environment for this project: To set up a conda environment containing all used packages, please first install Anaconda and then run

    1.   conda env create -f environment.yml
      
    2.  conda activate intra_batch_dml
      
    3.  pip install torch-scatter==2.0.5 -f https://pytorch-geometric.com/whl/torch-1.5.0+cu102.html
      
    4. If you want to use Apex, please follow the installation instructions on https://github.com/NVIDIA/apex
  3. Download datasets: Make a data directory by typing

     mkdir data
    

    Then download the datasets using the following links and unzip them in the data directory:

    We also provide parsers for the Stanford Online Products and In-Shop datasets. You can find them in the dataset/ directory. The datasets are expected to be structured as dataset/images/class/, where dataset is either CUB-200-2011, CARS, Stanford_Online_Products or In_shop and class are the classes of a given dataset (a small sanity-check sketch follows the setup steps below). Example for CUB-200-2011:

         CUB_200_2011/images/001
         CUB_200_2011/images/002
         CUB_200_2011/images/003
         ...
         CUB_200_2011/images/200
    
  4. Download our models: Please download the pretrained weights by using

     wget https://vision.in.tum.de/webshare/u/seidensc/intra_batch_connections/best_weights.zip
    

    and unzip them.
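
If you want to verify that a dataset was unpacked into the expected dataset/images/class/ layout, a quick check like the one below can help (not part of the repository; adjust the path to the dataset you downloaded):

    # Sanity check for the expected data/<dataset>/images/<class>/ layout.
    # Not part of the repository; adjust `root` to the dataset you downloaded.
    from pathlib import Path

    root = Path("data/CUB_200_2011/images")      # example: CUB-200-2011
    class_dirs = sorted(p for p in root.iterdir() if p.is_dir())
    num_images = sum(1 for p in root.rglob("*") if p.is_file())

    print(f"{len(class_dirs)} class folders, {num_images} images under {root}")
    # CUB-200-2011 should contain 200 class folders (001 ... 200)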

Usage

You can find config files for training and testing on each of the datasets in the config/ directory. For training and testing, you have to specify which config file you want to use (see below). Only a few basic variables can be adapted over the command line; for all others, please edit the yaml file directly.
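
As an illustration of how a command-line value such as --dataset_path can override an entry in a yaml config, consider the sketch below; the key names are hypothetical and do not reflect the repository's actual config schema:

    # Hypothetical example of overriding a yaml config value from the command line.
    # The `dataset_path` key is an illustrative assumption, not the repo's schema.
    import argparse
    import yaml

    parser = argparse.ArgumentParser()
    parser.add_argument("--config_path", default="config_cars_train.yaml")
    parser.add_argument("--dataset_path", default=None)
    args = parser.parse_args()

    with open(args.config_path) as f:
        config = yaml.safe_load(f)

    if args.dataset_path is not None:            # a CLI value wins over the yaml entry
        config["dataset_path"] = args.dataset_path
    print(config.get("dataset_path", "data"))    # falls back to the default data path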

Testing

To test a network, choose one of the config files for testing, e.g., config_cars_test.yaml to evaluate performance on Cars196, and run:

python train.py --config_path config_cars_test.yaml --dataset_path <path to dataset> 

The default dataset path is data.

Training

To train a network, choose one of the config files for training, e.g., config_cars_train.yaml to train on Cars196, and run:

python train.py --config_path config_cars_train.yaml --dataset_path <path to dataset> --net_type <net type you want to use>

Again, if you don't specify anything, the default setting will be used. For the net type you have the following options:

resnet18, resnet32, resnet50, resnet101, resnet152, densenet121, densenet161, densenet169, densenet201, bn_inception

If you want to use Apex, add --is_apex 1 to the command.

Results

                           R@1    R@2    R@4    R@8    NMI
CUB-200-2011               70.3   80.3   87.6   92.7   73.2
Cars196                    88.1   93.3   96.2   98.2   74.8

                           R@1    R@10   R@100  NMI
Stanford Online Products   81.4   91.3   95.9   92.6

                           R@1    R@10   R@20   R@40
In-Shop                    92.8   98.5   99.1   99.2
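
For reference, Recall@K (the R@K columns above) is the fraction of query images whose K nearest neighbors in embedding space contain at least one image of the same class. Below is a minimal sketch of that computation, not the repository's evaluation code:

    # Minimal Recall@K sketch; not the repository's evaluation code.
    # `embeddings` is an (N, D) tensor, `labels` an (N,) tensor of class ids.
    import torch

    def recall_at_k(embeddings, labels, k):
        emb = torch.nn.functional.normalize(embeddings, dim=1)
        sim = emb @ emb.t()                      # pairwise cosine similarity
        sim.fill_diagonal_(float("-inf"))        # exclude each sample itself
        knn = sim.topk(k, dim=1).indices         # (N, k) neighbor indices
        hits = (labels[knn] == labels.unsqueeze(1)).any(dim=1)
        return hits.float().mean().item()

    embeddings = torch.randn(100, 512)           # dummy data
    labels = torch.randint(0, 10, (100,))
    print("R@1:", recall_at_k(embeddings, labels, 1))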

Citation

If you find this code useful, please consider citing the following paper:

@inproceedings{DBLP:conf/icml/SeidenschwarzEL21,
  author    = {Jenny Seidenschwarz and
               Ismail Elezi and
               Laura Leal{-}Taix{\'{e}}},
  title     = {Learning Intra-Batch Connections for Deep Metric Learning},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning,
               {ICML} 2021, 18-24 July 2021, Virtual Event},
  series    = {Proceedings of Machine Learning Research},
  volume    = {139},
  pages     = {9410--9421},
  publisher = {{PMLR}},
  year      = {2021},
}