Multi-modal Vision Transformers Excel at Class-agnostic Object Detection

Overview


Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer and Ming-Hsuan Yang

Paper: https://arxiv.org/abs/2111.11430



Abstract: What constitutes an object? This has been a long-standing question in computer vision. Towards this goal, numerous learning-free and learning-based approaches have been developed to score objectness. However, they generally do not scale well across new domains and for unseen objects. In this paper, we advocate that existing methods lack a top-down supervision signal governed by human-understandable semantics. To bridge this gap, we explore recent Multi-modal Vision Transformers (MViT) that have been trained with aligned image-text pairs. Our extensive experiments across various domains and novel objects show the state-of-the-art performance of MViTs to localize generic objects in images. Based on these findings, we develop an efficient and flexible MViT architecture using multi-scale feature processing and deformable self-attention that can adaptively generate proposals given a specific language query. We show the significance of MViT proposals in a diverse range of applications including open-world object detection, salient and camouflage object detection, supervised and self-supervised detection tasks. Further, MViTs offer enhanced interactability with intelligible text queries.


Architecture overview of MViTs used in this work



Results


Class-agnostic OD performance of MViTs in comparison with a uni-modal detector (RetinaNet) on several datasets. MViTs show consistently strong results across all datasets.



Enhanced Interactability: Effect of using different intuitive text queries on the MDef-DETR class-agnostic OD performance. Combining detections from multiple queries captures varying aspects of objectness.
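For illustration, here is a minimal sketch (assumptions noted in the comments; not the repository's exact implementation) of how detections from multiple text queries could be combined: boxes from each query are concatenated and near-duplicates are suppressed with NMS.

```python
# Minimal sketch: merging class-agnostic detections from several text
# queries (e.g. "all objects", "all entities"). Assumes xyxy boxes and
# torchvision; not the repository's exact implementation.
import torch
from torchvision.ops import nms

def combine_query_detections(per_query_dets, iou_thresh=0.5):
    """per_query_dets: list of (boxes [N,4], scores [N]) pairs, one per query."""
    boxes = torch.cat([b for b, _ in per_query_dets], dim=0)
    scores = torch.cat([s for _, s in per_query_dets], dim=0)
    keep = nms(boxes, scores, iou_thresh)  # drop near-duplicates across queries
    return boxes[keep], scores[keep]
```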



Generalization to Rare/Novel Classes: MDef-DETR class-agnostic OD performance on rarely and frequently occurring categories in the pretraining captions. The numbers on top of the bars indicate the occurrences of the corresponding category in the training dataset. The MViT achieves good recall even for classes with few or no occurrences.



Open-world Object Detection: Effect of using class-agnostic OD proposals from MDef-DETR to pseudo-label unknowns in the Open World Detector (ORE).
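As a concrete illustration of the pseudo-labelling step, the sketch below (illustrative thresholds; not the exact ORE integration) selects top-scoring class-agnostic proposals that overlap no known ground-truth box and treats them as candidate "unknown" objects.

```python
# Illustrative sketch of pseudo-labelling unknowns for open-world detection.
# Assumes xyxy boxes; iou_max and top_k are assumptions, not paper values.
import torch
from torchvision.ops import box_iou

def pseudo_label_unknowns(proposals, scores, known_gt, iou_max=0.3, top_k=5):
    """Keep the top-scoring proposals that overlap every known ground-truth
    box by less than iou_max, treating them as candidate 'unknowns'."""
    if known_gt.numel() > 0:
        overlap = box_iou(proposals, known_gt).max(dim=1).values
        keep = overlap < iou_max
        proposals, scores = proposals[keep], scores[keep]
    order = scores.argsort(descending=True)[:top_k]
    return proposals[order]
```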



Pretraining for Class-aware Object Detection: Effect of using MDef-DETR proposals instead of Selective Search proposals for pre-training DETReg.



Evaluation

The provided codebase contains pre-computed detections for all datasets, generated using our MDef-DETR model. The directory structure is as follows:

-> README.md
-> LICENSE
-> get_eval_metrics.py
-> get_multi_dataset_eval_metrics.py
-> data
    -> voc2007
        -> combined.pkl
    -> coco
        -> combined.pkl
    -> kitti
        -> combined.pkl
    -> kitchen
        -> combined.pkl
    -> clipart
        -> combined.pkl
    -> comic
        -> combined.pkl
    -> watercolor
        -> combined.pkl
    -> dota
        -> combined.pkl

Here, combined.pkl contains the combined detections from multiple intuitive text queries for the corresponding dataset (refer to Section 5.1: Enhanced Interactability for details).
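A detection file can be inspected with a few lines of Python. Note that the exact structure of combined.pkl (e.g. a mapping from image identifiers to box arrays) is an assumption here; the evaluation scripts are the authoritative reference.

```python
# Minimal sketch for inspecting a pre-computed detections file. The exact
# contents of combined.pkl are an assumption; see get_eval_metrics.py for
# the authoritative format.
import pickle

with open("data/coco/combined.pkl", "rb") as f:
    detections = pickle.load(f)

print(type(detections))
if isinstance(detections, dict):
    key = next(iter(detections))
    print(key, detections[key])
```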

Download the annotations for all datasets and arrange them as shown below. Note that the scripts expect COCO annotations in the standard COCO format and annotations of all other datasets in VOC format.

...
...
-> data
    -> voc2007
        -> combined.pkl
        -> Annotations
    -> coco
        -> combined.pkl
        -> instances_val2017_filt.json
    -> kitti
        -> combined.pkl
        -> Annotations
        ...
    -> kitchen
        -> combined.pkl
        -> Annotations
    -> clipart
        -> combined.pkl
        -> Annotations
    -> comic
        -> combined.pkl
        -> Annotations
    -> watercolor
        -> combined.pkl
        -> Annotations
    -> dota
        -> combined.pkl
        -> Annotations

Once the above directory structure is created, follow these steps to calculate the metrics:

  1. Install numpy
$ pip install numpy
  2. Calculate the metrics
$ python get_multi_dataset_eval_metrics.py

The calculated metrics will be stored in a data.csv file in the same directory.
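Conceptually, the evaluation measures class-agnostic recall: a ground-truth box counts as recalled if some proposal overlaps it sufficiently. Below is a minimal sketch of such a computation (assuming xyxy boxes and an IoU threshold of 0.5; not the repository's exact metric code).

```python
# Minimal sketch of class-agnostic recall with numpy. A ground-truth box is
# recalled if any proposal overlaps it with IoU >= thresh. Assumes xyxy boxes.
import numpy as np

def iou_one_to_many(box, boxes):
    # box: [4], boxes: [N, 4], both in xyxy format
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def class_agnostic_recall(gt_boxes, proposals, thresh=0.5):
    gt_boxes, proposals = np.asarray(gt_boxes), np.asarray(proposals)
    if len(gt_boxes) == 0:
        return 1.0
    if len(proposals) == 0:
        return 0.0
    hits = sum(iou_one_to_many(gt, proposals).max() >= thresh for gt in gt_boxes)
    return hits / len(gt_boxes)
```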


Citation

If you use our work, please consider citing:

@article{Maaz2021Multimodal,
    title={Multi-modal Transformers Excel at Class-agnostic Object Detection},
    author={Muhammad Maaz and Hanoona Rasheed and Salman Khan and Fahad Shahbaz Khan and Rao Muhammad Anwer and Ming-Hsuan Yang},
    journal={ArXiv 2111.11430},
    year={2021}
}

Contact

Should you have any questions, please contact [email protected] or [email protected].

🚀 Note: The repository contains the minimal evaluation code. The complete training and inference scripts, along with pretrained models, will be released soon. Stay tuned!

Comments
  • aligning image text pairs


    I have a question on the paper: you train on aligned image-text pairs. How do you create this alignment? Is it the same way as in MDETR? I did not fully understand from the paper, especially for non-natural images like satellite or medical images.

    opened by nikky4D 6
  • Loading checkpoints for inference


    Which checkpoints in the drive link you provided will load correctly in the default MDef-DETR model without any errors? I'm getting missing/unexpected keys errors.

    documentation 
    opened by KaleemW 4
  • Is EMA used in this work?


    Hello, thanks for your great work. I have a question about the usage of Exponential Moving Average (EMA) in this paper, hoping you can provide me with some clues; the paper does not seem to detail this part. As far as I know, MDETR uses EMA and evaluates with the EMA model. Is it used in this work as well? If so, why should we evaluate with the EMA model rather than the original one?

    opened by JacobYuan7 4
  • one of the variables needed for gradient computation has been modified by an inplace operation


    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [2, 20]] is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

    This error terminates the training procedure when training mdef_detr with the PyTorch environment you advise (torch==1.8.0+cu111).

    I also found that the variable 'transformer.text_encoder.pooler.dense.weight' does not receive a gradient, which may be the main reason for this error.

    opened by xushilin1 2
  • Loading the Faster RCNN checkpoint


    Greetings

    The readme states: (Feb 01, 2022) Training codes for MDef-DETR and MDef-DETR minus Language models are released -> training/README.md Instructions to use class-agnostic object detection behavior of MDef-DETR on different applications are released -> applications/README.md All the pretrained models (MDef-DETR, Def-DETR, MDETR, DETReg, Faster-RCNN, RetinaNet, ORE, and others), along with the instructions to reproduce the results are released -> this link

    Following the link to the Google Drive only provides the model weights for the Faster-RCNN, with no instructions on how to load them or which framework to use. I have tried creating a Faster-RCNN ResNet-101 model with PyTorch, but when I load the weights it states that the layer names do not match. Any guidance would be much appreciated.

    Best regards Martin

    uni-modal-detectors 
    opened by MartinPedersenpp 2
  • Need to understand how to import weights


    Hello,

    Firstly, I'd like to congratulate you for bringing this amazing work. Class agnostic object detection is much needed currently in the industry and this would be a great way to solve the problem.

    I wanted to test your model on some custom data. However, I cannot import the pre-trained weights from the link you provided. I can see the zip file, but I couldn't find a way to import them. I'm using OpenCV to import weights, and it asks for a config file as well as a .weights file.

    Could you please tell me which library to use to import the weights when working in a Jupyter notebook?

    Thank you,

    opened by abhi-vellala 2
  • pretrain data download


    Would it be possible to split the pretraining data into multiple separate zip files? I am downloading the data from Google Drive (https://drive.google.com/drive/folders/1-3kAsyZIVFbNelRXrF93Y5tMgOypv2jV), but I cannot complete the download because of the Google Drive session time limit (less than 1 hour) and my limited network bandwidth.

    documentation 
    opened by zhouxingguang 1
  • Training code release


    This pull request adds

    • Training codes for MDef-DETR and MDef-DETR minus Language models
    • Instructions to use class-agnostic object detection behavior of MDef-DETR on different applications
    • All the pre-trained models (MDef-DETR, Def-DETR, MDETR, DETReg, Faster-RCNN, RetinaNet, ORE, and others), along with the instructions to reproduce the results
    opened by mmaaz60 0
  • Questions about your training procedure?


    To my understanding, you use image-text pairs as inputs and only bbox annotations as supervision signals, without any class labels. Is that right?

    opened by GYslchen 1
  • Questions about your pretrained model


    Does the pre-trained model you provide cover the categories in the LVIS dataset? If I want to do open-world object detection on LVIS, can I directly use your pre-trained model to generate proposals, or do I need to filter the pretraining data so that it doesn't contain any objects from LVIS?

    opened by chengsilin 1
  • How to generate 'tokens_positive' annotations from a detection dataset like Objects365?


    I found 'tokens_positive' used in your annotation files. Could you please release the code for processing detection data like COCO to obtain the 'tokens_positive' annotations?

    documentation 
    opened by zhouxingguang 1
Releases(v1.0)
  • v1.0(Feb 1, 2022)

    • Training codes for MDef-DETR and MDef-DETR minus Language models are released -> training/README.md
    • Instructions to use class-agnostic object detection behavior of MDef-DETR on different applications are released -> applications/README.md
    • All the pretrained models (MDef-DETR, Def-DETR, MDETR, DETReg, Faster-RCNN, RetinaNet, ORE, and others), along with the instructions to reproduce the results are released -> this link
    Source code(tar.gz)
    Source code(zip)
  • v0.1(Nov 25, 2021)

    Evaluation Code & Pre-trained Models

    • Releases evaluation code for MDef-DETR and 'MDef-DETR w/o Language Branch' model
    • Releases the pre-trained weights for both models
    • Releases the pre-computed predictions for both the models
    Source code(tar.gz)
    Source code(zip)
Owner
Muhammad Maaz
An Electrical Engineer with experience in Computer Vision software development. Skilled in Machine Learning, Deep Learning and Computer Vision.