Shared Attention for Multi-label Zero-shot Learning

Overview

This repository contains the implementation of Shared Attention for Multi-label Zero-shot Learning.

In this work, we address zero-shot multi-label learning for recognizing all seen and unseen labels using a shared multi-attention method with a novel training mechanism.


Prerequisites

  • Python 3.x
  • TensorFlow 1.8.0
  • sklearn
  • matplotlib
  • skimage
  • scipy==1.4.1

Data Preparation

Please download and extract the vgg_19 model (http://download.tensorflow.org/models/vgg_19_2016_08_28.tar.gz) into ./model/vgg_19. Make sure the extracted checkpoint is named vgg_19.ckpt.
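
If you prefer to script this step, here is a minimal Python sketch using only the standard library; the URL and target directory are taken from the instructions above, so adjust them if your layout differs:

import os
import tarfile
import urllib.request

VGG_URL = "http://download.tensorflow.org/models/vgg_19_2016_08_28.tar.gz"
TARGET_DIR = "./model/vgg_19"

os.makedirs(TARGET_DIR, exist_ok=True)
archive_path = os.path.join(TARGET_DIR, "vgg_19_2016_08_28.tar.gz")

# Download the archive (roughly 500 MB) if it is not already present.
if not os.path.exists(archive_path):
    urllib.request.urlretrieve(VGG_URL, archive_path)

# Extract; the archive contains vgg_19.ckpt, which the feature-extraction scripts expect.
with tarfile.open(archive_path, "r:gz") as tar:
    tar.extractall(TARGET_DIR)

print(os.listdir(TARGET_DIR))  # should list vgg_19.ckpt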

NUS-WIDE

  1. Please download NUS-WIDE images and meta-data into ./data/NUS-WIDE folder according to the instructions within the folders ./data/NUS-WIDE and ./data/NUS-WIDE/Flickr.

  2. To extract features into TensorFlow storage format, please run:

python ./extract_data/extract_full_NUS_WIDE_images_VGG_feature_2_TFRecord.py			#`data_set` == `Train`: create NUS_WIDE_Train_full_feature_ZLIB.tfrecords
python ./extract_data/extract_full_NUS_WIDE_images_VGG_feature_2_TFRecord.py			#`data_set` == `Test`: create NUS_WIDE_Test_full_feature_ZLIB.tfrecords

Please change the data_set variable in the script to Train and Test to create NUS_WIDE_Train_full_feature_ZLIB.tfrecords and NUS_WIDE_Test_full_feature_ZLIB.tfrecords, respectively.
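
After both runs, you can sanity-check the generated files by counting the serialized examples. The sketch below uses the TensorFlow 1.x TFRecord reader with ZLIB compression; the output directory is an assumption, so point it to wherever the extraction script writes the files:

import tensorflow as tf  # TensorFlow 1.x

def count_records(path):
    # Count serialized examples in a ZLIB-compressed TFRecord file.
    options = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.ZLIB)
    return sum(1 for _ in tf.python_io.tf_record_iterator(path, options=options))

for split in ["Train", "Test"]:
    path = "./data/NUS-WIDE/NUS_WIDE_%s_full_feature_ZLIB.tfrecords" % split  # assumed location
    print(split, count_records(path))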

Open Images

  1. Please download the Open Images URLs and annotations into the ./data/OpenImages folder according to the instructions within the folders ./data/OpenImages/2017_11 and ./data/OpenImages/2018_04.

  2. To crawl images from the web, please run the script:

python ./download_imgs/asyn_image_downloader.py 					#`data_set` == `train`: download images into `./image_data/train/`
python ./download_imgs/asyn_image_downloader.py 					#`data_set` == `validation`: download images into `./image_data/validation/`
python ./download_imgs/asyn_image_downloader.py 					#`data_set` == `test`: download images into `./image_data/test/`

Please change the data_set variable in the script to train, validation, and test to download different data splits.
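
For reference, the idea behind the downloader is simple concurrent fetching of (image_id, url) pairs. The sketch below is not the repository's asyn_image_downloader.py; the CSV file name, its column layout, and the output directory are assumptions made purely for illustration:

import csv
import os
import urllib.request
from concurrent.futures import ThreadPoolExecutor

OUT_DIR = "./image_data/train/"                              # matches the `train` split above
URL_CSV = "./data/OpenImages/2018_04/train_image_urls.csv"   # hypothetical file name

def fetch(row):
    image_id, url = row[0], row[1]
    dest = os.path.join(OUT_DIR, image_id + ".jpg")
    if os.path.exists(dest):
        return
    try:
        urllib.request.urlretrieve(url, dest)
    except Exception as exc:  # skip broken or removed URLs
        print("failed:", image_id, exc)

os.makedirs(OUT_DIR, exist_ok=True)
with open(URL_CSV) as f:
    rows = list(csv.reader(f))[1:]  # skip header row

with ThreadPoolExecutor(max_workers=16) as pool:
    pool.map(fetch, rows)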

  3. To extract features into TensorFlow storage format, please run:

python ./extract_data/extract_images_VGG_feature_2_TFRecord.py						#`data_set` == `train`: create train_feature_2018_04_ZLIB.tfrecords
python ./extract_data/extract_images_VGG_feature_2_TFRecord.py						#`data_set` == `validation`: create validation_feature_2018_04_ZLIB.tfrecords
python ./extract_data/extract_test_seen_unseen_images_VGG_feature_2_TFRecord.py			        #`data_set` == `test`:  create OI_seen_unseen_test_feature_2018_04_ZLIB.tfrecords

Please change the data_set variable in the extract_images_VGG_feature_2_TFRecord.py script to train or validation to extract features from the corresponding data split.


Training and Evaluation

NUS-WIDE

  1. To train and evaluate zero-shot learning model on full NUS-WIDE dataset, please run:
python ./zeroshot_experiments/NUS_WIDE_zs_rank_Visual_Word_Attention.py

Open Images

  1. To train our framework, please run:
python ./multilabel_experiments/OpenImage_rank_Visual_Word_Attention.py				#create a model checkpoint in `./results`
  2. To evaluate zero-shot performance, please run:
python ./zeroshot_experiments/OpenImage_evaluate_top_multi_label.py					#set `evaluation_path` to the model checkpoint created in step 1) above

Please set the evaluation_path variable to the model checkpoint created in step 1 above.


Model Checkpoint

We also include the checkpoint of the zero-shot model on NUS-WIDE for fast evaluation (./results/release_zs_NUS_WIDE_log_GPU_7_1587185916d2570488/)
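
The evaluation script above is the intended way to use this checkpoint. If you only want to verify that it restores cleanly, a minimal TensorFlow 1.x sketch is shown below; it assumes the directory contains a standard TF 1.x checkpoint with a matching .meta graph file:

import tensorflow as tf  # TensorFlow 1.x

ckpt_dir = "./results/release_zs_NUS_WIDE_log_GPU_7_1587185916d2570488/"
ckpt_path = tf.train.latest_checkpoint(ckpt_dir)
assert ckpt_path is not None, "no checkpoint found in %s" % ckpt_dir

with tf.Session() as sess:
    # Rebuild the graph from the .meta file and restore the saved variables.
    saver = tf.train.import_meta_graph(ckpt_path + ".meta")
    saver.restore(sess, ckpt_path)
    print("restored:", ckpt_path)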


Citation

If this code is helpful for your research, we would appreciate it if you cite our work:

@article{Huynh-LESA:CVPR20,
  author = {D.~Huynh and E.~Elhamifar},
  title = {A Shared Multi-Attention Framework for Multi-Label Zero-Shot Learning},
  journal = {{IEEE} Conference on Computer Vision and Pattern Recognition},
  year = {2020}}