[ICCV 2021 Oral] Just Ask: Learning to Answer Questions from Millions of Narrated Videos

Overview


Webpage | Demo | Paper


This repository provides the code for our paper, including:

  • Data downloading instructions, including our released iVQA and HowToVQA69M datasets
  • Data preprocessing and feature extraction scripts, as well as preprocessed data and features
  • VideoQA automatic generation pipeline
  • Training scripts and pretrained checkpoints, both for pretraining and downstream VideoQA datasets
  • Evaluation scripts

Paths and Requirements

Fill in the empty paths in the file global_parameters.py.
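
For reference, a minimal sketch of what the filled-in file could look like (all directory values below are placeholders and the exact set of variables may differ):

# global_parameters.py -- minimal example; replace every placeholder path
import os

DEFAULT_DATASET_DIR = "/path/to/datasets"      # root folder for the VideoQA datasets
DEFAULT_CKPT_DIR = "/path/to/checkpoints"      # pretrained and finetuned checkpoints
DEFAULT_MODEL_DIR = "/path/to/models"          # S3D weights, Punctuator2 model, transformers cache
HOWTO_PATH = os.path.join(DEFAULT_DATASET_DIR, "HowTo100M")  # HowTo100M annotations
HOWTO_FEATURES_PATH = "/path/to/howto100m_features"          # one S3D feature file per video
SSD_PATH = "/ssd/howto_features"               # fast-access copy read on the fly during pretraining
QG_REPO_DIR = "/path/to/question_generation"   # cloned question-generation repository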

To install requirements, run:

pip install -r requirements.txt

Quick Start

Follow this section if you wish to start VideoQA training or inference quickly.

For downstream datasets

To download pretrained checkpoints, pre-processed data and features, run:

bash download/download_checkpoints.sh <DEFAULT_CKPT_DIR>
bash download/download_downstream.sh <DEFAULT_DATASET_DIR>

This requires about 8 GB of free space in DEFAULT_CKPT_DIR and 3.6 GB in DEFAULT_DATASET_DIR.

For HowToVQA69M Pretraining

If you want to reproduce the pretraining, download HowToVQA69M:

bash download/download_howtovqa.sh <DEFAULT_DATASET_DIR>

This requires about 6 GB of free space in DEFAULT_DATASET_DIR. You will also need to download the HowTo100M video features from the data providers and store them in HOWTO_FEATURES_PATH.

Long Start

Follow this section if you wish to reproduce the data preprocessing, video feature extraction, or HowToVQA69M generation procedure.

Download Raw Data


The following folders should be created in DEFAULT_DATASET_DIR; each should also contain a video subfolder with the videos downloaded for that dataset.

HowToVQA69M: We provide the HowToVQA69M dataset at this link. The HowToVQA69M folder should contain howtovqa.pkl, train_howtovqa.csv and val_howtovqa.csv.

iVQA: We provide the iVQA dataset at this link. The iVQA folder should contain train.csv, val.csv and test.csv.

MSRVTT-QA: Download it from the data providers. The MSRVTT-QA folder should contain train_qa.json, val_qa.json, test_qa.json, as well as train_val_videodatainfo.json and test_videodatainfo.json. The last two files are from the MSR-VTT dataset and are used to filter out video IDs in HowTo100M that are in the validation and test sets of MSRVTT-QA.

MSVD-QA: Download it from the data providers. The MSVD-QA folder should contain train_qa.json, val_qa.json, test_qa.json and youtube_mapping.txt. The last file is used to filter out video IDs in HowTo100M that are in the validation and test sets of MSVD-QA.

ActivityNet-QA: Download it from the data providers. The ActivityNet-QA folder should contain train_q.json, train_a.json, val_q.json, val_a.json, test_q.json and test_a.json.

How2QA: Download it from the data providers. The How2QA folder should contain how2QA_train_release.csv and how2QA_val_release.csv.

HowTo100M: Download it from the data providers. The HowTo100M folder should contain caption_howto100m_with_stopwords.pkl and s3d_features.csv. Note that for the VQA-T pretraining on HowTo100M baseline, we also perform zero-shot validation on YouCook2 and MSR-VTT video retrieval. We followed MIL-NCE for the preprocessing of these datasets. The YouCook2 folder should contain a pickle file with processed data and features, youcook_unpooled_val.pkl, and the MSR-VTT folder should contain a processed data file, MSRVTT_JSFUSION_test.csv, and a feature file, msrvtt_test_unpooled_s3d_features.pth.

Data Preprocessing


VideoQA: To process data for each VideoQA dataset, use:

python preproc/preproc_ivqa.py
python preproc/preproc_msrvttqa.py
python preproc/preproc_msvdqa.py
python preproc/preproc_activitynetqa.py
python preproc/preproc_how2qa.py

This will save train, validation and test dataframe files (train.csv, val.csv, test.csv) and, for the open-ended setting, the vocabulary map (vocab.json) in each dataset folder. Note that the How2QA preprocessing script should be used after feature extraction (see below) and will also merge features into one file.
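
As a quick sanity check, the resulting files can be inspected with pandas and json; a minimal sketch (the path below is a placeholder, and the exact columns depend on the dataset):

import json
import pandas as pd

dataset_dir = "/path/to/datasets/iVQA"  # placeholder

# Preprocessed splits written by the preprocessing script
train = pd.read_csv(f"{dataset_dir}/train.csv")
print(len(train), "training samples")
print(train.columns.tolist())  # inspect which columns were produced

# Answer vocabulary map for the open-ended setting
with open(f"{dataset_dir}/vocab.json") as f:
    vocab = json.load(f)
print(len(vocab), "answers in the vocabulary")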

HowTo100M: To preprocess HowTo100M by removing potential intersections with the validation and test sets of the VideoQA datasets and removing repetitions in the ASR data, use:

python preproc/howto100m_remove_intersec.py
python preproc/howto100m_remove_repet.py

This will save caption_howto100m_sw_nointersec.pickle, caption_howto100m_sw_nointersec_norepeat.pickle and s3d_features_nointersec.csv in HOWTO_PATH.
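
For illustration, here is a simplified sketch of the intersection-removal step, assuming the caption pickle is a dictionary keyed by YouTube video ID and that the downstream val/test video IDs have already been collected (the blacklist below is a placeholder):

import pickle

captions_path = "caption_howto100m_with_stopwords.pkl"
blacklist = {"videoid1", "videoid2"}  # hypothetical: val/test video IDs of the downstream datasets

with open(captions_path, "rb") as f:
    captions = pickle.load(f)  # assumed layout: {video_id: ASR captions}

# Drop any HowTo100M video whose ID appears in a downstream validation or test set
filtered = {vid: caps for vid, caps in captions.items() if vid not in blacklist}

with open("caption_howto100m_sw_nointersec.pickle", "wb") as f:
    pickle.dump(filtered, f)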

Extract video features


The extract folder provides the code to extract features with the S3D feature extractor. This requires downloading the S3D model weights available at this repository. The s3d_howto100m.pth checkpoint and s3d_dict.npy dictionary should be in DEFAULT_MODEL_DIR.

Extraction: For each dataset, prepare a csv file with columns video_path (typically of the form <dataset_path>/video/<video_path>) and feature_path (typically of the form <dataset_path>/features/<video_path>.npy). Then use (you may launch this script on multiple GPUs to speed up the extraction process):

python extract/extract.py --csv <csv_path>
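
For reference, a minimal sketch of how such a csv could be assembled with pandas (the dataset path and the video/ and features/ layout follow the convention above; the output filename is arbitrary):

import os
import pandas as pd

dataset_path = "/path/to/datasets/iVQA"  # placeholder
video_dir = os.path.join(dataset_path, "video")
feature_dir = os.path.join(dataset_path, "features")
os.makedirs(feature_dir, exist_ok=True)

videos = sorted(os.listdir(video_dir))
df = pd.DataFrame({
    "video_path": [os.path.join(video_dir, v) for v in videos],
    "feature_path": [os.path.join(feature_dir, v + ".npy") for v in videos],
})
df.to_csv(os.path.join(dataset_path, "extract.csv"), index=False)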

Merging: To merge the extracted features into a single file for each VideoQA dataset, use (for ActivityNet-QA, which contains long videos, add --pad 120):

python extract/merge_features.py --folder <features_path> \
--output_path <DEFAULT_DATASET_DIR>/s3d.pth --dataset <dataset>
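
After merging, the resulting file can be loaded with torch for a quick check; this sketch assumes it stores a mapping from video ID to a feature tensor:

import torch

features = torch.load("/path/to/datasets/iVQA/s3d.pth")  # placeholder path
video_id, feats = next(iter(features.items()))
print(video_id, feats.shape)  # one temporal sequence of S3D features per video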

For HowTo100M, the features should be stored in HOWTO_FEATURES_PATH, one file per video. SSD_PATH should preferably point to an SSD for fast on-the-fly reading during pretraining.

HowToVQA69M Generation


This requires downloading the pretrained BRNN model weights from Punctuator2. The INTERSPEECH-T-BRNN.pcl file should be in DEFAULT_MODEL_DIR.

Punctuating: First, we punctuate the speech data at the video level and split the video into clips temporally aligned with the inferred sentences (you may launch this script on multiple CPUs to speed up the process):

python videoqa_generation/punctuate.py

Merging inferred speech sentences: Second, we merge the punctuated data into one file:

python videoqa_generation/merge_punctuations.py

Extracting answers: Third, we extract answers from the speech transcripts. This requires having cloned this repository in QG_REPO_DIR. Then use (you may launch this script on multiple GPUs to speed up the process):

python videoqa_generation/extract_answers.py

Merging extracted answers: Fourth, we merge the extracted answers into one file:

python videoqa_generation/merge_answers.py

Generating questions: Fifth, we generate question-answer pairs from the speech transcripts and the extracted answers. Use (you may launch this script on multiple GPUs to speed up the process):

python videoqa_generation/generate_questions.py

Merging generated question-answer pairs: Finally, we merge the generated question-answer pairs into one file (this will save howtovqa.pkl, train_howtovqa.csv and val_howtovqa.csv):

python videoqa_generation/merge_qas.py

Training

Pretraining

The DistilBERT tokenizer and model checkpoints will be automatically downloaded from Hugging Face into DEFAULT_MODEL_DIR/transformers.
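
Roughly, this download uses the standard Hugging Face caching mechanism, as sketched below (the exact model class and checkpoint name used in the code are assumptions):

from transformers import DistilBertModel, DistilBertTokenizer

cache_dir = "/path/to/models/transformers"  # i.e. DEFAULT_MODEL_DIR/transformers

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased", cache_dir=cache_dir)
model = DistilBertModel.from_pretrained("distilbert-base-uncased", cache_dir=cache_dir)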

Training VQA-T on HowToVQA69M: To train on HowToVQA69M with the contrastive loss and the MLM loss (this takes less than 48 hours on 8 NVIDIA Tesla V100 GPUs), run:

python main_howtovqa.py --dataset="howtovqa" --epochs=10 --checkpoint_dir="pthowtovqa" \
--batch_size=128 --batch_size_val=256 --n_pair=32 --freq_display=10

Note that a validation is run once per epoch; it consists of retrieving the correct answer within the batch, given the video and the question.
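
Conceptually, this in-batch retrieval metric can be sketched as follows (the embedding shapes are assumptions; the actual computation lives in the training code):

import torch

def in_batch_answer_retrieval_accuracy(vq_emb, ans_emb):
    # vq_emb: (B, D) fused video-question embeddings; ans_emb: (B, D) answer embeddings.
    # The correct answer for sample i is ans_emb[i], i.e. the diagonal of the score matrix.
    scores = vq_emb @ ans_emb.t()           # (B, B) similarity matrix
    pred = scores.argmax(dim=1)             # retrieved answer index for each sample
    target = torch.arange(vq_emb.size(0))   # ground-truth indices
    return (pred == target).float().mean().item()

# toy usage with random embeddings
print(in_batch_answer_retrieval_accuracy(torch.randn(256, 512), torch.randn(256, 512)))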

Baselines: To pretrain QA-T on HowToVQA69M, add --baseline qa to the previous command. To train VQA-T on HowTo100M with the MLM and cross-modal matching objectives (this takes less than 2 days on 8 NVIDIA Tesla V100 GPUs), run:

python main_htm.py --dataset="howto100m" --epochs=10 --checkpoint_dir="pthtm" \ 
--batch_size=128 --batch_size_val=3500 --n_pair=32 --freq_display=10

Note that the previous command runs a zero-shot video retrieval validation on YouCook2 and MSR-VTT once per epoch.

Training on downstream VideoQA datasets

Finetuning: To finetune a pretrained model on a downstream VideoQA dataset (for MSRVTT-QA, the largest downstream dataset, this takes less than 4 hours on 4 NVIDIA Tesla V100 GPUs), run:

python main_videoqa.py --checkpoint_dir=ft<dataset> --dataset=<dataset> --lr=0.00001 \ 
--pretrain_path=<CKPT_PATH>

Training from scratch: To train VQA-T from scratch, simply run the previous script without setting pretrain_path.

Available checkpoints

| Training data | iVQA | MSRVTT-QA | MSVD-QA | ActivityNet-QA | How2QA | URL | Size |
| --- | --- | --- | --- | --- | --- | --- | --- |
| HowToVQA69M | 12.2 | 2.9 | 7.5 | 12.2 | 51.1 | Drive | 600MB |
| HowToVQA69M + iVQA | 35.4 | | | | | Drive | 600MB |
| HowToVQA69M + MSRVTT-QA | | 41.5 | | | | Drive | 600MB |
| HowToVQA69M + MSVD-QA | | | 43.6 | | | Drive | 600MB |
| HowToVQA69M + ActivityNet-QA | | | | 38.9 | | Drive | 600MB |
| HowToVQA69M + How2QA | | | | | 84.4 | Drive | 600MB |

Inference

Evaluating on downstream VideoQA datasets

VQA-T: To evaluate VQA-T on a downstream VideoQA dataset, run (for zero-shot VideoQA, simply use the checkpoint trained on HowToVQA69M only):

python main_videoqa.py --checkpoint_dir=ft<dataset> --dataset=<dataset> \ 
--pretrain_path=<CKPT_PATH> --test 1

Baselines: For QA-T, use the command above with the corresponding checkpoint and add --baseline qa. For zero-shot VideoQA with VQA-T pretrained on HowTo100M, run:

python eval_videoqa_cm.py --checkpoint_dir=pthtmzeroshot<dataset> --dataset=<dataset> \ 
--pretrain_path=<CKPT_PATH>

Detailed evaluation

Using a trained checkpoint, you can perform an evaluation broken down by question type and answer quartile with:

python eval_videoqa.py --dataset <dataset> --pretrain_path <CKPT_PATH>
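
If you prefer a custom breakdown from saved predictions, a pandas groupby is enough; in this sketch the prediction file and its columns (type, answer, correct) are hypothetical and not the repository's format:

import pandas as pd

preds = pd.read_csv("predictions.csv")  # hypothetical columns: question_id, type, answer, correct (0/1)

# Accuracy per question type
print(preds.groupby("type")["correct"].mean())

# Accuracy per answer-frequency quartile (quartiles of the answer counts over the test set)
counts = preds["answer"].map(preds["answer"].value_counts())
preds["quartile"] = pd.qcut(counts, 4, labels=False, duplicates="drop")
print(preds.groupby("quartile")["correct"].mean())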

VideoQA Demo

Using a trained checkpoint, you can also run a VideoQA example on a video file of your choice with a question of your choice. For that, use (the dataset argument is only used to define the answer vocabulary):

python demo_videoqa.py --dataset <dataset> --pretrain_path <CKPT_PATH> \ 
--question_example <question> --video_example <video_path>

Note that we also host an online demo at this link.

Misc.

The misc folder contains a notebook with the code for the plots and data statistics shown in the paper.

It also contains the HTML code used for iVQA data collection on Amazon Mechanical Turk.

Moreover, you can find the manually evaluated samples from generated data at this link.

Finally, you can find the HTML and Python code for the online demo.

Acknowledgements

The video feature extraction code is inspired by this repository. The model implementation of our multi-modal transformer (as well as the masked language modeling setup) is inspired by Hugging Face. The comparison with Heilman et al. was done using the original Java implementation.

Citation

If you found this work useful, consider giving this repository a star and citing our paper as follows:

@InProceedings{Yang_2021_ICCV,
    author    = {Yang, Antoine and Miech, Antoine and Sivic, Josef and Laptev, Ivan and Schmid, Cordelia},
    title     = {Just Ask: Learning To Answer Questions From Millions of Narrated Videos},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {1686-1697}
}