Iconary

This is the code for our paper "Iconary: A Pictionary-Based Game for Testing Multimodal Communication with Drawings and Text". It includes our datasets, the models we trained, and our training/evaluation scripts.

Install

Install Python >= 3.6 and PyTorch >= 1.7.0. This project has been tested with torch==1.7.1, but later versions might work.
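For example, to match the tested version (the exact command depends on your platform and CUDA version, so see pytorch.org for the right variant):

pip install torch==1.7.1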

Then install the extra requirements:

pip install -r requirements.txt

Finally, add the top-level directory to PYTHONPATH:

cd iconary
export PYTHONPATH=`pwd`

Data

Datasets will be downloaded and cached automatically as needed; file_paths.py shows where the files will be stored. By default, datasets are stored in ~/data/iconary.
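If you want the data to live somewhere else, one generic workaround (not a repo-specific feature) is to symlink the default cache directory to another location before the first download; the storage path below is a placeholder:

mkdir -p ~/data /path/to/big-disk/iconary
ln -s /path/to/big-disk/iconary ~/data/iconary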

If you want to download the data manually, the datasets can be downloaded here:

We release the complete datasets without held-out labels, since computing the automatic metrics for both the Guesser and the Drawer requires the entire game to be known. Models should only be trained on the train set, and researchers should avoid looking at or evaluating on the test sets as much as possible.

Models

We release the following models on S3:

Guesser:

  • TGuesser: s3://ai2-vision-iconary/public-models/tguesser-3b/
  • w/T5-Large: s3://ai2-vision-iconary/public-models/tguesser-large/
  • w/T5-Base: s3://ai2-vision-iconary/public-models/tguesser-base/

Drawer:

  • TDrawer: s3://ai2-vision-iconary/public-models/tdrawer-large/
  • w/T5-Base: s3://ai2-vision-iconary/public-models/tdrawer-base/

To use these models, download the entire directory. For example:

mkdir -p models
aws s3 cp --recursive s3://ai2-vision-iconary/public-models/tguesser-base models/tguesser-base

Train

Guesser

Train TGuesser with:

python iconary/experiments/train_guesser.py --pretrained_model t5-base --output_dir models/tguesser-base

Note that our full model uses --pretrained_model t5-b3, but that requires a GPU with more than 16GB of memory to run.
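For reference, a full-model training command would look like the following sketch, assuming the same flags apply at that scale (the output directory name here simply mirrors the released model):

python iconary/experiments/train_guesser.py --pretrained_model t5-b3 --output_dir models/tguesser-3b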

Drawer

Train TDrawer with:

python iconary/experiments/train_drawer.py --pretrained_model t5-base --output_dir models/tdrawer-base --grad_accumulation 2

Note that our full model uses --pretrained_model t5-large, but that requires a GPU with more than 16GB of memory to run.
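A full-model run would look like this sketch; the --grad_accumulation value is carried over from the base command above and may need tuning for your hardware:

python iconary/experiments/train_drawer.py --pretrained_model t5-large --output_dir models/tdrawer-large --grad_accumulation 2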

Automatic Evaluation

These scripts generate drawings/guesses for human/human games and compute automatic metrics from those drawings/guesses. Note that our generation scripts will use all GPUs they can find with torch.cuda.device_count(); to control where they run, use the CUDA_VISIBLE_DEVICES environment variable.
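For example, to restrict generation to a single GPU when running the guess-generation command shown in the next subsection:

CUDA_VISIBLE_DEVICES=0 python iconary/experiments/generate_guesses.py path/to/model --dataset ood-valid --output_file guesses.json --unk_boost 2.0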

Guesser

To compute automatic metrics for the Guesser, first generate guesses as:

python iconary/experiments/generate_guesses.py path/to/model --dataset ood-valid --output_file guesses.json --unk_boost 2.0

Note that most of our evaluations are done using --unk_boost 2.0, which implements rare-word boosting.

This script will report our automatic metrics, but they can also be re-computed using:

python iconary/experiments/eval_guesses.py guesses.json

Drawer

Generate drawings with:

python iconary/experiments/generate_drawings.py path/to/model --dataset ood-valid --output_file drawings.json

This script will report our automatic metrics, but they can also be re-computed using:

python iconary/experiments/eval_drawings.py drawings.json

Human/AI Evaluation

Our code for running human/AI games is not currently released. If you are interested in running your own trials, contact us and we can help you follow our human/AI setup.

Cite

If you use this work, please cite:

"Iconary: A Pictionary-Based Game for Testing MultimodalCommunication with Drawings and Text". Christopher Clark, Jordi Salvador, Dustin Schwenk, Derrick Bonafilia, Mark Yatskar, Eric Kolve, Alvaro Herrasti, Jonghyun Choi, Sachin Mehta, Sam Skjonsberg, Carissa Schoenick, Aaron Sarnat, Hannaneh Hajishirzi, Aniruddha Kembhavi, Oren Etzioni, Ali Farhadi. In EMNLP 2021.
