Reference implementation for Structured Prediction with Deep Value Networks


Deep Value Network (DVN)

This code is a python reference implementation of DVNs introduced in

Deep Value Networks Learn to Evaluate and Iteratively Refine Structured Outputs. Michael Gygli, Mohammad Norouzi, Anelia Angelova. ICML 2017. PDF

Note: This code implements only the multi-layer perceptron version used for the multi-label classification experiments (Section 5.1). The segmentation code was written while at Google and is therefore not available.

Requirements

To run this code you need to have tensorflow, numpy, liac-arff, scikit-learn and torchfile installed. Install with

pip install -r requirements.txt

Playing around with a pre-trained Value Net

The pre-trained model for the Bibtex dataset is included in this repository. This allows you to play around with it and its predictions, using our Jupyter notebook.
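
If you just want to see the core mechanism the notebook demonstrates: a trained value network scores an output hypothesis y, and inference refines y by gradient ascent on that score. The snippet below is a minimal, self-contained sketch in which a toy quadratic value function stands in for the trained MLP; the function and variable names are illustrative only and are not part of this repository's API.

import numpy as np

# Toy stand-in for a trained value network: in the real model this would be
# the MLP's predicted value v(x, y) for input features x and label vector y.
def toy_value(y, y_target):
    return 1.0 - np.mean((y - y_target) ** 2)

def toy_value_grad(y, y_target):
    # Gradient of toy_value with respect to y.
    return -2.0 * (y - y_target) / y.size

def refine(y_target, num_steps=30, step_size=1.0):
    # Gradient-ascent inference: start from a neutral hypothesis and
    # iteratively move y towards outputs the value function scores highly.
    y = np.full_like(y_target, 0.5)
    for _ in range(num_steps):
        y = y + step_size * toy_value_grad(y, y_target)
        y = np.clip(y, 0.0, 1.0)  # keep each label score in [0, 1]
    return y

y_target = np.array([1.0, 0.0, 1.0, 0.0])
y_hat = refine(y_target)
print(y_hat, toy_value(y_hat, y_target))  # y_hat approaches the high-value output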

Replicating the experiments in the paper

Bibtex

To replicate the Bibtex numbers reported in the paper, run:

import reproduce_results
# Reproduce results on the bibtex dataset
reproduce_results.run_bibtex()
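
If you prefer to launch this from a shell rather than an interactive Python session, an equivalent one-liner (assuming reproduce_results.py is importable, e.g. by running from the repository root) is

python -c "import reproduce_results; reproduce_results.run_bibtex()"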

By default, the model weights and logs are stored in ./bibtex_dvn. You can monitor the training process with TensorBoard:

tensorboard --logdir ./bibtex_dvn/

In order to understand the training process two quantities are important:

  1. loss: the error in estimating the true value of an output hypothesis.
  2. gt_f1_scores: the true F1 scores of the generated output hypotheses (see the sketch below).

As training progresses, the generated output hypotheses should get better and better. As such, the validation performance reported here closely tracks the performance on the test set. The curve should look roughly like the training-curve figure included in the repository.
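
To make gt_f1_scores concrete: for multi-label classification, the value the network learns to estimate is the F1 score of an output hypothesis against the ground-truth label vector, relaxed so that it is also defined for continuous outputs. Below is a minimal sketch of one such relaxation (element-wise minimum as the intersection); the function name is illustrative and not taken from this codebase.

import numpy as np

def relaxed_f1(y, y_true, eps=1e-8):
    # F1 between a (possibly soft) output hypothesis y and binary labels y_true.
    # The element-wise minimum plays the role of the intersection, so for
    # binary y this reduces to the ordinary F1 score.
    intersection = np.minimum(y, y_true).sum()
    return 2.0 * intersection / (y.sum() + y_true.sum() + eps)

y_true = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
print(relaxed_f1(np.array([1.0, 0.0, 1.0, 0.0, 0.0]), y_true))  # ~0.8 (one label missed)
print(relaxed_f1(y_true, y_true))                               # ~1.0 (perfect prediction)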

Bookmarks

For Bookmarks the splits are not provided on http://mulan.sourceforge.net/datasets-mlc.html. Thus, we use the splits provided by SPEN. To get the data, run:

cd mlc_datasets
wget http://www.cics.umass.edu/~belanger/icml_mlc_data.tar.gz
tar -xvf icml_mlc_data.tar.gz
cd ..

Then, you can reproduce the results with

import reproduce_results
# Reproduce results on the bookmarks dataset
reproduce_results.run_bookmarks()

The model weights and logs are stored in ./bookmarks_dvn/.

Contributors

Michael Gygli, Mohammad Norouzi, Anelia Angelova

Code by Michael Gygli
