Towards the D-Optimal Online Experiment Design for Recommender Selection (KDD 2021)

Overview

This repository contains the implementation for our KDD 2021 paper, Towards the D-Optimal Online Experiment Design for Recommender Selection.

Contact [email protected] or [email protected] for questions.

Running the code

Install packages

pip install -r requirements.txt 

Recommender

We use the recommenders implemented in our project on adversarial counterfactual learning, published at NeurIPS 2020.

  • Step 1: clone the project to your local directory.

  • Step 2: run pip install . to install the library.

Item features

The MovieLens data ml-1m.zip is under the data folder. We need to generate the movie and user features before running the simulations.

cd data && unzip ml-1m.zip
cd ml-1m
python base_embed.py  # This generates the base movie and user feature vectors

Simulation

Assuming you are in the project's main folder:

python run.py  # This runs all simulation routines defined in simulation.py

Optional argument:

usage: System Bandit Simulation [-h] [--dim DIM] [--topk TOPK] [--num_epochs NUM_EPOCHS] [--epsilon EPSILON] [--explore_step EXPLORE_STEP] [--feat_map {onehot,context,armed_context,onehot_context}]
                                [--algo {base,e_greedy,thomson,lin_ct,optimal}]

optional arguments:
  -h, --help            show this help message and exit
  --dim DIM
  --topk TOPK
  --num_epochs NUM_EPOCHS
  --epsilon EPSILON
  --explore_step EXPLORE_STEP
  --feat_map {onehot,context,armed_context,onehot_context}
  --algo {base,e_greedy,thomson,lin_ct,optimal}
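
For example, python run.py --algo optimal --feat_map armed_context --num_epochs 5 selects the optimal algorithm variant with the armed_context feature map; the argument values shown here are only illustrative.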

Major class

Environment

This class implements the simulation logic described in our paper. For each user, we run the get_epoch method, which returns a refreshed simulator based on the last interaction with the user.

Example:

float: """Return the reward given selected arm and the recommendations""" pass # Example usage BanditData = List[Tuple[int, float, Any]] data: BanditData = [] for uidx, recall_set in env.get_epoch(): arm = algo.predict() recommendations = bandit_ins.get_arm(arm).recommend(uidx, recall_set, top_k) reward = env.action(uidx, recommendations) data.append((arm, reward, None)) algo.update(data) algo.record_metric(data) ">
from typing import Any, List, Tuple

class Environment:
    def get_epoch(self, shuffle: bool = True):
        """Return updated environment iterator"""
        return EpochIter(self, shuffle)

    def action(self, uidx: int, recommendations: List[int]) -> float:
        """Return the reward given selected arm and the recommendations"""
        pass

# Example usage
BanditData = List[Tuple[int, float, Any]]
data: BanditData = []
for uidx, recall_set in env.get_epoch():
    arm = algo.predict()
    recommendations = bandit_ins.get_arm(arm).recommend(uidx, recall_set, top_k)
    reward = env.action(uidx, recommendations)
    data.append((arm, reward, None))
algo.update(data)
algo.record_metric(data) 
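
In this snippet, env, algo, bandit_ins, and top_k are assumed to be constructed beforehand (see the simulation routines in simulation.py): each epoch iterates over users, picks an arm (a candidate recommender), asks that recommender for its top-k items from the user's recall set, and feeds the observed reward back to the bandit algorithm.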

BanditAlgorithm

The BanditAlgorithm class implements the interface for the bandit algorithms evaluated in this project.

class BanditAlgorithm:
    def predict(self, *args, **kwds) -> int:
        """Return the estimated return for contextual bandit"""
        pass

    def update(self, data: BanditData):
        """Update the algorithms based on observed (action, reward, context)"""
        pass

    def record_metric(self, data: BanditData):
        """Record the cumulative performance metrics for this algorithm"""
        pass
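
As an illustration only (not part of this repository), a minimal epsilon-greedy sketch of this interface might look as follows; the class name, constructor arguments, and the cumulative-reward metric are assumptions:

import random
from typing import Any, List, Tuple

BanditData = List[Tuple[int, float, Any]]  # same alias as in the example above

class EpsilonGreedy(BanditAlgorithm):
    """Illustrative only: pull a random arm with probability epsilon, otherwise the best arm so far."""

    def __init__(self, n_arms: int, epsilon: float = 0.1):
        self.n_arms = n_arms
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # number of pulls per arm
        self.means = [0.0] * n_arms   # running mean reward per arm
        self.cum_reward = 0.0

    def predict(self, *args, **kwds) -> int:
        """Return the index of the selected arm."""
        if random.random() < self.epsilon:
            return random.randrange(self.n_arms)
        return max(range(self.n_arms), key=lambda a: self.means[a])

    def update(self, data: BanditData):
        """Update the per-arm running means from observed (arm, reward, context) tuples."""
        for arm, reward, _ in data:
            self.counts[arm] += 1
            self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

    def record_metric(self, data: BanditData):
        """Accumulate total reward as a simple cumulative metric."""
        self.cum_reward += sum(reward for _, reward, _ in data)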