Code and data of the ACL 2021 paper: Few-Shot Text Ranking with Meta Adapted Synthetic Weak Supervision

Overview

This repository provides the implementation of MetaAdaptRank, the meta-learning method for reweighting synthetic weak supervision data described in the paper Few-Shot Text Ranking with Meta Adapted Synthetic Weak Supervision.

CONTACT

For any questions, please contact Si Sun by email at [email protected] (emails get the quickest response); we will do our best to help :)

QUICKSTART

Requirements:

  • Python 3.7
  • PyTorch 1.5.0

0/ Data Preparation

First, download and prepare the required data in the data folder: the MS MARCO dataset used for source-domain NLG training (step 1.1), and the target-domain datasets (ClueWeb09, Robust04, or TREC-COVID) used in the later steps.

1 Contrastive Supervision Synthesis

1.1 Source-domain NLG training

  • We train two query generators (QG and ContrastQG) on the MS MARCO dataset using train_nlg.sh in the run_shells folder:

    bash train_nlg.sh
    
  • Optional arguments:

    --generator_mode            choices=['qg', 'contrastqg']
    --pretrain_generator_type   choices=['t5-small', 't5-base']
    --train_file                The path to the source-domain nlg training dataset
    --save_dir                  The path to save the checkpoints; default: ../results
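  • For example, a QG training run might look like this (the --train_file value is an illustrative placeholder for the prepared MS MARCO training file, not a path shipped with the repo):

    bash train_nlg.sh \
        --generator_mode qg \
        --pretrain_generator_type t5-base \
        --train_file ../data/source_data/train_nlg.jsonl \
        --save_dir ../results

    Training the ContrastQG generator is the same invocation with --generator_mode contrastqg.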
    

1.2 Target-domain NLG inference

  • The whole NLG inference pipeline consists of five steps:

    • 1.2.1/ Data preprocessing
    • 1.2.2/ Seed query generation
    • 1.2.3/ BM25 subset retrieval
    • 1.2.4/ Contrastive doc pair sampling
    • 1.2.5/ Contrastive query generation
  • 1.2.1/ Data preprocessing. Convert the target-domain documents into the NLG input format using prepro_nlg_dataset.sh in the preprocess folder:

    bash prepro_nlg_dataset.sh
    
  • Optional arguments:

    --dataset_name          choices=['clueweb09', 'robust04', 'trec-covid']
    --input_path            The path to the target dataset
    --output_path           The path to save the preprocessed data; default: ../data/prepro_target_data
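  • For example, to preprocess Robust04 (the --input_path value is an illustrative placeholder for wherever the raw target corpus is stored):

    bash prepro_nlg_dataset.sh \
        --dataset_name robust04 \
        --input_path ../data/raw_target_data/robust04 \
        --output_path ../data/prepro_target_data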
    
  • 1.2.2/ Seed query generation. Use the trained QG model to generate a seed query for each target document, using nlg_inference.sh in the run_shells folder:

    bash nlg_inference.sh
    
  • Optional arguments:

    --generator_mode            choices='qg'
    --pretrain_generator_type   choices=['t5-small', 't5-base']
    --target_dataset_name       choices=['clueweb09', 'robust04', 'trec-covid']
    --generator_load_dir        The path to the pretrained QG checkpoints.
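  • For example (the --generator_load_dir value is an illustrative placeholder; point it at the QG checkpoint directory saved in step 1.1):

    bash nlg_inference.sh \
        --generator_mode qg \
        --pretrain_generator_type t5-base \
        --target_dataset_name robust04 \
        --generator_load_dir ../results/qg_t5-base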
    
  • 1.2.3/ BM25 subset retrieval. Use BM25 to retrieve a document subset for each seed query, using do_subset_retrieve.sh in the bm25_retriever folder:

    bash do_subset_retrieve.sh
    
  • Optional arguments:

    --dataset_name          choices=['clueweb09', 'robust04', 'trec-covid']
    --generator_folder      choices=['t5-small', 't5-base']
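  • For example, to retrieve subsets for Robust04 using the seed queries from the t5-base generator:

    bash do_subset_retrieve.sh \
        --dataset_name robust04 \
        --generator_folder t5-base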
    
  • 1.2.4/ Contrastive doc pair sampling. Sample contrastive document pairs from the BM25-retrieved subsets using sample_contrast_pairs.sh in the preprocess folder:

    bash sample_contrast_pairs.sh
    
  • Optional arguments:

    --dataset_name          choices=['clueweb09', 'robust04', 'trec-covid']
    --generator_folder      choices=['t5-small', 't5-base']
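  • For example, to sample contrastive pairs for Robust04 from the t5-base retrieval subsets:

    bash sample_contrast_pairs.sh \
        --dataset_name robust04 \
        --generator_folder t5-base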
    
  • 1.2.5/ Contrastive query generation. Use the trained ContrastQG model to generate new queries from the contrastive document pairs, using nlg_inference.sh in the run_shells folder:

    bash nlg_inference.sh
    
  • Optional arguments:

    --generator_mode            choices='contrastqg'
    --pretrain_generator_type   choices=['t5-small', 't5-base']
    --target_dataset_name       choices=['clueweb09', 'robust04', 'trec-covid']
    --generator_load_dir        The path to the pretrained ContrastQG checkpoints.
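  • For example (the --generator_load_dir value is an illustrative placeholder for the ContrastQG checkpoint directory saved in step 1.1):

    bash nlg_inference.sh \
        --generator_mode contrastqg \
        --pretrain_generator_type t5-base \
        --target_dataset_name robust04 \
        --generator_load_dir ../results/contrastqg_t5-base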
    

2 Meta Learning to Reweight

2.1 Data Preprocess

  • Place the contrastive synthetic supervision data (CTSyncSup) in the data/synthetic_data folder:

    • CTSyncSup_clueweb09
    • CTSyncSup_robust04
    • CTSyncSup_trec-covid

    >> example data format

  • Preprocess the target-domain datasets into the 5-fold cross-validation format using run_cv_preprocess.sh in the preprocess folder:

    bash run_cv_preprocess.sh
    
  • Optional arguments:

    --dataset_class         choices=['clueweb09', 'robust04', 'trec-covid']
    --input_path            The path to the target dataset
    --output_path           The path to save the preprocessed data; default: ../data/prepro_target_data
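  • For example (the --input_path value is an illustrative placeholder for the raw target corpus location):

    bash run_cv_preprocess.sh \
        --dataset_class robust04 \
        --input_path ../data/raw_target_data/robust04 \
        --output_path ../data/prepro_target_data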
    

2.2 Train and Test Models

  • The whole process of training and testing MetaAdaptRank consists of three stages, detailed in steps 2.2.1–2.2.5 below:

    • Meta-pretraining. The model is trained on the synthetic weak supervision data, which is reweighted via meta-learning; the training folds of the target dataset serve as the target data that guides the meta-reweighting.

    • Fine-tuning. The meta-pretrained model is further fine-tuned on the training folds of the target dataset.

    • Ensemble and Coor-Ascent. Coordinate Ascent combines the last representation layers of all fine-tuned models, as LeToR features, with the retrieval scores from the base retriever.

  • 2.2.1/ Meta-pretraining using train_meta_bert.sh in the run_shells folder:

    bash train_meta_bert.sh
    

    Optional arguments for meta-pretraining:

    --cv_number             choices=[0, 1, 2, 3, 4]
    --pretrain_model_type   choices=['bert-base-cased', 'BiomedNLP-PubMedBERT-base-uncased-abstract']
    --train_dir             The path to the synthetic weak supervision data
    --target_dir            The path to the target dataset
    --save_dir              The path to save the output files and checkpoints; default: ../results
    

    Complete optional arguments can be seen in config.py in the scripts folder.
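    For example, a meta-pretraining run on fold 0 might look like this (the directory values are illustrative placeholders):

    bash train_meta_bert.sh \
        --cv_number 0 \
        --pretrain_model_type bert-base-cased \
        --train_dir ../data/synthetic_data/CTSyncSup_robust04 \
        --target_dir ../data/prepro_target_data/robust04 \
        --save_dir ../results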

  • 2.2.2/ Fine-tuning using train_metafine_bert.sh in the run_shells folder:

    bash train_metafine_bert.sh
    

    Optional arguments for fine-tuning:

    --cv_number             choices=[0, 1, 2, 3, 4]
    --pretrain_model_type   choices=['bert-base-cased', 'BiomedNLP-PubMedBERT-base-uncased-abstract']
    --train_dir             The path to the target dataset
    --checkpoint_folder     The path to the checkpoint of the meta-pretrained model
    --save_dir              The path to save the output files and checkpoints; default: ../results
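    For example (the --checkpoint_folder value is an illustrative placeholder for the meta-pretrained checkpoint saved in step 2.2.1):

    bash train_metafine_bert.sh \
        --cv_number 0 \
        --pretrain_model_type bert-base-cased \
        --train_dir ../data/prepro_target_data/robust04 \
        --checkpoint_folder ../results/meta_bert_checkpoint \
        --save_dir ../results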
    
  • 2.2.3/ Testing. Test the fine-tuned model to collect LeToR features using test.sh in the run_shells folder:

    bash test.sh
    

    Optional arguments for testing:

    --cv_number             choices=[0, 1, 2, 3, 4]
    --pretrain_model_type   choices=['bert-base-cased', 'BiomedNLP-PubMedBERT-base-uncased-abstract']
    --target_dir            The path to the target evaluation dataset
    --checkpoint_folder     The path to the checkpoint of the fine-tuned model
    --save_dir              The path to save the output files and the features file; default: ../results
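    For example (the paths are illustrative placeholders):

    bash test.sh \
        --cv_number 0 \
        --pretrain_model_type bert-base-cased \
        --target_dir ../data/prepro_target_data/robust04 \
        --checkpoint_folder ../results/metafine_bert_checkpoint \
        --save_dir ../results

    Repeat steps 2.2.1–2.2.3 for each of the five folds (--cv_number 0 through 4) before moving on to the ensemble step.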
    
  • 2.2.4/ Ensemble. Train and test a model on each of the five folds of the target dataset (5-fold cross-validation), then ensemble their output features and convert them to the coor-ascent format using combine_features.sh in the ensemble folder:

    bash combine_features.sh
    

    Optional arguments for ensemble:

    --qrel_path             The path to the qrels of the target dataset
    --result_fold_1         The path to the testing result folder of the first fold model
    --result_fold_2         The path to the testing result folder of the second fold model
    --result_fold_3         The path to the testing result folder of the third fold model
    --result_fold_4         The path to the testing result folder of the fourth fold model
    --result_fold_5         The path to the testing result folder of the fifth fold model
    --save_dir              The path to save the ensembled `features.txt` file; default: ../combined_features
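    For example (all paths are illustrative placeholders; each result folder is the test output of one fold's model from step 2.2.3):

    bash combine_features.sh \
        --qrel_path ../data/prepro_target_data/robust04/qrels \
        --result_fold_1 ../results/test_fold_0 \
        --result_fold_2 ../results/test_fold_1 \
        --result_fold_3 ../results/test_fold_2 \
        --result_fold_4 ../results/test_fold_3 \
        --result_fold_5 ../results/test_fold_4 \
        --save_dir ../combined_features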
    
  • 2.2.5/ Coor-Ascent. Run coordinate ascent using run_ranklib.sh in the ensemble folder:

    bash run_ranklib.sh
    

    Optional arguments for coor-ascent:

    --qrel_path             The path to the qrels of the target dataset
    --ranklib_path          The path to the ensembled features.
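    For example (the qrels path is an illustrative placeholder; --ranklib_path points at the save_dir from step 2.2.4):

    bash run_ranklib.sh \
        --qrel_path ../data/prepro_target_data/robust04/qrels \
        --ranklib_path ../combined_features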
    

    The final evaluation results are written to the ranklib_path directory.

Results

All TREC run files reported in the paper can be found on Tsinghua Cloud.

Owner

THUNLP (Natural Language Processing Lab at Tsinghua University)