SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking

Overview

SPLADE 🍴 + 🥄 = 🔎

This repository contains the weights for four models as well as the code for running inference for our two papers:

  • [v1]: SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking, Thibault Formal, Benjamin Piwowarski and Stéphane Clinchant. SIGIR 2021 short paper. link
  • [v2]: SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval, Thibault Formal, Benjamin Piwowarski, Carlos Lassance, and Stéphane Clinchant. arXiv preprint. link

We also provide some scripts to run evaluation on the BEIR benchmark in the beir_evaluation folder, as well as training code in the training_with_sentence_transformers folder.

TL;DR
Recently, dense retrieval with approximate nearest neighbor search based on BERT has demonstrated its strength for first-stage retrieval, questioning the competitiveness of traditional sparse models such as BM25. In this work, we propose SPLADE, a sparse model that revisits query/document expansion. Our approach relies on in-batch negatives, a logarithmic activation, and FLOPS regularization to learn effective and efficient sparse representations. SPLADE is an appealing candidate for first-stage retrieval: it rivals the latest state-of-the-art dense retrieval models, its training procedure is straightforward, and its efficiency (sparsity/FLOPS) can be controlled explicitly through the regularization, so that it can be operated on inverted indexes. Owing to its simplicity, SPLADE is a solid basis for further improvements in this line of research.

splade: a spork that is sharp along one edge or both edges, enabling it to be used as a knife, a fork and a spoon.

Updates

  • 24/09/2021: add the weights for v2 version of SPLADE (max pooling and margin-MSE distillation training) + add scripts to evaluate the model on the BEIR benchmark.
  • 16/11/2021: add code for training SPLADE using the Sentence Transformers framework + update LICENSE to properly include BEIR and Sentence Transformers.

SPLADE

We give a brief overview of the model architecture and the training strategy. Please refer to the paper for further details! You can also have a look at our blog post for additional insights and examples! Also feel free to contact us via Twitter or by mail @ [email protected]!

The SPLADE architecture (see below) is rather simple: queries/documents are fed to BERT, and we rely on the MLM head used for pre-training to predict term importance in BERT's vocabulary space. The model therefore implicitly learns expansion. We also add a log activation that greatly helps make the representations sparse. Relevance is computed via dot product.

SPLADE architecture
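
For concreteness, here is a minimal sketch of this computation using the HuggingFace transformers library. The checkpoint name is one of the SPLADE models referenced later on this page; the max pooling shown corresponds to the v2 variant (v1 sums over positions), and the released implementation may differ in details.

    # Minimal sketch of the SPLADE representation (assumes torch + transformers).
    import torch
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    model_name = "naver/splade-cocondenser-ensembledistil"  # checkpoint referenced below
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)

    def splade_rep(text):
        tokens = tokenizer(text, return_tensors="pt")
        logits = model(**tokens).logits  # (1, seq_len, vocab): MLM head predictions
        weights = torch.log1p(torch.relu(logits))  # log activation -> sparsity
        # max pooling over input positions (v2); padded positions are masked out
        rep, _ = (weights * tokens["attention_mask"].unsqueeze(-1)).max(dim=1)
        return rep.squeeze(0)  # one weight per vocabulary term

    q = splade_rep("what is a splade")
    d = splade_rep("a splade is a spork sharpened along one edge")
    score = torch.dot(q, d)  # relevance is the dot product in vocabulary space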

The model thus represents queries and documents in the vocabulary space. In order to make these representations sparse -- so that we can use an inverted index -- we explicitly train the model with a regularization on the query/document representations (L1 or FLOPS), as shown below:

splade training

SPLADE learns how to balance between effectiveness (via the ranking loss) and efficiency (via the regularization loss). By controlling lambda, we can adjust the trade-off.
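
Concretely, the objective adds the regularizer to the ranking loss, weighted by lambda. Below is a minimal sketch under simplifying assumptions: the ranking loss is a generic in-batch-negatives cross-entropy, the FLOPS regularizer follows the paper's definition (squared mean activation per vocabulary dimension), and the lambda values are illustrative rather than the paper's tuned hyperparameters.

    import torch
    import torch.nn.functional as F

    def flops_reg(reps):
        # reps: (batch, vocab) non-negative SPLADE representations
        # FLOPS regularizer: sum over vocab dims of the squared mean activation
        return (reps.mean(dim=0) ** 2).sum()

    def splade_loss(q_reps, d_reps, lambda_q=3e-4, lambda_d=1e-4):
        scores = q_reps @ d_reps.t()            # in-batch negatives: (batch, batch)
        labels = torch.arange(scores.size(0))   # positives lie on the diagonal
        ranking = F.cross_entropy(scores, labels)
        return ranking + lambda_q * flops_reg(q_reps) + lambda_d * flops_reg(d_reps)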

How to use the code for inference

  • See inference_SPLADE.ipynb and beir_evaluation/splade_beir.ipynb
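
In the same spirit as the notebooks, a small sketch of inspecting a representation's nonzero dimensions (reusing the splade_rep helper and tokenizer from the sketch above) shows the learned expansion directly:

    # Decode the highest-weighted vocabulary terms of a representation.
    rep = splade_rep("why is the sky blue")
    idx = rep.nonzero().squeeze(-1)
    terms = sorted(
        ((tokenizer.convert_ids_to_tokens(int(i)), round(rep[i].item(), 2)) for i in idx),
        key=lambda pair: -pair[1],
    )
    print(terms[:10])  # top terms, typically including expansion terms not in the input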

Training SPLADE

  • See training_with_sentence_transformers folder

Requirements

Requirements can be found in requirements.txt. In order to get the model weights, make sure Git LFS is installed.

Main Results on MS MARCO (dev set) and TREC DL 2019 passage ranking

  • Below is a table of results comparing SPLADE to several competing baselines:

res

  • One can adjust the regularization strength for SPLADE to reach the optimal trade-off between performance and efficiency:

perf vs flops

Cite

Please cite our work as:

@inproceedings{Formal2021_splade,
 author = {Thibault Formal and Benjamin Piwowarski and Stéphane Clinchant},
 title = {{SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking}},
 booktitle = {Proc. of SIGIR},
 year = {2021},
}

License

SPLADE Copyright (c) 2021-present NAVER Corp.

SPLADE is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. (see license)

You should have received a copy of the license along with this work. If not, see http://creativecommons.org/licenses/by-nc-sa/4.0/ .

Comments
  • Evaluation on MSMARCO?

    Hi, thanks for your very interesting work.

    Could you share how you evaluated the model to get the results here? Did you use inverted indexing, or this code? I am trying the latter approach, but it is very slow on MS MARCO. Thank you!

    opened by thongnt99 8
  • Cannot train SPLADEv2 to achieve the reported performance.

    opened by namespace-Pt 6
  • FLOPs calculation

    I recently read your SPLADE paper and I think it's quite interesting. I have a question concerning the FLOPs calculation in the paper.

    I think computing FLOPs for an inverted index involves the lengths of the activated posting lists (those of the terms overlapping between query and document). For example, for a query a b c and a document c a e, since we must inspect the posting lists of the overlapping terms a and c, the FLOPs should be at least

    posting_length(a) + posting_length(c)
    

    because we perform a summation for each entry in the posting list. However, in the paper you compute FLOPs from the probability that a, b, c are activated in the query and c, a, e are activated in the document. I think this may underestimate the FLOPs of SPLADE, because the less sparse the documents, the longer the posting lists in the inverted index.

    opened by namespace-Pt 6
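
    For reference, the FLOPS metric in the paper is defined as an expected number of multiplications per query-document pair, estimated from how often each vocabulary dimension is non-zero, rather than from concrete posting-list lengths. A minimal sketch of that estimate, assuming 0/1 activation indicators collected over sample queries and documents:

        import torch

        def expected_flops(q_active, d_active):
            # q_active, d_active: (n, vocab) 0/1 indicators of non-zero dimensions
            p_q = q_active.float().mean(dim=0)  # activation probability per dim (queries)
            p_d = d_active.float().mean(dim=0)  # activation probability per dim (documents)
            return torch.dot(p_q, p_d)          # expected multiplications per (q, d) pair
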
  • move all source to splade/ module

    Hi,

    I'd like to build client code that depends on SPLADE. Would you please consider this PR, which moves all source code into a splade/ folder rather than a src/ folder? This appears to work satisfactorily for my use case.

    Craig

    opened by cmacdonald 2
  • configuration for splade++ results

    Hi-- thanks for the nice work.

    I'm trying to index+retrieve using the naver/splade-cocondenser-ensembledistil model. Following the readme, I've done:

    export SPLADE_CONFIG_FULLPATH="config_default.yaml"
    python3 -m src.index \
      init_dict.model_type_or_dir=naver/splade-cocondenser-ensembledistil \ # <--- (from readme, using the new model)
      config.pretrained_no_yamlconfig=true \
      config.index_dir=experiments/pre-trained/index \
      index=msmarco  # <--- added
    
    export SPLADE_CONFIG_FULLPATH="config_default.yaml"
    python3 -m src.retrieve \
      init_dict.model_type_or_dir=naver/splade-cocondenser-ensembledistil \ # <--- (from readme, using the new model)
      config.pretrained_no_yamlconfig=true \
      config.index_dir=experiments/pre-trained/index \
      config.out_dir=experiments/pre-trained/out-dl19 \
      index=msmarco \  # <--- added
      retrieve_evaluate=msmarco # <--- added
    

    Everything runs just fine, but I'm getting rather poor results in the end:

    MRR@10: 0.18084248646927734
    recall ==> {'recall_5': 0.2665353390639923, 'recall_10': 0.3298710601719197, 'recall_15': 0.3694364851957974, 'recall_20': 0.3951050620821394, 'recall_30': 0.4270654250238777, 'recall_100': 0.5166069723018146, 'recall_200': 0.5560768863419291, 'recall_500': 0.606984240687679, 'recall_1000': 0.6402578796561604}
    

    I suspect it's a configuration problem on my end, but since the indexing process takes a bit of time, I thought I'd just ask before diving too far into the weeds: Is there a configuration file to use for the splade++ results, and how do I use it?

    Thanks!

    opened by seanmacavaney 2
  • Training by dot product and evaluation via inverted index?

    Hey, I recently read your SPLADEv2 paper. It's so insightful! But I still have a few questions about it.

    1. Is the model trained with the dot product as the similarity function in the contrastive loss?
    2. Is evaluation on MS MARCO performed via an inverted index backed by Anserini?
    3. Is evaluation on BEIR implemented with Sentence Transformers, and hence also via dot product?
    4. How much can you guarantee the sparsity of the learned representations, given that they are only softly regularized by the L1 and FLOPS losses? Did you use a tuned threshold to zero out near-zero values?
    opened by jordane95 2
  • Equation (1) and (4)

    In your paper, you say that equation (1) is equivalent to the MLM prediction and that E_j in equation (1) denotes the BERT input embedding for token j. In the default implementation of HuggingFace Transformers, however, E_j is not taken from the input layer but from a separate embedding matrix, called the "decoder" in the "BertLMPredictionHead" (for BERT). Did you manually set the "decoder" weights to the input embedding weights?

    My other question concerns equation (4). It computes the summation of the weights of the document/query terms. In the "forward" function of the Splade class (models.py), however, you use the "torch.max" function. Can you explain this discrepancy?

    opened by hguan6 2
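
    For reference, stock HuggingFace BERT ties the MLM decoder weights to the input embedding matrix by default (tie_word_embeddings=True), which can be verified directly:

        from transformers import BertForMaskedLM

        model = BertForMaskedLM.from_pretrained("bert-base-uncased")
        # With default weight tying, the decoder reuses the input embedding matrix.
        print(model.cls.predictions.decoder.weight
              is model.bert.embeddings.word_embeddings.weight)  # True
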
  • When do you drop a term?

    I understand that the log-saturation function and the regularization loss suppress the weights of frequent terms. But when do you drop a term (i.e., set its weight to zero)? Is it when the logit is less than or equal to zero, so that the log(1 + ReLU(·)) function outputs zero?

    opened by hguan6 2
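
    For reference, log(1 + ReLU(x)) is exactly zero for any non-positive logit x, so the zeros come from the activation itself rather than from an explicit threshold:

        import torch

        x = torch.tensor([-2.0, 0.0, 3.0])
        print(torch.log1p(torch.relu(x)))  # tensor([0.0000, 0.0000, 1.3863])
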
  • Benchmark Performance After Re-ranking?

    I'm curious whether you've run your model with a second-stage re-ranker on the BEIR benchmarks. Would you expect much benefit from this?

    Thank you, and excellent work!

    opened by mattare2 1
  • Initial pull request for efficient splade

    Initial pull request to add networks from https://dl.acm.org/doi/10.1145/3477495.3531833

    Networks are now available on huggingface as well:

    V) https://huggingface.co/naver/efficient-splade-V-large-doc https://huggingface.co/naver/efficient-splade-V-large-query

    VI) https://huggingface.co/naver/efficient-splade-VI-BT-large-doc https://huggingface.co/naver/efficient-splade-VI-BT-large-query

    Still need to add the links on the naverlabs website for the small and medium networks.

    opened by cadurosar 0
  • Instructions on Using Pisa for Splade

    Firstly, thanks for your series of amazing papers and well-organized code implementations.

    The two papers Wacky Weights in Learned Sparse Representations and the Revenge of Score-at-a-Time Query Evaluation and From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective show that using PISA can make query retrieval much faster than using Anserini or the code from this repo for SPLADE.

    The folder efficient_splade_pisa/ in the repo contains instructions on using PISA for SPLADE, but only for already-processed queries and indexes. If I only have a well-trained SPLADE model, how can I process its outputs (sparse vectors, or their quantized version for Anserini) to make them suitable for PISA? Can you provide more specific instructions on this?

    Best wishes

    opened by HansiZeng 1
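
    For reference, the papers cited above quantize SPLADE's real-valued term weights into integer impact scores (commonly by scaling and rounding) so that standard impact-ordered indexes can consume them. A hedged sketch, where the scale factor and the output format are illustrative assumptions rather than the repo's exact pipeline:

        import torch

        def to_impacts(rep, tokenizer, scale=100):
            # rep: (vocab,) SPLADE weights; quantize to integer "impacts"
            impacts = torch.round(rep * scale).int()
            idx = impacts.nonzero().squeeze(-1)
            return {tokenizer.convert_ids_to_tokens(int(i)): int(impacts[i]) for i in idx}
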
  • FLOPs calculation

    Hello!

    I find that when I compute the FLOPs, it always returns NaN.

    I see that your last commit fixed "force new" and changed line 25 in transformer_evaluator.py to force_new=True, but at line 23 of inverted_index.py it seems that self.n will be 0 if force_new is True.

    The FLOPs no longer return NaN after I remove force_new=True.

    Am I doing something wrong here? And how should I get the correct FLOPs?

    Thank you! Allen

    opened by wolu0901 2
Releases (v0.1.1)
  • v0.1.1 (May 11, 2022)

  • v0.0.1 (May 10, 2022)

    Release v0.0.1

    This release includes our initial, raw version of the code:

    • inference notebook and weights available
    • training is done via Sentence Transformers
    • evaluation is provided on the BEIR benchmark, but no standalone evaluation pipeline is available
    • the code is not really practical yet, and every step is independent
Owner
NAVER