Few-Shot-Intent-Detection

Few-Shot-Intent-Detection is a repository designed for few-shot intent detection with/without Out-of-Scope (OOS) intents. It includes popular challenging intent detection datasets and baselines. For more details on the newly released OOS datasets, please check our paper.

Intent detection datasets

We process the data based on previously published resources; all the data are in the same format as DNNC.

| Dataset | Description | #Train | #Valid | #Test | Processed Data Link |
| --- | --- | --- | --- | --- | --- |
| BANKING77 | one banking domain with 77 intents | 8622 | 1540 | 3080 | Link |
| CLINC150 | 10 domains and 150 intents | 15000 | 3000 | 4500 | Link |
| HWU64 | personal assistant with 64 intents and several domains | 8954 | 1076 | 1076 | Link |
| SNIPS | snips voice platform with 7 intents | 13084 | 700 | 700 | Link |
| ATIS | airline travel information system | 4478 | 500 | 893 | Link |

Intent detection datasets with OOS queries

What are OOS queries?

OOD-OOS (out-of-domain OOS): general out-of-scope queries that are not supported by the dialog system. For instance, requesting an online NBA/TV show service in a banking system.

ID-OOS (in-domain OOS): out-of-scope queries that are closely related to the in-scope intents, which makes the intent detection task more challenging. For instance, requesting a banking service that is not supported by the banking system.

| Dataset | Description | #Train | #Valid | #Test | #OOD-OOS-Train | #OOD-OOS-Valid | #OOD-OOS-Test | #ID-OOS-Train | #ID-OOS-Valid | #ID-OOS-Test | Processed Data Link |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CLINC150 | A dataset with general OOD-OOS queries | 15000 | 3000 | 4500 | 100 | 100 | 1000 | - | - | - | Link |
| CLINC-Single-Domain-OOS | Two domains with both general OOD-OOS queries and ID-OOS queries | 500 | 500 | 500 | - | 200 | 1000 | - | 400 | 350 | Link |
| BANKING77-OOS | One banking domain with both general OOD-OOS queries and ID-OOS queries | 5905 | 1506 | 2000 | - | 200 | 1000 | 2062 | 530 | 1080 | Link |

Data structure:

Datasets/
├── BANKING77
│   ├── train
│   ├── train_10
│   ├── train_5
│   ├── valid
│   └── test
├── CLINC150
│   ├── train
│   ├── train_10
│   ├── train_5
│   ├── valid
│   ├── test
│   └── oos
│       ├── train
│       ├── valid
│       └── test
├── HWU64
│   ├── train
│   ├── train_10
│   ├── train_5
│   ├── valid
│   └── test
├── SNIPS
│   ├── train
│   ├── valid
│   └── test
├── ATIS
│   ├── train
│   ├── valid
│   └── test
├── BANKING77-OOS
│   ├── train
│   ├── valid
│   ├── test
│   ├── id-oos
│   │   ├── train
│   │   ├── valid
│   │   └── test
│   └── ood-oos
│       ├── valid
│       └── test
└── CLINC-Single-Domain-OOS
    ├── banking
    │   ├── train
    │   ├── valid
    │   ├── test
    │   ├── id-oos
    │   │   ├── valid
    │   │   └── test
    │   └── ood-oos
    │       ├── valid
    │       └── test
    └── credit_cards
        ├── train
        ├── valid
        ├── test
        ├── id-oos
        │   ├── valid
        │   └── test
        └── ood-oos
            ├── valid
            └── test
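Each leaf split directory above is expected to hold two aligned plain-text files, seq.in (one utterance per line) and label (one intent label per line), as used by the loading code further below; we assume the OOS splits follow the same format. A minimal sketch for checking which splits are present and how many examples they contain (the dataset and paths are only illustrative):

import os

def count_examples(split_dir):
    # Number of utterances in a split = number of lines in seq.in.
    with open(os.path.join(split_dir, 'seq.in'), encoding='utf-8') as f:
        return sum(1 for _ in f)

for split in ['train', 'valid', 'test',
              'id-oos/train', 'id-oos/valid', 'id-oos/test',
              'ood-oos/valid', 'ood-oos/test']:
    split_dir = os.path.join('Datasets', 'BANKING77-OOS', split)
    if os.path.isdir(split_dir):
        print(split, count_examples(split_dir))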

Briefly describe the BANKING77-OOS dataset.

  • A dataset with a single banking domain that includes both general Out-of-Scope (OOD-OOS) queries and In-Domain but Out-of-Scope (ID-OOS) queries, where the ID-OOS queries are semantically similar to the in-scope intents. BANKING77 originally includes 77 intents; BANKING77-OOS keeps 50 of them as in-scope intents, and the ID-OOS queries are built from the 27 held-out, semantically similar intents.

Briefly describe the CLINC-Single-Domain-OOS dataset.

  • A dataset with two separate domains, i.e., the "Banking" domain and the "Credit cards" domain, each with both general Out-of-Scope (OOD-OOS) queries and In-Domain but Out-of-Scope (ID-OOS) queries, where the ID-OOS queries are semantically similar to the in-scope intents. Each domain in CLINC150 originally includes 15 intents; each domain in the new dataset keeps ten of them as in-scope intents, and the ID-OOS queries are built from the five held-out, semantically similar intents.

Both datasets can be used to conduct intent detection with and without OOD-OOS and ID-OOS queries.

You can easily load the processed data:

class IntentExample:
    # A single utterance paired with its intent label.
    def __init__(self, text, label, do_lower_case):
        self.original_text = text
        self.text = text
        self.label = label

        if do_lower_case:
            self.text = self.text.lower()


def load_intent_examples(file_path, do_lower_case=True):
    # Each split directory contains two aligned files:
    # seq.in (one utterance per line) and label (one intent label per line).
    examples = []

    with open('{}/seq.in'.format(file_path), 'r', encoding="utf-8") as f_text, \
         open('{}/label'.format(file_path), 'r', encoding="utf-8") as f_label:
        for text, label in zip(f_text, f_label):
            e = IntentExample(text.strip(), label.strip(), do_lower_case)
            examples.append(e)

    return examples
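For example, with the Datasets/ layout shown above (adjust the paths to wherever the data actually lives), the in-scope and OOD-OOS test splits of BANKING77-OOS could be loaded as:

in_scope_test = load_intent_examples('Datasets/BANKING77-OOS/test')
ood_oos_test = load_intent_examples('Datasets/BANKING77-OOS/ood-oos/test')
print(len(in_scope_test), in_scope_test[0].text, in_scope_test[0].label)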

For more details, check the code for loading the data and doing random sampling for few-shot learning.
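As a rough illustration of the sampling step (not the repository's exact code), a K-shot episode can be drawn by grouping the loaded examples by intent and sampling K utterances per intent:

import random
from collections import defaultdict

def sample_k_shot(examples, k, seed=0):
    # Group utterances by intent label, then draw K utterances per intent.
    by_label = defaultdict(list)
    for e in examples:
        by_label[e.label].append(e.text)
    rng = random.Random(seed)
    return {label: rng.sample(texts, k) for label, texts in by_label.items()}

# e.g., a 5-shot episode from the BANKING77 training split:
# five_shot = sample_k_shot(load_intent_examples('Datasets/BANKING77/train'), k=5)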

State-of-the-art models and baselines

DNNC

Download pre-trained RoBERTa NLI checkpoint:

wget https://storage.googleapis.com/sfr-dnnc-few-shot-intent/roberta_nli.zip

Access to public code: Link

CONVERT

Download pre-trained checkpoint:

wget https://github.com/connorbrinton/polyai-models/releases/download/v1.0/model.tar.gz

Access to public code:

wget https://github.com/connorbrinton/polyai-models/archive/refs/tags/v1.0.zip

CONVBERT

Download pre-trained checkpoints:

Step 1: install AWS CLI 2, e.g., via the macOS PKG installer.

Step 2:

aws s3 cp s3://dialoglue/ Your_folder_name --no-sign-request --recursive

The checkpoints will then be downloaded into Your_folder_name.
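If the downloaded folder contains a standard Hugging Face-style BERT checkpoint (config, vocabulary, and weights; this is an assumption about the release layout, and the subfolder name below is only a placeholder), it could be loaded with the transformers library:

from transformers import AutoModel, AutoTokenizer

# Placeholder path: point this at the actual checkpoint directory inside Your_folder_name.
checkpoint_dir = 'Your_folder_name/convbert'
tokenizer = AutoTokenizer.from_pretrained(checkpoint_dir)
model = AutoModel.from_pretrained(checkpoint_dir)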

Few-shot intent detection baselines/leaderboard:

5-shot learning

| Model | BANKING77 | CLINC150 | HWU64 |
| --- | --- | --- | --- |
| RoBERTa+Classifier (EMNLP 2020) | 74.04 | 87.99 | 75.56 |
| USE (ACL 2020 NLP4ConvAI) | 76.29 | 87.82 | 77.79 |
| CONVERT (ACL 2020 NLP4ConvAI) | 75.32 | 89.22 | 76.95 |
| USE+CONVERT (ACL 2020 NLP4ConvAI) | 77.75 | 90.49 | 80.01 |
| CONVBERT+MLM+Example+Observers (NAACL 2021) | - | - | - |
| DNNC (EMNLP 2020) | 80.40 | 91.02 | 80.46 |
| CPFT (EMNLP 2021) | 80.86 | 92.34 | 82.03 |

10-shot learning

| Model | BANKING77 | CLINC150 | HWU64 |
| --- | --- | --- | --- |
| RoBERTa+Classifier (EMNLP 2020) | 84.27 | 91.55 | 82.90 |
| USE (ACL 2020 NLP4ConvAI) | 84.23 | 90.85 | 83.75 |
| CONVERT (ACL 2020 NLP4ConvAI) | 83.32 | 92.62 | 82.65 |
| USE+CONVERT (ACL 2020 NLP4ConvAI) | 85.19 | 93.26 | 85.83 |
| CONVBERT (ArXiv 2020) | 83.63 | 92.10 | 83.77 |
| CONVBERT+MLM (ArXiv 2020) | 83.99 | 92.75 | 84.52 |
| CONVBERT+MLM+Example+Observers (NAACL 2021) | 85.95 | 93.97 | 86.28 |
| DNNC (EMNLP 2020) | 86.71 | 93.76 | 84.72 |
| CPFT (EMNLP 2021) | 87.20 | 94.18 | 87.13 |

Note: the 5-shot learning results of RoBERTa+Classifier, DNNC and CPFT, and the 10-shot learning results of all the models are reported by the paper authors.

Citation

Please cite our papers if you use the above resources in your work:

@article{zhang2020discriminative,
  title={Discriminative nearest neighbor few-shot intent detection by transferring natural language inference},
  author={Zhang, Jian-Guo and Hashimoto, Kazuma and Liu, Wenhao and Wu, Chien-Sheng and Wan, Yao and Yu, Philip S and Socher, Richard and Xiong, Caiming},
  journal={EMNLP},
  pages={5064--5082},
  year={2020}
}

@article{zhang2021pretrained,
  title={Are Pretrained Transformers Robust in Intent Classification? A Missing Ingredient in Evaluation of Out-of-Scope Intent Detection},
  author={Zhang, Jian-Guo and Hashimoto, Kazuma and Wan, Yao and Liu, Ye and Xiong, Caiming and Yu, Philip S},
  journal={arXiv preprint arXiv:2106.04564},
  year={2021}
}

@article{zhang2021few,
  title={Few-Shot Intent Detection via Contrastive Pre-Training and Fine-Tuning},
  author={Zhang, Jianguo and Bui, Trung and Yoon, Seunghyun and Chen, Xiang and Liu, Zhiwei and Xia, Congying and Tran, Quan Hung and Chang, Walter and Yu, Philip},
  journal={EMNLP},
  year={2021}
}