SimulEval: A General Evaluation Toolkit for Simultaneous Translation

Overview

SimulEval is a general evaluation framework for simultaneous translation on text and speech.

Requirements

  • python >= 3.7.0

Installation

git clone git@github.com:fairinternal/SimulEval.git
cd SimulEval
pip install -e .

Quick Start

The following example evaluates a dummy agent that follows a wait-k (k = 3) policy and generates random words until the number of generated words matches the number of source words. A tutorial can be found here.

cd examples
simuleval \
  --agent dummy/dummy_waitk_text_agent.py \
  --source data/src.txt \
  --target data/tgt.txt
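
At its core, wait-k is a simple read/write rule: keep reading until the source is k tokens ahead of the target, then write. A minimal sketch of that rule (my own simplification, not SimulEval's actual agent API; the real dummy agent lives in examples/dummy/dummy_waitk_text_agent.py):

def waitk_action(num_source_read, num_target_written, k=3, source_finished=False):
    # once the source is exhausted, only writing remains
    if source_finished:
        return "WRITE"
    # read until k source tokens lead the target, then write
    return "READ" if num_source_read - num_target_written < k else "WRITE"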

License

SimulEval is licensed under Creative Commons BY-SA 4.0.

Citation

Please cite as:

@inproceedings{simuleval2020,
  title = {SimulEval: An Evaluation Toolkit for Simultaneous Translation},
  author = {Ma, Xutai and Dousti, Mohammad Javad and Wang, Changhan and Gu, Jiatao and Pino, Juan},
  booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
  year = {2020},
}

Comments
  • Length Adaptive Average Lagging (LAAL)

    Implementation of the Length Adaptive Average Lagging (LAAL) as proposed in CUNI-KIT System for Simultaneous Speech Translation Task at IWSLT 2022 (https://arxiv.org/abs/2204.06028). The name was suggested in Over-Generation Cannot Be Rewarded: Length-Adaptive Average Lagging for Simultaneous Speech Translation (https://arxiv.org/abs/2206.05807).
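
    Concretely, LAAL is AL with the target-length term replaced by the longer of the reference and the hypothesis, so over-generation no longer shrinks the oracle delays. A sketch of the computation (my own variable names, not the repo's API):

    def laal(delays, src_len, ref_len):
        # delays[i]: source consumed (tokens or ms) when target token i was emitted
        hyp_len = len(delays)
        gamma = max(ref_len, hyp_len) / src_len  # length-adaptive rate
        total, tau = 0.0, 0
        for i, d in enumerate(delays):
            total += d - i / gamma  # lag of token i vs. its oracle delay i/gamma
            tau += 1
            if d >= src_len:  # stop at the first token emitted after the full source
                break
        return total / tau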

    CLA Signed Merged 
    opened by pe-trik 7
  • Several bug and formatting fixes

    • Change the GitHub test plan to run pytest on each individual file.
    • Fix a bug when submitting Slurm jobs.
    • Fix an infinite loop that occurred when an empty segment was produced.
    • Formatting with black.
    CLA Signed Merged 
    opened by xutaima 6
  • Length Adaptive Average Lagging (LAAL)

    Implementation of the Length Adaptive Average Lagging (LAAL) as proposed in CUNI-KIT System for Simultaneous Speech Translation Task at IWSLT 2022 (https://arxiv.org/abs/2204.06028). The name was suggested in Over-Generation Cannot Be Rewarded: Length-Adaptive Average Lagging for Simultaneous Speech Translation (https://arxiv.org/abs/2206.05807).

    opened by pe-trik 3
  • Error during Simul ST (MuST-C) Evaluation

    Following the steps in this guide, I got this error when trying to evaluate the simultaneous speech-to-text model.

    2021-03-12 13:03:29 | ERROR    | simuleval.util.agent_builder | No 'Agent' class found in /project_scratch/nmt_shdata/speech_translation/simultaneous/fairseq-master/examples/speech_to_text/simultaneous_translation/agents/fairseq_simul_st_agent.py
    

    This commit in the fairseq repo changed the superclass of the ST Agent, hence this error (due to the following if statement):

    https://github.com/facebookresearch/SimulEval/blob/927072ebb7df83023adcc206b7458bf9c9aad3a2/simuleval/utils/agent_finder.py#L71


    Environment

    • python 3.8.5
    • torch - 1.7.0
    • OS: Linux
    • fairseq and SimulEval: latest master
    • Scripts: Provided in the guide
    opened by mzaidi59 3
  • Commit 1753363 changes AL scores for audio input

    In commit 1753363, AudioInstance was changed to use the base class function Instance.sentence_level_eval(). This introduced a (possible) bug: 1 is now added to self.source_length() for the audio input case as well. (For text input, this 1 accounts for the sentence end.)

    This corresponds to only 1 ms; however, it changes the lagging_padding_mask in AverageLagging: all words predicted at the end of the sentence, i.e. after all audio was read, are now not taken into account for the AL computation. I observed the AL value change from 5598 ms to 3267 ms for a very high-latency system where most of the words are predicted at sentence end.

    See the line:

    lagging_padding_mask = delays >= src_lens.unsqueeze(1)

    src_lens is now 1 greater than delays[-1], i.e. the audio length.

    I think the old behaviour was correct, but please decide for yourselves 😬
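
    A toy illustration of the flip (made-up numbers; only the quoted line is exercised):

    import torch

    # three target tokens, all emitted once the full 5000 ms of audio was read
    delays = torch.tensor([[5000.0, 5000.0, 5000.0]])
    old_src_lens = torch.tensor([5000.0])  # audio length
    new_src_lens = torch.tensor([5001.0])  # audio length + 1 after the commit

    print(delays >= old_src_lens.unsqueeze(1))  # tensor([[True, True, True]])
    print(delays >= new_src_lens.unsqueeze(1))  # tensor([[False, False, False]])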

    opened by patrick-wilken 2
  • questions about AL and DAL

    Hi! I was looking at the code in SimulEval/simuleval/metrics/latency.py to understand how you implemented the latency metrics defined in your SimulEval paper. I have a few questions: can you confirm that, in the code, the calculation of AL considers the reference length rather than the prediction length, as you described in equation 8 of the paper? If so, why did you not apply the same correction to the calculation of DAL (both in the code and in the paper)?
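
    For reference, my reading of the definition in question (a hedged reconstruction; $d_i$ is the delay of the $i$-th target token, $|y^*|$ the reference length, and $\tau$ the index of the first token emitted after the full source of length $|x|$ was read):

    $$\mathrm{AL} = \frac{1}{\tau} \sum_{i=1}^{\tau} \left( d_i - \frac{i-1}{\gamma} \right), \qquad \gamma = \frac{|y^*|}{|x|}$$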

    Thank you very much in advance!

    opened by Onoratofra 2
  • no module named

    # Copyright (c) Facebook, Inc. and its affiliates.
    # All rights reserved.
    #
    # This source code is licensed under the license found in the
    # LICENSE file in the root directory of this source tree.
    
    from simuleval.agents import TextAgent
    from simuleval import READ_ACTION, WRITE_ACTION, DEFAULT_EOS
    from utils.fun import function
    
    
    class DummyWaitkTextAgent(TextAgent):
    
        data_type = "text"
    
        def __init__(self, args):
            super().__init__(args)
            self.waitk = args.waitk
            # Initialize your agent here, for example load model, vocab, etc
    
        @staticmethod
        def add_args(parser):
            # Add additional command line arguments here
            parser.add_argument("--waitk", type=int, default=3)
    
        def policy(self, states):
            # Make decision here
            print(states)
            if len(states.source) - len(states.target) < self.waitk and not states.finish_read():
                return READ_ACTION
            else:
                return WRITE_ACTION
    
        def predict(self, states):
            # predict token here
            if states.finish_read():
                if states.target.length() == states.source.length():
                    return DEFAULT_EOS
    
            # return f"word_{len(states.target)}"
            return function(len(states.target))
    

    And the content of the utils/fun.py is

    def function(number):
        return 'word_' + str(number)
    

    The directory looks like

    ├── data
    │   ├── src.txt
    │   └── tgt.txt
    └── dummy
        ├── dummy_waitk_text_agent.py
        └── utils
            ├── fun.py
            └── __init__.py
    

    When I run the command simuleval --agent dummy_waitk_text_agent.py --source ../data/src.txt --target ../data/tgt.txt, it fails to load the utils module.
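
    One workaround (an assumption on my part, not an official fix): since the agent file is loaded dynamically, its directory is not on sys.path, so sibling imports fail. Adding it explicitly at the top of dummy_waitk_text_agent.py makes the utils package importable:

    import os
    import sys

    # make the agent file's own directory importable before loading utils
    sys.path.append(os.path.dirname(os.path.abspath(__file__)))

    from utils.fun import function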

    opened by Cppowboy 2
  • Fails to evaluate wait-k model on SimulEval if sequence is shorter than k

    (Repost of the issue: https://github.com/pytorch/fairseq/issues/3445)

    I got an error while evaluating the wait-k model following the instructions in README.md.

    2021-04-03 10:36:28 | INFO     | simuleval.scorer | Evaluating on text
    2021-04-03 10:36:28 | INFO     | simuleval.scorer | Source: /userdir/iwslt17.test0.en
    2021-04-03 10:36:28 | INFO     | simuleval.scorer | Target: /userdir/iwslt17.test0.ja
    2021-04-03 10:36:28 | INFO     | simuleval.scorer | Number of sentences: 1549
    2021-04-03 10:36:28 | INFO     | simuleval.server | Evaluation Server Started (process id 132622). Listening to port 12321
    2021-04-03 10:36:31 | WARNING  | simuleval.scorer | Resetting scorer
    2021-04-03 10:36:31 | INFO     | simuleval.cli    | Output dir: /userdir/iwslt17.test0
    2021-04-03 10:36:31 | INFO     | simuleval.cli    | Start data writer (process id 132639)
    2021-04-03 10:36:31 | INFO     | simuleval.cli    | Evaluating SimulTransTextAgentJA (process id 132556) on instances from 0 to 1548
    2021-04-03 10:36:31 | INFO     | fairseq.tasks.translation | [en] dictionary: 16004 types
    2021-04-03 10:36:31 | INFO     | fairseq.tasks.translation | [ja] dictionary: 16004 types
    Traceback (most recent call last):
      File "/userdir/.venv/bin/simuleval", line 11, in <module>
        load_entry_point('simuleval', 'console_scripts', 'simuleval')()
      File "/userdir/SimulEval/simuleval/cli.py", line 165, in main
        _main(args.client_only)
      File "/userdir/SimulEval/simuleval/cli.py", line 192, in _main
        evaluate(args, client, server_process)
      File "/userdir/SimulEval/simuleval/cli.py", line 145, in evaluate
        decode(args, client, result_queue, indices)
      File "/userdir/SimulEval/simuleval/cli.py", line 108, in decode
        action = agent.policy(states)
      File "/userdir/fairseq-v0.10.2/examples/simultaneous_translation/eval/agents/simul_t2t_enja.py", line 196, in policy
        x, outputs = self.model.decoder.forward(
      File "/userdir/fairseq-v0.10.2/fairseq/models/transformer.py", line 817, in forward
        x, extra = self.extract_features(
      File "/userdir/fairseq-v0.10.2/examples/simultaneous_translation/models/transformer_monotonic_attention.py", line 219, in extract_features
        x, attn, _ = layer(
      File "/userdir/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/userdir/fairseq-v0.10.2/examples/simultaneous_translation/modules/monotonic_transformer_layer.py", line 160, in forward
        x, attn = self.encoder_attn(
      File "/userdir/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/userdir/fairseq-v0.10.2/examples/simultaneous_translation/modules/monotonic_multihead_attention.py", line 667, in forward
        alpha = self.expected_alignment_infer(
      File "/userdir/fairseq-v0.10.2/examples/simultaneous_translation/modules/monotonic_multihead_attention.py", line 528, in expected_alignment_infer
        assert tgt_len == 1
    AssertionError
    

    This problem occurs when the sequence is shorter than k.
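
    A possible guard (a sketch against the simple text-agent interface shown elsewhere on this page, not a fix in fairseq itself) is to stop conditioning on k once the source is exhausted, so the effective k never exceeds the sequence length:

    def policy(self, states):
        # once reading is finished, always write, even if fewer than
        # k source tokens were ever available
        if states.finish_read():
            return WRITE_ACTION
        if len(states.source) - len(states.target) < self.waitk:
            return READ_ACTION
        return WRITE_ACTION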

    Environment

    • fairseq Version: 0.10.2 (commit 14807a361202ba34dbbd3a533899db57a0ebda19)
    • SimulEval Version: latest (commit 1753363071f989ea3b79fdf5a21b96089a002f36)
    opened by fury00812 1
  • bug in AL calculation

    https://github.com/facebookresearch/SimulEval/blob/927072ebb7df83023adcc206b7458bf9c9aad3a2/simuleval/metrics/latency.py#L105

    Dear developers, I just updated SimulEval from 0.1.0 to the newest version and got a significantly lower AL score compared to the older version. I found a small bug here: the oracle latency is computed over range(1, ref_len + 1), but based on the AL definition in your paper, $d_i^*$ should be $(i-1)\gamma$. Thanks a lot.
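
    With toy numbers, the two variants of the oracle delays look like this (my own sketch of the discrepancy being reported; gamma here is in source units per reference token):

    src_len, ref_len = 12, 4
    gamma = src_len / ref_len  # source units per reference token

    # current code: oracle delays over range(1, ref_len + 1)
    oracle_current = [i * gamma for i in range(1, ref_len + 1)]      # [3.0, 6.0, 9.0, 12.0]
    # paper definition: d_i* = (i - 1) * gamma
    oracle_paper = [(i - 1) * gamma for i in range(1, ref_len + 1)]  # [0.0, 3.0, 6.0, 9.0]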

    opened by danliu2 1
  • Re-sync with internal repository

    The internal and external repositories are out of sync. This attempts to bring them back in sync by patching the GitHub repository. Please carefully review this patch. You must disable ShipIt for your project in order to merge this pull request. DO NOT IMPORT this pull request. Instead, merge it directly on GitHub using the MERGE BUTTON. Re-enable ShipIt after merging.

    CLA Signed fh:direct-merge-enabled 
    opened by facebook-github-bot 0
  • Add ATDScore

    Added Average Token Delay (ATD) as a latency metric. Paper: Average Token Delay: A Latency Metric for Simultaneous Translation (https://arxiv.org/abs/2211.13173).

    CLA Signed 
    opened by master-possible 5
  • Connection refused when testing set is large

    Hi, I noticed that when using a dev/test set with a large number of sentences (e.g. CoVoST 2's dev/test sets generally have 15k sentences), the evaluation hangs due to the following error:

    2021-11-27 03:49:51 | INFO     | simuleval.scorer | Number of sentences: 15492
    HTTPConnectionPool(host='localhost', port=12347): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fec33199ca0>: Failed to establish a new connection: [Errno 111] Connection refused'))
    

    After some printing here and there, I suspect this error comes from here; however, I'm not familiar with web apps, so I have no idea how to correct or avoid this error.

    Is there any easy solution to this? Much appreciated. @xutaima

    EDIT: Forgot to mention that this does not happen if I use a subset of the sentences, e.g. 1500 sentences.
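
    One workaround I can think of (an assumption, not a tested fix): have the client poll the server port until it accepts connections before sending requests, so a slow server start on large sets no longer trips the retry limit:

    import socket
    import time

    def wait_for_port(host="localhost", port=12347, timeout=120.0):
        # poll until the evaluation server accepts TCP connections
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                with socket.create_connection((host, port), timeout=1):
                    return
            except OSError:
                time.sleep(1)
        raise TimeoutError(f"server {host}:{port} did not come up in time")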

    opened by George0828Zhang 5
  • Character level latency for speech-to-text

    Hello, will this feature ever be updated?

    2021-11-14 19:58:00 | ERROR    | simuleval.scorer | Character level latency for speech-to-text model is not supported at the moment. We will update this feature very soon.
    

    Also, I'm curious why this combination was not implemented; does it require additional handling? AFAIK, the only difference between word and character level is in calculating the reference length here. I'm trying to implement it myself, but I'm struggling to see the difference; it would be great if you could provide some explanation or update this feature. Thanks.
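
    If the only difference really is the reference length, a character-level variant might look like this (my own sketch, assuming spaces should not count as characters; not the repo's implementation):

    def reference_length(reference, level="word"):
        if level == "word":
            return len(reference.split())
        # character level: count characters, excluding spaces
        return len(reference.replace(" ", ""))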

    opened by George0828Zhang 1
  • Pre- and post-processing text in Simuleval

    I am playing with the MMA-hard model to replicate the WMT15 DE-EN experiments reported in the paper, and my question is about preprocessing and postprocessing the data. The paper says:

    For each dataset, we apply tokenization with the Moses (Koehn et al., 2007) tokenizer and preserve casing. We apply byte pair encoding (BPE) (Sennrich et al., 2016) jointly on the source and target to construct a shared vocabulary with 32K symbols

    Following what is said above, I applied the Moses scripts to tokenize the raw files and then applied BPE to the tokenized files. The tokenized, BPE-applied train, validation, and test files were then binarized using the following fairseq-preprocess command:

    fairseq-preprocess --source-lang de --target-lang en \
        --trainpref ~/wmt15_de_en_32k/train --validpref ~/wmt15_de_en_32k/valid --testpref ~/wmt15_de_en_32k/test \
        --destdir ~/wmt15_de_en_32k/data-bin/ \
        --workers 20
    
    

    After that, I trained an MMA-hard model using the binarized data. Now I would like to evaluate a checkpoint (w.r.t. latency and BLEU) using SimulEval. My first question is about the file format: in which format should I provide the test files as --source and --target to the simuleval command? There are three options as far as I can see:

    1. Using raw files.
    2. Using tokenized files.
    3. Using tokenized files with BPE applied.

    I am following the EN-JA wait-k model's agent file to understand what should be done. However, the difference between the experiment I'd like to replicate and the EN-JA experiment is that EN-JA uses a sentencepiece model for tokenization, whereas in my case Moses is used and BPE is also applied.

    So, I tried the following: I provided the paths of the TOKENIZED files as --source and --target to simuleval. I have also implemented the segment_to_units and build_word_splitter functions as follows, but I couldn't figure out how I should implement units_to_segment.

    I tried to test this implementation as follows:

    $ head -n 1 ~/wmt15_de_en_32k/tmp/test.de
    Die Premierminister Indiens und Japans trafen sich in Tokio .
    $ head -n 1 ~/wmt15_de_en_32k/tmp/test.en
    India and Japan prime ministers meet in Tokyo
    
    simuleval --agent mma-dummy/mmaAgent.py --source ~/wmt15_de_en_32k/tmp/test.de  \
    --target  ~/wmt15_de_en_32k/tmp/test.en  --data-bin ~/wmt15_de_en_32k/data-bin/  \
    --model-path ~/checkpoints/checkpoint_best.pt --bpe_code ~/wmt15_de_en_32k/code
    

    So, my questions are:

    1. Is it correct to provide tokenized but not BPE-applied test files as --source and --target to simuleval?
    2. Do the implementations of the segment_to_units and build_word_splitter functions seem correct?
    3. Could you please explain how units_to_segment and update_states_write should be implemented? (See the sketch below.)

    Edit: When I evaluate the best checkpoint on a subset of the test set using the above code, I get the following output:

    2021-09-19 22:10:08 | WARNING | sacrebleu | That's 100 lines that end in a tokenized period ('.')
    2021-09-19 22:10:08 | WARNING | sacrebleu | It looks like you forgot to detokenize your test data, which may hurt your score.
    2021-09-19 22:10:08 | WARNING | sacrebleu | If you insist your data is detokenized, or don't care, you can suppress this message with '--force'.
    2021-09-19 22:10:08 | INFO | simuleval.cli | Evaluation results: { "Quality": { "BLEU": 6.068334932433579 }, "Latency": { "AL": 7.8185020314753055, "AP": 0.833324143320322, "DAL": 11.775593814849854 } }
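
    For question 3, my rough understanding of what units_to_segment has to do under Moses + BPE (a sketch of the merging logic only, not SimulEval's actual interface): accumulate subword units until one arrives without the '@@' continuation marker, then emit the merged word.

    def merge_bpe_units(units):
        # 'word@@' continues the current word; anything else closes it
        words, current = [], ""
        for unit in units:
            if unit.endswith("@@"):
                current += unit[:-2]
            else:
                words.append(current + unit)
                current = ""
        return words

    # merge_bpe_units(["Pre@@", "mier@@", "minister", "Indiens"])
    # -> ["Premierminister", "Indiens"]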

    opened by kurtisxx 22
  • Moses tokenizer/detokenizer in cli options

    Hi, I'm wondering if you could add the Moses detokenizer to the CLI options? More specifically, when post-processing the prediction and reference, the " ".join() call could optionally be replaced by sacremoses's detokenizer.
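
    For concreteness, the replacement I have in mind (sacremoses is a real package; whether it fits SimulEval's post-processing hook is my assumption):

    from sacremoses import MosesDetokenizer

    md = MosesDetokenizer(lang="en")
    # instead of " ".join(tokens):
    md.detokenize(["India", "and", "Japan", "prime", "ministers", "meet", "in", "Tokyo", "."])
    # -> 'India and Japan prime ministers meet in Tokyo.'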

    Thanks.

    opened by George0828Zhang 0
  • visualization tool

    Hi, I'm using SimulEval to evaluate my system and everything works. I read in your paper that there is a visualization tool that can be run via simuleval server --visual --log-dir $DIR, but when I try to run this command it raises an error asking for the agent. Looking at your code, I don't see any reference to this visualization tool; is it available at the moment? Thanks

    opened by sarapapi 1
Releases

  • v1.0.2