Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding

Overview

⚠️ Check out the develop branch to see what is coming in pyannote.audio 2.0.

Neural speaker diarization with pyannote-audio

pyannote.audio is an open-source toolkit written in Python for speaker diarization. Based on the PyTorch machine learning framework, it provides a set of trainable end-to-end neural building blocks that can be combined and jointly optimized to build speaker diarization pipelines.

pyannote.audio also comes with pretrained models covering a wide range of domains for voice activity detection, speaker change detection, overlapped speech detection, and speaker embedding:

Segmentation example (see the introductory notebook: Open In Colab)
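
For instance, applying a pretrained diarization pipeline looks roughly like this (a sketch based on the 1.x torch.hub interface; 'dia' is one of the published pipeline names):

import torch

# load a pretrained speaker diarization pipeline from torch.hub
pipeline = torch.hub.load('pyannote/pyannote-audio', 'dia')

# apply the pipeline to an audio file
diarization = pipeline({'audio': 'audio.wav'})

# print speech turns with their speaker labels
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f'{speaker} speaks between t={turn.start:.1f}s and t={turn.end:.1f}s')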

Installation

pyannote.audio only supports Python 3.7 (or later) on Linux and macOS. It might work on Windows, but there is no guarantee that it does, nor any plan to add official support for Windows.

The instructions below assume that pytorch has been installed using the instructions from https://pytorch.org.

$ pip install pyannote.audio==1.1.1

Documentation and tutorials

Until proper documentation is released, note that part of the API is described in this tutorial.

Citation

If you use pyannote.audio, please use the following citation:

@inproceedings{Bredin2020,
  Title = {{pyannote.audio: neural building blocks for speaker diarization}},
  Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe},
  Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing},
  Address = {Barcelona, Spain},
  Month = {May},
  Year = {2020},
}
Comments
  • [WIP] Multilabel Detection

    This is a new PR for the VTC feature, this time based on a cleaner implementation. I'm opening a new PR so as to keep the former branch "clean" (and prevent any mishaps).

    What is done:

    • renamed the SpeakerTracking task to MultilabelDetection
    • added MultilabelPipeline
    • updated MultilabelFscore.report()
    • tested the new preprocessor

    What's to be done:

    • [ ] re-test the new implementation on our clinical data (as well as the child data from @MarvinLvn )
    • [ ] maybe a couple of unit tests (especially for the preprocessor)
    • [ ] maybe make the aggregated "multilabel" fscore duration-based instead of file-based (see the sketch below)
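
    For the last point, duration-based aggregation could look something like this minimal sketch (a hypothetical helper, not code from the PR):

    # hypothetical sketch: aggregate per-file F-scores weighted by file
    # duration instead of averaging one score per file
    def aggregate_fscore(per_file):
        """per_file: list of (fscore, duration_in_seconds) pairs."""
        total_duration = sum(duration for _, duration in per_file)
        return sum(f * d for f, d in per_file) / total_duration

    # a 1-hour file now weighs 60x more than a 1-minute file
    print(aggregate_fscore([(0.90, 3600.0), (0.50, 60.0)]))  # ≈ 0.893
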
    opened by hadware 29
  • Trying the diarization pipeline on random .wav files

    Hey, as suggested, I went through the detailed tutorials and trained all the models required for the pipeline. The pipeline works on the AMI dataset, but when I try to reproduce the results on other .wav files (sampled at 16 kHz, mono, 256 kbps), it is not able to diarize the audio. Here is a brief summary of what I actually did.

    1. Took a random meeting audio file, sampled at 16 kHz, mono, 256 kbps
    2. Renamed it to ES2003a and replaced the actual ES2003a with it (thought of this as a workaround for creating another database)
    3. Ran all the pipelines (sad, scd, emb, diarization)

    Output :

    1. Speech activity detection works perfectly and is able to classify regions of speech.
    2. Speaker diarization doesn't work; everything is classified as 0.

    Can you please tell me whether the pipeline is giving wrong diarization outputs because I replaced the actual file, and what a better way to test the pipeline on random audio would be?

    opened by saisumit 26
  • build error

    Hi, when I run pip install "pyannote.audio==0.3", I get the following error message:

    In file included from _pysndfile.cpp:471:0:
    pysndfile.hh:55:21: fatal error: sndfile.h: No such file or directory
     #include <sndfile.h>
                         ^
    compilation terminated.
    error: command 'gcc' failed with exit status 1

    Failed building wheel for pysndfile
    Running setup.py clean for pysndfile
    Failed to build pysndfile
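
    The missing sndfile.h header comes from libsndfile. Assuming a Debian/Ubuntu system, installing the development package usually fixes this build step:

    $ sudo apt-get install libsndfile1-dev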

    cannot_reproduce 
    opened by ChristopherLu 24
  • Add support for file handle to pyannote.audio.core.io.Audio

    This is not currently supported:

    from pyannote.audio.core.io import Audio
    from pyannote.core import Segment
    audio = Audio()
    with open('file.wav', 'rb') as f:
        waveform, sample_rate = audio(f)
    with open('file.wav', 'rb') as f:
        waveform, sample_rate = audio.crop(f, Segment(10, 20))
    

    One has to do this instead:

    from pyannote.audio.core.io import Audio
    from pyannote.core import Segment
    audio = Audio()
    waveform, sample_rate = audio('file.wav')
    waveform, sample_rate = audio.crop('file.wav', Segment(10, 20))
    

    This is a limitation that might be problematic (e.g. with streamlit.file_uploader, which returns a file handle).
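
    As a possible interim workaround (a sketch only; it assumes Audio also accepts an in-memory {"waveform", "sample_rate"} mapping), the file handle could be decoded with torchaudio first:

    import torchaudio
    from pyannote.audio.core.io import Audio
    from pyannote.core import Segment

    audio = Audio()
    with open('file.wav', 'rb') as f:
        # recent torchaudio backends accept file-like objects
        waveform, sample_rate = torchaudio.load(f)
    # pass the decoded audio as an in-memory mapping (assumed supported)
    file = {"waveform": waveform, "sample_rate": sample_rate}
    waveform, sample_rate = audio(file)
    chunk, sample_rate = audio.crop(file, Segment(10, 20))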

    v2 
    opened by hbredin 20
  • ValueError: inconsistent "classes" (is ['non_change', 'change'], should be: ['non_speech', 'speech'])

    Describe the bug: I'm trying to go through the diarization pipeline tutorial on my own data.

    I am trying to run "apply" on my own data and model for speaker change detection. I get an error that suggests it is trying to apply speech activity detection instead:

    ValueError: inconsistent "classes" (is ['non_change', 'change'], should be: ['non_speech', 'speech'])

    To reproduce:

    $ export EXP_DIR=tutorials/pipelines/speaker_diarization 
    $ pyannote-audio scd  apply --step=0.1 --pretrained="<path to>/tutorials/models/speaker_change_detection/train/myData.SpeakerDiarization.general.train/validate_segmentation_fscore/myData.SpeakerDiarization.general.train" --subset=train ${EXP_DIR} myData.SpeakerDiarization.general
    
    

    pyannote environment

    $ pip freeze | grep pyannote
    pyannote.core==4.1
    pyannote.database==4.0.1
    pyannote.metrics==3.0.1
    pyannote.pipeline==1.5.2
    

    Additional context: I have only prepared a development set called "train" right now, so I'm running on that. I successfully ran the SAD apply step before moving on to SCD.

    wontfix 
    opened by danFromTelAviv 20
  • An error was encountered while loading "pyannote/speaker-diarization"

    Hello, when I run this code:

    from pyannote.audio import Pipeline
    pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization",
                                        use_auth_token="my_token")
    

    I get this error:

    Traceback (most recent call last):
      File "/home/dg/anaconda3/envs/pyannote/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 213, in hf_raise_for_status
        response.raise_for_status()
      File "/home/dg/anaconda3/envs/pyannote/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
        raise HTTPError(http_error_msg, response=self)
    requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/pyannote/segmentation/resolve/2022.07/pytorch_model.bin
    

    This happens whether I use the read token role or the write token role. Does anyone know how to fix it? Thanks.

    opened by Zpadger 18
  • [WIP] Feat/vtc

    This is a working PR on the future VTC implementation, inspired by @MarvinLvn 's work and to be merged into the next release of pyannote-audio.

    Note: nothing has been done yet; this is just to get things started.

    wontfix 
    opened by hadware 17
  • Trying to finetune model for new speaker

    I am trying to finetune the models to support one more speaker, but it looks like I am doing something wrong.

    I want to use the "dia_hard" pipeline, so I need to finetune these models: {sad_dihard, scd_dihard, emb_voxceleb}.

    For my speaker I have one WAV file with a duration of more than 1 hour.

    So, I created database.yml file:

    Databases:
       IK: /content/fine/kirilov/{uri}.wav
    
    Protocols:
        IK:
           SpeakerDiarization:
              kirilov:
                train:
                   uri: train.lst
                   annotation: train.rttm
                   annotated: train.uem
    

    and put additional files near database.yml:

    kirilov
    ├── database.yml
    ├── kirilov.wav
    ├── train.lst
    ├── train.rttm
    └── train.uem
    

    train.lst: kirilov

    train.rttm: SPEAKER kirilov 1 0.0 3600.0 <NA> <NA> Kirilov <NA> <NA>

    train.uem: kirilov NA 0.0 3600.0

    I assume this tells the trainer to use the kirilov.wav file and take 3600 seconds of audio from it for training.

    Now I finetune the models. The current folder is /content/fine/kirilov, so database.yml is taken from the current directory:

    !pyannote-audio sad train --pretrained=sad_dihard --subset=train --to=1 --parallel=4 "/content/fine/sad" IK.SpeakerDiarization.kirilov
    !pyannote-audio scd train --pretrained=scd_dihard --subset=train --to=1 --parallel=4 "/content/fine/scd" IK.SpeakerDiarization.kirilov
    !pyannote-audio emb train --pretrained=emb_voxceleb --subset=train --to=1 --parallel=4 "/content/fine/emb" IK.SpeakerDiarization.kirilov
    

    Output looks like:

    Using cache found in /root/.cache/torch/hub/pyannote_pyannote-audio_develop
    Loading labels: 0file [00:00, ?file/s]/usr/local/lib/python3.6/dist-packages/pyannote/database/protocol/protocol.py:128: UserWarning:
    
    Existing key "annotation" may have been modified.
    
    Loading labels: 1file [00:00, 20.49file/s]
    /usr/local/lib/python3.6/dist-packages/pyannote/audio/train/trainer.py:128: UserWarning:
    
    Did not load optimizer state (most likely because current training session uses a different loss than the one used for pre-training).
    
    2020-06-19 15:35:26.763592: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
    Training:   0%|                                        | 0/1 [00:00<?, ?epoch/s]
    Epoch #1:   0%|                                       | 0/29 [00:00<?, ?batch/s]
    Epoch #1:   0%|                           | 0/29 [00:00<?, ?batch/s, loss=0.676]
    Epoch #1:   3%|▋                  | 1/29 [00:00<00:26,  1.04batch/s, loss=0.676]
    

    Etc.

    Then I try to run the pipeline with the new .pt files:

    import os
    import torch
    from pyannote.audio.pipeline import SpeakerDiarization
    pipeline = SpeakerDiarization(embedding = "/content/fine/emb/train/IK.SpeakerDiarization.kirilov.train/weights/0001.pt", 
                                  sad_scores = "/content/fine/sad/train/IK.SpeakerDiarization.kirilov.train/weights/0001.pt",
                                  scd_scores = "/content/fine/scd/train/IK.SpeakerDiarization.kirilov.train/weights/0001.pt",
                                  method= "affinity_propagation")
    
    #params from dia_dihard\train\X.SpeakerDiarization.DIHARD_Official.development\params.yml
    pipeline.load_params("/content/drive/My Drive/pyannote/params.yml")
    FILE = {'audio': "/content/groundtruth/new.wav"}
    diarization = pipeline(FILE)
    diarization
    

    The result is that for my new.wav the whole audio is recognized as speaker talking without pauses. So I assume that the models were broken. And it does not matter if I train for 1 epoch or for 100.

    In case I use 0000.pt (I assume these are the original models):

    pipeline = SpeakerDiarization(embedding = "/content/fine/emb/train/IK.SpeakerDiarization.kirilov.train/weights/0000.pt", 
                                  sad_scores = "/content/fine/sad/train/IK.SpeakerDiarization.kirilov.train/weights/0000.pt",
                                  scd_scores = "/content/fine/scd/train/IK.SpeakerDiarization.kirilov.train/weights/0000.pt",
                                  method= "affinity_propagation")
    

    or

    the weights from the original models:
    pipeline = SpeakerDiarization(embedding = "/content/drive/My Drive/pyannote/emb_voxceleb/train/X.SpeakerDiarization.VoxCeleb.train/weights/0326.pt", 
                                 sad_scores = "/content/drive/My Drive/pyannote/sad_dihard/sad_dihard/train/X.SpeakerDiarization.DIHARD_Official.train/weights/0231.pt",
                                 scd_scores = "/content/drive/My Drive/pyannote/scd_dihard/train/X.SpeakerDiarization.DIHARD_Official.train/weights/0421.pt",
                                 method= "affinity_propagation")
    

    everything is OK and the result is similar to:

    pipeline = torch.hub.load('pyannote/pyannote-audio', 'dia_dihard')
    FILE = {'audio': "/content/groundtruth/new.wav"}
    diarization = pipeline(FILE)
    diarization
    

    Could you please advise what could be wrong with my training/finetuning process?

    opened by marlon-br 17
  • `b c t` vs. `b t c`?

    Issue by hbredin, Friday Oct 30, 2020 at 16:46 GMT. Originally opened as https://github.com/hbredin/pyannote-audio-v2/issues/54


    Which convention should we use?
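
    For context, the two conventions are tensor layouts for batched sequences. A minimal illustration (not from the issue itself):

    import torch

    batch, channels, time = 4, 64, 1000

    # "b c t": channel-first layout, expected by torch.nn.Conv1d
    x_bct = torch.randn(batch, channels, time)

    # "b t c": time-first layout, expected by torch.nn.LSTM(batch_first=True)
    # and torch.nn.MultiheadAttention(batch_first=True)
    x_btc = x_bct.transpose(1, 2)  # swap the channel and time axes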

    v2 
    opened by mogwai 16
  • Segmentation Fault when conducting change detection tutorial

    Hi,

    Everything seems OK for the feature extraction tutorial. But when I train the model following exactly what the tutorial says for change detection, I get a segmentation fault. What might be the reason? Thank you for your help.

    opened by Charliechen1 16
  • Cannot find my Pretrained model

    I successfully trained an SAD model. I want to create SAD scores as part of the speaker diarization pipeline. I thought I was passing the weights correctly to the pyannote-audio script, but the model is never found and the script aborts. Here is the output of my bash script with tracing on.

    This is the error message I get.

    RuntimeError: Cannot find callable /misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt in hubconf

    For readability, I have boldfaced the commands in the script.

    ++ export EXP_DIR=models/speaker_diarization
    ++ EXP_DIR=models/speaker_diarization
    ++ cd /misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2
    ++ export PYANNOTE_DATABASE_CONFIG=/misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/database.yml
    ++ PYANNOTE_DATABASE_CONFIG=/misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/database.yml
    ++ sad_model=/misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt
    ++ ls /misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt
    /misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt
    ++ pyannote-audio sad apply --step=0.1 --pretrained=/misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt --subset=dev models/speaker_diarization AMI.SpeakerDiarization.MixHeadset
    Using cache found in /home/map22/.cache/torch/hub/pyannote_pyannote-audio_develop
    Traceback (most recent call last):
      File "/misc/vlgscratch4/PichenyGroup/picheny/anaconda3/envs/pyannote-feb32020/bin/pyannote-audio", line 8, in <module>
        sys.exit(main())
      File "/misc/vlgscratch4/PichenyGroup/picheny/anaconda3/envs/pyannote-feb32020/lib/python3.7/site-packages/pyannote/audio/applications/pyannote_audio.py", line 406, in main
        apply_pretrained(validate_dir, protocol, **params)
      File "/misc/vlgscratch4/PichenyGroup/picheny/anaconda3/envs/pyannote-feb32020/lib/python3.7/site-packages/pyannote/audio/applications/base.py", line 514, in apply_pretrained
        step=step)
      File "/misc/vlgscratch4/PichenyGroup/picheny/anaconda3/envs/pyannote-feb32020/lib/python3.7/site-packages/torch/hub.py", line 364, in load
        entry = _load_entry_from_hubconf(hub_module, model)
      File "/misc/vlgscratch4/PichenyGroup/picheny/anaconda3/envs/pyannote-feb32020/lib/python3.7/site-packages/torch/hub.py", line 237, in _load_entry_from_hubconf
        raise RuntimeError('Cannot find callable {} in hubconf'.format(model))
    RuntimeError: Cannot find callable /misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt in hubconf

    opened by picheny-nyu 15
  • TypeError: __init__() missing 1 required positional argument: 'signature' while reproducing change-detection tutorial

    While following this tutorial https://github.com/pyannote/pyannote-audio/tree/89da05ea9d6de97da9bd21949a26ceb0042ef361/tutorials/change-detection

    and executing this command:

    $ pyannote-change-detection train \
        ${EXPERIMENT_DIR} \
        AMI.SpeakerDiarization.MixHeadset

    I get the following error:

    File "/home/jashwanth/miniconda3/envs/py36-pyannote-audio/bin/pyannote-change-detection", line 8, in <module>
      sys.exit(main())
    File "/home/jashwanth/miniconda3/envs/py36-pyannote-audio/lib/python3.6/site-packages/pyannote/audio/applications/change_detection.py", line 380, in main
      train(protocol, experiment_dir, train_dir, subset=subset)
    File "/home/jashwanth/miniconda3/envs/py36-pyannote-audio/lib/python3.6/site-packages/pyannote/audio/applications/change_detection.py", line 202, in train
      generator = ChangeDetectionBatchGenerator(feature_extraction)
    File "/home/jashwanth/miniconda3/envs/py36-pyannote-audio/lib/python3.6/site-packages/pyannote/audio/generators/change.py", line 93, in __init__
      segment_generator)
    TypeError: __init__() missing 1 required positional argument: 'signature'

    Please, can anyone help me with this?

    Thank you in advance

    opened by Jashwantherao 0
  • How to select threshold and min_cluster_size values in clustering after I finetuned the embedding model?

    Hi, thanks for sharing this great repository. I have a question: I finetuned the embedding model in the speaker diarization pipeline, and now I don't know how to set the threshold and min_cluster_size params in the config.yaml file. Can you give me some advice?
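
    One common approach (a sketch only, assuming a pyannote.database protocol with an annotated development set; the protocol name and iteration count are placeholders) is to re-tune those hyper-parameters with pyannote.pipeline's Optimizer and copy the result into config.yaml:

    # minimal sketch: re-tune pipeline hyper-parameters (e.g. the clustering
    # threshold) on a development set after swapping in a finetuned model
    from pyannote.database import get_protocol
    from pyannote.pipeline import Optimizer
    from pyannote.audio import Pipeline

    pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization")
    protocol = get_protocol("MyDatabase.SpeakerDiarization.MyProtocol")

    optimizer = Optimizer(pipeline)
    optimizer.tune(list(protocol.development()), n_iterations=50)
    print(optimizer.best_params)  # tuned values to copy into config.yaml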

    opened by dungnguyen98 0
  • Fixes for PytorchLightning >= 1.8

    Adjust to PytorchLightning API changes in version 1.8.0. I did some testing to make sure nothing broke, including model training/finetuning and loading from pretrained/HF-hub; however, my tests likely didn't cover everything.

    opened by entn-at 1
  • Support MLflow along with Tensorboard for logging segmentation task visualizations during validation

    When using PyTorch-Lightning's MLFlowLogger instead of TensorBoardLogger during training of segmentation models, the current implementation fails because MLflow's experiment tracking client has a different method for logging figures than Tensorboard. Unfortunately, PyTorch-Lightning doesn't abstract away this logger API difference.
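
    A sketch of the kind of logger dispatch this implies (illustrative only, not the PR's actual code):

    import matplotlib.pyplot as plt
    from pytorch_lightning.loggers import MLFlowLogger, TensorBoardLogger

    def log_figure(logger, tag: str, fig: plt.Figure, step: int):
        # TensorBoard exposes add_figure on its SummaryWriter, while
        # MLflow's tracking client saves figures as artifacts via log_figure
        if isinstance(logger, TensorBoardLogger):
            logger.experiment.add_figure(tag, fig, global_step=step)
        elif isinstance(logger, MLFlowLogger):
            logger.experiment.log_figure(logger.run_id, fig, f"{tag}_{step}.png")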

    Tested with MLflow and Tensorboard.

    opened by entn-at 1
  • Will it work on real time streaming data ?

    I am currently running it on an Apple M2 chip and it is taking much more time than on Colab. Is there a way the pipeline could be adapted to streaming data and combined with some transcription service?

    opened by ankurdhuriya 0
Releases (1.1.1)