Multispeaker & Emotional TTS based on Tacotron 2 and Waveglow

General description

This repository contains sample code for Tacotron 2 and WaveGlow with multi-speaker and emotion embeddings, together with a script for data preprocessing.
Checkpoints and code originate from the following sources:

Done:

  • took the best code parts from the five sources above
  • cleaned the code and fixed some mistakes
  • changed the code structure
  • added multi-speaker and emotion embeddings
  • added preprocessing
  • moved all configs from command-line args into experiment config files under the configs/experiments folder
  • added a restoring / checkpointing mechanism
  • added TensorBoard support
  • made the decoder work with n > 1 frames per step
  • made training work in FP16

TODO:

  • make it work with pytorch-1.4.0
  • add multi-spot-instance training on AWS

Getting Started

This section lists the requirements for training the Tacotron 2 and WaveGlow models.

Clone the repository:

git clone https://github.com/ide8/tacotron2  
cd tacotron2
PROJDIR=$(pwd)
export PYTHONPATH=$PROJDIR:$PYTHONPATH

Requirements

This repository contains a Dockerfile which extends the PyTorch NGC container and encapsulates some dependencies. Aside from these dependencies, make sure you have Docker with the NVIDIA container runtime installed (the run command below uses --runtime=nvidia).

Setup

Build an image from the Dockerfile:

docker build --tag taco .

Run docker container:

docker run --shm-size=8G --runtime=nvidia \
  -v /absolute/path/to/your/code:/app \
  -v /absolute/path/to/your/training_data:/mnt/train \
  -v /absolute/path/to/your/logs:/mnt/logs \
  -v /absolute/path/to/your/raw-data:/mnt/raw-data \
  -v /absolute/path/to/your/pretrained-checkpoint:/mnt/pretrained \
  --detach taco sleep inf

Check container id:

docker ps

Select the container id of the image with tag taco and log into the container with:

docker exec -it container_id bash

Code structure description

The tacotron2 and waveglow folders contain the scripts for the Tacotron 2 and WaveGlow models and consist of:

  • /model.py - model architecture
  • /data_function.py - data loading functions
  • /loss_function.py - loss function

The common folder contains layers shared by both models (common/layers.py), utilities (common/utils.py), and audio processing code (common/audio_processing.py and common/stft.py).

The router folder is used by the training script to select the appropriate model.

In the root directory:

  • train.py - script for model training
  • preprocess.py - performs audio processing and creates training and validation datasets
  • inference.ipynb - notebook for running inference

The configs folder contains __init__.py with all parameters needed for training and data processing. The configs/experiments folder holds the experiment files; waveglow.py and tacotron2.py are provided as examples for WaveGlow and Tacotron 2. When training or data processing starts, parameters are copied from your experiment file (in our case, from waveglow.py or tacotron2.py) to __init__.py, from which they are read by the system.
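For orientation, the layout implied by this description looks roughly like this (only the files mentioned above are shown):

configs/
    __init__.py            # parameters actually read by the training and preprocessing scripts
    experiments/
        tacotron2.py       # example experiment config for Tacotron 2
        waveglow.py        # example experiment config for WaveGlow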

Data preprocessing

Preparing for data preprocessing

  1. For each speaker you need a folder named after the speaker, containing a wavs folder and a metadata.csv file with the following line format: file_name.wav|text.
  2. All parameters needed for preprocessing are set in configs/experiments/waveglow.py or configs/experiments/tacotron2.py, in the class PreprocessingConfig.
  3. If you are running preprocessing for the first time, set the start_from_preprocessed flag to False. preprocess.py trims audio files according to PreprocessingConfig.top_db (cuts the silence at the beginning and the end) and applies an ffmpeg command to convert all wavs in the dataset to mono with the same sampling rate and bit rate.
  4. It saves a wavs folder with processed audio files and a data.csv file in PreprocessingConfig.output_directory with the following line format: path|text|speaker_name|speaker_id|emotion|text_len|duration.
  5. Trimming and the ffmpeg command are applied only to speakers whose process_audio flag is True. Speakers whose emotion_present flag is False are treated as having the emotion neutral-normal.
  6. You won't need start_from_preprocessed = False once you have finished running the preprocessing script; the only exception is when new raw data comes in.
  7. Once start_from_preprocessed is set to True, the script loads the data.csv file (created by the start_from_preprocessed = False run) and forms train.txt and val.txt from it.
  8. Main PreprocessingConfig parameters (see the sketch after this list):
    1. cpus - number of cores used by the batch generator
    2. sr - sample rate used for reading and writing audio
    3. emo_id_map - dictionary mapping emotion names to emotion_id
    4. data - each entry's 'path' points to a folder named after the speaker, containing a wavs folder and a metadata.csv file with the following line format: file_name.wav|text|emotion (emotion is optional)
  9. The preprocessing script forms the training and validation datasets as follows:
    1. selects rows whose audio duration and text length are less than or equal to those of the speaker PreprocessingConfig.limit_by (this step is needed for a proper batch size)
    2. if that speaker is not present, it selects rows within PreprocessingConfig.text_limit and PreprocessingConfig.dur_limit; the lower bound on audio duration is defined by PreprocessingConfig.minimum_viable_dur
    3. to use the same batch size as the NVIDIA reference implementation, set PreprocessingConfig.text_limit to linda_jonson
    4. splits the dataset randomly with ratio train : val = 0.95 : 0.05
    5. if a speaker's train set is bigger than PreprocessingConfig.n, samples n rows
    6. saves train.txt and val.txt to PreprocessingConfig.output_directory
    7. saves emotion_coefficients.json and speaker_coefficients.json with coefficients for loss balancing (used by train.py)
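A minimal sketch of what such a PreprocessingConfig might look like, using only the field names mentioned above; all values are placeholders, not the repository's defaults:

class PreprocessingConfig:
    cpus = 4                         # cores used by the batch generator
    sr = 22050                       # sample rate for reading and writing audio
    top_db = 40                      # silence-trimming threshold (placeholder)
    start_from_preprocessed = False  # set to True once data.csv has been generated
    output_directory = '/mnt/train'
    limit_by = 'linda_jonson'        # speaker whose duration/text limits are reused
    text_limit = 90                  # fallback text-length limit (placeholder)
    dur_limit = 10.0                 # fallback duration limit in seconds (placeholder)
    minimum_viable_dur = 0.5         # lower bound on audio duration in seconds (placeholder)
    n = 15000                        # cap on train rows per speaker (placeholder)
    emo_id_map = {'neutral-normal': 0}
    data = [
        {
            'path': '/mnt/raw-data/linda_jonson',  # folder named after the speaker
            'process_audio': True,                 # apply trimming and the ffmpeg command
            'emotion_present': False,              # metadata.csv has no emotion column
        },
    ]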

Run preprocessing

Since both waveglow.py and tacotron2.py contain the class PreprocessingConfig, the training and validation datasets can be produced by running either of them:

python preprocess.py --exp tacotron2

or

python preprocess.py --exp waveglow
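For reference, a hypothetical line of the generated data.csv, following the path|text|speaker_name|speaker_id|emotion|text_len|duration format described above:

/mnt/train/wavs/linda_jonson_0001.wav|Hello world.|linda_jonson|0|neutral-normal|12|1.25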

Training

Preparing for training

Tacotron 2

In configs/experiments/tacotron2.py, in the class Config set:

  1. training_files and validation_files - paths to train.txt and val.txt;
  2. tacotron_checkpoint - path to a pretrained Tacotron 2 checkpoint, if one exists (we were able to restore WaveGlow from NVIDIA, but the Tacotron 2 code was edited to add speakers and emotions, so Tacotron 2 needs to be trained from scratch);
  3. speaker_coefficients - path to speaker_coefficients.json;
  4. emotion_coefficients - path to emotion_coefficients.json;
  5. output_directory - path for writing logs and checkpoints;
  6. use_emotions - flag indicating whether emotions are used;
  7. use_loss_coefficients - flag indicating loss scaling to compensate for possible data imbalance across speakers and emotions; for loss balancing, set the paths to the JSONs with coefficients in emotion_coefficients and speaker_coefficients;
  8. model_name - "Tacotron2".
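A rough sketch of such a Config, using only the fields listed above (paths and flags are placeholders, not the repository's defaults):

class Config:
    model_name = "Tacotron2"
    training_files = '/mnt/train/train.txt'
    validation_files = '/mnt/train/val.txt'
    tacotron_checkpoint = None   # or a path to a pretrained Tacotron 2 checkpoint
    speaker_coefficients = '/mnt/train/speaker_coefficients.json'
    emotion_coefficients = '/mnt/train/emotion_coefficients.json'
    output_directory = '/mnt/logs'
    use_emotions = True
    use_loss_coefficients = True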
Launch training:

  • Single GPU:

    python train.py --exp tacotron2

  • Multi-GPU:

    python -m multiproc train.py --exp tacotron2

WaveGlow

In configs/experiments/waveglow.py, in the class Config set:

  1. training_files and validation_files - paths to train.txt and val.txt;
  2. waveglow_checkpoint - path to the pretrained WaveGlow checkpoint restored from NVIDIA (download the checkpoint first);
  3. output_directory - path for writing logs and checkpoints;
  4. use_emotions - False;
  5. use_loss_coefficients - False;
  6. model_name - "WaveGlow".
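A similarly hedged sketch of the WaveGlow Config (values are placeholders, not the repository's defaults):

class Config:
    model_name = "WaveGlow"
    training_files = '/mnt/train/train.txt'
    validation_files = '/mnt/train/val.txt'
    waveglow_checkpoint = '/mnt/pretrained/waveglow.pt'  # pretrained NVIDIA checkpoint (placeholder path)
    output_directory = '/mnt/logs'
    use_emotions = False
    use_loss_coefficients = False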
Launch training:

  • Single GPU:

    python train.py --exp waveglow

  • Multi-GPU:

    python -m multiproc train.py --exp waveglow

Running Tensorboard

Once your model has started training, you may want to track its progress. Find the running container:

docker ps

Select the container id of the image with tag taco and run:

docker exec -it container_id bash

Start Tensorboard:

 tensorboard --logdir=path_to_folder_with_logs --host=0.0.0.0

Loss is written to TensorBoard:

Tensorboard Scalars

Audio samples together with attention alignments are saved to TensorBoard every Config.epochs_per_checkpoint epochs. Transcripts for these audio samples are listed in Config.phrases.

Tensorboard Audio

Inference

Inference is run with the inference.ipynb notebook.

Run Jupyter Notebook:

jupyter notebook --ip 0.0.0.0 --port 6006 --no-browser --allow-root

output:

root@04096a19c266:/app# jupyter notebook --ip 0.0.0.0 --port 6006 --no-browser --allow-root
[I 09:31:25.393 NotebookApp] JupyterLab extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
[I 09:31:25.393 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
[I 09:31:25.395 NotebookApp] Serving notebooks from local directory: /app
[I 09:31:25.395 NotebookApp] The Jupyter Notebook is running at:
[I 09:31:25.395 NotebookApp] http://(04096a19c266 or 127.0.0.1):6006/?token=bbd413aef225c1394be3b9de144242075e651bea937eecce
[I 09:31:25.395 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 09:31:25.398 NotebookApp] 
    
    To access the notebook, open this file in a browser:
        file:///root/.local/share/jupyter/runtime/nbserver-15398-open.html
    Or copy and paste one of these URLs:
        http://(04096a19c266 or 127.0.0.1):6006/?token=bbd413aef225c1394be3b9de144242075e651bea937eecce

Select the address with 127.0.0.1 and open it in the browser. In this case: http://127.0.0.1:6006/?token=bbd413aef225c1394be3b9de144242075e651bea937eecce

This script takes text as input and runs Tacotron 2 and then WaveGlow inference to produce an audio file. It requires pretrained Tacotron 2 and WaveGlow checkpoints, input text, a speaker_id, and an emotion_id.

Change the paths to the pretrained Tacotron 2 and WaveGlow checkpoints in cell [2] of inference.ipynb.
Write the text to be synthesized in cell [7] of inference.ipynb.

Parameters

In this section, we list the most important hyperparameters, together with their default values that are used to train Tacotron 2 and WaveGlow models.

Shared parameters

  • epochs - number of epochs (Tacotron 2: 1501, WaveGlow: 1001)
  • learning-rate - learning rate (Tacotron 2: 1e-3, WaveGlow: 1e-4)
  • batch-size - batch size (Tacotron 2: 64, WaveGlow: 11)
  • grad_clip_thresh - gradient clipping threshold (0.1)

Shared audio/STFT parameters

  • sampling-rate - sampling rate in Hz of input and output audio (22050)
  • filter-length - FFT filter length, i.e., number of FFT points (1024)
  • hop-length - hop length for FFT, i.e., sample stride between consecutive FFTs (256)
  • win-length - window size for FFT (1024)
  • mel-fmin - lowest frequency in Hz (0.0)
  • mel-fmax - highest frequency in Hz (8000)
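As an illustration of how these parameters relate (not this repository's exact code), a mel filter bank and an STFT with the listed defaults could be set up as follows; the choice of 80 mel bands is an assumption:

import torch
import librosa

sampling_rate, filter_length, hop_length, win_length = 22050, 1024, 256, 1024
mel_fmin, mel_fmax = 0.0, 8000.0

# Mel filter bank covering 0-8000 Hz (80 mel bands assumed here)
mel_basis = librosa.filters.mel(sr=sampling_rate, n_fft=filter_length,
                                n_mels=80, fmin=mel_fmin, fmax=mel_fmax)

# STFT: frames of win_length samples, taken every hop_length samples
audio = torch.randn(22050)  # one second of dummy audio
spec = torch.stft(audio, n_fft=filter_length, hop_length=hop_length,
                  win_length=win_length, window=torch.hann_window(win_length),
                  return_complex=True)
magnitudes = spec.abs()  # shape: (filter_length // 2 + 1, n_frames)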

Tacotron parameters

  • anneal-steps - epochs at which to anneal the learning rate (500, 1000, 1500)
  • anneal-factor - factor by which to anneal the learning rate (0.1). These two parameters change the learning rate at the points defined in anneal-steps according to:
    learning_rate = learning_rate * (anneal_factor ** p),
    where p starts at 0 and increments by 1 each time an anneal step is passed.
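A minimal sketch of this schedule, assuming the defaults above (base learning rate 1e-3 for Tacotron 2, anneal-factor 0.1, anneal-steps 500/1000/1500):

def annealed_lr(base_lr, epoch, anneal_steps=(500, 1000, 1500), anneal_factor=0.1):
    # p counts how many anneal steps have already been passed at this epoch
    p = sum(1 for step in anneal_steps if epoch >= step)
    return base_lr * (anneal_factor ** p)

# With base_lr = 1e-3: epochs 0-499 -> 1e-3, 500-999 -> 1e-4,
# 1000-1499 -> 1e-5, 1500 and later -> 1e-6
print(annealed_lr(1e-3, 750))  # 0.0001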

WaveGlow parameters

  • segment-length - length of the input audio segment processed by the network (8000 samples). Before being passed to the network, audio is padded or cropped to segment-length.
  • wn_config - dictionary with the parameters of the affine coupling layers; contains n_layers, n_channels, and kernel_size.
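For illustration, a wn_config of this shape (the values shown are common WaveGlow settings used as placeholders, not necessarily this repository's defaults):

wn_config = {
    'n_layers': 8,      # dilated convolution layers per coupling block
    'n_channels': 256,  # channels in the dilated convolutions
    'kernel_size': 3,   # convolution kernel size
}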

Contributing

If you've ever wanted to contribute to open source and a great cause, now is your chance!

See the contributing docs for more information.

Owner

Ivan Didur, CTO at DataRoot Labs