Word2Wave: a framework for generating short audio samples from a text prompt using WaveGAN and COALA.

Overview

Word2Wave

Word2Wave is a simple method for text-controlled GAN audio generation. You can either follow the setup instructions below and use the source code and CLI provided in this repo, or have a play around in the Colab notebook provided. Note that, in both cases, you will need to train a WaveGAN model first. You can also hear some examples here.

Colab playground Open In Colab
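
At a high level, Word2Wave searches WaveGAN's latent space for a sample whose COALA audio embedding is close to the COALA embedding of the text prompt. The following is only a rough sketch of that idea, assuming pre-trained models are available as callables; the function and argument names are placeholders and do not correspond to this repo's actual API.

import torch
import torch.nn.functional as F

def word2wave_sketch(generator, audio_encoder, text_embedding,
                     latent_dim=100, steps=1000, lr=0.04):
    """Illustrative latent-search sketch, not the repo's exact implementation.

    generator:      pre-trained WaveGAN generator, (1, latent_dim) latent -> waveform
    audio_encoder:  pre-trained COALA audio encoder, waveform -> embedding
    text_embedding: COALA tag-encoder embedding of the text prompt
    """
    z = torch.randn(1, latent_dim, requires_grad=True)  # latent vector to optimise
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        audio = generator(z)                             # candidate audio sample
        audio_emb = audio_encoder(audio)                 # its COALA embedding
        # Pull the generated audio's embedding towards the text embedding.
        loss = 1.0 - F.cosine_similarity(audio_emb, text_embedding, dim=-1).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return generator(z).detach()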

Setup

First, clone the repository

git clone https://www.github.com/ilaria-manco/word2wave

Create a virtual environment and install the requirements:

cd word2wave
python3 -m venv /path/to/venv/
source /path/to/venv/bin/activate
pip install -r requirements.txt
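
To quickly confirm the environment is set up, you can check that the main dependencies from requirements.txt import cleanly (torch and librosa, among others, going by the pip log in the comments below):

import librosa
import torch

# Simple post-install check: the pinned dependencies should import and report their versions.
print("torch", torch.__version__)
print("librosa", librosa.__version__)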

WaveGAN generator

Word2Wave requires a pre-trained WaveGAN generator. In my experiments, I trained my own on the Freesound Loop Dataset (FSL10K), using this implementation. To download the dataset, run:

$ wget https://zenodo.org/record/3967852/files/FSL10K.zip?download=1

and then train following the instructions in the WaveGAN repo. Once trained, place the model in the wavegan folder:

📂wavegan
 ┗ 📜gan_.tar
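
As noted in one of the issues below, the wavegan-pytorch implementation appears to expect the audio files to be organised into train and valid subdirectories, and FSL10K does not ship a standard split. Below is a minimal sketch of one way to create a random split before training; the source path is an assumption, so point SRC at wherever the extracted loops actually live.

import random
import shutil
from pathlib import Path

# Assumed paths: adjust SRC to the location of the extracted FSL10K loops.
SRC = Path("FSL10K/audio/wav")
DST = Path("fsl10k_split")
VALID_FRACTION = 0.1

files = sorted(SRC.glob("*.wav"))
random.seed(0)                      # make the split reproducible
random.shuffle(files)

n_valid = int(len(files) * VALID_FRACTION)
splits = {"valid": files[:n_valid], "train": files[n_valid:]}

for split_name, split_files in splits.items():
    out_dir = DST / split_name
    out_dir.mkdir(parents=True, exist_ok=True)
    for f in split_files:
        shutil.copy(f, out_dir / f.name)  # copy each loop into its split folder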

Pre-trained COALA encoders

You'll need to download the pre-trained weights for the COALA tag and audio encoders from the official repo. Note that the repo provides weights for models trained with different configurations (e.g. different weightings of the loss components); for more details on this, refer to the original code and paper. To download the model weights, run the following commands (or the equivalent for the desired model configuration):

$ wget https://raw.githubusercontent.com/xavierfav/coala/master/saved_models/dual_ae_c/audio_encoder_epoch_200.pt
$ wget https://raw.githubusercontent.com/xavierfav/coala/master/saved_models/dual_ae_c/tag_encoder_epoch_200.pt

Once downloaded, place them in the coala/models folder:

📂coala
 ┣ 📂models
   ┣ 📂dual_ae_c
     ┣ 📜audio_encoder_epoch_200.pt
     ┗ 📜tag_encoder_epoch_200.pt
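
As a quick sanity check that the downloads completed correctly, you can confirm that each checkpoint deserialises with PyTorch; the paths below simply follow the folder layout shown above.

import torch

# Paths follow the coala/models layout above.
for name in ("tag_encoder_epoch_200.pt", "audio_encoder_epoch_200.pt"):
    state = torch.load(f"coala/models/dual_ae_c/{name}", map_location="cpu")
    # A valid download should load without errors, typically as a dict of tensors.
    print(name, type(state).__name__)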

How to use

For text-to-audio generation using the default parameters, simply run:

$ python main.py "text prompt" --wavegan_path <path/to/wavegan/model> --output_dir <path/to/output/dir>
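
To generate audio for several prompts in one go, one option is to call the same CLI in a loop from Python. This is only a convenience sketch; the checkpoint filename and output directory are placeholders for your own paths.

import subprocess

# Placeholder paths: point these at your trained WaveGAN checkpoint and an output folder.
WAVEGAN_PATH = "wavegan/gan_.tar"
OUTPUT_DIR = "output"

prompts = ["water drops", "footsteps on gravel", "dog barking"]
for prompt in prompts:
    # Invoke the CLI shown above once per prompt.
    subprocess.run(
        ["python", "main.py", prompt,
         "--wavegan_path", WAVEGAN_PATH,
         "--output_dir", OUTPUT_DIR],
        check=True,
    )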

Citations

Some of the code in this repo is adapted from the official COALA repo and @mostafaelaraby's PyTorch implementation of the WaveGAN model.

@inproceedings{donahue2018adversarial,
  title={Adversarial Audio Synthesis},
  author={Donahue, Chris and McAuley, Julian and Puckette, Miller},
  booktitle={International Conference on Learning Representations},
  year={2018}
}
@article{favory2020coala,
  title={Coala: Co-aligned autoencoders for learning semantically enriched audio representations},
  author={Favory, Xavier and Drossos, Konstantinos and Virtanen, Tuomas and Serra, Xavier},
  journal={arXiv preprint arXiv:2006.08386},
  year={2020}
}

Comments
  • Colab notebook: Where are weights?

    Thanks for sharing the notebook. Could it perhaps be documented a bit more, to make it easier for new users (like me) to get it working without crashing? I can't figure out how to fill in the missing information about where to find the weights.

    Below is a log of my run...

    !nvidia-smi -L
    GPU 0: Tesla P100-PCIE-16GB (UUID: GPU-5218d88a-592a-b7c2-d10c-ff61031ab247)
    

    Mount your drive

    Mounted at /content/drive
    

    Install Word2Wave, import necessary packages

    Cloning into 'word2wave'...
    remote: Enumerating objects: 349, done.
    remote: Counting objects: 100% (349/349), done.
    remote: Compressing objects: 100% (311/311), done.
    remote: Total 349 (delta 185), reused 81 (delta 33), pack-reused 0
    Receiving objects: 100% (349/349), 1.10 MiB | 5.21 MiB/s, done.
    Resolving deltas: 100% (185/185), done.
    /content/word2wave
    Requirement already satisfied: matplotlib>=2.2.4 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 1)) (3.2.2)
    Requirement already satisfied: numpy>=1.16.3 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 2)) (1.19.5)
    Collecting librosa==0.6.3
      Downloading librosa-0.6.3.tar.gz (1.6 MB)
         |████████████████████████████████| 1.6 MB 5.0 MB/s
    Collecting pescador>=2.0.1
      Downloading pescador-2.1.0.tar.gz (20 kB)
    Requirement already satisfied: torch>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 5)) (1.10.0+cu111)
    Requirement already satisfied: tqdm>=4.32.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 6)) (4.62.3)
    Collecting numba==0.49.0
      Downloading numba-0.49.0-cp37-cp37m-manylinux2014_x86_64.whl (3.6 MB)
         |████████████████████████████████| 3.6 MB 36.1 MB/s
    Collecting torchaudio==0.8.1
      Downloading torchaudio-0.8.1-cp37-cp37m-manylinux1_x86_64.whl (1.9 MB)
         |████████████████████████████████| 1.9 MB 64.3 MB/s
    Requirement already satisfied: audioread>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from librosa==0.6.3->-r requirements.txt (line 3)) (2.1.9)
    Requirement already satisfied: scipy>=1.0.0 in /usr/local/lib/python3.7/dist-packages (from librosa==0.6.3->-r requirements.txt (line 3)) (1.4.1)
    Requirement already satisfied: scikit-learn!=0.19.0,>=0.14.0 in /usr/local/lib/python3.7/dist-packages (from librosa==0.6.3->-r requirements.txt (line 3)) (1.0.1)
    Requirement already satisfied: joblib>=0.12 in /usr/local/lib/python3.7/dist-packages (from librosa==0.6.3->-r requirements.txt (line 3)) (1.1.0)
    Requirement already satisfied: decorator>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from librosa==0.6.3->-r requirements.txt (line 3)) (4.4.2)
    Requirement already satisfied: six>=1.3 in /usr/local/lib/python3.7/dist-packages (from librosa==0.6.3->-r requirements.txt (line 3)) (1.15.0)
    Requirement already satisfied: resampy>=0.2.0 in /usr/local/lib/python3.7/dist-packages (from librosa==0.6.3->-r requirements.txt (line 3)) (0.2.2)
    Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from numba==0.49.0->-r requirements.txt (line 7)) (57.4.0)
    Collecting llvmlite<=0.33.0.dev0,>=0.31.0.dev0
      Downloading llvmlite-0.32.1-cp37-cp37m-manylinux1_x86_64.whl (20.2 MB)
         |████████████████████████████████| 20.2 MB 1.3 MB/s
    Collecting torch>=1.1.0
      Downloading torch-1.8.1-cp37-cp37m-manylinux1_x86_64.whl (804.1 MB)
         |████████████████████████████████| 804.1 MB 2.6 kB/s
    Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.1.0->-r requirements.txt (line 5)) (3.10.0.2)
    Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.2.4->-r requirements.txt (line 1)) (2.8.2)
    Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.2.4->-r requirements.txt (line 1)) (3.0.6)
    Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.2.4->-r requirements.txt (line 1)) (1.3.2)
    Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.2.4->-r requirements.txt (line 1)) (0.11.0)
    Requirement already satisfied: pyzmq>=15.0 in /usr/local/lib/python3.7/dist-packages (from pescador>=2.0.1->-r requirements.txt (line 4)) (22.3.0)
    Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn!=0.19.0,>=0.14.0->librosa==0.6.3->-r requirements.txt (line 3)) (3.0.0)
    Building wheels for collected packages: librosa, pescador
      Building wheel for librosa (setup.py) ... done
      Created wheel for librosa: filename=librosa-0.6.3-py3-none-any.whl size=1573336 sha256=fff7ac07e9d03aa008fcf3f1f369f8acffdb27082d75ffead993dfeb62fb468d
      Stored in directory: /root/.cache/pip/wheels/de/c1/94/619fb8b04ee1f567115662d26650677ecf79bc7d8e462d21f8
      Building wheel for pescador (setup.py) ... done
      Created wheel for pescador: filename=pescador-2.1.0-py3-none-any.whl size=21104 sha256=4a7aaeaff3c65a1913ee3bad1cbd83c1c6e541790056b38a2848a31f5568f4e9
      Stored in directory: /root/.cache/pip/wheels/f0/e3/c6/32d30d5eb5292dac352d2fca4ebf393aa94e09b9b8b4b0f341
    Successfully built librosa pescador
    Installing collected packages: llvmlite, numba, torch, torchaudio, pescador, librosa
      Attempting uninstall: llvmlite
        Found existing installation: llvmlite 0.34.0
        Uninstalling llvmlite-0.34.0:
          Successfully uninstalled llvmlite-0.34.0
      Attempting uninstall: numba
        Found existing installation: numba 0.51.2
        Uninstalling numba-0.51.2:
          Successfully uninstalled numba-0.51.2
      Attempting uninstall: torch
        Found existing installation: torch 1.10.0+cu111
        Uninstalling torch-1.10.0+cu111:
          Successfully uninstalled torch-1.10.0+cu111
      Attempting uninstall: torchaudio
        Found existing installation: torchaudio 0.10.0+cu111
        Uninstalling torchaudio-0.10.0+cu111:
          Successfully uninstalled torchaudio-0.10.0+cu111
      Attempting uninstall: librosa
        Found existing installation: librosa 0.8.1
        Uninstalling librosa-0.8.1:
          Successfully uninstalled librosa-0.8.1
    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    torchvision 0.11.1+cu111 requires torch==1.10.0, but you have torch 1.8.1 which is incompatible.
    torchtext 0.11.0 requires torch==1.10.0, but you have torch 1.8.1 which is incompatible.
    kapre 0.3.6 requires librosa>=0.7.2, but you have librosa 0.6.3 which is incompatible.
    Successfully installed librosa-0.6.3 llvmlite-0.32.1 numba-0.49.0 pescador-2.1.0 torch-1.8.1 torchaudio-0.8.1
    /usr/local/lib/python3.7/dist-packages/librosa/util/decorators.py:9: NumbaDeprecationWarning: An import was requested from a module that has moved location.
    Import requested from: 'numba.decorators', please update to use 'numba.core.decorators' or pin to Numba version 0.48.0. This alias will not be present in Numba version 0.50.0.
      from numba.decorators import jit as optional_jit
    /usr/local/lib/python3.7/dist-packages/librosa/util/decorators.py:9: NumbaDeprecationWarning: An import was requested from a module that has moved location.
    Import of 'jit' requested from: 'numba.decorators', please update to use 'numba.core.decorators' or pin to Numba version 0.48.0. This alias will not be present in Numba version 0.50.0.
      from numba.decorators import jit as optional_jit
    

    But then the part that says...

    Copy the pre-trained WaveGAN and COALA weights from drive

    drive_path:  "/content/drive/<path/to/word2wave_files/>" 
    

    ...it's not clear from the notebook what to enter in this string.

    I see further up in the output that it seems to have installed into /content/word2wave, but when I type that into the prompt, I get

    cp: cannot stat '/content/word2wavewavegan': No such file or directory
    cp: cannot stat '/content/word2wavecoala': No such file or directory
    mv: cannot stat '/content/word2wave/coala/coala/': No such file or directory
    

    Looking around on Drive, I see...

    !ls /content/drive
    
    MyDrive  Shareddrives
    

    If I just ignore the above errors and try to run the notebook with no changes, then when it comes to the part for generating with "firework", I see:

    NameError                                 Traceback (most recent call last)
    <ipython-input-9-cf00c4445550> in <module>()
          9 id2tag = json.load(open('/content/word2wave/coala/id2token_top_1000.json', 'rb'))
         10 
    ---> 11 check_text_input(text)
    
    <ipython-input-7-0cff9a5913d2> in check_text_input(text)
         28 
         29 def check_text_input(text):
    ---> 30   _, words_in_dict, words_not_in_dict = word2wave.tokenize_text(text)
         31   if not words_in_dict:
         32       raise Exception("All the words in the text prompt are out-of-vocabulary, please try with another prompt")
    
    NameError: name 'word2wave' is not defined
    
    opened by drscotthawley 4
  • train/valid splits used for FSL10K

    It looks like the WaveGAN code in the wavegan-pytorch repo you used assumes that the audio files are split into train and valid subdirectories, but the FSL10K dataset doesn't seem to have any information about standard splits on their Zenodo page or in their paper. Do you have any information about the train/valid split you used?

    opened by ecooper7 0
  • Error with pip install -r requirements.txt

    I'm getting these errors with the command pip install -r requirements.txt

      ERROR: Command errored out with exit status 1:
       command: 'C:\Users\Computer\anaconda3\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Computer\\AppData\\Local\\Temp\\pip-install-vqe1knc0\\llvmlite_e671eaa75a104e25a13f7826ddcf3a51\\setup.py'"'"'; __file__='"'"'C:\\Users\\Computer\\AppData\\Local\\Temp\\pip-install-vqe1knc0\\llvmlite_e671eaa75a104e25a13f7826ddcf3a51\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\Computer\AppData\Local\Temp\pip-wheel-a4bxxl_i'
           cwd: C:\Users\Computer\AppData\Local\Temp\pip-install-vqe1knc0\llvmlite_e671eaa75a104e25a13f7826ddcf3a51\
      Complete output (24 lines):
      running bdist_wheel
      C:\Users\Computer\anaconda3\python.exe C:\Users\Computer\AppData\Local\Temp\pip-install-vqe1knc0\llvmlite_e671eaa75a104e25a13f7826ddcf3a51\ffi\build.py
      Trying generator 'Visual Studio 14 2015 Win64'
      Traceback (most recent call last):
        File "C:\Users\Computer\AppData\Local\Temp\pip-install-vqe1knc0\llvmlite_e671eaa75a104e25a13f7826ddcf3a51\ffi\build.py", line 192, in <module>
          main()
        File "C:\Users\Computer\AppData\Local\Temp\pip-install-vqe1knc0\llvmlite_e671eaa75a104e25a13f7826ddcf3a51\ffi\build.py", line 180, in main
          main_win32()
        File "C:\Users\Computer\AppData\Local\Temp\pip-install-vqe1knc0\llvmlite_e671eaa75a104e25a13f7826ddcf3a51\ffi\build.py", line 89, in main_win32
          generator = find_win32_generator()
        File "C:\Users\Computer\AppData\Local\Temp\pip-install-vqe1knc0\llvmlite_e671eaa75a104e25a13f7826ddcf3a51\ffi\build.py", line 77, in find_win32_generator
          try_cmake(cmake_dir, build_dir, generator)
        File "C:\Users\Computer\AppData\Local\Temp\pip-install-vqe1knc0\llvmlite_e671eaa75a104e25a13f7826ddcf3a51\ffi\build.py", line 28, in try_cmake
          subprocess.check_call(['cmake', '-G', generator, cmake_dir])
        File "C:\Users\Computer\anaconda3\lib\subprocess.py", line 368, in check_call
          retcode = call(*popenargs, **kwargs)
        File "C:\Users\Computer\anaconda3\lib\subprocess.py", line 349, in call
          with Popen(*popenargs, **kwargs) as p:
        File "C:\Users\Computer\anaconda3\lib\subprocess.py", line 951, in __init__
          self._execute_child(args, executable, preexec_fn, close_fds,
        File "C:\Users\Computer\anaconda3\lib\subprocess.py", line 1420, in _execute_child
          hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
      FileNotFoundError: [WinError 2] The system cannot find the file specified
      error: command 'C:\\Users\\Computer\\anaconda3\\python.exe' failed with exit code 1
      ----------------------------------------
      ERROR: Failed building wheel for llvmlite
      Running setup.py clean for llvmlite
    Successfully built numba
    Failed to build llvmlite
    Installing collected packages: llvmlite, numba, torch, resampy, audioread, torchaudio, pescador, librosa
      Attempting uninstall: llvmlite
        Found existing installation: llvmlite 0.37.0
    ERROR: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
    opened by Redivh 0
Owner

Ilaria Manco, AI & Music PhD Researcher at the Centre for Digital Music (QMUL)