REST API for sentence tokenization and embedding using Multilingual Universal Sentence Encoder.

Overview

What is MUSE?

MUSE stands for Multilingual Universal Sentence Encoder, a multilingual extension (covering 16 languages) of the Universal Sentence Encoder (USE).
MUSE/USE models encode sentences into fixed-size embedding vectors.

MUSE paper: link.
USE paper: link.
USE visual explainer article: link.

What is MUSE as Service?

MUSE as Service is a REST API for sentence tokenization and embedding using MUSE.
It is built with Flask and served with Gunicorn.
You can configure Gunicorn via the gunicorn.conf.py file.
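
For example, a minimal gunicorn.conf.py might look like the sketch below (the values are illustrative, not necessarily the repo's defaults):

# gunicorn.conf.py -- example values only
bind = "0.0.0.0:5000"  # interface and port the service listens on
workers = 1            # number of worker processes; each worker loads its own copy of the model
timeout = 120          # seconds before an unresponsive worker is killed and restarted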

Installation

# clone repo
git clone https://github.com/dayyass/muse_as_service.git

# install dependencies
cd muse_as_service
pip install -r requirements.txt

Run Service

To launch the service, use a Docker container (either locally or on a server):

docker build -t muse_as_service .
docker run -d -p 5000:5000 --name muse_as_service muse_as_service

NOTE: you can also launch the service without Docker, using Gunicorn (sh ./gunicorn.sh) or Flask (python app.py), but running it inside a Docker container is preferable.
NOTE: instead of building the Docker image yourself, you can pull it from Docker Hub:
docker pull dayyass/muse_as_service

Usage

After you launch the service, you can tokenize and embed any {sentence} using GET requests ({ip} is the address where the service was launched):

http://{ip}:5000/tokenize?sentence={sentence}
http://{ip}:5000/embed?sentence={sentence}
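
For a quick check from the command line, you can hit these endpoints with curl (assuming the service runs locally on port 5000; spaces in the sentence must be URL-encoded):

curl "http://localhost:5000/tokenize?sentence=This%20is%20sentence%20example."
curl "http://localhost:5000/embed?sentence=This%20is%20sentence%20example."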

You can use the Python requests library to work with these GET requests (example notebook):

import numpy as np
import requests

ip = "localhost"
port = 5000

sentence = "This is sentence example."

# tokenizer
response = requests.get(
    url=f"http://{ip}:{port}/tokenize",
    params={"sentence": f"{sentence}"},
)
tokenized_sentence = response.json()["content"]

# embedder
response = requests.get(
    url=f"http://{ip}:{port}/embed",
    params={"sentence": f"{sentence}"},
)
embedding = np.array(response.json()["content"][0])

# results
print(tokenized_sentence)  # ['▁This', '▁is', '▁sentence', '▁example', '.']
print(embedding.shape)  # (512,)

However, it is better to use the built-in MUSEClient for sentence tokenization and embedding: it wraps the requests functionality and provides a simpler interface (example notebook):

from muse_as_service import MUSEClient

ip = "localhost"
port = 5000

sentence = "This is sentence example."

# init client
client = MUSEClient(
    ip=ip,
    port=port,
)

# tokenizer
tokenized_sentence = client.tokenize(sentence)

# embedder
embedding = client.embed(sentence)

# results
print(tokenized_sentence)  # ['▁This', '▁is', '▁sentence', '▁example', '.']
print(embedding.shape)  # (512,)
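
A typical application of these embeddings is measuring semantic similarity between sentences. The sketch below builds on the client (it is an illustration, not part of the service API) and assumes, as above, that client.embed returns a 512-dimensional vector:

import numpy as np
from muse_as_service import MUSEClient

client = MUSEClient(ip="localhost", port=5000)

# embed two sentences with the running service
emb_a = client.embed("This is sentence example.")
emb_b = client.embed("This is another example sentence.")

# cosine similarity between the two 512-dimensional vectors
cosine = np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
print(f"cosine similarity: {cosine:.3f}")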

Citation

If you use muse_as_service in a scientific publication, we would appreciate a reference to the following BibTeX entry:

@misc{dayyass_muse_as_service,
    author = {El-Ayyass, Dani},
    title = {Multilingual Universal Sentence Encoder REST API},
    howpublished = {\url{https://github.com/dayyass/muse_as_service}},
    year = {2021},
}