REST API for sentence tokenization and embedding using Multilingual Universal Sentence Encoder.

Overview


What is MUSE?

MUSE stands for Multilingual Universal Sentence Encoder - a multilingual extension (supporting 16 languages) of the Universal Sentence Encoder (USE).
The MUSE model encodes sentences into embedding vectors of fixed size.
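
For reference, a minimal sketch of encoding sentences with the MUSE model directly from TensorFlow Hub (not required to use the service, it only illustrates what the model does; the module URL below is the public multilingual USE module):

import tensorflow_hub as hub
import tensorflow_text  # noqa: F401  # registers ops required by the MUSE model

# load the multilingual USE module from TensorFlow Hub
embedder = hub.load("https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")

# encode sentences into fixed-size 512-dimensional vectors
embeddings = embedder(["This is sentence example.", "Это пример предложения."])
print(embeddings.shape)  # (2, 512)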

What is MUSE as Service?

MUSE as Service is a REST API for sentence tokenization and embedding using the MUSE model from TensorFlow Hub.

It is written using Flask and Gunicorn.

Why do I need it?

The MUSE model from TensorFlow Hub requires the following packages to be installed:

  • tensorflow
  • tensorflow-hub
  • tensorflow-text

These packages take up more than 1GB of memory. The model itself takes up 280MB of memory.

For efficient memory usage when working with the MUSE model across several projects (several virtual environments) and/or with teammates (several model copies on different computers), it is better to deploy a single instance of the model in one virtual environment that all teammates have access to.

This is what MUSE as Service is made for! ❤️

Requirements

Python >= 3.7

Installation

To install MUSE as Service run:

# clone repo (https/ssh)
git clone https://github.com/dayyass/muse-as-service.git
# git clone [email protected]:dayyass/muse-as-service.git

# install dependencies (preferably in a venv)
cd muse-as-service
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip && pip install -r requirements.txt

Before using the service you need to:

  • download the MUSE model by executing the following command (a sketch of what this step does is shown below):
    python models/download_muse.py
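
The script downloads and caches the TF Hub module locally. A rough sketch of what such a download step could look like (the exact cache directory and module version used by the repository are assumptions):

import os

os.environ["TFHUB_CACHE_DIR"] = "models/"  # assumed cache directory

import tensorflow_hub as hub
import tensorflow_text  # noqa: F401

# resolving the module URL downloads the model into TFHUB_CACHE_DIR
hub.load("https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")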

Launch the Service

To build a docker image with the service (parametrized with the gunicorn.conf.py file) run:

docker build -t muse_as_service .

NOTE: instead of building a docker image, you can pull it from Docker Hub.

To launch the service (either locally or on a server) use a docker container:

docker run -d -p {host_port}:{container_port} --name muse_as_service muse_as_service

NOTE: container_port should be equal to the port specified in the gunicorn.conf.py file.
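
For reference, a minimal sketch of what such a gunicorn.conf.py could contain (the repository's actual file may set other options):

# gunicorn.conf.py (minimal sketch)
bind = "0.0.0.0:5000"  # container_port must match the port in this address
workers = 1            # one worker keeps a single copy of the model in memory
timeout = 120          # embedding large batches can take a while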

You can also launch the service without docker, although running it inside a docker container is preferable:

  • Gunicorn: gunicorn --config gunicorn.conf.py app:app (parametrized with gunicorn.conf.py file)
  • Flask: python app.py --host {host} --port {port} (default host 0.0.0.0 and port 5000)

It is also possible to launch the service using systemd.
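
A hypothetical unit file sketch for that (paths, user and file names are assumptions to adapt to your setup):

# /etc/systemd/system/muse_as_service.service (hypothetical example)
[Unit]
Description=MUSE as Service
After=network.target

[Service]
WorkingDirectory=/opt/muse-as-service
ExecStart=/opt/muse-as-service/venv/bin/gunicorn --config gunicorn.conf.py app:app
Restart=on-failure

[Install]
WantedBy=multi-user.target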

GPU support

MUSE as Service supports GPU inference. To launch the service with GPU support you need to:

  • install the NVIDIA Container Toolkit
  • use the CUDA_VISIBLE_DEVICES environment variable to specify a GPU device if needed (e.g. export CUDA_VISIBLE_DEVICES=0)
  • launch the service with the docker run command above (after docker build), adding the --gpus all parameter (see the example below)
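
For example, a possible GPU-enabled launch command (ports are placeholders as above):

docker run -d --gpus all -p {host_port}:{container_port} --name muse_as_service muse_as_service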

NOTE: since TensorFlow 2.0, the tensorflow and tensorflow-gpu packages have been merged.

NOTE: depending on the installed CUDA version you may need a different tensorflow version (the default tensorflow==2.3.0 supports CUDA 10.1). See the TF/CUDA compatibility table to choose the right one and pip install it.

Usage

Since the service usually runs on a server, it is important to restrict access to it.

For this reason, MUSE as Service uses token-based authorization with JWT, with users stored in the sqlite database app.db.

Initially, the database has only one user with:

  • username: "admin"
  • password: "admin"

To add a new user with a username and password run:

python src/muse_as_service/database/add_user.py --username {username} --password {password}

NOTE: no passwords are stored in the database, only their hashes.
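
This is the standard pattern: only a password hash is persisted, and the submitted password is checked against it at login. A minimal sketch of the idea (generate_password_hash / check_password_hash from werkzeug.security are used here purely as an illustration, not necessarily what the project itself uses):

from werkzeug.security import check_password_hash, generate_password_hash

# only the hash is stored in the database, never the plain password
password_hash = generate_password_hash("admin")

# at login time the submitted password is verified against the stored hash
assert check_password_hash(password_hash, "admin")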

To remove a user by username run:

python src/muse_as_service/database/remove_user.py --username {username}

MUSE as Service has the following endpoints:

- /login         - POST request with `username` and `password` to get tokens (access and refresh)
- /logout        - POST request to remove tokens (access and refresh)
- /token/refresh - POST request to refresh access token (refresh token required)
- /tokenize      - GET request for `sentence` tokenization (access token required)
- /embed         - GET request for `sentence` embedding (access token required)

You can use the python requests package to work with HTTP requests:

import numpy as np
import requests

# params
ip = "localhost"
port = 5000

sentences = ["This is sentence example.", "This is yet another sentence example."]

# start session
session = requests.Session()

# login
response = session.post(
    url=f"http://{ip}:{port}/login",
    json={"username": "admin", "password": "admin"},
)

# tokenizer
response = session.get(
    url=f"http://{ip}:{port}/tokenize",
    params={"sentence": sentences},
)
tokenized_sentence = response.json()["tokens"]

# embedder
response = session.get(
    url=f"http://{ip}:{port}/embed",
    params={"sentence": sentences},
)
embedding = np.array(response.json()["embedding"])

# logout
response = session.post(
    url=f"http://{ip}:{port}/logout",
)

# close session
session.close()

# results
print(tokenized_sentence)  # [
# ["▁This", "▁is", "▁sentence", "▁example", "."],
# ["▁This", "▁is", "▁yet", "▁another", "▁sentence", "▁example", "."]
# ]
print(embedding.shape)  # (2, 512)

However, it is better to use the built-in MUSEClient for sentence tokenization and embedding: it wraps the functionality of the python requests package and provides a simpler interface.

To install the built-in client run:
pip install muse-as-service

Instead of using the endpoints listed above directly, MUSEClient provides the following methods:

- login    - method to login with `username` and `password`
- logout   - method to logout (login required)
- tokenize - method for `sentence` tokenization (login required)
- embed    - method for `sentence` embedding (login required)

Usage example:

from muse_as_service import MUSEClient

# params
ip = "localhost"
port = 5000

sentences = ["This is sentence example.", "This is yet another sentence example."]

# init client
client = MUSEClient(ip=ip, port=port)

# login
client.login(username="admin", password="admin")

# tokenizer
tokenized_sentence = client.tokenize(sentences)

# embedder
embedding = client.embed(sentences)

# logout
client.logout()

# results
print(tokenized_sentence)  # [
# ["▁This", "▁is", "▁sentence", "▁example", "."],
# ["▁This", "▁is", "▁yet", "▁another", "▁sentence", "▁example", "."]
# ]
print(embedding.shape)  # (2, 512)

Tests

To use pre-commit hooks run:
pre-commit install

Before running tests and code coverage, you need to:

  • run app.py in the background:
    python app.py &

To launch tests run:
python -m unittest discover

To measure code coverage run:
coverage run -m unittest discover && coverage report -m

NOTE: since we launched the Flask application in the background, we need to stop it after running tests and code coverage with the following command:

kill $(ps aux | grep '[a]pp.py' | awk '{print $2}')

MUSE supported languages

The MUSE model supports the following languages:

  • Arabic
  • Chinese-simplified
  • Chinese-traditional
  • Dutch
  • English
  • French
  • German
  • Italian
  • Japanese
  • Korean
  • Polish
  • Portuguese
  • Russian
  • Spanish
  • Thai
  • Turkish

Citation

If you use muse-as-service in a scientific publication, we would appreciate references to the following BibTeX entry:

@misc{dayyass2021muse,
    author       = {El-Ayyass, Dani},
    title        = {Multilingual Universal Sentence Encoder REST API},
    howpublished = {\url{https://github.com/dayyass/muse-as-service}},
    year         = {2021}
}