Automatic voice-synthesised summaries of the latest research papers on arXiv

Overview

PaperWhisperer

PaperWhisperer is a Python application that keeps you up to date with research papers. How? It retrieves the latest articles on a topic from arXiv via a keyword-based search, then creates vocal summaries of the articles using Text-To-Speech and saves them to disk.

Installation

To install the package, move to the root of the repo and run:

$ pip install .

If you plan to develop the package further, install it in editable mode along with the packages needed to run the unit tests:

$ pip install -e .[test]

Testing

To run the unit tests, issue the following command from the root of the repo:

$ pytest

Package structure

The package is divided into two sub-packages:

  • retrieval
  • tts

retrieval contains the data structures and facilities needed to retrieve articles from arXiv. Under the hood, the app uses arxiv, a Python wrapper around the free arXiv API.
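
As an illustration, here is a minimal sketch of a keyword search performed with the arxiv library directly. This is not the package's internal code; the query string and result limit are placeholders.

import arxiv

# Illustrative keyword search: newest submissions first, capped at 10 results.
search = arxiv.Search(
    query="generate chord progressions",
    max_results=10,
    sort_by=arxiv.SortCriterion.SubmittedDate,
)

for result in search.results():
    # Each result exposes the fields a summary is built from:
    # title, authors, and abstract (called "summary" by the API).
    print(result.published, result.title)
    print(", ".join(author.name for author in result.authors))
    print(result.summary)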

tts has facilities to generate speech renditions of text-based article summaries. The summary of an article consists of its title, authors, and abstract. Speech synthesis is performed using Google Cloud Text-To-Speech.
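
The sketch below shows the general pattern of a google-cloud-texttospeech call. The voice settings, input text, and output file name are illustrative and may differ from what the package actually uses.

from google.cloud import texttospeech

# The client authenticates via the GOOGLE_APPLICATION_CREDENTIALS variable
# (see "Environment variables" below).
client = texttospeech.TextToSpeechClient()

# Hypothetical article summary: title, authors, and abstract as plain text.
synthesis_input = texttospeech.SynthesisInput(text="Title. Authors. Abstract ...")

voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    ssml_gender=texttospeech.SsmlVoiceGender.NEUTRAL,
)
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3,
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

# Write the raw MP3 bytes to disk.
with open("summary.mp3", "wb") as audio_file:
    audio_file.write(response.audio_content)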

Setting up Google Cloud Text-To-Speech

PaperWhisperer uses Google Cloud Text-To-Speech to synthesise speech.

In order to be able to use this service, you should:

  1. Create an account on Google Cloud.
  2. Create a Cloud Platform project.
  3. Enable the Text-To-Speech API in the project.
  4. Set up authentication.
  5. Download a JSON private key.

For more details, see Google's guide on how to set up Google Cloud Text-To-Speech.

Environment variables

The app uses an environment variable called GOOGLE_APPLICATION_CREDENTIALS to authenticate securely with Google Cloud Text-To-Speech.

In config.yml, set GOOGLE_APPLICATION_CREDENTIALS to the path of the JSON private key you downloaded while setting up the Google service.
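
As a hypothetical example, assuming the config.yml key uses the same name as the environment variable, the entry could look like this (the path is a placeholder):

GOOGLE_APPLICATION_CREDENTIALS: /path/to/your-private-key.json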

Without this step, you won't be able to connect to Google Cloud Text-To-Speech, and the app will throw an error.

How to create summaries

To create summaries for a keyword search, use the create_summaries entry point. This is the only console script of the package and the main entry point of the application.

Below is an example of how you can run the script:

$ create_summaries "generate chord progressions" 100 /save/dir 40

The script takes 4 positional arguments:

  • keywords used to search for articles (more than one keyword is allowed)
  • maximum number of articles to retrieve
  • directory where the vocal summaries are stored
  • maximum age, in days, of the articles to retrieve
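
In the example above, the script searches arXiv for "generate chord progressions", retrieves at most 100 articles submitted within the last 40 days, and stores the resulting vocal summaries in /save/dir.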

Dependencies

PaperWhisperer depends on the following packages:

  • arxiv==1.2.0
  • google-cloud-texttospeech
  • python-dotenv

YouTube video

Learn more about PaperWhisperer in this project presentation video on The Sound of AI YouTube channel.

Owner
Valerio Velardo
AI audio/music researcher. Love Python.