AutoSub

About

AutoSub is a CLI application to generate subtitle files (.srt, .vtt, and .txt transcript) for any video file using Mozilla DeepSpeech. I use the DeepSpeech Python API to run inference on audio segments and pyAudioAnalysis to split the initial audio on silent segments, producing multiple small files.

Featured in DeepSpeech Examples by Mozilla

Motivation

In the age of OTT platforms, there are still some who prefer to download movies/videos from YouTube/Facebook or even torrents rather than stream. I am one of them, and on one such occasion, I couldn't find the subtitle file for a particular movie I had downloaded. That's when the idea for AutoSub struck me, and since I had worked with DeepSpeech previously, I decided to use it.

Installation

  • Clone the repo. All further steps should be performed while in the AutoSub/ directory

    $ git clone https://github.com/abhirooptalasila/AutoSub
    $ cd AutoSub
  • Create a pip virtual environment to install the required packages

    $ python3 -m venv sub
    $ source sub/bin/activate
    $ pip3 install -r requirements.txt
  • Download the model and scorer files from the DeepSpeech repo. The scorer file is optional, but it greatly improves inference results.

    # Model file (~190 MB)
    $ wget https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/deepspeech-0.9.3-models.pbmm
    # Scorer file (~950 MB)
    $ wget https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/deepspeech-0.9.3-models.scorer
  • Create two folders, audio/ and output/, to store the audio segments and the final SRT and VTT files

    $ mkdir audio output
  • Install FFMPEG. If you're running Ubuntu, this should work fine.

    $ sudo apt-get install ffmpeg
    $ ffmpeg -version               # I'm running 4.1.4
  • [OPTIONAL] If you would like the subtitles to be generated faster, you can use the GPU package instead. Make sure to install the appropriate CUDA version.

    $ source sub/bin/activate
    $ pip3 install deepspeech-gpu

Docker

  • Installation using Docker is pretty straightforward.

    • First start by downloading training models by specifying which version you want:
      • if you have your own, skip this step and just ensure they are placed in the project directory with .pbmm and .scorer extensions
    $ ./getmodel.sh 0.9.3
    • Then for a CPU build, run:
    $ docker build -t autosub .
    $ docker run --volume=`pwd`/input:/input --name autosub autosub --file /input/video.mp4
    $ docker cp autosub:/output/ .
    • For a GPU build that is reusable (saving time on instantiating the program):
    $ docker build --build-arg BASEIMAGE=nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04 --build-arg DEPSLIST=requirements-gpu.txt -t autosub-base . && \
    docker run --gpus all --name autosub-base autosub-base --dry-run || \
    docker commit --change 'CMD []' autosub-base autosub-instance
    • Then run:
    $ docker run --volume=`pwd`/input:/input --name autosub autosub-instance --file video.mp4
    $ docker cp autosub:/output/ .

How-to example

  • Make sure the model and scorer files are in the root directory. They are loaded automatically
  • After following the installation instructions, you can run autosub/main.py as shown below. The --file argument is the video file for which subtitle files are to be generated
    $ python3 autosub/main.py --file ~/movie.mp4
  • After the script finishes, the SRT file is saved in output/
  • Open the video file and add this SRT file as a subtitle, or you can just drag and drop in VLC.
  • The optional --split-duration argument sets the maximum number of seconds any given subtitle is displayed for. The default is 5 seconds.
    $ python3 autosub/main.py --file ~/movie.mp4 --split-duration 8
  • By default, AutoSub outputs in a number of formats. To produce only the file formats you want, use the --format argument:
    $ python3 autosub/main.py --file ~/movie.mp4 --format srt txt

How it works

Mozilla DeepSpeech is an amazing open-source speech-to-text engine with support for fine-tuning using custom datasets, external language models, exporting memory-mapped models, and a lot more. You should definitely check it out for STT tasks. When you first run the script, I use FFMPEG to extract the audio from the video and save it in audio/. By default, DeepSpeech is configured to accept 16kHz audio samples for inference, so while extracting I make FFMPEG use a 16kHz sampling rate.
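
As a rough illustration of that extraction step (a hedged sketch, not the project's exact extract_audio() code; the file paths here are placeholders), the FFMPEG call boils down to resampling the audio track to 16kHz mono WAV:

    import subprocess

    def extract_audio(video_path, audio_path="audio/extracted.wav"):
        """Extract a 16kHz mono WAV track from a video file using FFMPEG."""
        subprocess.run([
            "ffmpeg", "-y",      # overwrite the output file if it exists
            "-i", video_path,    # input video
            "-vn",               # drop the video stream
            "-ac", "1",          # downmix to mono
            "-ar", "16000",      # 16kHz sampling rate, which DeepSpeech expects
            audio_path,
        ], check=True)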

Then, I use pyAudioAnalysis for silence removal: it takes the large audio file extracted initially and splits it wherever silent regions are encountered, resulting in smaller audio segments which are much easier to process. I haven't used the whole library; instead, I've integrated parts of it in autosub/featureExtraction.py and autosub/trainAudio.py. All these audio segments are stored in audio/. Then I run DeepSpeech inference on each audio segment and write the inferred text to an SRT file. After all segments are processed, the final SRT file is stored in output/.
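
The per-segment inference loop looks roughly like the sketch below. This is a simplified illustration, not AutoSub's exact code (the helper names are mine), but the Model, enableExternalScorer, and stt calls are the standard DeepSpeech 0.9.x Python API:

    import wave
    import numpy as np
    from deepspeech import Model

    ds = Model("deepspeech-0.9.3-models.pbmm")
    ds.enableExternalScorer("deepspeech-0.9.3-models.scorer")

    def srt_timestamp(seconds):
        # Format seconds as an SRT timestamp, e.g. 00:01:02,345
        ms = int(round(seconds * 1000))
        h, ms = divmod(ms, 3_600_000)
        m, ms = divmod(ms, 60_000)
        s, ms = divmod(ms, 1_000)
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

    def transcribe_segment(wav_path):
        # Read a 16kHz mono 16-bit segment and run DeepSpeech inference on it
        with wave.open(wav_path, "rb") as w:
            audio = np.frombuffer(w.readframes(w.getnframes()), np.int16)
        return ds.stt(audio)

    def write_srt_cue(srt_file, index, start, end, text):
        # One SRT cue: index, timing line, text, blank separator
        srt_file.write(f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n\n")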

When I tested the script on my laptop, it took about 40 minutes to generate the SRT file for a 70-minute video. My config is a dual-core i5 @ 2.5 GHz with 8 GB of RAM. Ideally, the whole process shouldn't take more than 60% of the duration of the original video file.

TO-DO

  • Pre-process inferred text before writing to file (prettify)
  • Add progress bar to extract_audio()
  • GUI support (?)

Contributing

I would love to follow up on any suggestions/issues you find :)

References

  1. https://github.com/mozilla/DeepSpeech/
  2. https://github.com/tyiannak/pyAudioAnalysis
  3. https://deepspeech.readthedocs.io/

Comments

  • How to install on Windows?

    Hello, could you let me know how to install and run your program on Windows? I am on the step where I ran "pip3 install -r requirements.txt" and I got the following error.

            ERROR: Cannot install -r requirements.txt (line 4) and numpy==1.18.1 because these package versions have conflicting 
            dependencies.
            
            The conflict is caused by:
                The user requested numpy==1.18.1
                deepspeech 0.8.2 depends on numpy<=1.17.0 and >=1.14.5
            
            To fix this you could try to:
            1. loosen the range of package versions you've specified
            2. remove package versions to allow pip attempt to solve the dependency conflict
            
            ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies
    
    opened by tawhidkhn63 37
  • .tflite files support

    After the Mozilla layoffs, the DeepSpeech team forked the DeepSpeech repo and founded the company Coqui AI (https://github.com/coqui-ai/STT), where they continue development, and AFAIK they now only export models as .tflite files. It theoretically should work with the old code, but for me it didn't.

    When I try to run it like this:

    python3 autosub/main.py --file /Users/sgrotz/Downloads/kp193-hejma-auxtomatigo.mp3 --split-duration 8

    with a .tflite file in the main folder and NO language model.

    Then I get:

    AutoSub

    ['autosub/main.py', '--file', '/Users/sgrotz/Downloads/kp193-hejma-auxtomatigo.mp3', '--split-duration', '8']
    ARGS: Namespace(dry_run=False, file='/Users/sgrotz/Downloads/kp193-hejma-auxtomatigo.mp3', format=['srt', 'vtt', 'txt'], model=None, scorer=None, split_duration=8.0)
    Warning no models specified via --model and none found in local directory. Please run getmodel.sh convenience script from autosub repo to get some.
    Error: Must have pbmm model. Exiting
    

    Have I done anything wrong here, or does AutoSub not support .tflite files?

    I tested it on macOS and installed ffmpeg via Homebrew.
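
    For reference, a hedged sketch of how a .tflite model is loaded with Coqui STT's Python bindings (the stt package) — this is what the fork's API looks like, not something the current AutoSub code does, and the file names are placeholders:

        import wave
        import numpy as np
        from stt import Model   # Coqui STT, the DeepSpeech fork

        model = Model("model.tflite")   # Coqui now exports models as .tflite
        with wave.open("segment.wav", "rb") as w:   # 16kHz mono 16-bit audio
            audio = np.frombuffer(w.readframes(w.getnframes()), np.int16)
        print(model.stt(audio))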

    opened by stefangrotz 20
  • Docker GPU Support and Path Parsing Errors Fix

    Added GPU support for Docker images and a bunch of other related changes to ensure the best possible Docker experience.

    Major changes:

    • The base image is now Ubuntu so we can use NVIDIA's CUDA images
    • The Dockerfile was significantly revamped
    • Dropped the unnecessary bit-rate conversion from the ffmpeg command
    • Reordered code so the tensor model is instantiated before failing on inadequate arguments, with the useful side effect of faster startup times
    • Added simple build scripts
    • There were many errors regarding proper parsing of paths with metacharacters. They have been fixed where I found them. We should probably stick to using os.path functions instead of manually parsing the strings ourselves.

    Also:

    • The audio directory is emptied directly rather than deleted and recreated, which makes a difference when running Docker in read-only mode.
    • Added an option to ask before overwriting an existing SRT. The default is not to overwrite; the previous default was append, which is not useful.
    • The Dockerfile now copies the training model if available instead of downloading it each time. This enables a better development experience, so I don't have to keep downloading it while experimenting with a GPU build. Users can still use the newly created convenience script getmodels.sh to automatically get what they want.
    • Other minor improvements, like ensuring apt is non-interactive and clears its cache after installing, and a specific series of COPY commands that copy only what is required rather than the entire project dir, which may have other things users choose to use.

    opened by yash-fn 20
  • ERROR: No matching distribution found for deepspeech==0.9.3 (from -r requirements.txt (line 3))

    pip3 install -r requirements.txt
    
    Collecting deepspeech==0.9.3 (from -r requirements.txt (line 3))
      ERROR: Could not find a version that satisfies the requirement deepspeech==0.9.3 (from -r requirements.txt (line 3)) (from versions: none)
    ERROR: No matching distribution found for deepspeech==0.9.3 (from -r requirements.txt (line 3))
    
    opened by PacoH 15
  • ImportError: DLL load failed while importing _impl: A dynamic link library (DLL) initialization routine failed.

    Hi, after I installed Autosub I always get this error message when I try to run the program "ImportError: DLL load failed while importing _impl: A dynamic link library (DLL) initialization routine failed." Could anyone tell me how to fix this?

    opened by HafMann 9
  • ImportError: attempted relative import with no known parent package

    Hi, my config is Win10_x64, Python 3.8. When I execute $ C:/Soft/Autosub/sub/Scripts/python autosub/main.py --file D:/Work/video.mkv, it gives me this error:

        Traceback (most recent call last):
          File "autosub/main.py", line 8, in <module>
            from . import logger

    Info: `[email protected] MINGW64 /c/Soft/Autosub (master) $ pip list Package Version
    absl-py 1.0.0 astunparse 1.6.3 cachetools 4.2.4 certifi 2021.10.8 charset-normalizer 2.0.12 cycler 0.10.0 deepspeech-gpu 0.9.3 distlib 0.3.4 ffmpeg 1.4 filelock 3.6.0 gast 0.3.3 google-auth 1.35.0 google-auth-oauthlib 0.4.6 google-pasta 0.2.0 grpcio 1.44.0 h5py 2.10.0 idna 3.3 importlib-metadata 4.11.3 joblib 0.16.0 Keras-Preprocessing 1.1.2 kiwisolver 1.2.0 Markdown 3.3.6 numpy 1.22.3 oauthlib 3.2.0 opt-einsum 3.3.0 pip 19.2.3 platformdirs 2.5.1 protobuf 3.19.4 pyasn1 0.4.8 pyasn1-modules 0.2.8 pydub 0.23.1 pyparsing 2.4.7 python-dateutil 2.8.1 requests 2.27.1 requests-oauthlib 1.3.1 rsa 4.8 scikit-learn 1.0.2 scipy 1.4.1 setuptools 41.2.0 six 1.15.0 stt 1.0.0 tensorboard 2.2.2 tensorboard-plugin-wit 1.8.1 tensorflow-gpu 2.2.0 tensorflow-gpu-estimator 2.2.0 termcolor 1.1.0 threadpoolctl 3.1.0 tqdm 4.44.1 urllib3 1.26.9 virtualenv 20.13.3 Werkzeug 2.0.3 wheel 0.37.1 wrapt 1.14.0 zipp 3.7.0 WARNING: You are using pip version 19.2.3, however version 22.0.4 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command.`

    opened by andposteo 8
  • Fix extraneous colon character in SRT/VTT files, split run-on sentences, 3 digit milliseconds

    Fixes #21 Fixes #31 The 3 digit millisecond commit cherry-picked of public fork. It seemed pretty reasonable to include it here.

    See commit messages and referenced issues for more information.

    I will cherry-pick the best changes from all the public forks in the coming days and weeks so that AutoSub can become what it should be, and I intend to help you actively maintain and develop this project over the long term.

    Thanks!

    opened by shasheene 7
  • chores: editorconfig and dockerfile

    I was just exploring the code but wanted to tidy up a bit while I was at it.

    EditorConfig

    Adds an EditorConfig file, so contributors can use consistent code styles with the rest of the project. This makes it more convenient to work on the project when a developer's editor settings aren't the same as yours.

    Optimizes the Dockerfile

    This reduces the number of intermediate layers and makes the build marginally faster. It's good practice to minimize layers in a Docker image to make the final image smaller, especially if there are a lot of file system changes between layers. In this case, the difference in size is negligible.

    A Docker image consists of read-only layers each of which represents a Dockerfile instruction. The layers are stacked and each one is a delta of the changes from the previous layer. … Each instruction creates one layer:

    FROM creates a layer from the ubuntu:18.04 Docker image. COPY adds files from your Docker client’s current directory. RUN builds your application with make. CMD specifies what command to run within the container.

    — https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

    Optimize apt-get

    • Removes the cached lists from /var/lib/apt/lists. (Reduces image size by 0.02 GB.)
    • No longer installs recommended packages. (Reduces image size by another 0.02 GB.)

    More info:

    • https://ubuntu.com/blog/we-reduced-our-docker-images-by-60-with-no-install-recommends
    • https://hackernoon.com/tips-to-reduce-docker-image-sizes-876095da3b34

    Before

    $ time docker build -t autosub:dev --build-arg model=0.9.3 --no-cache .
    real    3m57.379s
    user    0m1.650s
    sys     0m1.776s
    
    $ docker images -a
    REPOSITORY                  TAG               IMAGE ID       CREATED              SIZE
    autosub                     dev               2565f52660cb   About a minute ago   1.86GB
    <none>                      <none>            4fd064c626ea   About a minute ago   1.86GB
    <none>                      <none>            d9d3ae1ed1db   About a minute ago   1.86GB
    <none>                      <none>            cf1540452ffc   2 minutes ago        714MB
    <none>                      <none>            dfa2fae14669   2 minutes ago        714MB
    <none>                      <none>            9ada5732266d   2 minutes ago        714MB
    <none>                      <none>            d6821a416a2c   3 minutes ago        416MB
    <none>                      <none>            c918464fff64   3 minutes ago        416MB
    <none>                      <none>            594b7f697317   5 minutes ago        114MB
    <none>                      <none>            e5f83a59a602   5 minutes ago        114MB
    python                      3.8-slim-buster   4728acd2148c   2 days ago           114MB
    

    After

    $ time docker build -t autosub:dev --build-arg model=0.9.3 --no-cache .
    real    3m38.463s
    user    0m1.252s
    sys     0m1.209s
    
    $ docker images -a
    REPOSITORY                  TAG               IMAGE ID       CREATED          SIZE
    autosub                     dev               41dbaca6c36b   7 minutes ago    1.82GB
    <none>                      <none>            117e3c54c5cf   7 minutes ago    1.82GB
    <none>                      <none>            a911683cc89d   10 minutes ago   114MB
    <none>                      <none>            7cf695bce4db   10 minutes ago   114MB
    <none>                      <none>            264a99e3d9d1   10 minutes ago   114MB
    python                      3.8-slim-buster   4728acd2148c   2 days ago       114MB
    
    opened by SethFalco 7
  • Docker build broken

    Running docker build -t autosub . fails and results in:

    Step 11/13 : RUN pip3 install --no-cache-dir -r requirements.txt
     ---> Running in bdf3fc44f538
    Collecting cycler==0.10.0 (from -r requirements.txt (line 1))
      Downloading https://files.pythonhosted.org/packages/f7/d2/e07d3ebb2bd7af696440ce7e754c59dd546ffe1bbe732c8ab68b9c834e61/cycler-0.10.0-py2.py3-none-any.whl
    Collecting numpy (from -r requirements.txt (line 2))
      Downloading https://files.pythonhosted.org/packages/45/b2/6c7545bb7a38754d63048c7696804a0d947328125d81bf12beaa692c3ae3/numpy-1.19.5-cp36-cp36m-manylinux1_x86_64.whl (13.4MB)
    Collecting stt==1.0.0 (from -r requirements.txt (line 3))
      Could not find a version that satisfies the requirement stt==1.0.0 (from -r requirements.txt (line 3)) (from versions: 0.10.0a5, 0.10.0a6, 0.10.0a8, 0.10.0a9, 0.10.0a10)
    No matching distribution found for stt==1.0.0 (from -r requirements.txt (line 3))
    The command '/bin/sh -c pip3 install --no-cache-dir -r requirements.txt' returned a non-zero code: 1
    
    
    opened by Loqova 6
  • Is it impossible to recognize in another language?

    Is it impossible to recognize in another language?

    I hope the caption file comes out in Japanese.

    I put in a video of a conversation in Japanese and the subtitles came out in English. Is there a way to change this?

    Or... are there any Japanese model files?

    opened by kdrkdrkdr 5
  • Here are the steps to reduce big chunks of text - SRT

    Hi,

    I did this manually, but maybe someone can improve it and write a script for it:

    1. Check for text that is longer than 7 words.

    2. Add a line break after every 7 words.

    3. Count the lines you get, i.e.:

       blah blah blah blah blah blah blah
       blah blah blah blah blah blah blah
       blah blah blah blah blah blah blah
       blah blah blah blah blah

       There are 4 lines.

    4. Take the initial and final time of the cue:

       13 <<< SRT subtitle position, i.e.: 00:00:25,90 --> 00:00:35,25

    5. Subtract them and divide by the number of lines: 35,25 - 25,90 = 9,35; 4 lines of max 7 words each, so 9,35/4 = 2,33.

    6. Give each line a window of 2,33 seconds, starting each new cue 0,01 after the previous one ends, i.e.:

       13 <<< SRT subtitle position
       00:00:25,90 --> 00:00:28,23
       blah blah blah blah blah blah blah

       14
       00:00:28,24 --> 00:00:30,57
       blah blah blah blah blah blah blah

       15
       00:00:30,58 --> 00:00:32,91
       blah blah blah blah blah blah blah

       16
       00:00:32,92 --> 00:00:35,25
       blah blah blah blah blah

       14 <<<< WARNING >>>> update all the other cue numbers, in this case to 17 (check step 7 below)
       blah blah blah blah

    7. Update the remaining SRT subtitle positions: in this case we finished at 16, so the original 14 becomes 17, and so on for all the other numbers. Note: update from top to bottom so the counter increments.

    That's it.

    Anyone? :)
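
    A rough Python sketch of the procedure described above (cue parsing and renumbering are left out; cues are assumed to already be (start, end, text) tuples with times in seconds, and the 7-word limit is a parameter):

        def split_cue(start, end, text, max_words=7):
            """Split one long cue into shorter cues of at most max_words words,
            dividing the original time window evenly between them."""
            words = text.split()
            lines = [" ".join(words[i:i + max_words])
                     for i in range(0, len(words), max_words)]
            window = (end - start) / len(lines)
            cues = []
            for i, line in enumerate(lines):
                cue_start = start + i * window
                # the last cue keeps the original end time; the others stop 0.01s early
                cue_end = end if i == len(lines) - 1 else start + (i + 1) * window - 0.01
                cues.append((cue_start, cue_end, line))
            return cues

    Cue numbers would then be reassigned sequentially when the file is rewritten, which takes care of step 7.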

    opened by javaintheuk 5
  • force use utf-8 open README.md

    Otherwise I encounter this error:

        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "C:\Users\Aqaao\AppData\Local\Temp\pip-req-build-2dcr43hl\setup.py", line 8, in <module>
            README = fh.read()
        UnicodeDecodeError: 'gbk' codec can't decode byte 0x90 in position 757: illegal multibyte sequence
    
    opened by Aqaao 5
  • Add support for Python 3.10

    Adds support for Python 3.10.

    One of the major problems with upgrading to Python 3.10 is that DeepSpeech appears to be unmaintained and no longer supports 3.10. So this also removes DeepSpeech as an engine and updates the getmodels.sh script to easily download Coqui models.

    opened by KyleMaas 2
  • Docker run return ` No module named 'autosub'`

    I built the Docker image as per the instructions. It returns this error when run:

    Traceback (most recent call last):
      File "./autosub/main.py", line 10, in <module>
        from autosub import logger
    ModuleNotFoundError: No module named 'autosub'
    

    I think the problem is that I didn't have autosub installed in the image. So I added RUN pip3 install . to the Dockerfile. Then everything worked out fine.

    BTW: With this solution, it's also necessary to have COPY README.md ./ in the image. Also, I have to add encoding='utf-8' in with open("README.md", "r") as fh: in setup.py, otherwise it would use ASCII encoding as default in my case.
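
    For reference, the setup.py change described above just adds an explicit encoding when reading the long description (a sketch of the relevant lines, not the whole file):

        # setup.py: read README.md with an explicit encoding so the build doesn't
        # fall back to the platform default (e.g. ASCII or GBK) and crash
        with open("README.md", "r", encoding="utf-8") as fh:
            README = fh.read()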

    opened by babaolanqiu 1
  • Generated VTT files are not standard compliant

    When I first started using this program, I took the subtitles and opened them in VLC and they looked fine. But then I tried another program, and it refused to work. It took me quite a while to realize that the output of AutoSub was actually not compliant with the VTT format, according to this validator:

    https://w3c.github.io/webvtt.js/parser.html

    Errors I get include:

    You are hopeless, RTFS. (10ms)
    1. Line 2: No blank line after the signature.
    2. Line 7: Cue identifier cannot be standalone.
    3. Line 11: Cue identifier cannot be standalone.
    4. Line 15: Cue identifier cannot be standalone.
    5. Line 19: Cue identifier cannot be standalone.
    6. Line 23: Cue identifier cannot be standalone.
    7. Line 27: Cue identifier cannot be standalone.
    [many, many more of these]
    

    Unfortunately, I can't share the subtitles that did this. However, it happened with every file I tried. I was able to build a series of sed replacements to run on the AutoSub output file to make it so it passes the validator, but that's quite a hack. I'd recommend trying validation on a file yourself - it was consistently repeatable for me.
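
    For anyone hitting the same errors, a minimal sketch of what a compliant writer has to emit — a blank line after the WEBVTT signature, dot-separated milliseconds, and a timing line for every cue (cues are assumed to be (start, end, text) tuples with times in seconds):

        def vtt_timestamp(seconds):
            # WebVTT timestamps use '.' before the milliseconds, not ','
            ms = int(round(seconds * 1000))
            h, ms = divmod(ms, 3_600_000)
            m, ms = divmod(ms, 60_000)
            s, ms = divmod(ms, 1_000)
            return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

        def write_vtt(path, cues):
            with open(path, "w", encoding="utf-8") as f:
                f.write("WEBVTT\n\n")   # the signature must be followed by a blank line
                for start, end, text in cues:
                    f.write(f"{vtt_timestamp(start)} --> {vtt_timestamp(end)}\n{text}\n\n")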

    opened by KyleMaas 0
  • Include OpenAI Whisper model

    OpenAI just released probably the best model that there is for speech recognition right now.

    It would be great to incorporate this into the project!

    More info: https://openai.com/blog/whisper/

    opened by xBurnsed 0
  • multi core

    AutoSub uses only 100% CPU when it could use 400% on a quad-core CPU.

    The task should be easy to parallelize by splitting the audio into N segments for N CPU cores (see the sketch below).

    Related:

    • https://stackoverflow.com/questions/4047789/parallel-file-parsing-multiple-cpu-cores
      • multiprocessing.Pool
      • ray for distributed computing
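
    A hedged sketch of what that could look like with multiprocessing.Pool — the segment list and function names here are placeholders, and each worker loads its own DeepSpeech model because the model object can't be shared across processes:

        import multiprocessing as mp
        from deepspeech import Model

        _model = None

        def _init_worker(model_path, scorer_path):
            # Load one DeepSpeech model per worker process
            global _model
            _model = Model(model_path)
            _model.enableExternalScorer(scorer_path)

        def _transcribe(segment):
            # segment: a 16kHz mono int16 numpy array for one audio chunk
            return _model.stt(segment)

        def transcribe_parallel(segments, model_path, scorer_path, workers=4):
            with mp.Pool(workers, initializer=_init_worker,
                         initargs=(model_path, scorer_path)) as pool:
                return pool.map(_transcribe, segments)
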
    opened by milahu 0
Releases(v1.1.0)
  • v1.1.0(Jan 11, 2022)

    What's Changed

    • Update README.md - GPU support by @shravanshetty1 in https://github.com/abhirooptalasila/AutoSub/pull/8
    • Remove hardcoded directory separator by @vnq in https://github.com/abhirooptalasila/AutoSub/pull/19
    • Add normalization to sox call. by @xfim in https://github.com/abhirooptalasila/AutoSub/pull/23
    • Docker: Combine RUN statements for smaller images by @nightscape in https://github.com/abhirooptalasila/AutoSub/pull/27
    • Adds TXT transcript, --format option, handle input with special chars by @shasheene in https://github.com/abhirooptalasila/AutoSub/pull/35
    • Applies formatting changes, fixes VTT output by @shasheene in https://github.com/abhirooptalasila/AutoSub/pull/36
    • Only delete audio files that need to be deleted by @shasheene in https://github.com/abhirooptalasila/AutoSub/pull/37
    • Dry-run + ability to specify models + minor edits by @yash-fn in https://github.com/abhirooptalasila/AutoSub/pull/39

    New Contributors

    • @shravanshetty1 made their first contribution in https://github.com/abhirooptalasila/AutoSub/pull/8
    • @vnq made their first contribution in https://github.com/abhirooptalasila/AutoSub/pull/19
    • @xfim made their first contribution in https://github.com/abhirooptalasila/AutoSub/pull/23
    • @nightscape made their first contribution in https://github.com/abhirooptalasila/AutoSub/pull/27
    • @yash-fn made their first contribution in https://github.com/abhirooptalasila/AutoSub/pull/39

    Full Changelog: https://github.com/abhirooptalasila/AutoSub/compare/v1.0.0...v1.1.0

  • v1.0.0(Sep 6, 2020)
