Tom-the-AI - Compound artificial intelligence software for Linux systems.

Overview

Tom the AI (version 0.82)

WARNING: This software is not yet ready to use; I'm still setting up the GitHub repository. It should be ready in a few days.

Tom is an open source AI desktop assistant for Linux systems, built using a series of independent response modules to generate replies to any input.

Tom uses natural language processing to determine which response module is best suited to generate a response for each input, thus avoiding the need for precise syntax.

Tom the AI

By Analogy

Tom the AI is designed as a Linux alternative to software such as Apple's Siri or Microsoft's Cortana.

Set Up

Step 1 - Update repositories:

Update apt package repositories using sudo apt update to ensure that the apt package manager has access to the latest versions of the below dependencies.

Step 2 - Install APT dependencies:

First, install Python by running sudo apt install python3.9 in a terminal. Tom is tested on Python 3.9, but any newer version should (probably) also work just fine.

Next, install the latest version of VLC Media player using sudo apt install vlc.

Step 3 - Download Tom:

Download Tom by cloning the GitHub repository into your home folder using git clone https://github.com/Mblizzard/Tom-the-AI.

Step 4 - Install Python dependencies:

Open a terminal inside Tom's application folder, or navigate using cd ~/Tom-the-AI/. Now run sudo pip3 install -r requirements.txt. Some systems may use pip in place of pip3.

Next, we need to download the required NLTK libraries by running the following code in a Python shell:

>>> import nltk
>>> nltk.download('all')
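
If you prefer, the same NLTK download can be run non-interactively from a terminal:

python3.9 -c "import nltk; nltk.download('all')"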

Step 5 - Running Tom:

Go ahead and run python3.9 ~/Tom-the-AI/tom.py. Tom will boot up, and after a minute or so of loading you'll be ready to go! If you feel inclined, make a desktop launcher for this command, link Tom into your application menu, or create a dock shortcut.
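
As a rough sketch, a desktop launcher could look like the file below, saved as ~/.local/share/applications/tom.desktop. The path assumes you cloned Tom into /home/YOUR_USER/Tom-the-AI (substitute your own username), and depending on how Tom presents its output you may need Terminal=true instead:

[Desktop Entry]
Type=Application
Name=Tom the AI
Comment=Open source AI desktop assistant
Exec=python3.9 /home/YOUR_USER/Tom-the-AI/tom.py
Terminal=false
Categories=Utility;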

Mission

The mission of Tom is to provide an open source compound AI for which anyone can program and contribute response modules, expanding Tom's capabilities into an ever more useful and entertaining artificial intelligence.

Examples

Tom generates outputs to any input by using natural language processing to determine the most suitable response module from which to source the reply.

Give Tom natural language input, either via voice recognition or text input, for instance Hey Tom, what is petrichor?, and he'll respond in the most appropriate way. Note that the 'Hey Tom' activation phrase is only required for voice inputs.

The following is a non-exhaustive list of things you can do:

Objective Response (From anywhere on the internet):
~> Hey Tom, what is petrichor?
According to en.wikipedia.org... Petrichor is the earthy scent produced when rain falls on dry soil. The word is constructed from the Greek petra, "rock", or petros, "stone", and ichor, the fluid that flows in the veins of the gods in Greek mythology.

Subjective Response (From Cleverbot):
~> Hey Tom, I'm in an optimistic mood.
I'm not sure if this is a good thing or not.

Emotions (Using sentiment analysis + NLTK chatbots):
~> Hey Tom, you are a brilliant individual!
I am but one, you are but one more.
~> Hey Tom, thou art a fool.
Become more interesting before I die of fatal boredom.

Fact Memory & Recall:
~> Hey Tom, the answer to life, the universe, and everything is 42.
Ok.
~> Hey Tom, what is the answer to life, the universe, and everything?
The answer to life, the universe, and everything is 42.

Playing music (From device or web, includes UI controls for the former):
~> Hey Tom, play up the shard.
Playing /home/murray/Music/Dr Who/Up The Shard.webm.
~> Hey Tom, stop the music.
Media stopped.
*NOTE: File names do not have to match exactly.*

~> Hey Tom, open my English essay.
Alright.
*NOTE: File names do not have to match exactly.*

Opening websites:
~> Hey Tom, open Reddit.
Alright.

Jokes (From PyJokes):
~> Hey Tom, tell me a joke.
I went to a street where the houses were numbered 8k, 16k, 32k, 64k, 128k, 256k and 512k. It was a trip down Memory Lane.

Trivia:
~> Hey Tom, ask me a trivia question.
Question: What is "Sealed crustless sandwich"?
1) The part of Yellowstone National Park in Idaho, where any crime can technically be committed without punishment – but don't tempt fate!
2) I got a fever, and the only prescription... is more cowbell!
3) The only nuclear reactor in a 17th-century building.
4) A patented peanut butter and jelly sandwich.
~> 4.
Correct!

Colossal Cave Adventure (Will Crowther's ADVENT-350):
~> Hey Tom, let's go on an adventure!
Welcome to adventure!! would you like instructions?

Fun facts:
~> Hey Tom, make me smarter.
Spices were not used to mask the flavor of rotting meat before refrigeration. Spices were an expensive luxury item; those who could afford them could afford good meat, and there are no contemporaneous documents calling for spices to disguise the taste of bad meat.

Dice Rolls (great for D&D):
~> Hey Tom, roll me a d20.
I rolled a 14.

Word generation (great for Articulate):
~> Hey Tom, give me a random action word.
Your word is 'winning'.

Complex Mathematics (using SymPy):
~> Hey Tom, integrate (tan(x))^1/2
∫f(x) = -ln(cos(x))/2 + c

Code generation (using howdoi):
~> Hey Tom, write a hello world script in C++.
#include <iostream>
int main()
{
    std::cout << "Hello World!" << std::endl;
    return 0;
}

Most of Betty's functionality (From https://github.com/pickhardt/betty):
~> Hey Tom, what time is it?
Running date +"%r (%T)" ...
02:34:46 PM (14:34:46).
~> Hey Tom, what day is it?
Running date +"%A" ...
Saturday.
~> Hey Tom, whats my username?
Running whoami ...
murray
~> Hey Tom, what is my ip address?
Wlo1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    inet 192.168.43.9  netmask 255.255.255.0  broadcast 192.168.43.255
    inet6 fe80::5c61:caf:5614:7b82  prefixlen 64  scopeid 0x20<link>
    ether 54:35:30:60:a8:b9  txqueuelen 1000  (Ethernet)
    RX packets 401121  bytes 523184185 (523.1 MB)
    RX errors 0  dropped 0  overruns 0  frame 0
    TX packets 235650  bytes 23471151 (23.4 MB)
    TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0.

This is a fair representation of Tom's capabilities as they currently stand. See the following section on contributing for a guide on how to create your own response modules for Tom and expand upon the above abilities.

Contributing

How to write a custom response module for Tom:

Step 1 - Understanding how Tom will treat your module:

Tom is programmed in Python. Response modules are imported into Tom using the Python import statement, and the response is retrieved from the module by calling its respond() function (i.e. output = module.respond(user_input)). The output is then returned to the user.
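
As a rough sketch of that flow (the module name and loading mechanism below are illustrative assumptions; Tom's actual loader may differ), your module is treated roughly like this:

import importlib

# Hypothetical illustration only: load a response module by name and ask it
# for a reply. The package path "responses.weather" is a placeholder.
module = importlib.import_module("responses.weather")

user_input = "Hey Tom, what is the weather like?"
output = module.respond(user_input)   # every response module exposes respond(inp)
print(output)                         # Tom returns this string to the user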

Step 2 - Programming the response module:

Go ahead and program your response. Your script should have a main function def respond(inp):, where inp is the user input parameter that will be passed to your function by Tom. Your function should provide its output through a return statement (NOT a print() statement).
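
For example, a minimal (hypothetical) response module that handles greetings could look like this:

# A minimal, hypothetical response module. Tom passes the raw user input
# string to respond(), and whatever string is returned goes back to the user.
def respond(inp):
    if "hello" in inp.lower():
        return "Hello there!"
    return "I'm not sure what to say to that."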

Step 3 - Testing your module:

Paste the following bit of code at the end of your Python script, then run your program:

if __name__ == "__main__":
    while True:
        print(respond(input("~> ")))

If this works as expected, and you can type inputs on the ~> prompts and receive your output printed in the console, then continue to step 4.
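
With the hypothetical greeting module above, a test session would look something like this (the replies are whatever your own respond() returns):

~> hello Tom
Hello there!
~> what's for dinner?
I'm not sure what to say to that.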

Step 4 - Relative imports:

Rename your main response script to __init__.py, and make sure it's at the first level of your project folder (not nested in other folders). Next, rename the folder containing your script to the name of your module (no white-space or special characters). Now, if you are importing any functions from other scripts (this does not include dependencies installed through pip), you will need to change the import statement by placing a '.' in front of the location. For example, from myOtherScript import customFunction becomes from .myOtherScript import customFunction, but import requests would remain unchanged.
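
As a sketch, using the hypothetical module and script names from the examples above, the resulting layout and import change look like this:

greeter/                        # module folder, named after your module
    __init__.py                 # your main response script (defines respond(inp))
    myOtherScript.py            # a helper script of your own
    requirements.txt            # PyPI dependencies, if any (see step 5)

# Inside __init__.py:
from .myOtherScript import customFunction   # relative import of your own script
import requests                             # pip-installed packages are unchanged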

Step 5 - Dependencies:

If your response module requires Python packages from PyPI, make sure it includes a requirements.txt file. Any dependencies not available from PyPI should be bundled with the project, located in the project folder alongside __init__.py.
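
A requirements.txt file simply lists one PyPI package per line, optionally with version constraints; the packages below are placeholders:

requests>=2.25
beautifulsoup4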

Step 6 - Using your module:

Paste the folder containing your response module into Tom's /responses directory. You will then need to activate the response module within Tom's modules interface, or by manually adding the name of your module to responseOrder.txt.
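
For example, assuming responseOrder.txt lists one module name per line (check the existing file for its exact format), enabling the hypothetical module above would mean appending a line such as:

greeter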

Step 7 - Creating a pull request:

If you feel inclined to share your module with the world, go ahead and create a pull request for your module on Tom's GitHub repository (https://github.com/Mblizzard/Tom-the-AI).

Planned Features

New response modules & capabilities to look forward to in future versions of Tom:

  • Timers & stopwatch capabilities.
  • Ability to execute terminal commands.
  • Automated module installation.
  • Releases and updates available on the Ubuntu apt repositories.

Features I'm not currently planning to include in Tom, but that I'll consider adding if enough people are interested:

  • Windows support.

Versioning

Releases will follow a semantic versioning format:

MAJOR.MINOR.PATCH

For more information on SemVer, visit http://semver.org/.

License

Tom the AI: A compound AI for Linux systems.
Copyright (C) 2021  Murray Jones

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.