Covid-ChatBot - A Rapid Response Virtual Agent for Covid-19 Queries

Overview

COVID-19 ChatBot

A Rapid Response Virtual Agent for Covid-19 Queries

Contents

  1. What is a ChatBot?
  2. Types of ChatBots
  3. About the Project
  4. Dataset
  5. Prerequisites
  6. Building a ChatBot

1. What is a ChatBot?

A chatbot is an intelligent piece of software that can communicate and perform actions similar to a human. Chatbots are widely used in customer interaction, marketing on social network sites, and instant messaging with clients.

2. Types of ChatBots

There are two basic types of chatbot models, based on how they are built: retrieval-based and generative-based models.

  • Retrieval based Chatbots
    A retrieval-based chatbot uses predefined input patterns and responses. It then uses some type of heuristic approach to select the appropriate response. This model is widely used in industry to build goal-oriented chatbots, where we can customize the tone and flow of the chatbot to give customers the best experience. (A toy example follows this list.)
  • Generative based Chatbots
    Generative models are not based on predefined responses.
    They are based on sequence-to-sequence (seq2seq) neural networks, the same idea as machine translation. In machine translation, we translate text from one language to another; here, we transform an input sequence into an output sequence. This approach needs a large amount of data and is based on deep neural networks.
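To make the retrieval idea concrete, here is a toy retrieval-style bot: it matches keywords against predefined patterns and returns a canned response. This is purely illustrative; the actual project replaces keyword matching with a trained classifier.

# Toy retrieval-based bot (illustrative only): match keywords in the
# user's message against predefined patterns and return a canned response.
RESPONSES = {
    "hello": "Hi there! How can I help you with Covid-19 today?",
    "symptom": "Common Covid-19 symptoms include fever, dry cough, and tiredness.",
}

def reply(message):
    for keyword, response in RESPONSES.items():
        if keyword in message.lower():
            return response
    return "Sorry, I don't have an answer for that."

print(reply("What are the symptoms of Covid?"))  # -> the 'symptom' response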

3. About the Project

In this Python and NLP project, we build a chatbot using deep learning techniques. The chatbot is trained on a dataset that contains categories (intents), patterns, and responses. We use a deep neural network to classify which category the user's message belongs to, and then give a random response from that category's list of responses. Let's see how to create a retrieval-based chatbot using NLTK, Keras, and Python.

4. Dataset

The dataset we have used is ‘intents.json’. This is a JSON file that contains the patterns we need to find and the responses we want to return to the user.
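For reference, each entry in intents.json groups a tag with its training patterns and candidate responses, as the training code below assumes. A hypothetical entry might look like this (the actual tags and texts in the project's file will differ):

{
  "intents": [
    {
      "tag": "symptoms",
      "patterns": ["What are the symptoms of Covid-19?", "How do I know if I have Covid?"],
      "responses": ["Common symptoms of Covid-19 include fever, dry cough, and tiredness."]
    },
    {
      "tag": "greeting",
      "patterns": ["Hi", "Hello", "Is anyone there?"],
      "responses": ["Hello! Ask me anything about Covid-19."]
    }
  ]
}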

5. Prerequisites

The project requires a good knowledge of Python, Keras, and natural language processing (NLTK). Along with these, we will use some helper modules, which you can install with pip (pickle is part of the Python standard library, so it needs no installation):

pip install tensorflow keras nltk

6. Building a ChatBot

  • intents.json – The data file which has predefined patterns and responses.
  • train_chatbot.py – In this Python file, we write a script to build the model and train our chatbot.
  • words.pkl – This is a pickle file in which we store the words Python object that contains a list of our vocabulary.
  • classes.pkl – The classes pickle file contains the list of categories.
  • chatbot_model.h5 – This is the trained model that contains information about the model and has the weights of the neurons.
  • chatgui.py – This is the Python script in which we implement the GUI for our chatbot. Users can easily interact with the bot.

Follow these steps to create the project:

1. Import and load the data file

First, create a file named train_chatbot.py. We import the necessary packages for our chatbot and initialize the variables we will use in our Python project. The data file is in JSON format, so we use the json package to parse the JSON file into Python, as sketched below.
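The original write-up does not show the top of the file, so here is a minimal sketch of the imports and initializations the later snippets assume (the ignore_words list is an assumed choice):

import json
import pickle
import random

import nltk
import numpy as np
from nltk.stem import WordNetLemmatizer

# run once if the NLTK data is not yet installed:
# nltk.download('punkt'); nltk.download('wordnet')

lemmatizer = WordNetLemmatizer()

words = []                 # vocabulary collected from all patterns
classes = []               # list of intent tags
documents = []             # (tokenized pattern, tag) pairs
ignore_words = ['?', '!']  # punctuation to drop (an assumed choice)

# parse the JSON data file into a Python dictionary
intents = json.loads(open('intents.json').read())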

2. Preprocess data

When working with text data, we need to perform various preprocessing on the data before we make a machine learning or a deep learning model. Based on the requirements we need to apply various operations to preprocess the data.
Tokenizing is the most basic and first thing you can do on text data. Tokenizing is the process of breaking the whole text into small parts like words.
Here we iterate through the patterns and tokenize the sentence using nltk.word_tokenize() function and append each word in the words list. We also create a list of classes for our tags.

for intent in intents['intents']:
    for pattern in intent['patterns']:

        #tokenize each word
        w = nltk.word_tokenize(pattern)
        words.extend(w)
        #add documents in the corpus
        documents.append((w, intent['tag']))

        # add to our classes list
        if intent['tag'] not in classes:
            classes.append(intent['tag'])

Now we will lemmatize each word and remove duplicates from the list. Lemmatizing is the process of converting a word into its lemma (base) form. We then create pickle files to store the Python objects, which we will use when predicting.

# lemmatize, lower each word and remove duplicates
words = [lemmatizer.lemmatize(w.lower()) for w in words if w not in ignore_words]
words = sorted(list(set(words)))
# sort classes
classes = sorted(list(set(classes)))
# documents = combination between patterns and intents
print(len(documents), "documents")
# classes = intents
print(len(classes), "classes", classes)
# words = all words, vocabulary
print(len(words), "unique lemmatized words", words)

pickle.dump(words,open('words.pkl','wb'))
pickle.dump(classes,open('classes.pkl','wb'))

3. Create training and testing data

Now, we will create the training data, in which we provide the input and the output. The input will be the pattern, and the output will be the class that the input pattern belongs to. But the computer doesn't understand text, so we will convert the text into numbers.

# create our training data
training = []
# create an empty array for our output
output_empty = [0] * len(classes)
# training set, bag of words for each sentence
for doc in documents:
    # initialize our bag of words
    bag = []
    # list of tokenized words for the pattern
    pattern_words = doc[0]
    # lemmatize each word - create base word, in attempt to represent related words
    pattern_words = [lemmatizer.lemmatize(word.lower()) for word in pattern_words]
    # create our bag of words array: 1 if the word is found in the current pattern, else 0
    for w in words:
        bag.append(1 if w in pattern_words else 0)

    # output is a '0' for each tag and '1' for current tag (for each pattern)
    output_row = list(output_empty)
    output_row[classes.index(doc[1])] = 1

    training.append([bag, output_row])
# shuffle our features and turn into np.array
random.shuffle(training)
# bag and output_row have different lengths, so build an object array to avoid a ragged-array error on newer NumPy
training = np.array(training, dtype=object)
# create train and test lists. X - patterns, Y - intents
train_x = list(training[:,0])
train_y = list(training[:,1])
print("Training data created")

4. Build the model

We have our training data ready; now we will build a deep neural network with 3 layers, using the Keras Sequential API. After training for 200 epochs, the model reached 100% accuracy on the training data.

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import SGD

# Create model - 3 layers. First layer has 128 neurons, second layer 64 neurons, and the 3rd output layer
# contains as many neurons as there are intents, to predict the output intent with softmax
model = Sequential()
model.add(Dense(128, input_shape=(len(train_x[0]),), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(len(train_y[0]), activation='softmax'))

# Compile model. Stochastic gradient descent with Nesterov accelerated gradient gives good results for this model
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)  # note: newer Keras versions use learning_rate= instead of lr=
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

# fitting and saving the model
hist = model.fit(np.array(train_x), np.array(train_y), epochs=200, batch_size=5, verbose=1)
model.save('chatbot_model.h5')  # save() takes only the file path; the training history is kept in hist

print("model created")

5. Predict the response (Graphical User Interface)

To predict the class of a sentence and get a response for the user, let us create a new file, ‘chatgui.py’.

We will load the trained model and then use a graphical user interface to display the bot’s responses. The model only tells us the class a message belongs to, so we will implement some functions that identify the class and then retrieve a random response from the matching list of responses.

Again we import the necessary packages and load the ‘words.pkl’ and ‘classes.pkl’ pickle files which we have created when we trained our model:

import json
import pickle
import random

import nltk
import numpy as np
from keras.models import load_model
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
model = load_model('chatbot_model.h5')
intents = json.loads(open('intents.json').read())
words = pickle.load(open('words.pkl','rb'))
classes = pickle.load(open('classes.pkl','rb'))

To predict the class, we will need to provide input in the same way as we did while training. So we will create some functions that will perform text preprocessing and then predict the class.

def clean_up_sentence(sentence):
    # tokenize the pattern - split the sentence into an array of words
    sentence_words = nltk.word_tokenize(sentence)
    # lemmatize each word - reduce it to its base form
    sentence_words = [lemmatizer.lemmatize(word.lower()) for word in sentence_words]
    return sentence_words

# return bag of words array: 0 or 1 for each vocabulary word that exists in the sentence
def bow(sentence, words, show_details=True):
    # tokenize the pattern
    sentence_words = clean_up_sentence(sentence)
    # bag of words - one slot per vocabulary word
    bag = [0]*len(words)
    for s in sentence_words:
        for i,w in enumerate(words):
            if w == s:
                # assign 1 if the current word is at this vocabulary position
                bag[i] = 1
                if show_details:
                    print("found in bag: %s" % w)
    return np.array(bag)

def predict_class(sentence, model):
    # filter out predictions below a threshold
    p = bow(sentence, words,show_details=False)
    res = model.predict(np.array([p]))[0]
    ERROR_THRESHOLD = 0.25
    results = [[i,r] for i,r in enumerate(res) if r>ERROR_THRESHOLD]
    # sort by strength of probability
    results.sort(key=lambda x: x[1], reverse=True)
    return_list = []
    for r in results:
        return_list.append({"intent": classes[r[0]], "probability": str(r[1])})
    return return_list
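For illustration, predict_class returns the candidate intents sorted by confidence. With a hypothetical ‘greeting’ tag, a call might look like this:

# hypothetical example - the actual tags depend on your intents.json
ints = predict_class("Hello there", model)
print(ints)  # e.g. [{'intent': 'greeting', 'probability': '0.98'}]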

After predicting the class, we get a random response from that intent’s list of responses.

def getResponse(ints, intents_json):
    if not ints:
        # no intent cleared the threshold - return a fallback reply (an assumed default)
        return "Sorry, I didn't understand that."
    tag = ints[0]['intent']
    list_of_intents = intents_json['intents']
    for i in list_of_intents:
        if i['tag'] == tag:
            result = random.choice(i['responses'])
            break
    return result

def chatbot_response(text):
    ints = predict_class(text, model)
    res = getResponse(ints, intents)
    return res

Now we will develop a graphical user interface using the Tkinter library, which ships with Python and provides the widgets we need. We take the input message from the user, use the helper functions we created to get the response from the bot, and display it in the GUI. A minimal sketch follows.
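The write-up does not include the GUI code itself, so here is a minimal sketch of what chatgui.py’s interface might look like. Widget names, colors, and geometry are illustrative assumptions, not the project’s definitive implementation; it relies on the chatbot_response() helper defined above:

import tkinter as tk

def send():
    msg = EntryBox.get("1.0", "end-1c").strip()
    EntryBox.delete("1.0", tk.END)
    if msg:
        ChatLog.config(state=tk.NORMAL)
        ChatLog.insert(tk.END, "You: " + msg + "\n\n")
        ChatLog.insert(tk.END, "Bot: " + chatbot_response(msg) + "\n\n")
        ChatLog.config(state=tk.DISABLED)
        ChatLog.see(tk.END)  # scroll to the latest message

base = tk.Tk()
base.title("Covid-ChatBot")
base.geometry("400x500")
base.resizable(width=False, height=False)

# read-only chat history window
ChatLog = tk.Text(base, bd=0, bg="white", font="Arial")
ChatLog.config(state=tk.DISABLED)

# scrollbar bound to the chat window
scrollbar = tk.Scrollbar(base, command=ChatLog.yview)
ChatLog["yscrollcommand"] = scrollbar.set

# message entry box and send button
EntryBox = tk.Text(base, bd=0, bg="white", font="Arial")
SendButton = tk.Button(base, text="Send", command=send)

# absolute placement, matching the 400x500 window
scrollbar.place(x=376, y=6, height=386)
ChatLog.place(x=6, y=6, height=386, width=370)
EntryBox.place(x=6, y=401, height=90, width=265)
SendButton.place(x=275, y=401, height=90, width=120)

base.mainloop()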

6. Run the chatbot

To run the chatbot, we have two main files: train_chatbot.py and chatgui.py.
First, we train the model by running this command in the terminal:

python train_chatbot.py

If we don’t see any errors during training, we have successfully created the model. Then, to run the app, we run the second file:

python chatgui.py

The program will open up a GUI window within a few seconds. With the GUI you can easily chat with the bot.
