TLA - Twitter Linguistic Analysis


Tool for linguistic analysis of communities

TLA is built with PyTorch, Transformers, and several other state-of-the-art machine learning techniques. It aims to expedite and structure the cumbersome process of collecting, labeling, and analyzing Twitter data for a corpus of languages, while providing detailed labeled datasets for each of those languages. The analysis produced by TLA can also help in understanding the sentiments of different linguistic communities and in developing new solutions for their problems based on that analysis. The languages the library supports are listed below:

Language     Code    Language     Code
English      en      Hindi        hi
Swedish      sv      Thai         th
Dutch        nl      Japanese     ja
Turkish      tr      Urdu         ur
Indonesian   id      Portuguese   pt
French       fr      Chinese      zh-cn
Spanish      es      Persian      fa
Romanian     ro      Russian      ru

Features

  • Provides 16 labeled datasets for different languages for analysis.
  • Implements a BERT-based architecture to identify languages.
  • Provides functionality to extract, process, and label tweets from Twitter.
  • Provides a Random Forest classifier for sentiment analysis on any string.

Installation

pip install --upgrade git+https://github.com/tusharsarkar3/TLA.git

Overview

Extract data
from TLA.Data.get_data import store_data

# 'en' selects the language; the second argument sets whether the tweets
# are pre-processed before being stored
store_data('en', False)

This will extract and store the unlabeled data in a new directory named datasets inside Data.

Label data
from TLA.Datasets.get_lang_data import language_data

# load the labeled dataset shipped with the library for a given language code
df = language_data('en')
print(df)

This will print the labeled data that we have already collected.
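For a quick look at what the dataset contains, a minimal sketch that assumes nothing about the schema:

# inspect the labeled data without assuming particular column names
print(df.shape)      # number of rows and columns
print(df.columns)    # column names as shipped with the dataset
print(df.head())     # first few labeled tweets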

Classify languages
Training

Training can be done in the following way:

from TLA.Lang_Classify.train import train_lang

# path_to_dataset: labeled dataset to train on; epochs: number of training epochs
train_lang(path_to_dataset, epochs)
Prediction

Inference is done in the following way:

from TLA.Lang_Classify.predict import predict, get_model

# get_model is assumed to be exposed by the same module as predict
model = get_model(path_to_weights)
preds = predict(dataframe_to_be_used, model)
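Putting training and inference together, a minimal end-to-end sketch (the CSV path is a placeholder, and the weights filename is the one the CLI training step, described further below, is documented to produce):

import pandas as pd
from TLA.Lang_Classify.train import train_lang
from TLA.Lang_Classify.predict import predict, get_model

# hypothetical dataset path -- substitute your own labeled CSV
train_lang("my_tweets.csv", 4)
# load the weights written by training and predict on the same data
model = get_model("saved_wieghts_full.pt")
preds = predict(pd.read_csv("my_tweets.csv"), model)
print(preds)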
Analyse
Training

Training can be done in the following way:

from TLA.Analyse.train_rf import train_rf

# fits a Random Forest and a vectorizer on the dataset at path_to_dataset
train_rf(path_to_dataset)

This will store all the vectorizers and models in separate directories named saved_vec and saved_rf inside the Analysis directory. Further instructions for training on multiple languages are given in the next section, which shows how to run the commands from the CLI.

Final Analysis

Analysis is done in the following way:

from TLA.Analysis.analyse import analyse_data

# aggregates sentiment statistics from the saved models and vectorizers
analyse_data(path_to_weights)

This will store the final analysis as .csv files inside a new directory named analysis.

Overview with Git

Installation (alternative method)

git clone https://github.com/tusharsarkar3/TLA.git

Extract data

Navigate to the required directory:

cd Data

Run the following command:

python get_data.py --lang en --process True

The --lang flag selects the language of the dataset that is required, and the --process flag sets whether pre-processing should be done before returning the data. Use the codes from the language table above for the --lang flag.
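For example, to pull raw French tweets or pre-processed Spanish tweets (any code from the language table works the same way):

python get_data.py --lang fr --process False
python get_data.py --lang es --process True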

Loading Dataset

To load a dataset, run the following command in Python.

import pandas as pd

df = pd.read_csv("TLA/TLA/Datasets/get_data_en.csv")

The command will return a dataframe consisting of the data for the specific language requested.

In the file name get_data_en, en can be substituted with the desired language code to load the dataframe for that language.
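For instance, loading the Japanese dataset follows the same pattern:

import pandas as pd

# substitute any supported code from the language table for 'ja'
df_ja = pd.read_csv("TLA/TLA/Datasets/get_data_ja.csv")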

Pre-Processing

To preprocess a dataframe of tweets, run the following.

In your terminal:

cd Data

then run this in Python:

from TLA.Data import Pre_Process_Tweets

df = Pre_Process_Tweets.pre_process_tweet(df)

The function pre_process_tweet takes a dataframe of tweets as input and returns a dataframe with the list of preprocessed words for each tweet placed next to the tweet itself.
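A minimal sketch of the round trip, assuming the tweets sit in a single text column (the column name here is an illustration, not necessarily the library's actual schema):

import pandas as pd
from TLA.Data import Pre_Process_Tweets

# hypothetical two-tweet dataframe; the real column name may differ
df = pd.DataFrame({"tweet": ["Loving the new update!", "worst service ever :("]})
df = Pre_Process_Tweets.pre_process_tweet(df)
print(df)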

Analysis

Training

To train a random forest classifier for sentiment analysis, run the following commands in your terminal.

cd Analysis

then

python train_rf.py --path "path to your datafile" --train_all_datasets False

Here the --path flag is the path to the dataset you want to train the Random Forest classifier on, and the --train_all_datasets flag is a boolean that can be used to train the model on all the datasets at once.

The output is a .pkl file saved at "TLA\Analysis\saved_rf{}.pkl". The fitted vectorizer is stored in a separate .pkl file at "TLA\Analysis\saved_vec{}.pkl".
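For example, to train on all the collected datasets in one go (the path shown is illustrative; --train_all_datasets is the flag doing the work):

python train_rf.py --path "TLA/Data/datasets" --train_all_datasets True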

Get Sentiment

To get the sentiment of any string, use the following code.

In your terminal type

cd Analysis

then in your terminal type

python get_sentiment.py --prediction "Your string for prediction to be made upon" --lang "en"

Here the --prediction flag takes the string whose sentiment you want, and the --lang flag takes the code of the language your string is written in.

The output is a sentiment, either positive or negative, depending on your string.
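For example, for a Spanish string:

python get_sentiment.py --prediction "Me encanta esta película" --lang "es"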

Statistics

To get comprehensive statistics on the sentiment of the datasets, run the following command.

In your terminal type

cd Analysis

then

python analyse.py 

This will output a table1.csv file at 'TLA\Analysis\analysis\table1.csv' containing statistics on the percentage of positive and negative tweets for each language dataset.

It will also output a table2.csv file at 'TLA\Analysis\analysis\table2.csv' containing statistics for all languages combined.
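If you prefer to consume these tables from Python, a minimal sketch (same paths as above, written with forward slashes):

import pandas as pd

per_language = pd.read_csv("TLA/Analysis/analysis/table1.csv")  # per-language sentiment percentages
combined = pd.read_csv("TLA/Analysis/analysis/table2.csv")      # all languages combined
print(per_language)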

Language Classification

Training

To train a model for language classification on a given dataset, run the following commands.

In your terminal run

cd Lang_Classify

then run

python train.py --data "path for your dataset" --model "path to weights if pretrained" --epochs 4

The --data flag requires the path to your training dataset.

The --model flag takes the path to pretrained weights if you want to start from them.

The --epochs flag sets the number of epochs you want to train your model for.

The output is a file with a .pt extension named saved_wieghts_full.pt, where your trained weights are stored.
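For example, to fine-tune for two more epochs starting from previously saved weights (the dataset path is a placeholder):

python train.py --data "my_tweets.csv" --model "saved_wieghts_full.pt" --epochs 2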

Prediction

To make a prediction on any given string, use the following code.

In your terminal type

cd Lang_Classify

then run the code

python predict.py --predict "Text/DataFrame for language to be predicted" --weights "path to the stored weights of your model"

The --predict flag takes the string whose language you want to identify.

The --weights flag is the path to the stored weights the model should use to make predictions.

The output is the language your string was typed in.
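A concrete invocation, using the weights file produced by the training step above:

python predict.py --predict "Bonjour tout le monde" --weights "saved_wieghts_full.pt"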


Results:

[Figure: Performance of TLA (loss vs. epochs)]

Language     Total Tweets   Positive Tweets (%)   Negative Tweets (%)
English      500            66.8                  33.2
Spanish      500            61.4                  38.6
Persian      50             52                    48
French       500            53                    47
Hindi        500            62                    38
Indonesian   500            63.4                  36.6
Japanese     500            85.6                  14.4
Dutch        500            84.2                  15.8
Portuguese   500            61.2                  38.8
Romanian     457            85.55                 14.44
Russian      213            62.91                 37.08
Swedish      420            80.23                 19.76
Thai         424            71.46                 28.53
Turkish      500            67.8                  32.2
Urdu         42             69.04                 30.95
Chinese      500            80.6                  19.4

Reference:

@misc{sarkar2021tla,
  title={TLA: Twitter Linguistic Analysis},
  author={Tushar Sarkar and Nishant Rajadhyaksha},
  year={2021},
  eprint={2107.09710},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

@misc{640cba8b-35cb-475e-ab04-62d079b74d13,
  title={TLA: Twitter Linguistic Analysis},
  author={Tushar Sarkar and Nishant Rajadhyaksha},
  journal={Software Impacts},
  doi={10.24433/CO.6464530.v1},
  howpublished={\url{https://www.codeocean.com/}},
  year={2021},
  month={6},
  version={v1}
}

Features to be added:

  • Access to more languages
  • Creating a GUI-based system for better accessibility
  • Improving the performance of the baseline model

Developed by Tushar Sarkar and Nishant Rajadhyaksha
