Code for "Adversarial Training for a Hybrid Approach to Aspect-Based Sentiment Analysis

Overview

HAABSAStar

Code for "Adversarial Training for a Hybrid Approach to Aspect-Based Sentiment Analysis". This project builds on the code from https://github.com/ofwallaart/HAABSA and https://github.com/mtrusca/HAABSA_PLUS_PLUS.

All software is written in PYTHON3 (https://www.python.org/) and makes use of the TensorFlow framework (https://www.tensorflow.org/).

Installation Instructions (Windows):

Download the required files and add them to the data/externalData folder:

  1. Download ontology: https://github.com/KSchouten/Heracles/tree/master/src/main/resources/externalData
  2. Download SemEval2015 Datasets: http://alt.qcri.org/semeval2015/task12/index.php?id=data-and-tools
  3. Download SemEval2016 Dataset: http://alt.qcri.org/semeval2016/task5/index.php?id=data-and-tools
  4. Download Glove Embeddings: http://nlp.stanford.edu/data/glove.42B.300d.zip
  5. Download Stanford CoreNLP parser: https://nlp.stanford.edu/software/stanford-parser-full-2018-02-27.zip
  6. Download Stanford CoreNLP Language models: https://nlp.stanford.edu/software/stanford-english-corenlp-2018-02-27-models.jar

Setup Environment

  1. Install chocolatey (a package manager for Windows): https://chocolatey.org/install
  2. Open a command prompt.
  3. Install python3 by running the following command: code(choco install python) (http://docs.python-guide.org/en/latest/starting/install3/win/).
  4. Make sure that pip is installed and use pip to install the following packages: setuptools and virtualenv (http://docs.python-guide.org/en/latest/dev/virtualenvs/#virtualenvironments-ref).
  5. Create a virtual environment in a desired location by running the following command: code(virtualenv ENV_NAME)
  6. Direct to the virtual environment source directory.
  7. Unzip the zip file of this GitHub repository in the virtual environment directory.
  8. Activate the virtual environment by running the following command: code(Scripts\activate.bat).
  9. Install the required packages from the requirements.txt file by running the following command: code(pip install -r requirements.txt).
  10. Install the required spaCy language pack by running the following command: code(python -m spacy download en)

Note: the files BERT768embedding2015.txt and BERT768embedding2016.txt are too large for GitHub. These can be generated using getBERTusingColab.py.
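
The sketch below is not the repository's getBERTusingColab.py; it is only a minimal illustration, assuming the Hugging Face transformers package, of how per-token 768-dimensional BERT embeddings could be written to a text file (the output file name and line format are assumptions):

  # Minimal illustration only, assuming the Hugging Face transformers package;
  # the repository's getBERTusingColab.py script may work differently.
  import torch
  from transformers import BertModel, BertTokenizer

  tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
  model = BertModel.from_pretrained("bert-base-uncased")
  model.eval()

  sentence = "The food was great but the service was slow."
  encoded = tokenizer(sentence, return_tensors="pt")
  with torch.no_grad():
      hidden = model(**encoded).last_hidden_state[0]  # (num_tokens, 768)

  # Assumed output format: one token per line followed by its 768 values.
  tokens = tokenizer.convert_ids_to_tokens(encoded["input_ids"][0].tolist())
  with open("BERT768embedding2016.txt", "w") as f:
      for token, vector in zip(tokens, hidden):
          f.write(token + " " + " ".join("%.6f" % v for v in vector.tolist()) + "\n")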

Configure paths

The following scripts contain file paths that must be adapted to your computer (this is done by adding the path to your virtual environment before the filename, for example "/path/to/venv"+"data/programGeneratedData/GloVetraindata"): main_cross.py, main_hyper.py, config.py, HyperDataMaker.py, adversarial.py.
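
A minimal sketch of that pattern with a placeholder variable name (the actual identifiers in config.py and the other scripts will differ):

  # Illustrative only: prepend the absolute path of your virtual environment
  # to the generated-data file names. The variable name here is a placeholder.
  venv_path = "/path/to/venv/"
  train_data_path = venv_path + "data/programGeneratedData/GloVetraindata"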

Run Software

  1. Configure one of the three main files to the required configuration (main.py, main_cross.py, main_hyper.py).
  2. Run the program from the command line with: code(python PROGRAM_TO_RUN.py), where PROGRAM_TO_RUN is main, main_cross, or main_hyper.

Software explanation:

The environment contains the following main files that can be run: main.py, main_cross.py, main_hyper.py

  • main.py: program to run single in-sample and out-of-sample validation runs. Each method can be activated by setting its corresponding boolean to True, e.g., to run the adversarial method set runAdversarial = True (a minimal flag sketch follows this list).

  • main_cross.py: similar to main.py but runs a 10-fold cross validation procedure for each method.

  • main_hyper.py: program that performs hyperparameter optimization over a given space of hyperparameters for each method. To change the method, change the objective and space parameters in the run_a_trial() function (see the hyperopt sketch after this list).

  • config.py: contains parameter configurations that can be changed such as: dataset_year, batch_size, iterations.

  • dataReader2016.py, loadData.py: files used to read in the raw data and transform them to the required formats to be used by one of the algorithms

  • lcrModel.py: TensorFlow implementation for the LCR-Rot algorithm

  • lcrModelAlt.py: TensorFlow implementation for the LCR-Rot-hop algorithm

  • lcrModelInverse.py: TensorFlow implementation for the LCR-Rot-inv algorithm

  • cabascModel.py: TensorFlow implementation for the CABASC algorithm

  • OntologyReasoner.py: PYTHON implementation for the ontology reasoner

  • svmModel.py: PYTHON implementation for a BoW model using an SVM.

  • adversarial.py: TensorFlow implementation of adversarial training for LCR-Rot-hop

  • att_layer.py, nn_layer.py, utils.py: programs that declare additional functions used by the machine learning algorithms.
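
As referenced under main.py above, a minimal sketch of the boolean method switches; only runAdversarial is named in this README, so the remaining flag names are illustrative assumptions:

  # Sketch of the method switches in main.py. Only runAdversarial is
  # documented in this README; the other flag names are assumptions.
  runAdversarial = True   # adversarial training for LCR-Rot-hop (adversarial.py)
  runLCRROTALT = False    # LCR-Rot-hop (lcrModelAlt.py)
  runOntology = False     # ontology reasoner (OntologyReasoner.py)
  runSVM = False          # BoW model with SVM (svmModel.py)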
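
For main_hyper.py, changing the optimised method amounts to pointing run_a_trial() at a different objective and search space. A minimal hyperopt sketch, with assumed parameter names and ranges (the real objective and space live in main_hyper.py):

  from hyperopt import STATUS_OK, Trials, fmin, hp, tpe

  # Placeholder objective: the real one in main_hyper.py trains a model
  # (e.g. adversarial LCR-Rot-hop) with the sampled parameters and
  # returns its validation loss.
  def objective(params):
      return {"loss": 1.0, "status": STATUS_OK}

  # Assumed example space; parameter names and ranges are illustrative.
  space = {
      "learning_rate": hp.loguniform("learning_rate", -10, 0),
      "keep_prob": hp.uniform("keep_prob", 0.25, 0.75),
      "batch_size": hp.choice("batch_size", [20, 25, 30]),
  }

  trials = Trials()
  best = fmin(objective, space, algo=tpe.suggest, max_evals=10, trials=trials)
  print(best)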

Directory explanation:

The following directories are necessary for the virtual environment setup: __pycache__, \Include, \Lib, \Scripts, \tcl, \venv

  • cross_results_2015: Results for a k-fold cross validation process for the SemEval-2015 dataset
  • cross_results_2016: Results for a k-fold cross validation process for the SemEval-2016 dataset
  • Results_Run_Adversarial: If WriteFile = True, a CSV file with accuracies per iteration is saved here
  • data:
    • externalData: Location for the external data required by the methods
    • programGeneratedData: Location for preprocessed data that is generated by the programs
  • hyper_results: Contains the stored results of hyperparameter optimization for each method
  • results: Temporary storage location for the hyperopt package

Changed files with respect to https://github.com/mtrusca/HAABSA_PLUS_PLUS:

  • main.py
  • main_hyper.py
  • main_cross.py
  • config.py
  • adversarial.py (added)