Unified Instance and Knowledge Alignment Pretraining for Aspect-based Sentiment Analysis

Overview

This repository contains the implementation of the paper Unified Instance and Knowledge Alignment Pretraining for Aspect-based Sentiment Analysis (UIKA).


Requirements

  • python 3.7
  • pytorch-gpu 1.7
  • numpy 1.19.4
  • pytorch_pretrained_bert 0.6.2
  • nltk 3.3
  • GloVe.840B.300d
  • bert-base-uncased
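
The following pip command is one possible way to set up this environment; the package names are our mapping of the list above, not an official install script (the GloVe embeddings and the bert-base-uncased weights must be downloaded separately, and GPU support depends on the wheel matching your CUDA version):

  • pip install torch==1.7.0 numpy==1.19.4 pytorch-pretrained-bert==0.6.2 nltk==3.3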

Environment

  • OS: Ubuntu-16.04.1
  • GPU: GeForce RTX 2080
  • CUDA: 10.2
  • cuDNN: v8.0.2

Dataset

  1. target datasets

    • raw data: "./dataset/"
    • processed data: "./dataset_npy/" (see the inspection sketch after this list)
    • word embedding file: "./embeddings/"
  2. pretraining datasets: Amazon and Yelp review corpora (see Notes)
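
As a quick sanity check on the processed data, a minimal inspection sketch like the one below can be used; the file names inside "./dataset_npy/" are not documented here, so it simply enumerates whatever is present:

    import glob
    import numpy as np

    # Enumerate the preprocessed .npy files shipped under ./dataset_npy/.
    for path in sorted(glob.glob("./dataset_npy/*.npy")):
        arr = np.load(path, allow_pickle=True)  # allow_pickle in case object arrays were saved
        print(path, type(arr).__name__, getattr(arr, "shape", None))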

Training options

  • ds_name: the name of the target dataset, ['14semeval_laptop', '14semeval_rest', 'Twitter'], default='14semeval_rest'
  • pre_name: the name of the pretraining dataset, ['Amazon', 'Yelp'], default='Amazon'
  • bs: batch size to use during training, [64, 100, 200], default=64
  • learning_rate: learning rate to use, [0.001, 0.0005, 0.00001], default=0.001
  • n_epoch: number of training epochs, [5, 10], default=10
  • model: the name of model, ['ABGCN', 'GCAE', 'ATAE'], default='ABGCN'
  • is_test: train (0) or test (1) the model, [0, 1], default=1
  • is_bert: use GloVe-based (0) or BERT-based (1) embeddings, [0, 1], default=0
  • alpha: the weight \alpha of the knowledge guidance loss in the paper (see the sketch after this list), [0.5, 0.6, 0.7], default=0.06
  • stage: the training stage to run, [1, 2, 3, 4], default=4
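
For intuition only, here is a minimal sketch of how a weighted knowledge guidance term is typically combined with the task loss; the exact formulation (and what the guidance term compares) is defined in the paper, and the student/teacher tensors below are placeholders:

    import torch.nn.functional as F

    def combined_loss(task_loss, student_feat, teacher_feat, alpha=0.6):
        # Hypothetical guidance term: pull the student representation toward the
        # (frozen) representation produced by the pretrained teacher.
        knowledge_guidance = F.mse_loss(student_feat, teacher_feat.detach())
        # alpha trades off the guidance term against the supervised task loss.
        return task_loss + alpha * knowledge_guidance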

Running

  1. running the first stage (pretraining on the document-level dataset)

    • python ./main.py -pre_name Amazon -bs 256 -learning_rate 0.0005 -n_epoch 10 -model ABGCN -is_test 0 -is_bert 0 -stage 1
  2. running the second stage

    • python ./main.py -ds_name 14semeval_laptop -bs 64 -learning_rate 0.001 -n_epoch 5 -model ABGCN -is_test 0 -is_bert 0 -alpha 0.6 -stage 2
  3. running the final stage

    • python ./main.py -ds_name 14semeval_laptop -bs 64 -learning_rate 0.001 -n_epoch 10 -model ABGCN -is_test 0 -is_bert 0 -stage 3
  4. training from scratch:

    • python ./main.py -ds_name 14semeval_laptop -bs 64 -learning_rate 0.001 -n_epoch 10 -model ABGCN -is_test 0 -is_bert 0 -stage 4
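
To run the full pipeline end to end, the three stages above can be chained with a small driver script; this is only a convenience sketch that reuses the example commands verbatim and assumes each stage picks up the checkpoints written by the previous one (checkpoint handling is internal to main.py):

    import subprocess

    # Stage commands copied from the examples above (laptop target, Amazon pretraining).
    stages = [
        "python ./main.py -pre_name Amazon -bs 256 -learning_rate 0.0005 -n_epoch 10"
        " -model ABGCN -is_test 0 -is_bert 0 -stage 1",
        "python ./main.py -ds_name 14semeval_laptop -bs 64 -learning_rate 0.001 -n_epoch 5"
        " -model ABGCN -is_test 0 -is_bert 0 -alpha 0.6 -stage 2",
        "python ./main.py -ds_name 14semeval_laptop -bs 64 -learning_rate 0.001 -n_epoch 10"
        " -model ABGCN -is_test 0 -is_bert 0 -stage 3",
    ]

    for cmd in stages:
        subprocess.run(cmd, shell=True, check=True)  # stop immediately if a stage fails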

Evaluation

For a quick look, we provide the best model weights trained on the target datasets in "./best_model_weight/". You can load them directly and test the performance. Due to limited file space, we only provide the ABGCN weights for the 14semeval_laptop and 14semeval_rest datasets. You can evaluate the saved weights with:

  • python ./main.py -ds_name 14semeval_laptop -bs 64 -model ABGCN -is_test 1 -is_bert 0
  • python ./main.py -ds_name 14semeval_rest -bs 64 -model ABGCN -is_test 1 -is_bert 0
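
The commands above (with -is_test 1) handle loading for you; if you just want to inspect a checkpoint outside of main.py, a minimal sketch like the one below works, assuming the files in "./best_model_weight/" are standard PyTorch checkpoints written with torch.save:

    import glob
    import torch

    # Peek at the saved checkpoints without instantiating the model.
    for path in sorted(glob.glob("./best_model_weight/*")):
        ckpt = torch.load(path, map_location="cpu")
        if isinstance(ckpt, dict):
            for name, value in ckpt.items():  # a plain state_dict: parameter names and shapes
                print(path, name, getattr(value, "shape", type(value)))
        else:
            print(path, type(ckpt))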

Notes

  • The target datasets and more than 50% of the code are borrowed from TNet-ATT (Tang et al., ACL 2019).

  • The pretraining datasets are obtained from www.kaggle.com.
