Code for "Adversarial Training for Aspect-Based Sentiment Analysis with BERT" and "Improving BERT Performance for Aspect-Based Sentiment Analysis".
We build on the codebase from the paper "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis" and improve on its results by applying adversarial training.
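For orientation only, the snippet below is a minimal FGM-style sketch of what adversarial training on BERT word embeddings looks like (perturb the embedding weights along the gradient, add the loss on the perturbed input, restore the weights). The model structure, batch layout, and epsilon value are illustrative assumptions and not the exact code in this repository.

import torch

def adversarial_step(model, loss_fn, inputs, labels, epsilon=1.0):
    # Sketch only: assumes a BERT-based classifier whose word-embedding
    # matrix is reachable as model.bert.embeddings.word_embeddings.
    emb = model.bert.embeddings.word_embeddings
    loss = loss_fn(model(**inputs), labels)
    loss.backward()                                  # gradients from the clean pass
    grad = emb.weight.grad.detach()
    norm = torch.norm(grad)
    if norm != 0 and not torch.isnan(norm):
        backup = emb.weight.data.clone()
        emb.weight.data.add_(epsilon * grad / norm)  # FGM-style perturbation
        adv_loss = loss_fn(model(**inputs), labels)
        adv_loss.backward()                          # accumulate adversarial gradients
        emb.weight.data = backup                     # restore the clean embeddings
    # optimizer.step() and zero_grad() are handled by the surrounding training loop.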
We focus on two major tasks in Aspect-Based Sentiment Analysis (ABSA).
Aspect Extraction (AE): given a review sentence ("The retina display is great."), find the aspect terms ("retina display");
Aspect Sentiment Classification (ASC): given an aspect ("retina display") and a review sentence ("The retina display is great."), detect the polarity of that aspect (positive).
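For intuition, AE is commonly cast as token-level BIO tagging and ASC as sentence-pair classification. The sketch below illustrates the two input formats with the Hugging Face tokenizer, which is an assumption for readability and not this repository's actual preprocessing pipeline.

from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# AE: tag each word with B/I/O labels marking aspect terms.
words = ["The", "retina", "display", "is", "great", "."]
ae_labels = ["O", "B", "I", "O", "O", "O"]        # "retina display" is the aspect

# ASC: feed the aspect and the sentence as a pair and predict its polarity.
asc_inputs = tokenizer("retina display", "The retina display is great.", return_tensors="pt")
asc_label = "positive"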
Place the laptop and restaurant post-trained BERTs into pt_model/laptop_pt and pt_model/rest_pt, respectively. The post-trained laptop weights can be downloaded here and the restaurant weights here.
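As a quick sanity check (a sketch assuming the directories contain standard BERT checkpoint files such as the model weights, a config file, and a vocabulary), you can verify that both directories are in place before launching a run:

import os

for path in ("pt_model/laptop_pt", "pt_model/rest_pt"):
    # Each directory should contain the downloaded post-trained checkpoint files.
    print(path, sorted(os.listdir(path)) if os.path.isdir(path) else "missing")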
Execute the following command to run the model for the Aspect Extraction (AE) task:
bash run_absa.sh ae laptop_pt laptop pt_ae 9 0
Here, laptop_pt is the directory with the post-trained laptop weights, laptop is the domain, pt_ae is the folder under run/ where the fine-tuned model is stored, 9 is the number of runs, and 0 selects GPU 0.
Similarly, for the remaining task and domain combinations:
bash run_absa.sh ae rest_pt rest pt_ae 9 0
bash run_absa.sh asc laptop_pt laptop pt_asc 9 0
bash run_absa.sh asc rest_pt rest pt_asc 9 0
The evaluation wrapper is provided in the Jupyter notebook eval/eval.ipynb.
The AE evaluation script eval/evaluate_ae.py additionally requires a Java JRE/JDK to be installed.
Open result.ipynb to inspect the results.
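If you prefer to compute ASC metrics in a script rather than in the notebook, a generic sketch is shown below; the gold/pred lists are placeholders, and how predictions are stored on disk depends on the run configuration.

from sklearn.metrics import accuracy_score, f1_score

# Replace these with the gold labels and predictions collected from a run.
gold = ["positive", "negative", "neutral", "positive"]
pred = ["positive", "negative", "positive", "positive"]

print("accuracy:", accuracy_score(gold, pred))
print("macro-F1:", f1_score(gold, pred, average="macro"))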
@article{karimi2020adversarial,
title={Adversarial training for aspect-based sentiment analysis with BERT},
author={Karimi, Akbar and Rossi, Leonardo and Prati, Andrea and Full, Katharina},
journal={arXiv preprint arXiv:2001.11316},
year={2020}
}
@article{karimi2020improving,
title={Improving BERT Performance for Aspect-Based Sentiment Analysis},
author={Karimi, Akbar and Rossi, Leonardo and Prati, Andrea},
journal={arXiv preprint arXiv:2010.11731},
year={2020}
}