Bidirectional Encoder Representations from Transformers (BERT) Language Model for Sentiment Analysis task: Review

Bibliographic Details
Published in: Turkish Journal of Computer and Mathematics Education, Vol. 12, No. 7, pp. 1708-1721
Main Authors: Deepa, D.; Tamilarasi, A.
Format: Journal Article
Language: English
Published: Gurgaon: Ninety Nine Publication, 19.04.2021
ISSN: 1309-4653

Summary: The recent trend in sentiment analysis has created a new demand for understanding the contextual representation of language. Among the various conventional machine learning and deep learning models, those that learn context are the most promising candidates for the sentiment classification task. BERT is a pre-trained language model for contextual embedding that has attracted considerable attention for its deep analysis capability, the valuable linguistic knowledge encoded in its intermediate layers, its training on a large corpus, and its ability to be fine-tuned for any NLP task. Many researchers have adapted BERT to sentiment analysis by modifying the original architecture to obtain better classification accuracy. This article summarizes and reviews the BERT architecture and the performance observed from fine-tuning different layers and attention heads.
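To make the kind of layer-wise fine-tuning the review surveys concrete, the following is a minimal sketch (not drawn from the reviewed article) using the Hugging Face transformers library: a pre-trained BERT is adapted for binary sentiment classification while its lower encoder layers are frozen. The model name, the number of frozen layers, the learning rate, and the toy batch are illustrative assumptions, not choices reported in the paper.

```python
# Sketch: fine-tune only the upper BERT layers for sentiment classification.
# Assumed setup: bert-base-uncased (12 encoder layers), 2 sentiment labels.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = negative, 1 = positive
)

# Freeze the embeddings and the first 8 of 12 encoder layers so that only
# the top 4 layers and the classification head receive gradient updates.
for param in model.bert.embeddings.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[:8]:
    for param in layer.parameters():
        param.requires_grad = False

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=2e-5
)

# One illustrative training step on a toy two-example batch.
batch = tokenizer(
    ["The movie was wonderful.", "A dull, lifeless film."],
    padding=True, truncation=True, return_tensors="pt",
)
labels = torch.tensor([1, 0])

model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Freezing the lower layers preserves the general linguistic knowledge held in BERT's intermediate layers while adapting only the top layers to the sentiment task; varying how many layers are frozen, or which attention heads are kept, is the style of experiment whose results the review compares.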