The Accuracy Comparison Between Word2Vec and FastText On Sentiment Analysis of Hotel Reviews

Bibliographic Details
Published in: Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) (Online), Vol. 6, No. 3, pp. 352-358
Main Authors: Siti Khomsah, Rima Dias Ramadhani, Sena Wijaya
Format: Journal Article
Language: English
Published: Ikatan Ahli Informatika Indonesia, 01.06.2022
ISSN: 2580-0760
DOI: 10.29207/resti.v6i3.3711

Summary: Word embedding vectorization is more efficient than Bag-of-Words in terms of word vector size. Word embedding also overcomes the loss of information related to sentence context, word order, and semantic relationships between words in a sentence. Several kinds of word embedding are often considered for sentiment analysis, such as Word2Vec and FastText: FastText works on character n-grams, while Word2Vec is based on whole words. This research aims to compare the accuracy of sentiment analysis models using Word2Vec and FastText. Both models are tested on sentiment analysis of Indonesian hotel reviews using a dataset from TripAdvisor. Word2Vec and FastText both use the skip-gram model and the same parameter settings: number of features, minimum word count, number of parallel threads, and context window size. The vectorizers are combined with ensemble learning classifiers: Random Forest, Extra Tree, and AdaBoost, with a Decision Tree used as the baseline for measuring the performance of both models. The results show that both FastText and Word2Vec improve accuracy with Random Forest and Extra Tree, and that FastText reaches higher accuracy than Word2Vec when Extra Tree and Random Forest are used as classifiers. FastText raises accuracy by 8% over the Decision Tree baseline of 85%, reaching 93% with 100 estimators.
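The pipeline described in the summary can be sketched as follows: train skip-gram Word2Vec and FastText embeddings, average the word vectors of each review into a document vector, and feed those vectors to the ensemble classifiers with a Decision Tree baseline. This is a minimal illustration, not the authors' code; the embedding dimensionality, window size, minimum word count, and the toy review data are assumptions, while n_estimators=100 follows the abstract.

```python
# Minimal sketch of the Word2Vec/FastText + ensemble-learning pipeline.
# Hypothetical parameter values and toy data; only n_estimators=100 is from the abstract.
import numpy as np
from gensim.models import Word2Vec, FastText
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy stand-in for tokenized Indonesian hotel reviews (hypothetical data, not the TripAdvisor set).
reviews = [
    ["kamar", "bersih", "pelayanan", "ramah"],
    ["hotel", "kotor", "kamar", "bau"],
    ["lokasi", "strategis", "sarapan", "enak"],
    ["pelayanan", "lambat", "kamar", "sempit"],
] * 25
labels = np.array([1, 0, 1, 0] * 25)  # 1 = positive, 0 = negative

SIZE = 100  # assumed number of features (embedding dimensionality)

def review_vector(model, tokens, size):
    """Average the word vectors of one review; zero vector if no token is in the vocabulary."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(size)

# Skip-gram (sg=1) embeddings with identical parameters, as described in the summary.
embedders = {
    "Word2Vec": Word2Vec(reviews, vector_size=SIZE, window=5, min_count=1, workers=4, sg=1),
    "FastText": FastText(reviews, vector_size=SIZE, window=5, min_count=1, workers=4, sg=1),
}
classifiers = {
    "DecisionTree (baseline)": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
    "ExtraTrees": ExtraTreesClassifier(n_estimators=100, random_state=0),
    "AdaBoost": AdaBoostClassifier(n_estimators=100, random_state=0),
}

for emb_name, emb in embedders.items():
    X = np.vstack([review_vector(emb, r, SIZE) for r in reviews])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    for clf_name, clf in classifiers.items():
        clf.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, clf.predict(X_te))
        print(f"{emb_name} + {clf_name}: accuracy = {acc:.2f}")
```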