Sentiment analysis based on rhetorical structure theory: Learning deep neural networks from discourse trees
| Published in | Expert Systems with Applications, Vol. 118, pp. 65-79 |
|---|---|
| Main Authors | , |
| Format | Journal Article |
| Language | English |
| Published | New York: Elsevier Ltd, 15.03.2019 (Elsevier BV) |
| Subjects | |
| ISSN | 0957-4174 1873-6793 |
| DOI | 10.1016/j.eswa.2018.10.002 |
| Summary: | • Improves sentiment analysis with discourse trees from rhetorical structure theory. • Extracts salient passages based on their position and relation in the discourse tree. • Develops a tensor-based, tree-structured neural network. • Tensor structure distinguishes hierarchy and relation types. • Overfitting is reduced by tree-based algorithms for data augmentation. Applications of sentiment analysis are countless, covering areas such as marketing, customer service and communication. The conventional bag-of-words approach for measuring sentiment merely counts term frequencies; however, it neglects the position of the terms within the discourse. As a remedy, we develop a discourse-aware method that builds upon the discourse structure of documents. For this purpose, we utilize rhetorical structure theory to label (sub-)clauses according to their hierarchical relationships and then assign polarity scores to individual leaves. To learn from the resulting rhetorical structure, we propose a tensor-based, tree-structured deep neural network (named Discourse-LSTM) that processes the complete discourse tree. The underlying tensors infer the salient passages of narrative materials. In addition, we suggest two algorithms for data augmentation (node reordering and artificial leaf insertion) that enlarge our training set and reduce overfitting. Our benchmarks demonstrate the superior performance of our approach. Moreover, our tensor structure reveals the salient text passages and thereby provides explanatory insights. |
|---|---|
| Bibliography: | ObjectType-Article-1 SourceType-Scholarly Journals-1 ObjectType-Feature-2 content type line 14 |
| ISSN: | 0957-4174 1873-6793 |
| DOI: | 10.1016/j.eswa.2018.10.002 |
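The summary above describes the Discourse-LSTM and the two tree-based augmentation algorithms only at a high level. As a rough, illustrative sketch (not the authors' implementation), the snippet below composes leaf polarity features bottom-up with a standard child-sum Tree-LSTM over a discourse tree and randomly permutes sibling subtrees as a toy stand-in for node reordering. The hidden size, feature size, and all names (`ChildSumTreeLSTM`, `reorder_children`, etc.) are assumptions made for this example; the paper's tensors that distinguish nucleus/satellite hierarchy and relation types are omitted here.

```python
# Minimal sketch: a tree-structured LSTM over an RST-style discourse tree.
# NOT the paper's Discourse-LSTM; relation-type tensors are omitted and all
# dimensions/names below are illustrative assumptions.

import numpy as np

HIDDEN = 8   # hidden/cell size (illustrative)
INPUT = 4    # leaf feature size, e.g. polarity scores of an elementary discourse unit


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class Node:
    """Discourse-tree node: leaves carry features, inner nodes carry children."""
    def __init__(self, features=None, children=()):
        self.features = features        # np.ndarray of shape (INPUT,) for leaves
        self.children = list(children)  # inner nodes compose their children


class ChildSumTreeLSTM:
    """Child-sum Tree-LSTM cell (Tai et al., 2015) applied recursively."""
    def __init__(self, rng):
        s = 0.1
        # Input-to-hidden and hidden-to-hidden weights for gates i, f, o and update u.
        self.W = {g: rng.normal(0, s, (HIDDEN, INPUT)) for g in "ifou"}
        self.U = {g: rng.normal(0, s, (HIDDEN, HIDDEN)) for g in "ifou"}
        self.b = {g: np.zeros(HIDDEN) for g in "ifou"}

    def forward(self, node):
        """Return (h, c) for a node, recursing over its children."""
        if node.children:
            child_states = [self.forward(ch) for ch in node.children]
            h_sum = np.sum([h for h, _ in child_states], axis=0)
            x = np.zeros(INPUT)          # inner nodes carry no lexical input here
        else:
            child_states = []
            h_sum = np.zeros(HIDDEN)
            x = node.features

        i = sigmoid(self.W["i"] @ x + self.U["i"] @ h_sum + self.b["i"])
        o = sigmoid(self.W["o"] @ x + self.U["o"] @ h_sum + self.b["o"])
        u = np.tanh(self.W["u"] @ x + self.U["u"] @ h_sum + self.b["u"])

        # One forget gate per child, conditioned on that child's hidden state.
        c = i * u
        for h_k, c_k in child_states:
            f_k = sigmoid(self.W["f"] @ x + self.U["f"] @ h_k + self.b["f"])
            c += f_k * c_k
        h = o * np.tanh(c)
        return h, c


def reorder_children(node, rng):
    """Toy augmentation in the spirit of node reordering: permute the children
    of every inner node to obtain an additional training tree."""
    for child in node.children:
        reorder_children(child, rng)
    rng.shuffle(node.children)
    return node


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two elementary discourse units composed under one rhetorical relation.
    tree = Node(children=[Node(rng.normal(size=INPUT)),
                          Node(rng.normal(size=INPUT))])
    h_root, _ = ChildSumTreeLSTM(rng).forward(tree)
    print("root representation:", h_root)  # would feed a sentiment classifier
```

In this simplified form the root hidden state would be passed to a sentiment classifier; the actual model instead selects composition weights per rhetorical relation and hierarchy level, which is what lets it expose the salient passages mentioned in the abstract.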