CogLign: Interpretable Text Sentiment Determination by Aligning Cognition Between EEG-Derived Brain Graph and Text-Derived Knowledge Graph
| Published in | IEEE Transactions on Knowledge and Data Engineering, Vol. 37, no. 6, pp. 3220-3239 |
|---|---|
| Main Authors | , , , , |
| Format | Journal Article |
| Language | English |
| Published | IEEE, 01.06.2025 |
| ISSN | 1041-4347, 1558-2191 |
| DOI | 10.1109/TKDE.2025.3538618 |
| Summary: | Detecting sentiment or emotion in user-generated text has been intensively studied in natural language understanding, especially with neural models built on text representations. However, how a neural text representation leads to the final sentiment decision has not yet been made thoroughly interpretable. In this paper, we therefore propose CogLign, which injects neural cognition derived from Electroencephalogram (EEG) signals into a neural text sentiment analysis model, aiming to learn which brain regions are activated by different sentiments and thereby guide CogLign to determine text sentiment in a brain-like way. Specifically, on the one hand, subjects watch videos conveying different sentiments while their EEG signals are recorded; from these signals we construct brain connectivity patterns as a brain graph (BG), which exposes clearer sentiment responses in brain-region activation as neural cognition. On the other hand, we transcribe the video plots (video semantics) along the timeline into text, and strictly bind the entire video-interpreted text to the whole EEG-signal sequence segment by segment via a fixed-size time window. Entities and relations are then extracted from the video-interpreted text to construct a knowledge graph (KG) that captures text semantics. Next, a mapping from entities (nodes) in the KG to EEG electrodes (nodes) in the BG, traced further back to brain regions, is learned via cognition alignment between the EEG-derived BG and the text-derived KG. By aligning the neural cognition from the brain graph with the semantic cognition from the knowledge graph, CogLign not only achieves the best overall sentiment analysis performance on the video-interpreted text, but also detects brain connectivity patterns for different sentiments that are more consistent with prior findings on brain-region sentiment preference, offering competitive interpretability for text sentiment determination. |
|---|---|
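The cognition-alignment step described in the summary can be pictured with a short PyTorch sketch. Everything below is an illustrative assumption rather than the paper's released implementation: the class names (`GCNLayer`, `CognitionAligner`), the single-layer graph encoders, the scaled dot-product alignment score, and the supervision from time-window-bound (entity, electrode) pairs are all placeholders for whatever CogLign actually uses.

```python
# Minimal sketch of aligning a text-derived knowledge graph (KG) with an
# EEG-derived brain graph (BG). Hypothetical names and shapes throughout.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj_norm):
        # adj_norm: pre-normalized adjacency, e.g. D^-1/2 (A + I) D^-1/2
        return F.relu(adj_norm @ self.lin(h))


class CognitionAligner(nn.Module):
    """Encodes both graphs into a shared space and scores entity-electrode
    correspondence, echoing the paper's mapping from KG nodes to EEG
    electrodes (and onward to brain regions)."""

    def __init__(self, kg_dim, bg_dim, shared_dim=64):
        super().__init__()
        self.kg_enc = GCNLayer(kg_dim, shared_dim)  # encodes text-derived KG
        self.bg_enc = GCNLayer(bg_dim, shared_dim)  # encodes EEG-derived BG
        self.proj = nn.Linear(shared_dim, shared_dim)

    def forward(self, kg_feats, kg_adj, bg_feats, bg_adj):
        z_kg = self.proj(self.kg_enc(kg_feats, kg_adj))  # (n_entities, d)
        z_bg = self.bg_enc(bg_feats, bg_adj)             # (n_electrodes, d)
        # Scaled dot-product similarity; softmax over rows would give each
        # entity a soft distribution over electrodes.
        return z_kg @ z_bg.t() / z_kg.shape[-1] ** 0.5


def alignment_loss(sim, ent_idx, elec_idx):
    """Cross-entropy on (entity, electrode) pairs, assuming the fixed-size
    time-window binding described in the summary supplies such pairs."""
    return F.cross_entropy(sim[ent_idx], elec_idx)


# Toy usage: 12 KG entities with 32-dim features, 62 electrodes with 16-dim
# band-power features; identity matrices stand in for normalized adjacency.
model = CognitionAligner(kg_dim=32, bg_dim=16)
sim = model(torch.randn(12, 32), torch.eye(12), torch.randn(62, 16), torch.eye(62))
loss = alignment_loss(sim, torch.tensor([0, 3]), torch.tensor([5, 41]))
```

In this reading, pooling the per-electrode alignment scores over electrode groups would yield the brain-region activations that the paper uses to interpret the sentiment decision; how CogLign actually aggregates electrodes into regions is not specified here.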