MSA K-BERT: A Method for Medical Text Intent Classification

Bibliographic Details
Published in: Applied Sciences, Vol. 15, No. 12, p. 6834
Main Authors: Yuan, Yujia; Xi, Guan
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.06.2025
ISSN: 2076-3417
DOI: 10.3390/app15126834

More Information
Summary: Improving the accuracy of medical text intent classification can help the medical field achieve more precise diagnoses. However, existing methods suffer from problems such as low accuracy and a lack of knowledge supplementation. To address these challenges, this paper proposes MSA K-BERT, a knowledge-enhanced bidirectional encoder representation model that integrates a multi-scale attention (MSA) mechanism to improve prediction performance while addressing critical issues such as the heterogeneity of embedding spaces and knowledge noise. We systematically validate the reliability of this model on medical text intent classification datasets and compare it with various deep learning models. The results indicate that MSA K-BERT makes the following key contributions: First, it introduces a knowledge-supported language representation model compatible with BERT, enhancing language representations through the refined injection of knowledge graphs. Second, it adopts a multi-scale attention mechanism to reinforce different feature layers, significantly improving the model's accuracy and interpretability. In particular, on the IMCS-21 dataset, MSA K-BERT achieves precision, recall, and F1 scores of 0.826, 0.794, and 0.810, respectively, all exceeding those of current mainstream methods.
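The abstract does not specify the exact formulation of the multi-scale attention mechanism. As an illustrative sketch only, a generic multi-scale self-attention can be built by attending over token representations pooled at several window sizes and averaging the upsampled results; the window sizes (1, 2, 4) and the NumPy implementation below are assumptions for demonstration, not the authors' method:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Scaled dot-product self-attention over a (seq_len, dim) matrix.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ x

def avg_pool(x, window):
    # Average neighbouring token vectors to form a coarser scale.
    seq_len, dim = x.shape
    pad = (-seq_len) % window            # zero-pad so seq_len divides evenly
    x = np.vstack([x, np.zeros((pad, dim))])
    return x.reshape(-1, window, dim).mean(axis=1)

def multi_scale_attention(x, windows=(1, 2, 4)):
    # Attend at each scale, upsample back to token length, average scales.
    seq_len = x.shape[0]
    outputs = []
    for w in windows:
        pooled = avg_pool(x, w)
        attended = self_attention(pooled)
        upsampled = np.repeat(attended, w, axis=0)[:seq_len]
        outputs.append(upsampled)
    return np.mean(outputs, axis=0)

tokens = np.random.default_rng(0).normal(size=(10, 16))
out = multi_scale_attention(tokens)
print(out.shape)  # (10, 16)
```

The combined output has the same shape as the input, so a block like this can be stacked on top of BERT-style encoder representations before the classification head.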