A New Method of Improving BERT for Text Classification
| Published in | Intelligence Science and Big Data Engineering. Big Data and Machine Learning; Vol. 11936, pp. 442-452 |
|---|---|
| Main Authors | |
| Format | Book Chapter |
| Language | English |
| Published | Switzerland : Springer International Publishing AG, 2019 |
| Series | Lecture Notes in Computer Science |
| Subjects | |
| ISBN | 9783030362034 3030362035 |
| ISSN | 0302-9743 1611-3349 |
| DOI | 10.1007/978-3-030-36204-1_37 |
| Summary: | Text classification is a basic task in natural language processing. Recently, pre-trained models such as BERT have achieved outstanding results compared with previous methods. However, BERT fails to take into account local information in the text, such as sentences and phrases. In this paper, we present a BERT-CNN model for text classification. By adding a CNN to the task-specific layers of the BERT model, our model can capture the information of important fragments in the text. In addition, we feed the local representation along with the output of BERT into a transformer encoder to take advantage of the self-attention mechanism, and finally obtain the representation of the whole text through the transformer layer. Extensive experiments demonstrate that our model obtains competitive performance against state-of-the-art baselines on four benchmark datasets. |
|---|---|
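The summary describes the BERT-CNN architecture only at a high level. Below is a minimal PyTorch sketch of that idea, assuming Hugging Face `transformers` for the BERT backbone. The kernel sizes, filter count, the choice to fuse local and global features by concatenating them along the sequence axis before one extra self-attention layer, and the mean pooling are illustrative assumptions, not the authors' reported configuration.

```python
import torch
import torch.nn as nn
from transformers import BertModel


class BertCNN(nn.Module):
    """Sketch of the BERT-CNN idea from the abstract (hyperparameters assumed)."""

    def __init__(self, num_classes, kernel_sizes=(2, 3, 4), num_filters=256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size  # 768 for bert-base
        # 1-D convolutions over the token axis capture local n-gram features
        # (the "important fragments" the abstract refers to).
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, num_filters, k, padding=k // 2) for k in kernel_sizes
        )
        self.proj = nn.Linear(num_filters * len(kernel_sizes), hidden)
        # One extra transformer encoder layer fuses local and global
        # representations via self-attention.
        self.fusion = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=8, batch_first=True
        )
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        tokens = out.last_hidden_state                 # (B, T, H)
        x = tokens.transpose(1, 2)                     # (B, H, T) for Conv1d
        # Trim each conv output to length T so the branches can be concatenated.
        local = torch.cat(
            [torch.relu(c(x))[:, :, : tokens.size(1)] for c in self.convs], dim=1
        )
        local = self.proj(local.transpose(1, 2))       # back to (B, T, H)
        # Concatenate local and BERT token representations along the sequence
        # axis and run them through the extra self-attention layer.
        fused = self.fusion(torch.cat([tokens, local], dim=1))
        # Mean-pool and classify (padding is ignored here for simplicity).
        return self.classifier(fused.mean(dim=1))
```

Usage follows the standard `transformers` pattern: tokenize a batch with `BertTokenizer`, pass `input_ids` and `attention_mask` to the model, and train the logits against class labels with cross-entropy.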