Deep reinforcement and transfer learning for abstractive text summarization: A review
| Published in | Computer speech & language Vol. 71; p. 101276 |
|---|---|
| Main Authors | , , , |
| Format | Journal Article |
| Language | English |
| Published | Elsevier Ltd, 01.01.2022 |
| ISSN | 0885-2308; 1095-8363 |
| DOI | 10.1016/j.csl.2021.101276 |
Summary:

- ATS field overview: We provide a full review of the ATS field, including:
  - A taxonomy with its different categories.
  - A brief history of the models' evolution.
  - A review of evaluation measurements (a minimal scoring sketch follows this list).
  - Comparisons of the available datasets.
  - A comparison of the ATS and Machine Translation (MT) research fields and their relationship.
- Comprehensive model review: We collect abundant resources on the main topics of this study and provide a comprehensive review of SOTA research work, starting from deep neural sequence-to-sequence models, then RL approaches, and finally TL architectures, including PTLMs.
- Challenges: We analyze the previous and current challenges that researchers in the focused fields have faced and are facing, together with the proposed solutions.
- Comparisons: We compare the investigated models from different perspectives: theoretical, practical, and by the models' evaluation results. The best models are then highlighted.
- Future trends: We suggest and discuss possible future research trends.
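As a concrete illustration of the evaluation measurements the review surveys, the sketch below scores a candidate summary against a reference with ROUGE, the standard measurement family for summarization. It assumes the `rouge-score` Python package; that package choice is an assumption of this sketch, not something prescribed by the paper.

```python
# Minimal ROUGE scoring sketch. Assumes the `rouge-score` package
# (pip install rouge-score); the package choice is ours, not the paper's.
from rouge_score import rouge_scorer

# ROUGE-1/2 count unigram/bigram overlap; ROUGE-L uses the longest
# common subsequence between candidate and reference.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "the quick brown fox jumps over the lazy dog"
candidate = "a quick brown fox jumped over a lazy dog"

scores = scorer.score(reference, candidate)
for name, score in scores.items():
    # Each entry carries precision, recall, and F-measure.
    print(f"{name}: P={score.precision:.3f} R={score.recall:.3f} F1={score.fmeasure:.3f}")
```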
Automatic Text Summarization (ATS) is an important area in Natural Language Processing (NLP) whose goal is to shorten a long text into a more compact version that conveys the most important points in a readable form. ATS applications continue to evolve, and researchers keep evaluating and implementing increasingly effective approaches. The State-of-the-Art (SotA) technologies that demonstrate cutting-edge performance and accuracy in abstractive ATS are deep neural sequence-to-sequence models, Reinforcement Learning (RL) approaches, and Transfer Learning (TL) approaches, including Pre-Trained Language Models (PTLMs). The graph-based Transformer architecture and PTLMs have driven tremendous advances in NLP applications, and the incorporation of recent mechanisms, such as knowledge enhancement, has significantly improved results. This study provides a comprehensive review of research advances in abstractive text summarization over the past six years. Past and present problems are described, together with their proposed solutions, and abstractive ATS datasets and evaluation measurements are also highlighted. The paper concludes by comparing the best models and discussing future research directions.
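To make the PTLM-based abstractive setting concrete, here is a minimal sketch of generating an abstractive summary with a pretrained sequence-to-sequence Transformer. It assumes the Hugging Face `transformers` library and the public `facebook/bart-large-cnn` checkpoint; both are illustrative choices of this sketch, not models evaluated or prescribed by the review.

```python
# Minimal abstractive-summarization sketch. Assumes the Hugging Face
# `transformers` library and a public BART checkpoint; both are our
# illustrative choices, not the paper's method.
from transformers import pipeline

# A pretrained seq2seq Transformer (encoder-decoder PTLM) fine-tuned
# for summarization on CNN/DailyMail.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Automatic Text Summarization condenses a long document into a shorter "
    "version that preserves its most important points. Abstractive systems "
    "generate new sentences instead of copying source text verbatim, which "
    "makes them harder to train but often more readable."
)

# Length bounds are in generated tokens; greedy decoding for determinism.
result = summarizer(article, max_length=60, min_length=15, do_sample=False)
print(result[0]["summary_text"])
```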