Towards Machine Learning Explainability in Text Classification for Fake News Detection
| Published in | 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 775 - 781 |
|---|---|
| Main Authors | , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.12.2020 |
| ISBN | 1728184711; 9781728184708; 1728184703; 9781728184715 |
| DOI | 10.1109/ICMLA51294.2020.00127 |
| Summary | The digital media landscape has been exposed in recent years to an increasing number of deliberately misleading news and disinformation campaigns, a phenomenon popularly referred to as fake news. In an effort to combat the dissemination of fake news, designing machine learning models that can classify text as fake or not has become an active line of research. While new models are continuously being developed, the focus so far has mainly been on improving the accuracy of the models for given datasets. Hence, little research has been done on the explainability of the deep learning (DL) models constructed for the task of fake news detection. In order to add a level of explainability, several aspects have to be taken into consideration. For instance, the pre-processing phase, as well as the length and complexity of the text, play an important role in achieving a successful classification. These aspects need to be considered in conjunction with the model's architecture. All of these issues are addressed and analyzed in this paper. Visualizations are further employed to gain a better understanding of how different models distribute their attention when classifying fake news texts. In addition, statistical data is gathered to deepen the analysis and to provide insights into the models' interpretability. |