A machine learning model for Alzheimer's disease prediction
| Published in | IET Cyber-Physical Systems, Vol. 9, No. 2, pp. 125-134 |
|---|---|
| Main Authors | , , , , |
| Format | Journal Article |
| Language | English |
| Published | Southampton: John Wiley & Sons, Inc (Wiley), 01.06.2024 |
| ISSN | 2398-3396 |
| DOI | 10.1049/cps2.12090 |
| Summary: | Alzheimer's disease (AD) is a neurodegenerative disorder that mostly affects elderly people. Its symptoms are initially mild but worsen over time. Although the disease has no cure, early diagnosis can help reduce its impact. A methodology, SMOTE-RF, is proposed for AD prediction, in which Alzheimer's is predicted using machine learning algorithms. The performance of three algorithms, decision tree, extreme gradient boosting (XGB), and random forest (RF), is evaluated. The Open Access Series of Imaging Studies (OASIS) longitudinal dataset, available on Kaggle, is used for the experiments. The dataset is balanced using the synthetic minority oversampling technique (SMOTE), and experiments are run on both the imbalanced and balanced datasets. On the imbalanced dataset, the decision tree obtained 73.38% accuracy, XGB 83.88%, and RF a maximum of 87.84%. On the balanced dataset, the decision tree obtained 83.15% accuracy, XGB 91.05%, and RF a maximum of 95.03%. The highest accuracy, 95.03%, is achieved with SMOTE-RF.
Machine learning algorithms, namely decision tree, XGB, and random forest, are used to build models for predicting Alzheimer's disease. Experiments are performed in two ways: first on the original dataset and then on the class-balanced dataset. As the dataset is highly imbalanced, the class imbalance problem is overcome with the SMOTE technique (a sketch of this pipeline follows the table). |
|---|---|
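Below is a minimal sketch of the workflow described in the summary. The file name (`oasis_longitudinal.csv`), the `Group` label column, and the preprocessing choices are assumptions for illustration, not details taken from the paper; only the overall flow, SMOTE balancing followed by a decision tree / XGB / random forest accuracy comparison, mirrors the summary.

```python
# Sketch of a SMOTE + classifier-comparison pipeline (assumed names and preprocessing).
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

# Assumed file and column names for the Kaggle OASIS longitudinal CSV.
df = pd.read_csv("oasis_longitudinal.csv").dropna()
X = df.select_dtypes(include="number")            # numeric features only (simplification)
y = LabelEncoder().fit_transform(df["Group"])     # assumed diagnosis label column

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Balance only the training split with SMOTE so synthetic samples
# never appear in the evaluation data (the paper's exact protocol may differ).
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

models = {
    "Decision tree": DecisionTreeClassifier(random_state=42),
    "XGB": XGBClassifier(random_state=42),
    "Random forest (SMOTE-RF)": RandomForestClassifier(n_estimators=100, random_state=42),
}

# Fit each model on the balanced training data and compare test accuracy.
for name, model in models.items():
    model.fit(X_bal, y_bal)
    print(f"{name}: {accuracy_score(y_test, model.predict(X_test)):.4f}")
```

Applying SMOTE only to the training split keeps synthetic samples out of the test set; the paper does not specify its exact split and evaluation protocol, so this detail is a design choice of the sketch.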