Efficient heart disease prediction-based on optimal feature selection using DFCSS and classification by improved Elman-SFO

Bibliographic Details
Published in: IET Systems Biology, Vol. 14, No. 6, pp. 380-390
Main Authors: Wankhede, Jaishri; Kumar, Magesh; Sambandam, Palaniappan
Format: Journal Article
Language: English
Published: England: The Institution of Engineering and Technology, 01.12.2020
ISSN: 1751-8849, 1751-8857
DOI: 10.1049/iet-syb.2020.0041

Summary: Prediction of cardiovascular disease (CVD) is a critical challenge in clinical data analysis. In this study, an efficient heart disease prediction model is developed based on optimal feature selection. Initially, data pre-processing is performed through data cleaning, data transformation, missing-value imputation, and data normalisation. A decision function-based chaotic salp swarm (DFCSS) algorithm then selects the optimal features, and the chosen attributes are passed to an improved Elman neural network (IENN) for classification, with the sailfish optimisation (SFO) algorithm used to compute the optimal weight values of the IENN. This combined DFCSS–IENN-based SFO (IESFO) approach is used to predict heart disease. The proposed DFCSS–IESFO approach is implemented in Python and evaluated on two datasets: the University of California Irvine (UCI) Cleveland heart disease dataset and a CVD dataset. The simulation results show that the proposed scheme achieves a classification accuracy of 98.7% on the CVD dataset and 98% on the UCI dataset, outperforming other classifiers such as support vector machine, K-nearest neighbour, Elman neural network, Gaussian naive Bayes, logistic regression, random forest, and decision tree.
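The abstract describes a four-stage Python pipeline: pre-processing (cleaning, transformation, missing-value imputation, normalisation), DFCSS feature selection, IENN classification, and SFO weight tuning. The sketch below is only a minimal illustration of that pipeline shape, not the authors' implementation: the DFCSS selector is replaced here by a mutual-information feature ranking, the SFO-tuned improved Elman network is replaced by scikit-learn's MLPClassifier, and a synthetic dataset stands in for the UCI Cleveland and CVD data.

```python
# Hedged sketch of the pipeline shape described in the abstract.
# DFCSS, the improved Elman network (IENN), and the sailfish optimiser (SFO)
# are not publicly available here; the stages below are stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: 13 features mimic the UCI Cleveland attribute count.
X, y = make_classification(n_samples=500, n_features=13, n_informative=8,
                           random_state=0)
# Inject some missing values so the imputation step has work to do.
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan

pipeline = Pipeline([
    # Pre-processing: missing-value imputation and normalisation.
    ("impute", SimpleImputer(strategy="mean")),
    ("normalise", MinMaxScaler()),
    # Feature selection: stand-in for the DFCSS wrapper search.
    ("select", SelectKBest(mutual_info_classif, k=8)),
    # Classification: stand-in for the SFO-tuned improved Elman network.
    ("classify", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                               random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
pipeline.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, pipeline.predict(X_test)))
```

Reproducing the reported 98-98.7% accuracies would require the actual DFCSS selector, the Elman-SFO classifier, and the original datasets; the sketch is meant only to make the ordering of the stages concrete.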