Explainable GMDH-type neural networks for decision making: Case of medical diagnostics
| Published in | Applied Soft Computing Vol. 182; p. 113607 |
|---|---|
| Main Authors | , |
| Format | Journal Article |
| Language | English |
| Published | Elsevier B.V., 01.10.2025 |
| Subjects | |
| Online Access | Get full text |
| ISSN | 1568-4946, 1872-9681 |
| DOI | 10.1016/j.asoc.2025.113607 |
Summary: In medical diagnostics, the use of interpretable artificial neural networks (ANNs) is crucial to enabling healthcare professionals to make informed decisions that consider risks, especially when faced with uncertainties in patient data and expert opinions. Despite advances, conventional ANNs often produce complex, non-transparent models that limit interpretability, particularly in medical contexts where transparency is essential. Existing methods, such as decision trees and random forests, provide some interpretability but struggle with inconsistent medical data and fail to adequately quantify decision uncertainty. This paper introduces a novel Group Method of Data Handling (GMDH)-type neural network approach that addresses these gaps by generating concise, interpretable decision models based on the self-organizing concept. The proposed method builds multilayer networks from two-argument logical functions, ensuring explainability and minimizing the negative impact of human intervention. A selection criterion is used to grow the networks incrementally, optimizing complexity while reducing validation error. The algorithm's convergence is proven through a bounded, monotonically decreasing error sequence, ensuring reliable solutions. Tested on complex diagnostic cases, including infectious endocarditis, systemic lupus erythematosus, and postoperative outcomes in acute appendicitis, the method achieved high expert agreement scores (Fleiss's kappa of 0.98 (95% CI 0.97-0.99) and 0.86 (95% CI 0.83-0.89), respectively), compared with random forests (0.84 and 0.71). These results demonstrate statistically significant improvements (p<0.05), highlighting the method's ability to produce interpretable rules that reflect uncertainties and improve the reliability of decisions. The proposed approach thus offers a transparent and robust framework for medical decision-making, bridging the gap between model accuracy and interpretability and providing practitioners with the reliable insights and confidence estimates required for making risk-aware decisions.
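The summary's description of the method (two-argument logical neurons, a validation-based selection criterion, layer-by-layer growth until the error stops decreasing) can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical rendering of that idea, not the authors' implementation: it assumes binary (0/1) feature matrices, uses a plain misclassification rate as the selection criterion, and the particular function set, beam width, and stopping rule are illustrative assumptions.

```python
import itertools
import numpy as np

# Illustrative set of two-argument Boolean functions for candidate neurons
# (an assumption; the paper's exact function set may differ).
BOOLEAN_FUNCS = {
    "and":     lambda a, b: a & b,
    "or":      lambda a, b: a | b,
    "xor":     lambda a, b: a ^ b,
    "implies": lambda a, b: (~a & 1) | b,
}

def validation_error(pred, y):
    """Fraction of misclassified validation cases (selection criterion)."""
    return np.mean(pred != y)

def grow_gmdh(X_train, y_train, X_val, y_val, max_layers=5, beam=8):
    """Grow a layered network of two-argument Boolean neurons.

    Boolean neurons have no trainable weights, so the training split is only
    propagated forward; selection uses the validation split. Growth stops when
    no new layer lowers the validation error, giving the bounded, monotonically
    decreasing error sequence mentioned in the abstract.
    """
    train_feats, val_feats = X_train, X_val
    best_err = np.inf
    model = []  # list of layers; each layer is a list of (i, j, func_name)
    for _ in range(max_layers):
        candidates = []
        for i, j in itertools.combinations(range(train_feats.shape[1]), 2):
            for name, f in BOOLEAN_FUNCS.items():
                err = validation_error(f(val_feats[:, i], val_feats[:, j]), y_val)
                candidates.append((err, i, j, name))
        if not candidates:
            break
        candidates.sort(key=lambda c: c[0])
        layer = candidates[:beam]          # keep the best few neurons
        layer_err = layer[0][0]
        if layer_err >= best_err:          # no improvement: stop growing
            break
        best_err = layer_err
        model.append([(i, j, name) for _, i, j, name in layer])
        # outputs of the surviving neurons become inputs of the next layer
        train_feats = np.column_stack(
            [BOOLEAN_FUNCS[name](train_feats[:, i], train_feats[:, j])
             for _, i, j, name in layer])
        val_feats = np.column_stack(
            [BOOLEAN_FUNCS[name](val_feats[:, i], val_feats[:, j])
             for _, i, j, name in layer])
    return model, best_err
```

Because each accepted layer must strictly reduce the validation error and the error is bounded below by zero, the sequence of layer errors converges, which is the intuition behind the convergence argument summarized above.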
Highlights:
- Explainable Neural Networks help practitioners to understand models learnt from data.
- Risk-aware decisions are made in the presence of uncertainties existing in both data and decision models.
- A new Boolean concept based on self-organizing principles is used to grow Explainable Neural Networks.
- Practitioners can efficiently control the reliability of making risk-aware decisions.
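The expert-agreement scores quoted in the summary are Fleiss's kappa values. For readers unfamiliar with the measure, the sketch below shows the standard Fleiss's kappa computation on an invented rater-by-case count table; the numbers are purely illustrative and unrelated to the study's data.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss's kappa for an (n_cases x n_categories) matrix of rating counts.

    counts[i, k] = number of raters who assigned case i to category k;
    every case is assumed to be rated by the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n_cases, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]
    # per-case agreement P_i and overall observed agreement P_bar
    p_i = np.sum(counts * (counts - 1), axis=1) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # chance agreement from the marginal category proportions
    p_j = counts.sum(axis=0) / (n_cases * n_raters)
    p_e = np.sum(p_j ** 2)
    return (p_bar - p_e) / (1 - p_e)

# Invented toy table: 10 cases, 3 experts each choosing one of 2 diagnoses.
ratings = np.array([[3, 0], [3, 0], [2, 1], [0, 3], [3, 0],
                    [1, 2], [0, 3], [3, 0], [0, 3], [3, 0]])
print(round(fleiss_kappa(ratings), 3))
```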