Machine learning model of facial expression outperforms models using analgesia nociception index and vital signs to predict postoperative pain intensity: a pilot study

Bibliographic Details
Published in: Korean Journal of Anesthesiology, Vol. 77, No. 2, pp. 195–204
Main Authors: Park, Insun; Park, Jae Hyon; Yoon, Jongjin; Oh, Ah-Young; Ryu, Junghee; Koo, Bon-Wook
Format: Journal Article
Language: English
Published: Korea (South): Korean Society of Anesthesiologists (대한마취통증의학회), 01.04.2024
ISSN: 2005-6419, 2005-7563
DOI: 10.4097/kja.23583

More Information
Summary:

Background: Few studies have evaluated the use of automated artificial intelligence (AI)-based pain recognition in postoperative settings or the correlation with pain intensity. In this study, various machine learning (ML)-based models using facial expressions, the analgesia nociception index (ANI), and vital signs were developed to predict postoperative pain intensity, and their performances for predicting severe postoperative pain were compared.

Methods: In total, 155 facial expressions from patients who underwent gastrectomy were recorded postoperatively; one blinded anesthesiologist simultaneously recorded the ANI score, vital signs, and patient self-assessed pain intensity based on the 11-point numerical rating scale (NRS). The ML models' areas under the receiver operating characteristic curves (AUROCs) were calculated and compared using DeLong's test.

Results: ML models were constructed using facial expressions, ANI, vital signs, and different combinations of the three datasets. The ML model constructed using facial expressions best predicted an NRS ≥ 7 (AUROC 0.93), followed by the ML model combining facial expressions and vital signs (AUROC 0.84) in the test set. ML models constructed using combined physiological signals (vital signs, ANI) performed better than models based on individual parameters for predicting NRS ≥ 7, although their AUROCs were inferior to that of the ML model based on facial expressions (all P < 0.050). Among these parameters, absolute and relative ANI had the worst AUROCs (0.69 and 0.68, respectively) for predicting NRS ≥ 7.

Conclusions: The ML model constructed using facial expressions best predicted severe postoperative pain (NRS ≥ 7) and outperformed models constructed from physiological signals.
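The evaluation workflow the abstract describes — training a classifier on candidate predictors, binarizing the label at NRS ≥ 7, and scoring discrimination with the test-set AUROC — can be sketched as follows. This is a minimal illustration only: the study's actual models, facial-expression features, and preprocessing are not specified in this record, so the synthetic data, the five placeholder features, and the logistic-regression stand-in below are all assumptions.

```python
# Hedged sketch of the general pipeline (NOT the paper's method):
# build a binary "severe pain" label (NRS >= 7), fit a model on a
# training split, and report AUROC on a held-out test split.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 155  # matches the number of recorded facial expressions in the abstract
# Placeholder feature matrix standing in for facial-expression scores,
# ANI values, and vital signs (all hypothetical here).
X = rng.normal(size=(n, 5))
# Placeholder binary label: severe postoperative pain (NRS >= 7).
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
model = LogisticRegression().fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test-set AUROC: {auroc:.2f}")
```

Comparing two such AUROCs for statistical significance, as the study does, requires a paired test such as DeLong's test, which is not in scikit-learn and would need a separate implementation or package.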
Insun Park and Jae Hyon Park have contributed equally to this work as first authors.