Multimodal Sequential Deep Learning for Agitation Detection in People Living with Dementia

Bibliographic Details
Published in: 2024 IEEE Conference on Pervasive and Intelligent Computing (PICom), pp. 131-136
Main Authors: Khan, Shehroz S., Lau, Grant, Ye, Bing, Newman, Kristine, Mihailidis, Alex, Iaboni, Andrea
Format: Conference Proceeding
Language: English
Published: IEEE, 05.11.2024
DOI: 10.1109/PICom64201.2024.00025

Summary: A common behavioral symptom in people living with dementia (PwD) is agitation. Agitation poses risks to the health and safety of both the patient and caregivers. Using sensor data from wearable devices is a promising means of detecting agitation events in a minimally invasive manner. Between 2017 and 2019, 600 days of sensor data were collected using an Empatica E4 wristband from 20 PwD. This paper investigates the application of sequential deep learning models to this unique sensor dataset to detect agitation in this population. Four deep learning architectures, Long Short-Term Memory networks, Temporal Convolutional Networks, Transformers, and TS2Vec, are compared against each other and against previously reported results from classical machine learning models on the same data. Each model was tested at various downsampling factors, and Transformer-based models gave the highest AUC ROC and AUC PR scores. The findings also show that the performance of the best deep learning models is comparable to that of the best classical machine learning model (random forest). This result underscores the potential of deep learning models for detecting agitation, as well as their potential to generalize to similar clinical problems without the need for extensive feature engineering.
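
The summary names the evaluation pipeline (windowed wearable signals, downsampling, Transformer classifiers scored with AUC ROC and AUC PR) but not its implementation. The minimal PyTorch sketch below illustrates one plausible version of that pipeline; the channel count, window length, downsampling factor, model hyperparameters, and the synthetic stand-in data are all assumptions, not details taken from the paper.

    # Illustrative sketch only: all hyperparameters and the synthetic data
    # below are assumptions; the paper's exact setup is not reproduced here.
    import torch
    import torch.nn as nn
    from sklearn.metrics import roc_auc_score, average_precision_score

    def downsample(x: torch.Tensor, factor: int) -> torch.Tensor:
        """Keep every `factor`-th timestep of a (batch, time, channels) window."""
        return x[:, ::factor, :]

    class TransformerAgitationClassifier(nn.Module):
        def __init__(self, n_channels: int = 4, d_model: int = 64,
                     n_heads: int = 4, n_layers: int = 2):
            super().__init__()
            self.proj = nn.Linear(n_channels, d_model)   # per-timestep embedding
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, 1)            # binary agitation logit

        def forward(self, x):                            # x: (B, T, C)
            h = self.encoder(self.proj(x))
            return self.head(h.mean(dim=1)).squeeze(-1)  # mean-pool over time

    # Synthetic stand-in for E4 windows (e.g., ACC magnitude, BVP, EDA, TEMP).
    B, T, C = 32, 300, 4
    x = torch.randn(B, T, C)
    y = (torch.arange(B) % 2).float()                    # placeholder labels

    model = TransformerAgitationClassifier(n_channels=C)
    with torch.no_grad():                                # untrained: chance-level
        scores = torch.sigmoid(model(downsample(x, factor=5)))

    print("AUC ROC:", roc_auc_score(y.numpy(), scores.numpy()))
    print("AUC PR :", average_precision_score(y.numpy(), scores.numpy()))

On real data the model would be trained before scoring; the snippet only demonstrates the shape of the pipeline (downsample, encode, pool, score with both threshold-free metrics), with AUC PR included because agitation events are typically rare relative to non-agitated time.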