A real-time skeleton-based fall detection algorithm based on temporal convolutional networks and transformer encoder

Bibliographic Details
Published in: Pervasive and Mobile Computing, Vol. 107, p. 102016
Main Authors: Yu, Xiaoqun; Wang, Chenfeng; Wu, Wenyu; Xiong, Shuping
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.02.2025
ISSN: 1574-1192
DOI: 10.1016/j.pmcj.2025.102016

Summary:
•Fall detection is critical for prompt medical assistance in older adults.
•We proposed a novel, real-time skeleton-based fall detection algorithm (TCNTE).
•Weighted focal loss was implemented to address the severe class imbalance issue.
•TCNTE demonstrated state-of-the-art accuracy on various vision-based fall datasets.
•TCNTE achieved excellent real-time performance (19 fps) on edge devices.

As the population of older individuals living independently rises, coupled with the heightened risk of falls among this demographic, the need for automatic fall detection systems becomes increasingly urgent to ensure timely medical intervention. Computer vision (CV)-based methodologies have emerged as a preferred approach among researchers due to their contactless and pervasive nature. However, existing CV-based solutions often suffer from either poor robustness or prohibitively high computational requirements, impeding their practical implementation in elderly living environments. To address these challenges, we introduce TCNTE, a real-time skeleton-based fall detection algorithm that combines a Temporal Convolutional Network (TCN) with a Transformer Encoder (TE). We also successfully mitigate the severe class imbalance issue by implementing weighted focal loss. Cross-validation on multiple publicly available vision-based fall datasets demonstrates TCNTE's superiority over the individual models (TCN and TE) and existing state-of-the-art fall detection algorithms, achieving remarkable accuracies (front view of UP-Fall: 99.58%; side view of UP-Fall: 98.75%; Le2i: 97.01%; GMDCSA-24: 92.99%) alongside practical viability. Visualizations using t-distributed stochastic neighbor embedding (t-SNE) reveal TCNTE's superior separation margin and more cohesive clustering between fall and non-fall classes compared to TCN and TE. Crucially, TCNTE is designed for pervasive deployment in mobile and resource-constrained environments.
Integrated with YOLOv8 pose estimation and BoT-SORT human tracking, the algorithm operates on an NVIDIA Jetson Orin NX edge device, achieving an average frame rate of 19 fps for single-person and 17 fps for two-person scenarios. With its validated accuracy and impressive real-time performance, TCNTE holds significant promise for practical fall detection applications in older adult care settings.
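The weighted focal loss mentioned in the summary is the standard focal loss of Lin et al. (2017) with a class-weighting factor, which down-weights easy, well-classified frames so that the rare fall class dominates the gradient. Below is a minimal plain-Python sketch of the binary form; the `alpha` and `gamma` values are illustrative assumptions, not the settings used in the paper.

```python
import math

def weighted_focal_loss(probs, labels, alpha=0.75, gamma=2.0):
    """Binary weighted focal loss: FL = -alpha_t * (1 - p_t)^gamma * log(p_t).

    probs  -- predicted probability of the positive (fall) class per sample
    labels -- 1 for fall, 0 for non-fall
    alpha  -- weight on the rare positive class (1 - alpha on negatives);
              illustrative value, not the paper's setting
    gamma  -- focusing parameter; gamma=0 recovers weighted cross-entropy
    """
    eps = 1e-7
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, eps), 1.0 - eps)          # numerical stability
        p_t = p if y == 1 else 1.0 - p           # prob. of the true class
        alpha_t = alpha if y == 1 else 1.0 - alpha
        total += -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
    return total / len(probs)
```

Because the `(1 - p_t)^gamma` factor shrinks toward zero for confident correct predictions, abundant easy non-fall frames contribute almost no loss, while misclassified falls retain a large gradient signal.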