A Deep Learning Framework With Domain Generalization and Few-Shot Learning for Locomotion Mode Classification Across Users, Sessions, and Prostheses

Bibliographic Details
Published in: IEEE Transactions on Medical Robotics and Bionics, p. 1
Main Authors: Anselmino, Eugenio; Simon, Ann M.; Hargrove, Levi J.
Format: Journal Article
Language: English
Published: IEEE, 2025
ISSN: 2576-3202
DOI: 10.1109/TMRB.2025.3606364


More Information
Summary: Transfemoral amputees don and doff their prostheses at least daily, making inter-session classification performance important for clinical implementation of locomotion mode classification algorithms. Here, we present a deep-learning framework based on domain-adversarial training and few-shot-learning fine-tuning to classify locomotion modes in data from previously unseen sessions or subjects across different prosthesis models. We validated the approach with a leave-one-session-out analysis repeated five times and compared it to a prosthesis-specific classifier. The dataset was created by merging data from two prosthesis models (Vanderbilt University, VU, Gen 2 and Gen 3 powered knee-ankle prostheses), for a total of 31 sessions acquired across multiple days from 11 subjects. Subjects performed five locomotion tasks: level walking, incline and decline walking, and stair ascent and descent. Since transitions between locomotion modes happen at different gait events, the analyses were repeated for both heel-strike (HS) and toe-off (TO) events. At HS events, the proposed approach achieves median F1-scores of 99.12% and 92.41% on the VU Gen 2 and Gen 3 prostheses, respectively. At TO events, it reaches median F1-scores of 96.83% with the VU Gen 2 and 94.36% with the VU Gen 3. The proposed framework is a promising solution for classifying locomotion on data from previously unseen sessions or subjects, and it supports multiple prosthesis models.
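The leave-one-session-out evaluation described in the summary can be sketched with a few lines of standard-library Python. This is only an illustration of the splitting protocol and the median-F1 aggregation, not the authors' implementation; the session IDs and F1 values below are placeholders.

```python
from statistics import median

def leave_one_session_out(sessions):
    """Yield (train_sessions, held_out_session) splits, one per session."""
    for held_out in sessions:
        train = [s for s in sessions if s != held_out]
        yield train, held_out

# 31 sessions, matching the dataset size reported in the summary;
# the IDs themselves are hypothetical.
sessions = [f"S{i:02d}" for i in range(1, 32)]

splits = list(leave_one_session_out(sessions))
print(len(splits))  # one split per session

# Per-fold F1-scores would then be summarized by their median;
# these values are placeholders, not results from the paper.
fold_f1 = [0.95, 0.97, 0.92]
print(median(fold_f1))
```

Each fold trains on 30 sessions and tests on the one held out, so performance is always measured on a session the classifier never saw during training.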