Multi-modality sensor fusion for gait classification using deep learning

Bibliographic Details
Published in: 2020 IEEE Sensors Applications Symposium (SAS), pp. 1 - 6
Main Authors: Yunas, Syed Usama; Alharthi, Abdullah; Ozanyan, Krikor B.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.03.2020
DOI: 10.1109/SAS48726.2020.9220037

Summary: Human gait has been acquired and studied through modalities such as video cameras, inertial sensors, and floor sensors. Owing to environmental constraints such as illumination, noise, drift over extended periods, or restricted environments, the classification F-score of gait classification is highly dependent on the usage scenario. This is addressed in this work by proposing sensor fusion of data obtained from 1) ambulatory inertial sensors (AIS) and 2) plastic optical fiber-based floor sensors (FS). Four gait activities are executed by 11 subjects on the FS while wearing the AIS. The proposed sensor fusion method achieves classification F-scores of 88% using an artificial neural network (ANN) and 91% using a convolutional neural network (CNN) by learning the best data representations from both modalities.
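
The record does not detail the fusion architecture, but the summary suggests a model that learns representations from both modalities jointly. The following is a minimal sketch of one common way to do this: a two-branch 1-D CNN whose per-modality features are concatenated before a shared classifier. All shapes, layer sizes, channel counts, and the window length are illustrative assumptions, not values from the paper.

# Minimal two-branch sensor-fusion sketch (PyTorch).
# Channel counts, window length, and layer sizes are assumptions for illustration.
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    def __init__(self, ais_channels=6, fs_channels=16, n_classes=4):
        super().__init__()
        # One 1-D convolutional encoder per modality (AIS and FS).
        self.ais_branch = nn.Sequential(
            nn.Conv1d(ais_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fs_branch = nn.Sequential(
            nn.Conv1d(fs_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Shared classifier over the concatenated modality features.
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, ais, fs):
        a = self.ais_branch(ais).flatten(1)  # (batch, 32)
        f = self.fs_branch(fs).flatten(1)    # (batch, 32)
        return self.classifier(torch.cat([a, f], dim=1))

# Example forward pass on random windows (batch of 8, 128 time samples each).
model = FusionCNN()
logits = model(torch.randn(8, 6, 128), torch.randn(8, 16, 128))
print(logits.shape)  # torch.Size([8, 4]) -- one score per gait activity

An ANN baseline, as mentioned in the summary, could be sketched the same way by replacing the convolutional branches with fully connected layers over flattened windows.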