Multi-modality sensor fusion for gait classification using deep learning
Published in | 2020 IEEE Sensors Applications Symposium (SAS) pp. 1 - 6 |
---|---|
Main Authors | |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.03.2020 |
DOI | 10.1109/SAS48726.2020.9220037 |
Summary | Human gait has been acquired and studied through modalities such as video cameras, inertial sensors and floor sensors. Due to many environmental constraints, such as illumination, noise, drift over extended periods, or a restricted environment, the classification f-score of gait classification is highly dependent on the usage scenario. This work addresses the issue by proposing sensor fusion of data obtained from 1) ambulatory inertial sensors (AIS) and 2) plastic optical fiber-based floor sensors (FS). Four gait activities are executed by 11 subjects on the FS whilst wearing the AIS. The proposed sensor fusion method achieves classification f-scores of 88% using an artificial neural network (ANN) and 91% using a convolutional neural network (CNN) by learning the best data representations from both modalities. |
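
The abstract does not give the exact network architecture, so the following is only an illustrative sketch of the kind of two-branch fusion classifier it describes: one branch per modality (AIS and FS), features concatenated, and a four-class output for the gait activities. All layer sizes, window lengths, and channel counts below are assumptions, not details from the paper.

```python
# Illustrative sketch only -- the paper's exact architecture is not specified
# in the abstract. Layer sizes, window length, and channel counts are assumed.
import torch
import torch.nn as nn

class FusionGaitCNN(nn.Module):
    """Two-branch 1-D CNN: one branch per modality, fused by concatenation."""
    def __init__(self, ais_channels=6, fs_channels=16, n_classes=4):
        super().__init__()
        # Branch for ambulatory inertial sensor (AIS) windows
        self.ais_branch = nn.Sequential(
            nn.Conv1d(ais_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Branch for plastic optical fiber floor sensor (FS) windows
        self.fs_branch = nn.Sequential(
            nn.Conv1d(fs_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Fused representation mapped to the four gait activities
        self.classifier = nn.Linear(32 + 32, n_classes)

    def forward(self, ais_x, fs_x):
        a = self.ais_branch(ais_x).squeeze(-1)   # (batch, 32)
        f = self.fs_branch(fs_x).squeeze(-1)     # (batch, 32)
        return self.classifier(torch.cat([a, f], dim=1))

# Example usage with hypothetical shapes: batch of 8 windows, 100 samples each
model = FusionGaitCNN()
logits = model(torch.randn(8, 6, 100), torch.randn(8, 16, 100))
print(logits.shape)  # torch.Size([8, 4])
```

The design choice shown here is feature-level fusion (concatenating learned representations from both modalities before classification), which matches the abstract's description of learning the best data representations from both sensor types.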