Human Activity Recognition Using Multi-modal Data Fusion
The automated recognition of human activity is an important computer vision task, and it has been the subject of an increasing number of interesting home, sports, security, and industrial applications. Approaches using a single sensor have generally shown unsatisfactory performance. Therefore, an approach that efficiently combines data from a heterogeneous set of sensors is required. In this paper, we propose a new method for human activity recognition fusing data obtained from inertial sensors (IMUs), surface electromyographic recording electrodes (EMGs), and visual depth sensors, such as the Microsoft Kinect®. A network of IMUs and EMGs is scattered on a human body and a depth sensor keeps the human in its field of view. From each sensor, we keep track of a succession of primitive movements over a time window, and combine them to uniquely describe the overall activity performed by the human. We show that the multi-modal fusion of the three sensors offers higher performance in activity recognition than the combination of two or a single sensor. Also, we show that our approach is highly robust against temporary occlusions, data losses due to communication failures, and other events that naturally occur in non-structured environments.
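As an illustration of the windowed, per-sensor pipeline the abstract describes, the following is a minimal sketch in Python. The primitive label set, the histogram features, and the concatenation-based late fusion are all assumptions made for illustration, not the authors' published implementation.

```python
from collections import Counter
from typing import List, Sequence

# Assumed vocabulary of primitive movements; the record does not list one.
PRIMITIVES = ["idle", "reach", "lift", "walk", "turn"]

def primitive_histogram(labels: Sequence[str]) -> List[float]:
    """Normalized histogram of the primitive labels seen in one time window."""
    counts = Counter(labels)
    total = max(len(labels), 1)
    return [counts[p] / total for p in PRIMITIVES]

def fuse_window(imu_labels: Sequence[str],
                emg_labels: Sequence[str],
                depth_labels: Sequence[str]) -> List[float]:
    """Late fusion by concatenating per-modality primitive histograms.

    A modality that drops out within the window (a depth occlusion, a lost
    IMU/EMG packet) is passed as an empty sequence and contributes a zero
    histogram: one simple way to obtain the robustness to occlusions and
    communication failures that the abstract claims.
    """
    return (primitive_histogram(imu_labels)
            + primitive_histogram(emg_labels)
            + primitive_histogram(depth_labels))

# Example window in which the depth sensor is temporarily occluded.
feature = fuse_window(
    imu_labels=["reach", "lift", "lift"],
    emg_labels=["reach", "lift", "lift", "lift"],
    depth_labels=[],  # occlusion: contributes zeros instead of failing
)
print(len(feature))  # 15 values = 3 modalities x 5 primitives
```

The resulting fixed-length vector could then be fed to any standard classifier; the concatenation step is where a third modality enlarges the feature space, consistent with the abstract's finding that fusing three sensors outperforms two or one.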
| Published in | Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Vol. 11401, pp. 946-953 |
|---|---|
| Main Authors | , , |
| Format | Book Chapter |
| Language | English |
| Published | Switzerland: Springer International Publishing, 2019 |
| Series | Lecture Notes in Computer Science |
| ISBN | 9783030134686; 3030134687 |
| ISSN | 0302-9743; 1611-3349 |
| DOI | 10.1007/978-3-030-13469-3_109 |