A Signal-Level Transfer Learning Framework for Autonomous Reconfiguration of Wearable Systems
| Published in | IEEE Transactions on Mobile Computing, Vol. 19, no. 3, pp. 513-527 |
|---|---|
| Main Authors | , |
| Format | Magazine Article |
| Language | English |
| Published | Los Alamitos: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2020 |
| ISSN | 1536-1233, 1558-0660 |
| DOI | 10.1109/TMC.2018.2878673 |
| Summary: | Machine learning algorithms, which form the core intelligence of wearables, traditionally deduce a computational model from a set of training data to detect events of interest. However, in the dynamic environment in which wearables operate, the accuracy of a computational model drops whenever changes in configuration or context of the system occur. In this paper, using transfer learning as an organizing principle, we propose a novel design framework to enable autonomous reconfiguration of wearable systems. More specifically, we focus on the cases where the specifications of sensor(s) or the subject vary compared to what is available in the training data. We develop two new algorithms for data mapping (the mapping is between the training data and the data for the current operating setting). The first data mapping algorithm combines effective methods for finding signal similarity with network-based clustering, while the second algorithm is based on finding signal motifs. The data mapping algorithms constitute the centerpiece of the transfer learning phase in our framework. We demonstrate the efficacy of the data mapping algorithms using two publicly available datasets on human activity recognition. We show that the data mapping algorithms are up to two orders of magnitude faster compared to a brute-force approach. We also show that the proposed framework overall improves activity recognition accuracy by up to 15 percent for the first dataset and by up to 32 percent for the second dataset. | 
|---|---|
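As a rough illustration of the motif-based data mapping described in the summary, the sketch below locates the window of a "new setting" signal that best matches a training-set motif under z-normalized Euclidean distance, then fits a gain/offset map between the matched segments so the new data can be translated into the training domain. All function names and the affine sensor model are assumptions made for illustration; they are not the paper's actual algorithms.

```python
import random

def znorm(window):
    """Z-normalize a window so gain/offset differences cancel out."""
    n = len(window)
    mu = sum(window) / n
    sd = (sum((v - mu) ** 2 for v in window) / n) ** 0.5 or 1.0
    return [(v - mu) / sd for v in window]

def best_match(signal, motif):
    """Brute-force sliding window: return the index and distance of the
    window in `signal` closest to `motif` after z-normalization."""
    m, zm = len(motif), znorm(motif)
    best_i, best_d = 0, float("inf")
    for i in range(len(signal) - m + 1):
        zw = znorm(signal[i:i + m])
        d = sum((a - b) ** 2 for a, b in zip(zw, zm)) ** 0.5
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d

def fit_linear_map(x, y):
    """Least-squares gain/offset so that y ~= a*x + b (a hypothetical
    stand-in for the paper's data-mapping step)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Toy demo: the "new sensor" reads 2*x + 1 relative to the training sensor.
random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(120)]
new = [2.0 * v + 1.0 for v in train]
motif = train[40:60]
idx, _ = best_match(new, motif)
a, b = fit_linear_map(motif, new[idx:idx + len(motif)])
# Translate the new sensor's data back into the training domain.
mapped = [(v - b) / a for v in new]
```

Because z-normalization is invariant to positive gain and offset, the motif is found at the matching index even though the new sensor's raw values differ, and the recovered map inverts the simulated sensor change. A brute-force scan like this is what the paper reports being up to two orders of magnitude slower than its proposed mapping algorithms.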