Hardware Deployable Edge AI Solution for Posture Classification Using mmWave Radar and Low-Computational Machine Learning Model
| Published in | IEEE Sensors Journal, Vol. 24, no. 16, pp. 26836 - 26844 |
|---|---|
| Main Authors | , , , , , |
| Format | Journal Article |
| Language | English |
| Published | New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 15.08.2024 |
| ISSN | 1530-437X, 1558-1748 |
| DOI | 10.1109/JSEN.2024.3416390 |
| Summary | Identifying correct human postures is crucial in areas such as patient care in hospitals. However, the traditional vision-based methods widely used for this purpose raise privacy concerns for the subject, and wearable sensor-based approaches are impractical for real-world scenarios. In this article, we propose a contactless, privacy-conscious, and memory-efficient posture classification system based on a millimeter-wave (mmWave) radar. The system uses 3-D point-cloud data captured with Texas Instruments' IWR1843BOOST frequency-modulated continuous-wave (FMCW) radar module to classify the subject's posture. Two datasets are extracted from these radar data: 1) an image dataset derived from the isometric view of the point cloud and 2) a spatial-coordinates dataset also extracted from the point cloud. A low-computational tiny machine learning (TinyML) model is applied to these datasets for efficient implementation on embedded hardware, a Raspberry Pi 3 B+. The proposed model's parameters were quantized to 8 bits (int8); the quantized model classifies four postures (standing, sitting, lying, and bending) with an accuracy of 98.97% on the image data. To make it even more computationally efficient, the int8-quantized TinyML model was also trained on the spatial-coordinates dataset, reaching an accuracy of 96.12%. These results highlight the efficiency and effectiveness of the proposed lightweight model, which can be deployed on edge devices for real-world applications. |
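
The record above does not include the authors' code or model architecture. As a rough, hypothetical sketch of the kind of pipeline the abstract describes, the snippet below trains a small dense classifier on stand-in spatial-coordinate features and then applies full-integer (int8) post-training quantization with TensorFlow Lite. The layer sizes, the assumed number of points per frame, and the placeholder training data are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch: small classifier over per-frame spatial coordinates,
# followed by full-integer (int8) post-training quantization with TF Lite.
# Architecture, point count, and data are assumptions, not the paper's values.
import numpy as np
import tensorflow as tf

NUM_POINTS = 64        # assumed points retained per radar frame
NUM_FEATURES = 3       # x, y, z coordinates per point
NUM_CLASSES = 4        # standing, sitting, lying, bending

# Stand-in model; the original authors' layer sizes are not published here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_POINTS * NUM_FEATURES,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data; replace with the real point-cloud coordinate dataset.
x_train = np.random.rand(512, NUM_POINTS * NUM_FEATURES).astype(np.float32)
y_train = np.random.randint(0, NUM_CLASSES, size=(512,))
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)

# A representative dataset calibrates the int8 ranges of the activations.
def representative_data_gen():
    for sample in x_train[:100]:
        yield [sample.reshape(1, -1)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_int8_model = converter.convert()

with open("posture_classifier_int8.tflite", "wb") as f:
    f.write(tflite_int8_model)
```

On an edge device such as the Raspberry Pi 3 B+, the resulting .tflite file would typically be loaded with the tflite-runtime Interpreter for on-device inference, keeping both the model file and the runtime footprint small.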