VTD: Visual and Tactile Database for Driver State and Behavior Perception
| Main Authors | |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 06.12.2024 |
| DOI | 10.48550/arxiv.2412.04888 |
| Summary: | In the domain of autonomous vehicles, the human-vehicle co-pilot system has garnered significant research attention. To address the subjective uncertainties in driver state and interaction behaviors, which are pivotal to the safety of human-in-the-loop co-driving systems, we introduce a novel visual-tactile perception method. Using a driving simulation platform, we developed a comprehensive dataset of multi-modal data under fatigue and distraction conditions. The experimental setup integrates driving simulation with signal acquisition, yielding 600 minutes of fatigue-detection data from 15 subjects and 102 takeover experiments with 17 drivers. The dataset, synchronized across modalities, serves as a robust resource for advancing cross-modal driver behavior perception algorithms. |
|---|---|
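The abstract emphasizes that the dataset is synchronized across modalities. A common way to align streams recorded at different rates (e.g., camera frames vs. tactile sensor samples) is a nearest-timestamp join. The sketch below is a minimal illustration of that technique only; the column names, sampling rates, and tolerance are assumptions, not the actual VTD schema.

```python
# Hypothetical sketch of cross-modal time alignment (not the VTD file format).
import pandas as pd

# Simulated streams: camera frames at ~30 Hz, tactile samples at ~100 Hz.
visual = pd.DataFrame({
    "t": [i / 30 for i in range(90)],      # 3 s of frame timestamps (assumed rate)
    "frame_id": range(90),
})
tactile = pd.DataFrame({
    "t": [i / 100 for i in range(300)],    # 3 s of tactile samples (assumed rate)
    "grip_force": [0.5] * 300,             # placeholder sensor value
})

# Nearest-timestamp join within a 20 ms tolerance: one aligned row per frame.
synced = pd.merge_asof(
    visual.sort_values("t"),
    tactile.sort_values("t"),
    on="t",
    direction="nearest",
    tolerance=0.02,
)
print(len(synced))  # → 90
```

With a tactile sample every 10 ms, every video frame finds a neighbor within the 20 ms tolerance, so the aligned table has exactly one row per frame.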