A Deep Learning-Based Assistive System for the Visually Impaired Using YOLO-V7
| Published in | Revue d'intelligence artificielle Vol. 37; no. 4; p. 901 |
|---|---|
| Main Authors | , |
| Format | Journal Article |
| Language | English |
| Published | Edmonton: International Information and Engineering Technology Association (IIETA), 01.08.2023 |
| Subjects | |
| ISSN | 0992-499X, 1958-5748 |
| DOI | 10.18280/ria.370409 |
| Summary: | Individuals with visual impairments frequently confront substantial difficulties in interacting with their environment, a problem that is often exacerbated by the cost and accessibility of existing assistive technologies. This study introduces a prototype for a cost-effective and accessible assistive device that employs deep learning techniques for object recognition. The proposed system utilizes the YOLO-V7 model, a deep learning algorithm trained on a comprehensive dataset encompassing various everyday objects, including US dollar denominations. In conjunction with two transfer learning-based cascade models, the system offers detection across 86 object categories. Upon object identification, the name of the item is converted into a Braille-readable format using the Python Braille library. Comprehensive experiments and analyses were undertaken to assess the efficacy of the proposed system. The results corroborate the system's effectiveness in achieving its intended purpose, demonstrating its potential to significantly aid visually impaired individuals in recognizing and interacting with objects in their environment. With a processing and Braille code generation time of 188.5 ms per frame, the model achieved recall, precision, and mAP scores of 0.81, 0.92, and 0.96, respectively. The integration of deep learning technology with high-performance platform boards has facilitated the development of a promising solution to the challenges faced by visually impaired individuals in environmental interaction. Overall, the proposed prototype represents an accessible and cost-effective assistive device, potentially revolutionizing the manner in which visually impaired individuals interact with their surroundings. |
|---|---|
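The summary describes a two-stage pipeline: YOLO-V7 identifies an object, and the object's name is then converted into a Braille-readable format using a Python Braille library. The snippet below is a minimal sketch of that second step only, under stated assumptions: the paper's actual Braille library and its API are not shown here, so a hand-rolled Grade 1 (uncontracted) letter map stands in for it, and the example label is hypothetical rather than taken from the paper's 86 detected categories.

```python
# Minimal sketch (not the paper's implementation): converting a detected
# object's class name into Unicode Braille cells, the step the summary
# describes after YOLO-V7 identifies an object. A hand-rolled Grade 1
# (uncontracted) letter map stands in for the Python Braille library
# used in the paper; the example label is hypothetical.

# Standard Braille dot numbers (1-6) for the letters a-z.
_LETTER_DOTS = {
    "a": (1,),            "b": (1, 2),          "c": (1, 4),
    "d": (1, 4, 5),       "e": (1, 5),          "f": (1, 2, 4),
    "g": (1, 2, 4, 5),    "h": (1, 2, 5),       "i": (2, 4),
    "j": (2, 4, 5),       "k": (1, 3),          "l": (1, 2, 3),
    "m": (1, 3, 4),       "n": (1, 3, 4, 5),    "o": (1, 3, 5),
    "p": (1, 2, 3, 4),    "q": (1, 2, 3, 4, 5), "r": (1, 2, 3, 5),
    "s": (2, 3, 4),       "t": (2, 3, 4, 5),    "u": (1, 3, 6),
    "v": (1, 2, 3, 6),    "w": (2, 4, 5, 6),    "x": (1, 3, 4, 6),
    "y": (1, 3, 4, 5, 6), "z": (1, 3, 5, 6),
}


def _cell(dots) -> str:
    """Map a set of raised dots to its Unicode Braille character.

    The Braille Patterns block starts at U+2800 and encodes dot n
    (1-8) as bit n-1 of the code-point offset.
    """
    return chr(0x2800 + sum(1 << (d - 1) for d in dots))


def to_braille(text: str) -> str:
    """Convert a class-name string to uncontracted Braille cells."""
    cells = []
    for ch in text.lower():
        if ch in _LETTER_DOTS:
            cells.append(_cell(_LETTER_DOTS[ch]))
        elif ch.isspace():
            cells.append(chr(0x2800))  # blank cell as a word separator
        # Digits and punctuation are skipped in this sketch.
    return "".join(cells)


if __name__ == "__main__":
    # Hypothetical top detection returned by the YOLO-V7 stage.
    label = "water bottle"
    print(label, "->", to_braille(label))
```

In the reported system this conversion runs on the detection output of each frame, contributing to the 188.5 ms per-frame processing and Braille code generation time quoted in the summary; the sketch above only illustrates the label-to-Braille mapping, not the detector or the hardware output stage.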