Research and development of sign language recognition system using neural network algorithm
          | Published in | 2024 IEEE 4th International Conference on Smart Information Systems and Technologies (SIST) pp. 321 - 327 | 
|---|---|
| Main Authors | |
| Format | Conference Proceeding | 
| Language | English | 
| Published | IEEE, 15.05.2024 |
| Subjects | |
| DOI | 10.1109/SIST61555.2024.10629529 | 
| Summary: | Sign language is an important communication tool for the hearing-impaired community. Thanks to technological progress, it is now possible to develop systems that recognize, translate, and convert sign language into text or speech from the visual representation of gestures. This article explores the development of a real-time sign language recognition system based on a neural network algorithm. The aim of the research is to develop a sign language recognition and translation system optimized for integration into web applications. The MediaPipe library is used to extract the key points and orientation of the user's hands and fingers. A software module then passes the collected data to a sequential neural network that includes Long Short-Term Memory (LSTM) layers, built with the open-source Keras library. A key feature of the presented model is the combination and interaction of convolutional and recurrent neural network (RNN) layers: alternating between layer types while reducing the number of neurons allows the network to track temporal dependencies in the data. The LSTM network is trained on a custom, editable dataset of American Sign Language gestures formed from recordings of signs. Each sign recording is preprocessed to extract three-dimensional landmarks, and these key points are fed to the LSTM layers, allowing the model to learn the complex relationships between hand movements and the corresponding gestures. Each sign sample is represented by a sequence of 24 frames. The effectiveness of the neural network algorithm is evaluated using several metrics, including model accuracy. The experimental results show that the developed software achieves a high level of accuracy in recognizing sign language gestures. The relevance of the study is confirmed by its applicability in a wide range of areas: in particular, the software has the potential to serve as a communication tool between people with disabilities and the general public, or as an assistive technology for people with hearing impairments. The authors note that the results demonstrate the feasibility and effectiveness of a neural network algorithm with LSTM layers for sign language recognition. |
|---|---|
| DOI: | 10.1109/SIST61555.2024.10629529 |
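The summary describes a Keras sequential model that combines convolutional and LSTM layers over sequences of 24 frames of MediaPipe hand landmarks. The sketch below is not the authors' code; it only illustrates that kind of architecture. The feature size 126 is an assumption (two hands x 21 MediaPipe hand landmarks x 3 coordinates), and `NUM_SIGNS` is a placeholder class count.

```python
# Minimal sketch of a conv + LSTM sign-classification model in Keras.
# Shapes and class count are illustrative assumptions, not the paper's values.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv1D, LSTM, Dense

NUM_FRAMES = 24      # frames per sign sample, as stated in the abstract
NUM_FEATURES = 126   # assumed: 2 hands * 21 landmarks * 3 coords (x, y, z)
NUM_SIGNS = 10       # hypothetical number of gesture classes

model = Sequential([
    Input(shape=(NUM_FRAMES, NUM_FEATURES)),
    Conv1D(64, kernel_size=3, activation="relu"),  # local temporal patterns
    LSTM(64, return_sequences=True),  # per-frame outputs for the next LSTM
    LSTM(32),                         # summary vector for the whole sequence
    Dense(NUM_SIGNS, activation="softmax"),  # per-class probabilities
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Alternating a convolutional layer with stacked LSTMs of decreasing width mirrors the abstract's note about switching between layer types while reducing the number of neurons.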