Recognition of Sign Language Letters Using Image Processing and Deep Learning Methods

Abstract
People can communicate with each other only when they share a common language, which makes communication particularly difficult for individuals with hearing loss; isolating themselves from society makes their lives harder still. People with hearing loss can often understand a conversation partner through lip reading, but expressing themselves to others is much more difficult. Because sign language is not widely used around the world, very few people other than those with hearing disabilities know it. In this study, the hand shapes of the sign language finger alphabet were recognized dynamically from images using deep learning methods and translated into text. The aim is to ease everyday communication between people with hearing loss and people who do not know sign language. The input to the system is an image of a hand showing a letter of the alphabet. The system interprets the hand image using deep learning methods, matches it to one of the letters of the alphabet, and displays the predicted letter together with its similarity ratio on the screen. The system was tested with a total of 1300 images. The overall accuracy of the system was 88%, with a true positive rate of 87% and a false negative rate of 13%.
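The abstract describes a classifier that maps a hand image to the most likely letter and reports a similarity ratio. The paper does not specify how that ratio is computed; a common choice is to apply a softmax over the per-letter scores of a trained network and report the highest probability. The sketch below illustrates that final step only, with hypothetical logits standing in for the output of the (unspecified) deep learning model:

```python
import numpy as np

# Hypothetical per-letter scores (logits) that a trained network might
# produce for one hand image; the actual model and values are not given
# in the paper, so these numbers are purely illustrative.
letters = ["A", "B", "C", "D", "E"]
logits = np.array([0.4, 2.9, 0.1, 1.2, 0.3])

def softmax(x):
    # Subtract the max before exponentiating for numerical stability.
    e = np.exp(x - np.max(x))
    return e / e.sum()

probs = softmax(logits)          # probabilities over the candidate letters
best = int(np.argmax(probs))     # index of the most similar letter
print(f"Predicted letter: {letters[best]} "
      f"(similarity ratio: {probs[best]:.0%})")
```

Under this assumption, the letter shown on screen is simply the argmax of the probabilities, and the similarity ratio is that letter's softmax probability.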
