Please use this identifier to cite or link to this item:
http://hdl.handle.net/2080/4312
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Bhatt, Vikas | - |
dc.contributor.author | Dash, Ratnakar | - |
dc.date.accessioned | 2024-01-16T06:04:02Z | - |
dc.date.available | 2024-01-16T06:04:02Z | - |
dc.date.issued | 2023-12 | - |
dc.identifier.citation | 5th International Conference on Machine Learning, Image Processing, Network Security, and Data Sciences (MIND), NIT, Hamirpur, 21-22 December, 2023 | en_US |
dc.identifier.uri | http://hdl.handle.net/2080/4312 | - |
dc.description | Copyright belongs to proceeding publisher | en_US |
dc.description.abstract | Hand gesture recognition is crucial to computer vision and human-computer interaction (HCI), enabling natural and intuitive interactions between humans and machines. Sign language is essential for communication between deaf-mute people and others. In this research paper, we propose an advanced deep-learning-based methodology for real-time sign language detection, designed to address communication disparities between individuals with normal hearing and the deaf community. Our study began with the creation of a dataset comprising 37 unique sign language gestures. This dataset underwent preprocessing to optimize its relevance for the subsequent training of a customized Convolutional Neural Network (CNN) model. The trained CNN model demonstrated proficiency in recognizing and interpreting real-time sign language expressions, thereby helping to mitigate communication barriers in this context. Two alternative methodologies were implemented in parallel with the proposed approach: real-time detection of American Sign Language (ASL) using MediaPipe and Teachable Machine, and real-time detection of hand gestures using a convexity-based approach. In this study, the proposed customized CNN model achieved an accuracy of 99.46% and a precision of 99%, whereas the model trained with Teachable Machine achieved 94.54% accuracy, and the convexity-based method achieved 84.44% accuracy. | en_US |
dc.subject | Hand gesture | en_US |
dc.subject | CNN | en_US |
dc.subject | HCI | en_US |
dc.subject | ASL | en_US |
dc.subject | HSV color model | en_US |
dc.subject | sign language recognition | en_US |
dc.subject | real-time recognition | en_US |
dc.title | Real-Time Hand Gesture Recognition for American Sign Language Using CNN, Mediapipe and Convexity Approach | en_US |
dc.type | Article | en_US |
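The abstract describes three recognition pipelines: a customized CNN trained on 37 gesture classes, a MediaPipe/Teachable Machine detector, and a convexity-based approach using the HSV color model. The paper's full text is not reproduced in this record, so the sketches below are illustrative only, not the authors' published implementations.

A minimal Keras sketch of a 37-class gesture classifier; the 64x64 grayscale input size, layer widths, and optimizer are assumptions:

```python
# Minimal sketch of a 37-class gesture CNN in Keras.
# The 64x64 grayscale input, layer widths, and optimizer are assumptions;
# the paper's actual architecture is not given in this record.
from tensorflow.keras import layers, models

NUM_CLASSES = 37           # 37 sign language gestures, per the abstract
INPUT_SHAPE = (64, 64, 1)  # assumed preprocessed grayscale crops

def build_gesture_cnn():
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

A real-time hand-landmark loop with MediaPipe Hands, of the kind the abstract pairs with Teachable Machine; the camera index and confidence threshold are assumptions:

```python
# Sketch of a real-time hand-landmark loop with MediaPipe Hands.
# Camera index 0 and the confidence threshold are assumptions.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand_lms in results.multi_hand_landmarks:
                mp_drawing.draw_landmarks(frame, hand_lms, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("MediaPipe Hands", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```

A common OpenCV realization of the convexity-based idea: skin segmentation in HSV, the largest contour's convex hull, and convexity defects to count raised fingers. The HSV thresholds and the angle heuristic are assumptions, not values taken from the paper:

```python
# Sketch of a convexity-defect finger counter with OpenCV.
# The HSV skin thresholds and the 90-degree angle heuristic are assumptions.
import cv2
import numpy as np

def count_fingers(frame_bgr):
    # Segment skin-coloured pixels in HSV space.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Take the largest contour as the hand.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)

    # Convexity defects of the hand contour against its convex hull.
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0

    # Defects whose far point forms a sharp angle are gaps between fingers.
    gaps = 0
    for start_i, end_i, far_i, _ in defects[:, 0]:
        start, end, far = hand[start_i][0], hand[end_i][0], hand[far_i][0]
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(end - far)
        cos_angle = (b ** 2 + c ** 2 - a ** 2) / (2 * b * c + 1e-6)
        if np.arccos(np.clip(cos_angle, -1.0, 1.0)) < np.pi / 2:
            gaps += 1
    return gaps + 1 if gaps else 0
```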
Appears in Collections: | Conference Papers |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
2023_MIND_VBhatt_Real-Time.pdf | | 248.59 kB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.