Please use this identifier to cite or link to this item: http://hdl.handle.net/2080/4312
Full metadata record
DC Field    Value    Language
dc.contributor.author    Bhatt, Vikas    -
dc.contributor.author    Dash, Ratnakar    -
dc.date.accessioned    2024-01-16T06:04:02Z    -
dc.date.available    2024-01-16T06:04:02Z    -
dc.date.issued    2023-12    -
dc.identifier.citation    5th International Conference on Machine Learning, Image Processing, Network Security, and Data Sciences (MIND), NIT Hamirpur, 21-22 December 2023    en_US
dc.identifier.uri    http://hdl.handle.net/2080/4312    -
dc.description    Copyright belongs to the proceedings publisher.    en_US
dc.description.abstract    Hand gesture recognition is crucial to computer vision and human-computer interaction (HCI), enabling natural and intuitive interactions between humans and machines. Sign language is essential for communication between deaf-mute individuals and others. In this paper, we propose an advanced deep-learning-based methodology for real-time sign language detection, designed to address communication disparities between individuals with normal hearing and the deaf community. Our study began with the creation of a dataset comprising 37 unique sign language gestures. This dataset was preprocessed to optimize its relevance for training a customized Convolutional Neural Network (CNN) model. The trained CNN model recognizes and interprets real-time sign language expressions, thereby helping to mitigate communication barriers in this context. Two alternative methodologies were implemented in parallel with the proposed approach: real-time detection of American Sign Language (ASL) using MediaPipe and Teachable Machine, and real-time detection of hand gestures using a convexity-based approach. The proposed customized CNN model achieved an accuracy of 99.46% and a precision of 99%, whereas the model trained with Teachable Machine achieved 94.54% accuracy, and the convexity-based method achieved 84.44% accuracy.    en_US
dc.subject    Hand gesture    en_US
dc.subject    CNN    en_US
dc.subject    HCI    en_US
dc.subject    ASL    en_US
dc.subject    HSV color model    en_US
dc.subject    Sign language recognition    en_US
dc.subject    Real-time recognition    en_US
dc.title    Real-Time Hand Gesture Recognition for American Sign Language Using CNN, Mediapipe and Convexity Approach    en_US
dc.type    Article    en_US
Appears in Collections: Conference Papers

Files in This Item:
File    Description    Size    Format
2023_MIND_VBhatt_Real-Time.pdf        248.59 kB    Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
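
Note on the convexity-based method: for readers curious about the third approach named in the abstract, the sketch below shows the general shape of such a pipeline in Python with OpenCV: HSV skin segmentation, then convex-hull and convexity-defect analysis of the largest contour. This is a minimal illustration under stated assumptions, not the paper's implementation; the HSV skin-tone bounds and the defect-depth threshold are hypothetical placeholders.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # default webcam

# Hypothetical HSV skin-tone bounds; real values depend on lighting and skin tone.
LOWER_SKIN = np.array([0, 30, 60], dtype=np.uint8)
UPPER_SKIN = np.array([20, 150, 255], dtype=np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Segment skin pixels in HSV, then clean the mask with a morphological opening.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)  # assume the largest skin blob is the hand
        # Hull as indices (returnPoints=False) is required by convexityDefects;
        # some OpenCV builds also require the hull indices to be monotonous.
        hull = cv2.convexHull(hand, returnPoints=False)
        if hull is not None and len(hull) > 3:
            defects = cv2.convexityDefects(hand, hull)
            valleys = 0
            if defects is not None:
                for start, end, farthest, depth in defects[:, 0]:
                    # Deep defects correspond to valleys between extended fingers;
                    # 10000 (fixed-point depth, i.e. ~39 px) is an illustrative threshold.
                    if depth > 10000:
                        valleys += 1
            cv2.putText(frame, "defect count: %d" % valleys, (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("convexity demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

The defect count can then be mapped to a small set of gestures (e.g., number of raised fingers); how the paper maps defects to its 37-gesture vocabulary is not specified in this record.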