Please use this identifier to cite or link to this item: http://hdl.handle.net/2080/4627
Full metadata record
dc.contributor.author: Gyanvarsha, Shourya
dc.contributor.author: Mohanty, Swayamjit
dc.contributor.author: Sahoo, Himanshu Sekhar
dc.contributor.author: Patra, Dipti
dc.date.accessioned: 2024-07-31T04:44:21Z
dc.date.available: 2024-07-31T04:44:21Z
dc.date.issued: 2024-07
dc.identifier.citation: IEEE International Conference on Smart Power Control and Renewable Energy (ICSPCRE), NIT Rourkela, India, 19-21 July 2024
dc.identifier.uri: http://hdl.handle.net/2080/4627
dc.description: Copyright belongs to the proceedings publisher.
dc.description.abstract: The paper presents a multimodal emotion recognition system that identifies human emotions by fusing video, audio, and facial features using state-of-the-art deep learning methods. By combining information from multiple sensory modalities, the system classifies emotions more accurately than unimodal approaches. A comprehensive set of experiments and evaluations demonstrates that the system accurately recognizes emotional states. Fusing multimodal cues yields nuanced insight into human affective responses, overcoming the limitations of individual modalities and providing a more complete picture of emotional dynamics.
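The abstract and keywords describe fusing audio cues (xlsr-Wav2Vec2.0 features) with facial cues (Action Units) before classification, but the record itself gives no architecture. The following is a minimal late-fusion sketch in PyTorch under stated assumptions: features are already extracted per clip (a pooled 1024-dimensional xlsr-Wav2Vec2.0 embedding and an illustrative 35-dimensional Action Unit vector), RAVDESS's 8 emotion classes are used, and all layer sizes and names are hypothetical rather than the authors' model.

    # Hypothetical late-fusion sketch (not the authors' architecture): feature
    # dimensions, layer sizes, and the 8-class RAVDESS label set are assumptions.
    import torch
    import torch.nn as nn

    class LateFusionEmotionClassifier(nn.Module):
        def __init__(self, audio_dim=1024, face_dim=35, num_emotions=8):
            super().__init__()
            # Project each modality into a shared embedding space.
            self.audio_proj = nn.Sequential(nn.Linear(audio_dim, 256), nn.ReLU())
            self.face_proj = nn.Sequential(nn.Linear(face_dim, 256), nn.ReLU())
            # Classify emotions from the concatenated (fused) representation.
            self.classifier = nn.Sequential(
                nn.Linear(512, 128), nn.ReLU(), nn.Dropout(0.3),
                nn.Linear(128, num_emotions),
            )

        def forward(self, audio_feats, face_feats):
            # audio_feats: pooled xlsr-Wav2Vec2.0 embedding, shape (batch, audio_dim)
            # face_feats: per-clip Action Unit feature vector, shape (batch, face_dim)
            fused = torch.cat([self.audio_proj(audio_feats),
                               self.face_proj(face_feats)], dim=-1)
            return self.classifier(fused)

    # Random tensors stand in for features extracted from RAVDESS clips.
    model = LateFusionEmotionClassifier()
    logits = model(torch.randn(4, 1024), torch.randn(4, 35))
    print(logits.shape)  # torch.Size([4, 8])

Concatenation-based late fusion is only one possible design; the paper's actual fusion strategy (e.g. score-level or attention-based fusion) may differ.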
dc.subject: audio-visual emotion recognition
dc.subject: XLSR-Wav2Vec2.0 transformer
dc.subject: transfer learning
dc.subject: Action Units
dc.subject: RAVDESS
dc.subject: speech emotion recognition
dc.subject: facial emotion recognition
dc.title: Fusion of Modalities for Emotion Recognition with Deep Learning
dc.type: Article
Appears in Collections: Conference Papers

Files in This Item:
2024_ICSPCRE_SGyanvarsha_Fusion.pdf (776.6 kB, Adobe PDF)

