Please use this identifier to cite or link to this item: http://hdl.handle.net/2080/5378
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Mondal, Shiladitya
dc.contributor.author: Chatterjee, Saptarshi
dc.date.accessioned: 2025-11-26T11:10:58Z
dc.date.available: 2025-11-26T11:10:58Z
dc.date.issued: 2025-11
dc.identifier.citation: 4th IEEE Conference on Applied Signal Processing (ASPCON), Jadavpur University, Kolkata, 21-22 November 2025 (en_US)
dc.identifier.uri: http://hdl.handle.net/2080/5378
dc.description: Copyright belongs to the proceeding publisher. (en_US)
dc.description.abstract: In-bed human pose estimation is a challenging task in computer vision due to factors such as low-light environments and occlusion by bed covers. Convolutional neural networks (CNNs) are commonly used in these types of vision tasks but struggle to capture long-range dependencies. To address this, we propose a transformer-based deep learning model that combines a pre-trained Swin Transformer with a pose estimation head. This design allows the model to integrate multi-scale features effectively and to model spatial dependencies among joints. The Simultaneously-collected multimodal Lying Pose (SLP) dataset is used for training and testing of our methodology. Our approach is uni-modal, relying solely on the long-wave infrared (LWIR) modality to predict 2D joint positions, without the need for additional modalities such as depth or pressure data. Experiments show that our proposed approach surpasses most prior methods in in-bed human pose estimation accuracy, highlighting its effectiveness. (en_US)
dc.subject: Human pose estimation (en_US)
dc.subject: Swin transformer (en_US)
dc.subject: Uni-modal (en_US)
dc.subject: LWIR (en_US)
dc.title: Towards Accurate In-Bed Human Pose Estimation Using Swin Transformer (en_US)
dc.type: Article (en_US)
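The abstract describes a pose estimation head that predicts 2D joint positions from LWIR images. A common way such heads produce coordinates is via per-joint heatmaps, decoded by taking the argmax of each channel. The sketch below illustrates that decoding step only; it is not the authors' code, and the function name, shapes, and toy values are illustrative assumptions.

```python
import numpy as np

def decode_joint_heatmaps(heatmaps):
    """Decode 2D joint coordinates from per-joint heatmaps via argmax.

    heatmaps: array of shape (num_joints, H, W), where each channel is
    one joint's predicted likelihood map (hypothetical pose-head output).
    Returns an integer array of shape (num_joints, 2) with (x, y) pixels.
    """
    num_joints, h, w = heatmaps.shape
    # Flatten each heatmap and find the index of its peak response.
    flat_idx = heatmaps.reshape(num_joints, -1).argmax(axis=1)
    # Convert flat indices back to row (y) and column (x) positions.
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return np.stack([xs, ys], axis=1)

# Toy example: 2 joints on a 4x4 grid, with peaks placed by hand.
hm = np.zeros((2, 4, 4))
hm[0, 2, 1] = 1.0  # joint 0 peak at (x=1, y=2)
hm[1, 0, 3] = 1.0  # joint 1 peak at (x=3, y=0)
coords = decode_joint_heatmaps(hm)
print(coords)  # joint 0 at (1, 2), joint 1 at (3, 0)
```

Real systems often refine the argmax location (e.g. sub-pixel offsets), but the hard argmax above is the core of heatmap decoding.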
Appears in Collections:Conference Papers

Files in This Item:
File: 2025_ASPCON_SMondal_Towards.pdf (863.85 kB, Adobe PDF)

