Please use this identifier to cite or link to this item: http://hdl.handle.net/2080/5676
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Banerjee, Pratyay
dc.contributor.author: Bhattacharjee, Panthadeep
dc.contributor.author: Jana, Angshuman
dc.date.accessioned: 2026-02-11T05:37:27Z
dc.date.available: 2026-02-11T05:37:27Z
dc.date.issued: 2026-01
dc.identifier.citation: 22nd International Conference on Distributed Computing and Intelligent Technology (ICDCIT), KIIT University, Bhubaneswar, 16-19 January 2026 (en_US)
dc.identifier.uri: http://hdl.handle.net/2080/5676
dc.description: Copyright belongs to the proceeding publisher. (en_US)
dc.description.abstract: Recent advancements in the development of large language models (LLMs) have highlighted their remarkable capabilities across a range of reasoning and decision-making challenges. Nevertheless, the clarity and logical flow of their reasoning can still be enhanced through improved self-evaluation and reflective analysis. In this work, we propose Self-Assessing Chain-of-Draft (SACoD), an approach that allows LLMs to emulate a form of self-assessment during the reasoning process by employing dual Chain-of-Draft (CoD) thinking. This technique draws inspiration from human cognitive mechanisms, where the model produces concise yet meaningful intermediate outputs while addressing tasks. SACoD harnesses the potential of iterative thinking, wherein the model first generates an initial sequence of thoughts and then critically evaluates and distills these thoughts through a subsequent round of reasoning. This recursive strategy allows for more consistent, rational, and reliable responses, thereby enhancing the overall quality of decision-making at a significantly lower cost than traditional Chain-of-Thought (CoT) thinking. We also demonstrate an effective integration of this methodology into existing LLM frameworks using simple prompt engineering. In this process, we achieved outcomes akin to those of the Learning-Refinement Model (LRM) without any extra training. (en_US)
dc.subject: Large Language Models (LLMs) (en_US)
dc.subject: Prompt Engineering (en_US)
dc.subject: Token Efficiency (en_US)
dc.subject: Robust Reasoning (en_US)
dc.subject: Critique and Refinement (en_US)
dc.title: Revise to Precise: Self-Assessing Chain-of-Draft for Robust Decision-Making in LLMs (en_US)
dc.type: Article (en_US)
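The abstract describes a two-pass prompting scheme: the model first emits concise draft thoughts, then a second round critiques and refines them. A minimal sketch of how such dual-pass CoD prompting could be wired up as simple prompt engineering; the prompt wording and the `query_llm` callable are assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch of dual-pass Chain-of-Draft prompting (SACoD-style).
# `query_llm` stands in for any text-in/text-out LLM call; the prompts below
# are invented wording, not the authors' actual prompts.

DRAFT_PROMPT = (
    "Think step by step, but keep each intermediate thought to a short draft "
    "of at most five words. End with the final answer.\n\n"
    "Question: {question}"
)

CRITIQUE_PROMPT = (
    "Below are your earlier draft thoughts for a question. Critically assess "
    "each draft, correct any mistakes, and produce a refined chain of drafts "
    "followed by the final answer.\n\n"
    "Question: {question}\n\nEarlier drafts:\n{drafts}"
)

def sacod(question: str, query_llm) -> str:
    """Two-pass self-assessment: draft first, then critique and refine."""
    first_pass = query_llm(DRAFT_PROMPT.format(question=question))
    second_pass = query_llm(
        CRITIQUE_PROMPT.format(question=question, drafts=first_pass)
    )
    return second_pass

# A stub model makes the control flow testable without any API access.
def stub_model(prompt: str) -> str:
    return "refined answer" if "Earlier drafts" in prompt else "initial drafts"

print(sacod("What is 17 * 24?", stub_model))  # prints "refined answer"
```

Because both passes use short drafts rather than full CoT paragraphs, the token budget stays small while the second pass still gets a chance to catch and correct first-pass errors.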
Appears in Collections: Conference Papers

Files in This Item:
File: 2026_ICDCIT_PBanerjee_Revise.pdf
Size: 3.82 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.