Please use this identifier to cite or link to this item:
http://hdl.handle.net/2080/5676

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Banerjee, Pratyay | - |
| dc.contributor.author | Bhattacharjee, Panthadeep | - |
| dc.contributor.author | Jana, Angshuman | - |
| dc.date.accessioned | 2026-02-11T05:37:27Z | - |
| dc.date.available | 2026-02-11T05:37:27Z | - |
| dc.date.issued | 2026-01 | - |
| dc.identifier.citation | 22nd International Conference on Distributed Computing and Intelligent Technology (ICDCIT), KIIT University, Bhubaneswar, 16-19 January 2026 | en_US |
| dc.identifier.uri | http://hdl.handle.net/2080/5676 | - |
| dc.description | Copyright belongs to the proceedings publisher. | en_US |
| dc.description.abstract | Recent advancements in the development of large language models (LLMs) have highlighted their remarkable capabilities across a range of reasoning and decision-related challenges. Nevertheless, the clarity and logical flow of their reasoning can still be enhanced through improved self-evaluation and reflective analysis. In this work, we propose Self-Assessing Chain-of-Draft (SACoD), an approach that allows LLMs to emulate a form of self-assessment during the reasoning process by employing dual Chain-of-Draft (CoD) thinking. This technique draws inspiration from human cognitive mechanisms, where the model produces concise yet meaningful intermediate outputs while addressing tasks. SACoD harnesses the potential of iterative thinking, wherein the model first generates an initial sequence of thoughts and then critically evaluates and distills these thoughts through a subsequent round of reasoning. This recursive strategy yields more consistent, rational, and reliable responses, thereby enhancing the overall quality of decision-making at a significantly lower cost than traditional Chain-of-Thought (CoT) thinking. We also demonstrate an effective integration of this methodology into existing LLM frameworks using simple prompt engineering. In this process, we achieved outcomes akin to those of the Learning-Refinement Model (LRM) without any extra training. | en_US |
| dc.subject | Large Language Models (LLMs) | en_US |
| dc.subject | Prompt Engineering | en_US |
| dc.subject | Token Efficiency | en_US |
| dc.subject | Robust Reasoning | en_US |
| dc.subject | Critique and Refinement | en_US |
| dc.title | Revise to Precise: Self-Assessing Chain-of-Draft for Robust Decision-Making in LLMs | en_US |
| dc.type | Article | en_US |
Appears in Collections: Conference Papers
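The abstract above describes SACoD as a two-round prompting scheme: a first pass produces concise draft reasoning, and a second pass critiques and distills those drafts into a final answer. The following is a minimal illustrative sketch of that dual-pass flow, assuming a generic `call_model` function that wraps any LLM completion API; the prompt wording, function names, and example are hypothetical and are not taken from the paper.

```python
# Illustrative sketch of a "draft, then self-assess and refine" prompting loop
# in the spirit of SACoD. The prompts and `call_model` callable are hypothetical.
from typing import Callable

DRAFT_PROMPT = (
    "Solve the problem step by step, but keep each intermediate step to a short "
    "draft of at most a few words.\n\nProblem: {question}\n\nDrafts:"
)

REVIEW_PROMPT = (
    "You previously produced the draft reasoning below. Critically check each "
    "draft step, correct any mistakes, and give the final answer concisely.\n\n"
    "Problem: {question}\n\nDraft reasoning:\n{drafts}\n\nRevised answer:"
)

def self_assessing_chain_of_draft(question: str,
                                  call_model: Callable[[str], str]) -> str:
    """Two-round prompting: generate concise drafts, then self-assess and refine.

    `call_model` is any function that sends a prompt string to an LLM and
    returns its text completion (e.g. a thin wrapper around a chat API).
    """
    # Round 1: concise chain-of-draft style intermediate outputs.
    drafts = call_model(DRAFT_PROMPT.format(question=question))
    # Round 2: the model critiques and distills its own drafts into a final answer.
    revised = call_model(REVIEW_PROMPT.format(question=question, drafts=drafts))
    return revised

if __name__ == "__main__":
    # Stand-in model so the sketch runs without an API key; swap in a real LLM call.
    def fake_model(prompt: str) -> str:
        return "17" if "Revised answer:" in prompt else "23 - 6 -> 17"

    print(self_assessing_chain_of_draft("What is 23 - 6?", fake_model))
```

In this sketch the token savings the abstract points to would come from keeping the round-one drafts short; the second call reuses them as context rather than re-deriving a full chain of thought.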
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| 2026_ICDCIT_PBanerjee_Revise.pdf | | 3.82 MB | Adobe PDF |
