Abstract
We introduce Diagram of Thought (DoT), a framework that models iterative reasoning in large language models (LLMs) as the construction of a directed acyclic graph (DAG) within a single model. Unlike traditional approaches that represent reasoning as linear chains or trees, DoT organizes propositions, critiques, refinements, and verifications into a cohesive DAG structure, allowing the model to explore complex reasoning pathways while maintaining logical consistency. Each node in the diagram corresponds to a proposition that has been proposed, critiqued, refined, or verified, enabling the LLM to iteratively improve its reasoning through natural language feedback. By leveraging auto-regressive next-token prediction with role-specific tokens, DoT facilitates seamless transitions between proposing ideas and critically evaluating them, providing richer feedback than binary signals. Furthermore, we formalize the DoT framework using Topos Theory, providing a mathematical foundation that ensures logical consistency and soundness in the reasoning process. This approach enhances both the training and inference processes within a single LLM, eliminating the need for multiple models or external control mechanisms. DoT offers a conceptual framework for designing next-generation reasoning-specialized models, emphasizing training efficiency, robust reasoning capabilities, and theoretical grounding. The code is available at https://github.com/diagram-of-thought/diagram-of-thought.
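As an illustration of the structure the abstract describes, below is a minimal Python sketch of a DoT-style reasoning DAG, in which each node is a proposition tagged with its role (proposition, critique, refinement, or verification) and edges record which earlier nodes a step responds to. The class and method names here are hypothetical, not the authors' implementation; see the linked repository for the actual code.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Node:
    """One reasoning step: a proposed, critiqued, refined, or verified proposition."""
    node_id: int
    role: str                                         # "proposition" | "critique" | "refinement" | "verification"
    text: str
    parents: List[int] = field(default_factory=list)  # ids of the earlier nodes this step responds to

class DiagramOfThought:
    """Minimal container for a DoT-style reasoning trace as a DAG."""

    def __init__(self) -> None:
        self.nodes: Dict[int, Node] = {}
        self._next_id = 0

    def add(self, role: str, text: str, parents: Optional[List[int]] = None) -> int:
        # New nodes may only point to already-existing nodes, so the graph stays acyclic.
        for p in parents or []:
            assert p in self.nodes, f"unknown parent node {p}"
        node_id, self._next_id = self._next_id, self._next_id + 1
        self.nodes[node_id] = Node(node_id, role, text, list(parents or []))
        return node_id

# Propose -> critique -> refine -> verify, all within one trace.
dag = DiagramOfThought()
p = dag.add("proposition", "The sum of two odd numbers is odd.")
c = dag.add("critique", "Counterexample: 3 + 5 = 8, which is even.", parents=[p])
r = dag.add("refinement", "The sum of two odd numbers is even.", parents=[p, c])
v = dag.add("verification", "(2a + 1) + (2b + 1) = 2(a + b + 1), which is even.", parents=[r])
```

Because parents must already exist when a node is added, acyclicity holds by construction, and the critique nodes carry the natural-language feedback that the abstract contrasts with binary signals.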
Community
On the Diagram of Thought (DoT)
Diagram of Thought (DoT) enhances the reasoning capabilities of large language models (LLMs) by modeling iterative reasoning as a directed acyclic graph (DAG) grounded in Topos Theory.
By integrating propositions, critiques, refinements, and verifications into a unified DAG structure, DoT enables complex logical deductions beyond traditional linear or tree-based approaches.
Using role-specific tokens to alternate between proposing and critiquing, DoT generates detailed reasoning traces within a single model, without external intervention.
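To make the role-token mechanism concrete, here is a hedged sketch of how a single auto-regressively generated trace delimited by role tokens could be parsed back into labeled reasoning spans. The token strings `<proposer>`, `<critic>`, and `<summarizer>` and the parsing logic are illustrative assumptions, not the paper's released code.

```python
import re

ROLE_TOKENS = ("<proposer>", "<critic>", "<summarizer>")  # assumed token strings

def parse_trace(trace: str) -> list:
    """Split one generated trace into (role, content) spans keyed by role tokens."""
    pattern = "(" + "|".join(re.escape(tok) for tok in ROLE_TOKENS) + ")"
    parts = re.split(pattern, trace)
    # parts alternates: [preamble, token, content, token, content, ...]
    spans = []
    for i in range(1, len(parts) - 1, 2):
        role, content = parts[i].strip("<>"), parts[i + 1].strip()
        if content:
            spans.append((role, content))
    return spans

trace = (
    "<proposer> The sum of two odd numbers is odd. "
    "<critic> Counterexample: 3 + 5 = 8, which is even. "
    "<proposer> Revised: the sum of two odd numbers is even. "
    "<summarizer> Verified: (2a + 1) + (2b + 1) = 2(a + b + 1)."
)
for role, content in parse_trace(trace):
    print(f"[{role}] {content}")
```

In this reading, the role tokens serve double duty: at inference they cue the model to switch roles within one next-token-prediction pass, and afterward they let the trace be reassembled into the DAG without any external controller.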
The following similar papers were recommended by the Semantic Scholar API:
- Expediting and Elevating Large Language Model Reasoning via Hidden Chain-of-Thought Decoding (2024)
- CoT Rerailer: Enhancing the Reliability of Large Language Models in Complex Reasoning Tasks through Error Detection and Correction (2024)
- Strategic Chain-of-Thought: Guiding Accurate Reasoning in LLMs through Strategy Elicitation (2024)
- Critic-CoT: Boosting the Reasoning Abilities of Large Language Model via Chain-of-Thoughts Critic (2024)
- Self-Harmonized Chain of Thought (2024)