VoladorLuYu's Collections: Symbolic LLM Reasoning
- CRUXEval: A Benchmark for Code Reasoning, Understanding and Execution (arXiv:2401.03065)
- DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence (arXiv:2401.14196)
- WaveCoder: Widespread And Versatile Enhanced Instruction Tuning with Refined Data Generation (arXiv:2312.14187)
- On the Effectiveness of Large Language Models in Domain-Specific Code Generation (arXiv:2312.01639)
- AST-T5: Structure-Aware Pretraining for Code Generation and Understanding (arXiv:2401.03003)
- Magicoder: Source Code Is All You Need (arXiv:2312.02120)
- InstructCoder: Empowering Language Models for Code Editing (arXiv:2310.20329)
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions (arXiv:2312.12450)
- LLM-Assisted Code Cleaning For Training Accurate Code Generators (arXiv:2311.14904)
- The Program Testing Ability of Large Language Models for Code (arXiv:2310.05727)
- Binding Language Models in Symbolic Languages (arXiv:2210.02875)
- Small LLMs Are Weak Tool Learners: A Multi-LLM Agent (arXiv:2401.07324)
- From Good to Great: Improving Math Reasoning with Tool-Augmented Interleaf Prompting (arXiv:2401.05384)
- T-Eval: Evaluating the Tool Utilization Capability Step by Step (arXiv:2312.14033)
- Chain-of-Thought Reasoning Without Prompting (arXiv:2402.10200)
- Deductive Beam Search: Decoding Deducible Rationale for Chain-of-Thought Reasoning (arXiv:2401.17686)
- Large Language Models Are Neurosymbolic Reasoners (arXiv:2401.09334)
- PathFinder: Guided Search over Multi-Step Reasoning Paths (arXiv:2312.05180)
- Interpreting Pretrained Language Models via Concept Bottlenecks (arXiv:2311.05014)
- Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping (arXiv:2402.14083)
- ReWOO: Decoupling Reasoning from Observations for Efficient Augmented Language Models (arXiv:2305.18323)
- Why think step by step? Reasoning emerges from the locality of experience (arXiv:2304.03843)
- Chain-of-Instructions: Compositional Instruction Tuning on Large Language Models (arXiv:2402.11532)
- Do Large Language Models Latently Perform Multi-Hop Reasoning? (arXiv:2402.16837)
- CodeS: Towards Building Open-source Language Models for Text-to-SQL (arXiv:2402.16347)
- StarCoder 2 and The Stack v2: The Next Generation (arXiv:2402.19173)
- Common 7B Language Models Already Possess Strong Math Capabilities (arXiv:2403.04706)
- Inference via Interpolation: Contrastive Representations Provably Enable Planning and Inference (arXiv:2403.04082)
- Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking (arXiv:2403.09629)
- Boosting of Thoughts: Trial-and-Error Problem Solving with Large Language Models (arXiv:2402.11140)
- Structured Chain-of-Thought Prompting for Code Generation (arXiv:2305.06599)
- STaR-GATE: Teaching Language Models to Ask Clarifying Questions (arXiv:2403.19154)
- LLM-R2: A Large Language Model Enhanced Rule-based Rewrite System for Boosting Query Efficiency (arXiv:2404.12872)
- Iterative Reasoning Preference Optimization (arXiv:2404.19733)
- Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models (arXiv:2406.04271)
- On Memorization of Large Language Models in Logical Reasoning (arXiv:2410.23123)