PokerBench: Training Large Language Models to become Professional Poker Players Paper • 2501.08328 • Published Jan 2025 • 13
Is Bigger Edit Batch Size Always Better? -- An Empirical Study on Model Editing with Llama-3 Paper • 2405.00664 • Published May 1, 2024 • 20
Rebuilding ROME : Resolving Model Collapse during Sequential Model Editing Paper • 2403.07175 • Published Mar 11, 2024
Probing Quantifier Comprehension in Large Language Models: Another Example of Inverse Scaling Paper • 2306.07384 • Published Jun 12, 2023
Model Editing at Scale leads to Gradual and Catastrophic Forgetting Paper • 2401.07453 • Published Jan 15, 2024 • 1
Self-Assessment Tests are Unreliable Measures of LLM Personality Paper • 2309.08163 • Published Sep 15, 2023
Are ChatGPT and GPT-4 Good Poker Players? -- A Pre-Flop Analysis Paper • 2308.12466 • Published Aug 23, 2023