---
license: mit
license_link: https://huggingface.co/OpenPipe/Deductive-Reasoning-Qwen-32B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model:
- OpenPipe/Deductive-Reasoning-Qwen-32B
tags:
- chat
library_name: transformers
---
Credits to [@DavidAU](https://huggingface.co/OpenPipe/Deductive-Reasoning-Qwen-32B/discussions/4#67ce59237c6e6ea1cc54fc22) for his findings.

Quantized with llama.cpp release b4837.

Original model: [OpenPipe/Deductive-Reasoning-Qwen-32B](https://huggingface.co/OpenPipe/Deductive-Reasoning-Qwen-32B)
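For reference, here is a minimal sketch of running one of these quants locally via the `llama-cpp-python` bindings; the quant filename, context size, and prompt are illustrative assumptions, not part of this repo:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF filename below is hypothetical; substitute the quant file you
# actually download from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Deductive-Reasoning-Qwen-32B-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,        # context window; raise it for long deduction puzzles
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Three suspects each left at a different hour. Who left last?"}
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```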
# Deductive-Reasoning-Qwen-32B

Deductive Reasoning Qwen 32B is a reinforcement fine-tune of [Qwen 2.5 32B Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) to solve challenging deduction problems from the [Temporal Clue](https://github.com/bradhilton/temporal-clue) dataset, trained by [OpenPipe](https://openpipe.ai)!

Here are some additional resources to check out:
- [Blog Post](https://openpipe.ai/blog/using-grpo-to-beat-o1-o3-mini-and-r1-on-temporal-clue)
- [Training Recipe](https://github.com/openpipe/deductive-reasoning)
- [RL Experiments](https://github.com/openpipe/rl-experiments)
- [Deductive Reasoning Qwen 14B](https://huggingface.co/OpenPipe/Deductive-Reasoning-Qwen-14B)

If you're interested in training your own models with reinforcement learning or just chatting, feel free to [reach out](https://openpipe.ai/contact) or email Kyle directly at kyle@openpipe.ai!