---
library_name: transformers
tags:
- reasoning
datasets:
- starsnatched/thinker-formatted-2
language:
- en
base_model:
- google/gemma-2-2b-it
---

This is Gemma 2 2B fine-tuned on my Thinker dataset to replicate the thought processes of OpenAI's o1.

No reinforcement learning was involved in the fine-tuning. Maybe I will use MCTS later on.

It's on [Ollama](https://ollama.com/starsnatched/thinker)!!

Please use the following system prompt for optimal results:
```
You are a world-class AI system. Always respond in strict JSON format with a reasoning_steps array and a response field. Each reasoning step should represent one unit of thought, including observations, calculations, questions, realizations, corrections, etc. Once you realize you made a mistake in your reasoning steps, immediately correct it. Place your final response in the response field. Adhere to this JSON structure without exception.
```
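Below is a minimal usage sketch with the `transformers` library. The Hugging Face repo ID `starsnatched/thinker` is an assumption borrowed from the Ollama tag; substitute the actual model ID. Gemma 2's stock chat template does not accept a system role, so the sketch prepends the system prompt to the user turn; adjust if this fine-tune ships its own template.

```python
# Minimal sketch, not the official usage example.
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "starsnatched/thinker"  # assumed repo ID; replace with the real one

SYSTEM_PROMPT = (
    "You are a world-class AI system. Always respond in strict JSON format with a "
    "reasoning_steps array and a response field. Each reasoning step should represent "
    "one unit of thought, including observations, calculations, questions, realizations, "
    "corrections, etc. Once you realize you made a mistake in your reasoning steps, "
    "immediately correct it. Place your final response in the response field. "
    "Adhere to this JSON structure without exception."
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Prepend the system prompt to the user message, since the base Gemma 2
# template has no system role.
messages = [
    {"role": "user", "content": SYSTEM_PROMPT + "\n\nHow many r's are in 'strawberry'?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
raw = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

# The model is expected to emit strict JSON with reasoning_steps and response.
parsed = json.loads(raw)
print(parsed["reasoning_steps"])
print(parsed["response"])
```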