aashish1904 committed a767a9c (verified) · 1 Parent(s): bf19c78

Upload README.md with huggingface_hub

Files changed (1): README.md +115 -0
README.md ADDED
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
datasets:
- Lyte/Reasoning-Paused
pipeline_tag: text-generation
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Llama-3.2-3B-Overthinker-GGUF

This is a quantized version of [Lyte/Llama-3.2-3B-Overthinker](https://huggingface.co/Lyte/Llama-3.2-3B-Overthinker) created using llama.cpp.
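
Since this repo ships GGUF files, you can also run it locally without transformers, for example via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). Below is a minimal sketch; the quant filename glob is an assumption, so check the repo's file list for the exact names. Note that the multi-stage chat template used in the inference code further down is a transformers-side feature, so a plain chat completion here returns a single response.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python huggingface_hub).
# The filename glob below is an assumption -- check the repo's file list for exact quant names.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Llama-3.2-3B-Overthinker-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; pick any file available in the repo
    n_ctx=8192,               # the card recommends at least 4k, up to 16k, of context
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the process of photosynthesis."}],
    temperature=0.8,
    top_p=0.95,
)
print(out["choices"][0]["message"]["content"])
```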

# Original Model Card

# Model Overview:

- **Training Data**: This model was trained on a dataset with columns for initial reasoning, step-by-step thinking, verifications after each step, and final answers based on the full context. Is it better than the original base model? Hard to say without proper evaluations, and I don't have the resources to run them manually.

- **Context Handling**: The model benefits from larger contexts (a minimum of 4k up to 16k tokens, though it was trained on 32k tokens). It tends to "overthink," so providing a longer context helps it perform better.

- **Performance**: Based on my very few manual tests, the model seems to excel in conversational settings, especially for mental health support, creative tasks, and explaining things. However, I encourage you to try it out yourself using this [Colab Notebook](https://colab.research.google.com/drive/1dcBbHAwYJuQJKqdPU570Hddv_F9wzjPO?usp=sharing).

- **Dataset Note**: The publicly available dataset is only a partial version. The full dataset was originally designed for a custom Mixture of Experts (MoE) architecture, but I couldn't afford to run the full experiment.

- **Acknowledgment**: Special thanks to KingNish for reigniting my passion to revisit this project. I almost abandoned it after my first attempt a month ago. Enjoy this experimental model!

# Inference Code:

- Feel free to make the initial reasoning, steps, and verifications collapsible, showing only the final answer for an o1-style feel (see the sketch after the code block below).
- **Note:** One feature here is the ability to control how many steps and verifications you want, via the `num_steps` argument.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Lyte/Llama-3.2-3B-Overthinker"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

def generate_response(prompt, max_tokens=16384, temperature=0.8, top_p=0.95, repeat_penalty=1.1, num_steps=3):
    messages = [{"role": "user", "content": prompt}]

    # Generate reasoning (add_reasoning_prompt is a custom kwarg handled by this model's chat template)
    reasoning_template = tokenizer.apply_chat_template(messages, tokenize=False, add_reasoning_prompt=True)
    reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.device)

    reasoning_ids = model.generate(
        **reasoning_inputs,
        max_new_tokens=max_tokens // 3,
        do_sample=True,  # sampling must be enabled for temperature/top_p to take effect
        temperature=temperature,
        top_p=top_p,
        repetition_penalty=repeat_penalty
    )
    reasoning_output = tokenizer.decode(reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True)

    # Generate thinking (step-by-step and verifications; num_steps controls how many)
    messages.append({"role": "reasoning", "content": reasoning_output})
    thinking_template = tokenizer.apply_chat_template(messages, tokenize=False, add_thinking_prompt=True, num_steps=num_steps)
    thinking_inputs = tokenizer(thinking_template, return_tensors="pt").to(model.device)

    thinking_ids = model.generate(
        **thinking_inputs,
        max_new_tokens=max_tokens // 3,
        do_sample=True,
        temperature=temperature,
        top_p=top_p,
        repetition_penalty=repeat_penalty
    )
    thinking_output = tokenizer.decode(thinking_ids[0, thinking_inputs.input_ids.shape[1]:], skip_special_tokens=True)

    # Generate final answer from the full reasoning + thinking context
    messages.append({"role": "thinking", "content": thinking_output})
    answer_template = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    answer_inputs = tokenizer(answer_template, return_tensors="pt").to(model.device)

    answer_ids = model.generate(
        **answer_inputs,
        max_new_tokens=max_tokens // 3,
        do_sample=True,
        temperature=temperature,
        top_p=top_p,
        repetition_penalty=repeat_penalty
    )
    answer_output = tokenizer.decode(answer_ids[0, answer_inputs.input_ids.shape[1]:], skip_special_tokens=True)
    return reasoning_output, thinking_output, answer_output

# Example usage:
prompt = "Explain the process of photosynthesis."
reasoning, thinking, answer = generate_response(prompt, num_steps=5)

print("Reasoning:", reasoning)
print("Thinking:", thinking)
print("Answer:", answer)
```
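
As mentioned above, one way to get the collapsible, answer-first presentation is to wrap the intermediate stages in HTML `<details>` blocks. A minimal sketch, assuming a Jupyter or Colab environment:

```python
# Minimal sketch for notebook environments (Jupyter/Colab): collapse the
# intermediate stages into <details> blocks and show only the final answer.
import html
from IPython.display import HTML, display

reasoning, thinking, answer = generate_response("Explain the process of photosynthesis.")
display(HTML(
    f"<details><summary>Initial reasoning</summary><pre>{html.escape(reasoning)}</pre></details>"
    f"<details><summary>Steps and verifications</summary><pre>{html.escape(thinking)}</pre></details>"
    f"<p>{html.escape(answer)}</p>"
))
```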

# Uploaded model

- **Developed by:** Lyte
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)