Update README.md
README.md

---
library_name: transformers
license: mit
datasets:
- eagle0504/openai-gsm8k-enhanced-using-together-ai-deepseek-train8k-test1k-v1
new_version: Qwen/Qwen2.5-0.5B
pipeline_tag: text-generation
---

# Qwen2.5-0.5B Fine-Tuned on GSM8K with DeepSeek Augmentation

## Model Overview

This is a fine-tuned version of **Qwen2.5-0.5B** trained for **mathematical reasoning** on the **GSM8K dataset**, augmented with **Chain-of-Thought (CoT)** reasoning from **DeepSeek-V3**. The fine-tuning teaches the model to generate detailed **step-by-step solutions** to grade-school math problems, improving logical reasoning and interpretability.

### Key Features
- **Base model:** `Qwen/Qwen2.5-0.5B`
- **Fine-tuned on:** `eagle0504/openai-gsm8k-enhanced-using-together-ai-deepseek-train8k-test1k-v1`
- **Optimized for:** mathematical problem-solving and step-by-step reasoning
- **Fine-tuned with:** LoRA (Low-Rank Adaptation) for parameter-efficient training
- **Chain-of-Thought (CoT):** generates clear, structured reasoning for each problem
- **Inference-ready:** available on the 🤗 Hugging Face Hub

---

## Model Details

### Description
- **Developed by:** [Your Name or Organization]
- **Funded by:** [Optional: mention if funded]
- **Shared by:** Hugging Face Hub
- **Model type:** causal language model (text generation)
- **Languages:** English (`en`)
- **License:** MIT
- **Fine-tuned from:** `Qwen/Qwen2.5-0.5B`

### Model Repository
- **Hugging Face model page:** [Fine-tuned Qwen2.5-0.5B](https://huggingface.co/your-repo-id)

---

## How to Load & Use This Model
You can load this model with 🤗 `transformers` as follows:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Define the model repo ID (replace with the actual HF repo)
model_name = "your-repo-id"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Move the model to GPU if one is available
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Example inference
question = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
inputs = tokenizer(question, return_tensors="pt").to(device)
output = model.generate(**inputs, max_length=200)

# Decode and print the response
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
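
Because the model was fine-tuned on prompts formatted to elicit step-by-step reasoning, wrapping the raw question in a similar instruction may improve output quality. The exact training template is not documented in this card, so the `format_prompt` helper below is only a hypothetical sketch; replace it with the template actually used during fine-tuning.

```python
# Hypothetical prompt wrapper: the exact training template is not documented
# in this card, so adjust it to match the format used during fine-tuning.
def format_prompt(question: str) -> str:
    return (
        "Solve the following math problem step by step, "
        "then give the final answer on a line starting with '#### Answer:'.\n\n"
        f"Question: {question}\n\nSolution:"
    )

inputs = tokenizer(format_prompt(question), return_tensors="pt").to(device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```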

---

## Training Details

### Training Data
The model was fine-tuned on the **GSM8K** dataset, specifically the augmented version
[`eagle0504/openai-gsm8k-enhanced-using-together-ai-deepseek-train8k-test1k-v1`](https://huggingface.co/datasets/eagle0504/openai-gsm8k-enhanced-using-together-ai-deepseek-train8k-test1k-v1).

This dataset contains:
- **8K training samples** (`train` split)
- **1K test samples** (`test` split)
- Features: `"question"`, `"answer"`, and `"cot"` (Chain-of-Thought)
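
To inspect the training data, the dataset can be loaded with the 🤗 `datasets` library. This is a minimal sketch assuming the split and field names listed above (`train`/`test`, `question`, `answer`, `cot`):

```python
from datasets import load_dataset

# Load the augmented GSM8K dataset (8K train / 1K test)
dataset = load_dataset(
    "eagle0504/openai-gsm8k-enhanced-using-together-ai-deepseek-train8k-test1k-v1"
)

print(dataset)  # shows the train/test splits and their sizes

# Inspect one training example: question, DeepSeek CoT trace, and answer
example = dataset["train"][0]
print(example["question"])
print(example["cot"])
print(example["answer"])
```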

### Training Procedure
- **Preprocessing:** each question was formatted with a prompt template that encourages step-by-step reasoning.
- **Training framework:** `transformers`, `trl`, and `unsloth` for efficient fine-tuning.
- **Fine-tuning strategy:** LoRA (Low-Rank Adaptation)
  - Applied to the query and value projection layers (`q_proj`, `v_proj`)
  - LoRA hyperparameters: `r=8`, `lora_alpha=16`, `lora_dropout=0.1` (see the sketch after this list)
- **Optimization:**
  - Mixed-precision training (`fp16`)
  - Batch size: 16
  - Gradient accumulation: 1
  - Learning rate: 2e-4
- **Training time:** approx. **10,446 seconds (~3 hours)**
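
For reference, here is a minimal sketch of this LoRA setup using `peft` and `transformers` with the hyperparameters listed above. The actual training script (which reportedly also used `trl` and `unsloth`) is not included in this card, so treat the output directory and epoch count below as assumptions.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# LoRA configuration matching the hyperparameters listed above
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],  # query and value projections
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
model = get_peft_model(base_model, lora_config)

# Optimization settings matching the list above
training_args = TrainingArguments(
    output_dir="qwen2.5-0.5b-gsm8k-lora",  # assumed name
    per_device_train_batch_size=16,
    gradient_accumulation_steps=1,
    learning_rate=2e-4,
    fp16=True,
    num_train_epochs=10,  # assumption: the training log below ends near epoch 10
)
```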

---

## Performance & Evaluation

### Training Performance
| Step | Loss   | Grad Norm | Learning Rate | Epoch  |
|------|--------|-----------|---------------|--------|
| 10   | 2.1319 | 3.656     | 2e-4          | 0.0107 |
| 1000 | 0.2013 | 0.328     | 2.3e-7        | 9.98   |
| 9340 | 0.2048 | 0.341     | 2.1e-8        | 9.99   |

### Testing & Expected Results
The model was evaluated on the **1K test samples** and showed **strong accuracy in multi-step problem-solving**.

Example expected response:
```text
To solve the problem, we first find the clips sold in May:
Clips in May = 48 / 2 = 24
Next, we find the total:
Total Clips = 48 + 24 = 72
#### Answer: 72
```
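
Exact-match accuracy on the test split can be estimated by parsing the final answer from the generated text. The sketch below is an illustrative example, not the evaluation script actually used: it reuses the `dataset`, `model`, `tokenizer`, and `device` objects from the earlier snippets and assumes the final answer appears on a `#### Answer:`-style line, as in the example above.

```python
import re

def extract_answer(text: str):
    """Pull the final numeric answer from a '#### Answer:' (or '####') line, if present."""
    match = re.search(r"####\s*(?:Answer:)?\s*([-\d,\.]+)", text)
    return match.group(1).replace(",", "") if match else None

correct = 0
test_split = dataset["test"]
for row in test_split:
    inputs = tokenizer(row["question"], return_tensors="pt").to(device)
    output = model.generate(**inputs, max_new_tokens=256)
    prediction = extract_answer(tokenizer.decode(output[0], skip_special_tokens=True))
    reference = extract_answer(row["answer"])
    correct += int(prediction is not None and prediction == reference)

print(f"Exact-match accuracy: {correct / len(test_split):.3f}")
```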

---

## Bias, Risks, and Limitations

### Potential Risks
- May **hallucinate** incorrect reasoning steps if prompts are unclear.
- Could struggle with **complex mathematical problems** outside its training data.
- **Limited generalization** to non-math reasoning tasks.

### Recommendations
- If using this model for **critical applications**, verify outputs with human review.
- For **better performance**, fine-tune on **larger datasets** with real-world numerical reasoning.

---

## Environmental Impact
**Estimated carbon emissions:**
- **Hardware used:** NVIDIA A100 GPU
- **Training time:** ~3 hours
- **Estimated CO2 emitted:** ~5.6 kg CO2eq (estimated with the [ML Impact Calculator](https://mlco2.github.io/impact#compute))

---

## Citation
If you use this model in your research, please cite it as:

```bibtex
@misc{Upcoming,
  title={Upcoming},
  author={Yiqiao},
  year={2025}
}
```