Update README.md
README.md

# Model Card for Model ID

The Prob-Gen-8B Large Language Model (LLM) is a fine-tuned version of [Llama-3-8B from Meta](https://huggingface.co/meta-llama/Meta-Llama-3-8B). Prob-Gen-8B is intended to generate math problems for 8th graders under different problem contexts and tested knowledge areas.

### Model Description

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

The model can be loaded with Hugging Face's Transformers library:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "duke-nlp/Porb-Gen-8B"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    use_fast=False,
    legacy=False
)

model_input = tokenizer(
    """Please generate a math problem and 2 to 4 options for 8th graders with the following requirements:
Problem context: <specified-context>
Tested knowledge: <specified-knowledge>""",
    return_tensors="pt",
).to("cuda")

model_output = model.generate(
    model_input["input_ids"],
    max_new_tokens=256,
    do_sample=True,
    # additional generation arguments as needed (elided in the original)
)

tokenizer.batch_decode(model_output)
```
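
Note that `generate` returns the prompt tokens followed by the continuation, so `tokenizer.batch_decode(model_output)` repeats the prompt. If only the newly generated problem is wanted, one optional post-processing step (an assumption, not prescribed by this card) is to slice off the prompt length before decoding:

```python
# Decode only the tokens generated after the prompt.
prompt_length = model_input["input_ids"].shape[-1]
generated_text = tokenizer.decode(
    model_output[0][prompt_length:],
    skip_special_tokens=True,
)
print(generated_text)
```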

## Bias, Risks, and Limitations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The model is fine-tuned on 3,644 GPT-4-generated 8th-grade problems, each of which was also annotated and evaluated by humans. An example data point is shown below:
```json
{
  "options": [
    {
      "optionText": "Multiply 500 by 3/5 to get 300 tons.",
      "correct": true
    },
    {
      "optionText": "Divide 500 by 3 to get 166.67 tons.",
      "correct": false
    }
  ],
  "problemContext": "Environmental issues",
  "evaluated_problem": "A town's recycling plant recycles plastic and glass in a ratio of 3:2. If the plant processes 500 tons of recyclables, how much of it is plastic?",
  "unitTitle": "Solving Multi-Step Problems with Proportional Relationships"
}
```
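
For illustration only (this helper is hypothetical and not part of the released code), a data point with the fields shown above can be flattened into the question/option/correctness passage format illustrated later under Results:

```python
def build_target_passage(example: dict) -> str:
    """Flatten one annotated data point into the passage format shown in Results.

    Hypothetical sketch; assumes the fields "evaluated_problem" and "options"
    (with "optionText"/"correct") from the example data point above.
    """
    lines = [f"Question: {example['evaluated_problem']}"]
    for i, option in enumerate(example["options"], start=1):
        lines.append(f"Option {i}: {option['optionText']}")
        lines.append(f"Is correct: {option['correct']}")
    return "\n".join(lines)
```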

### Prompting

The model is trained with the following prompt:
```python
"""Please generate a math problem and 2 to 4 options for 8th graders with the following requirements:
Problem context: <specified-context>
Tested knowledge: <specified-knowledge>"""
```

The problem contexts shown in the dataset are:
```
"Video Games",
"Fashion",
"Influencers/YouTubers",
"Apps and Technology",
"Movies/TV shows",
"Sports",
"Music and Concerts",
"Social Media",
"Environmental issues"
```

And the tested knowledge areas shown in the dataset are:
```
"Operations with Rational Numbers",
"Expressions and Equations",
"Surface Area and Volume",
"Arithmetic in Base Ten",
"Evaluating Numeric Expressions",
"Properties and Theorems of Angles",
"Data Sets",
"Rational Number Arithmetic",
"Functions and Volume",
"Linear Equations and Linear Systems",
"Representing Data and Distributions",
"Algebraic Expressions",
"Ratios and Rates",
"Solving Equations and Systems of Equations",
"Operations with Integers",
"Scatter Plots",
"Solving Percentage Problems with Proportional Relationships",
"Associations in Data",
"Expressions, Equations, and Inequalities",
"Linear Relationships",
"Representing Data",
"Solving Multi-Step Problems with Proportional Relationships",
"Dividing Fractions",
"Area, Surface Area, and Volume",
"Equivalent Algebraic Expressions",
"Key Features of Linear Equations",
"Proportional Relationships and Percentages",
"Transformations",
"Representing Proportional Relationships"
```
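
As a minimal sketch (the helper and the shortened placeholder lists below are assumptions, not part of the model), the prompt template above can be filled by choosing one value from each of the two lists:

```python
import random

# Placeholder subsets of the contexts and knowledge areas documented above.
CONTEXTS = ["Video Games", "Sports", "Environmental issues"]
KNOWLEDGE_AREAS = ["Expressions and Equations", "Ratios and Rates", "Scatter Plots"]

def build_prompt(context: str, knowledge: str) -> str:
    # Mirrors the training prompt template shown above.
    return (
        "Please generate a math problem and 2 to 4 options for 8th graders "
        "with the following requirements:\n"
        f"Problem context: {context}\n"
        f"Tested knowledge: {knowledge}"
    )

print(build_prompt(random.choice(CONTEXTS), random.choice(KNOWLEDGE_AREAS)))
```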

### Results

Here is an example passage from the training data:
```
Please generate a math problem and options for 8th graders with the following requirements:
Problem context: Movies/TV shows
Tested knowledge: Representing Data and Distributions
Question: Lucas counted the number of episodes in 12 seasons of a TV show. He recorded: 48, 51, 52, 55, 56, 58, 59, 60, 61, 62, 65, 67. How should he create a frequency table for the number of episodes?
Option 1: Group the data into intervals of 10, then count the number of seasons in each interval.
Is correct: False
Option 2: Group the data into intervals of 5 starting from 45, then count the number of seasons in each interval.
Is correct: True
```

And here is an example passage generated by the fine-tuned model:
```
Please generate a math problem and 2 to 4 options for 8th graders with the following requirements:
Problem context: Video Games
Tested knowledge: Expressions and Equations
Question: In a video game, the power of a character's weapon is calculated by the formula \(a^b\). If the weapon's power is \(2^{4}\), what is the value of \(a\) and \(b\)?
Option 1: \(a = 2\) and \(b = 4\)
Is correct: True
Option 2: \(a = 4\) and \(b = 2\)
Is correct: False
Option 3: \(a = 2\) and \(b = 2\)
Is correct: False
Option 4: \(a = 2\) and \(b = 8\)
Is correct: False
```
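
Downstream users may want to turn a generated passage back into structured data. The parser below is a hypothetical sketch based only on the passage layout shown above; it is not shipped with the model:

```python
import re

def parse_passage(text: str) -> dict:
    """Parse a generated passage into a dict of question and options.

    Hypothetical helper; assumes the "Question:" / "Option N:" / "Is correct:"
    layout illustrated in the examples above.
    """
    question_match = re.search(r"Question:\s*(.+)", text)
    options = [
        {"optionText": opt.strip(), "correct": correct.strip().lower() == "true"}
        for opt, correct in re.findall(r"Option \d+:\s*(.+)\nIs correct:\s*(\w+)", text)
    ]
    return {
        "question": question_match.group(1).strip() if question_match else None,
        "options": options,
    }
```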

## Environmental Impact