Datasets:
gsm8k
Tasks:
Text2Text Generation
Modalities:
Text
Formats:
parquet
Languages:
English
Size:
10K - 100K
ArXiv:
Tags:
math-word-problems
License:
Update README.md (#2)
- Update README.md (75b6ad5f3cc2c88d86e676c39d143d090169c948)
Co-authored-by: Aymeric Roucher <A-Roucher@users.noreply.huggingface.co>
README.md
CHANGED
@@ -89,10 +89,15 @@ dataset_info:
 ### Dataset Summary
 
 GSM8K (Grade School Math 8K) is a dataset of 8.5K high quality linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
+- These problems take between 2 and 8 steps to solve.
+- Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − × ÷) to reach the final answer.
+- A bright middle school student should be able to solve every problem: from the paper, "Problems require no concepts beyond the level of early Algebra, and the vast majority of problems can be solved without explicitly defining a variable."
+- Solutions are provided in natural language, as opposed to pure math expressions. From the paper: "We believe this is the most generally useful data format, and we expect it to shed light on the properties of large language models' internal monologues."
 
 ### Supported Tasks and Leaderboards
 
-
+This dataset is generally used to test logic and math in language modelling.
+It has been used for many benchmarks, including the [LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
 
 ### Languages
 
@@ -146,9 +151,9 @@ The data fields are the same among `main` and `socratic` configurations and thei
 
 #### Initial Data Collection and Normalization
 
-From the paper:
+From the paper, appendix A:
 
-> We initially collected a starting set of a thousand problems and natural language solutions by hiring freelance contractors on Upwork (upwork.com). We then worked with Surge AI (surgehq.ai), an NLP data labeling platform, to scale up our data collection. After collecting the full dataset, we asked workers to re-solve all problems, with no workers re-solving problems they originally wrote. We checked whether their final answers agreed with the original
+> We initially collected a starting set of a thousand problems and natural language solutions by hiring freelance contractors on Upwork (upwork.com). We then worked with Surge AI (surgehq.ai), an NLP data labeling platform, to scale up our data collection. After collecting the full dataset, we asked workers to re-solve all problems, with no workers re-solving problems they originally wrote. We checked whether their final answers agreed with the original solutions, and any problems that produced disagreements were either repaired or discarded. We then performed another round of agreement checks on a smaller subset of problems, finding that 1.7% of problems still produce disagreements among contractors. We estimate this to be the fraction of problems that contain breaking errors or ambiguities. It is possible that a larger percentage of problems contain subtle errors.
 
 #### Who are the source language producers?
 
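For readers who want to poke at the data described above, here is a minimal sketch of loading the card's `main` and `socratic` configurations with the Hugging Face `datasets` library. The `gsm8k` repo id, the `question`/`answer` field names, and the `#### ` final-answer delimiter are assumptions based on common GSM8K usage rather than anything stated in this diff; check one sample before relying on them.

```python
# Minimal sketch: load GSM8K and inspect one example.
# Assumptions (not stated in this diff): repo id "gsm8k", configs "main"/"socratic",
# fields "question"/"answer", and a "#### <number>" final-answer delimiter.
from datasets import load_dataset

gsm8k = load_dataset("gsm8k", "main")   # or "socratic"

sample = gsm8k["train"][0]
print(sample["question"])               # grade-school word problem, plain text
print(sample["answer"])                 # multi-step natural-language solution

# The final numeric answer is conventionally the text after the "####" marker.
final_answer = sample["answer"].split("####")[-1].strip()
print(final_answer)
```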