---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- math word problems
- math
- arithmetic
size_categories:
- 1K<n<10K
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
- name: problem_type
dtype: string
splits:
- name: test
num_examples: 1000
---
# Dataset Card for Calc-SVAMP
## Summary
The dataset is a collection of simple math word problems focused on arithmetic. It is derived from <https://github.com/arkilpatel/SVAMP/>.
The main addition in this dataset variant is the `chain` column. It was created by converting the solution into a simple HTML-like language that can be easily
parsed (e.g., with BeautifulSoup); a parsing sketch follows the list below. The chains contain 3 types of tags:
- `gadget`: a tag whose content is intended to be evaluated by calling an external tool (a sympy-based calculator in this case)
- `output`: the output of the external tool
- `result`: the final answer to the math problem (a number)
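
A minimal parsing sketch with BeautifulSoup is shown below. The concrete chain string and any tag attributes are assumptions for illustration; only the three tag names above are documented here.

```python
from bs4 import BeautifulSoup

# Illustrative chain string (an assumption); real chains come from the `chain` column.
chain = "<gadget>5 * 4</gadget><output>20</output><result>20</result>"

soup = BeautifulSoup(chain, "html.parser")

# Expressions to be evaluated by the external tool (the sympy-based calculator)
calls = [tag.get_text() for tag in soup.find_all("gadget")]
# Outputs returned by the tool
outputs = [tag.get_text() for tag in soup.find_all("output")]
# Final answer of the problem
result = soup.find("result").get_text()

print(calls, outputs, result)  # ['5 * 4'] ['20'] 20
```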
## Supported Tasks
This variant of the dataset is intended for training Chain-of-Thought reasoning models that are able to use external tools to enhance the factuality of their responses.
It presents in-context scenarios where a model can outsource the computations in the reasoning chain to a calculator.
## Attributes
- `id`: problem id from the original dataset
- `question`: the question to be answered
- `chain`: a series of simple operations (derived from `equation`) that leads to the solution
- `result`: the result (a number) as a string
- `result_float`: the result converted to a floating-point number
- `equation`: a nested expression that evaluates to the correct result
- `problem_type`: a category of the problem
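
For illustration, a minimal loading sketch is given below. The hub path `MU-NLPC/Calc-SVAMP` is an assumption based on the organization mentioned in the Related work section; adjust it to wherever this dataset is actually hosted.

```python
from datasets import load_dataset

# Hub path is assumed; the split name matches the metadata above.
ds = load_dataset("MU-NLPC/Calc-SVAMP", split="test")

example = ds[0]
print(example["question"])      # the math word problem
print(example["chain"])         # tool-augmented reasoning chain
print(example["result_float"])  # numeric answer
```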
## Content and data splits
The dataset contains the same data instances as the original dataset, except for a correction of an inconsistency between `equation` and `answer` in one data instance.
To the best of our knowledge, the original dataset does not contain an official train-test split, and we do not create one. However, the original authors used cross-validation in the official repository; for more info, see <https://github.com/arkilpatel/SVAMP/>.
## License
MIT, consistent with the original source dataset linked above.
## Related work
If you are interested in related datasets (or models), check out the MU-NLPC organization here on HuggingFace. We have released a few other datasets in a compatible format, and several models that use an external calculator during inference.
## Cite
If you use this version of dataset in research, please cite the original [SVAMP paper](https://www.semanticscholar.org/paper/Are-NLP-Models-really-able-to-Solve-Simple-Math-Patel-Bhattamishra/13c4e5a6122f3fa2663f63e49537091da6532f35).
TODO