---
datasets:
- MU-NLPC/Calc-gsm8k
- MU-NLPC/Calc-aqua_rat
- MU-NLPC/Calc-math_qa
- MU-NLPC/Calc-ape210k
metrics:
- exact_match
- rouge
model-index:
- name: calc-flan-xl
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      type: gsm8k
      name: GSM8K
      split: validation
    metrics:
    - type: exact_match
      value: 0.495
    - type: rouge
      value: 0.655
license: apache-2.0
language:
- en
---
# Model Card for calc-flan-xl

This model generates reasoning chains over mathematical questions while using an external tool: a sympy calculator.
## Model Details

### Model Description
With the aim of offloading symbolic reasoning from the stochastic language model, we train this model to use a calculator for all applicable numeric operations. This is achieved by training the model to construct calls to the tool's API in the following format:
```
<gadget id="calculator">100/2</gadget> <output>50</output>
```

Here, the `<gadget>` segment triggers a call to the tool, which is then served by extending the model's decoder input context with the tool's output wrapped in the `<output>` segment.
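Conceptually, the decoding loop behind this protocol looks roughly like the sketch below (illustrative only; `generate_fn` and the stop-on-`</gadget>` behaviour are assumptions, not the repo's exact implementation):

```python
import re

def tool_assisted_decode(generate_fn, tools, prompt, max_calls=16):
    """Sketch of tool-augmented decoding (assumed interface, not the repo's code).

    generate_fn(text) returns the next chunk of model output, stopping right
    after a closing </gadget> tag or at end-of-sequence.
    tools maps a gadget id to a callable that evaluates the expression.
    """
    text = ""
    for _ in range(max_calls):
        text += generate_fn(prompt + text)
        # An unanswered call is a <gadget>...</gadget> pair at the very end.
        call = re.search(r'<gadget id="(\w+)">([^<]*)</gadget>$', text)
        if call is None:
            break  # no pending tool call: the reasoning chain is finished
        gadget_id, expression = call.groups()
        # Serve the call by appending the tool's output to the decoder context.
        text += f"<output>{tools[gadget_id](expression)}</output>"
    return text
```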
- **Developed by:** Anonymous
- **Model type:** Autoregressive Encoder-Decoder
- **Language(s):** en
- **Finetuned from:** google/flan-t5-xl
### Model Sources
- **Repository:** https://github.com/emnlp2023sub/gadgets
- **Paper:** Stay tuned!
## Usage
In addition to conventional generation, tool-augmented generation requires (1) an implementation of the tool(s) and (2) a customization of the `generate()` method that augments the input context on demand with the outputs of the tools. You can find these two components implemented in the attached gadgets/model.py and gadgets/gadget.py in this model's repo and in the project's home repo.
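To give a flavour of the tool side, a sympy-backed calculator gadget can be as small as the following sketch (an illustration only; the repo's actual `Calculator` class may differ):

```python
import sympy

class Calculator:
    """Minimal sketch of a calculator gadget backed by sympy (illustrative)."""

    gadget_id = "calculator"

    def __call__(self, expression: str) -> str:
        try:
            # sympify parses and evaluates the arithmetic expression.
            return str(sympy.sympify(expression))
        except sympy.SympifyError as err:
            return f"ERROR: {err}"
```

For example, `Calculator()("100/2")` returns `"50"`.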
After adding these two scripts to your directory, you can use the model as follows:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

from gadgets.model import gadget_assisted_model
from gadgets.gadget import Calculator

# Wrap the base class so that generate() can serve tool calls mid-decoding
GadgetAssistedT5 = gadget_assisted_model(T5ForConditionalGeneration)

model = GadgetAssistedT5.from_pretrained("emnlp2023/calc-flan-xl")
tokenizer = T5Tokenizer.from_pretrained("emnlp2023/calc-flan-xl")

# Register the tools the model is allowed to call during generation
model.prepare_for_generate(tokenizer,
                           enabled_gadgets=[Calculator()],
                           default_max_tokens=512)

query = """
The profit from a business transaction is shared among 2 business partners,
Mike and Johnson in the ratio 2:5 respectively.
If Johnson got $2500, how much will Mike have
after spending some of his share on a shirt that costs $200?
"""

inputs = tokenizer(query, return_tensors="pt")
output_ids = model.generate(**inputs)
tokenizer.decode(output_ids[0], spaces_between_special_tokens=False)
```
This returns:
```
According to the ratio, for every 5 parts that Johnson gets, Mike gets 2 parts Since Johnson got $2500,
each part is therefore $2500/5 = $<gadget id="calculator">2500/5</gadget><output>500</output> 500
Mike will get 2*$500 = $<gadget id="calculator">2*500</gadget><output>1_000</output> 1000
After buying the shirt he will have $1000-$200 = $<gadget id="calculator">1000-200</gadget><output>800</output> 800 left.
Final result is<result>800</result></s>
```
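If you only need the numeric answer, it can be read off the final `<result>` tag, for instance with a helper like this (a trivial sketch, not part of the repo's API):

```python
import re

def extract_result(decoded: str):
    """Return the content of the last <result>...</result> tag, or None."""
    matches = re.findall(r"<result>(.*?)</result>", decoded)
    return matches[-1] if matches else None

# extract_result(decoded_text) -> "800" for the example above
```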
### Out-of-Scope Usage
Note that, given the limited complexity of the exercises seen in training, this model will not work well on tasks requiring more complex algebraic operations, including equations, variables, and operations outside the scope of (+, -, *, /).
## Training Details

### Training Data
This model was trained on our Calculator-augmented set of
- Calc Ape210k (original Ape210k on github)
- Calc MathQA (original MathQA on HF)
- Calc GSM8K (original GSM8K on HF)
- Calc Aqua-RAT (original Aqua-RAT on HF)
in a standard autoregressive setup, i.e., conditional next-token prediction with a teacher-forced prefix.
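In Hugging Face terms, this corresponds to ordinary seq2seq fine-tuning, roughly along the lines of the sketch below (schematic only; the `question` and `chain` column names are assumptions based on the Calc-X dataset cards, and the hyperparameters are placeholders):

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments,
                          T5ForConditionalGeneration)

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl")

dataset = load_dataset("MU-NLPC/Calc-gsm8k")

def preprocess(example):
    # Inputs are the questions; labels are the calculator-augmented chains.
    batch = tokenizer(example["question"], truncation=True, max_length=512)
    labels = tokenizer(text_target=example["chain"], truncation=True, max_length=512)
    batch["labels"] = labels["input_ids"]
    return batch

train_set = dataset["train"].map(preprocess,
                                 remove_columns=dataset["train"].column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="checkpoints",
                                  per_device_train_batch_size=2),
    train_dataset=train_set,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```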
## Cite
Please cite the Calcformers paper as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
  title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
  author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
  booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
  month = dec,
  year = "2023",
  address = "Singapore, Singapore",
  publisher = "Association for Computational Linguistics",
  url = "https://arxiv.org/abs/2305.15017",
}
```