---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
datasets:
  - MU-NLPC/Calc-gsm8k
  - MU-NLPC/Calc-aqua_rat
  - MU-NLPC/Calc-math_qa
  - MU-NLPC/Calc-ape210k
metrics:
  - exact_match
  - rouge
model-index:
  - name: calc-flan-xl
    results:
      - task:
          type: question-answering
          name: Question Answering
        dataset:
          type: gsm8k
          name: GSM8K
          split: validation
        metrics:
          - type: exact_match
            value: 0.495
          - type: rouge
            value: 0.655
license: apache-2.0
language:
  - en
---

# Model Card for calc-flan-xl

This model generates reasoning chains over mathematical questions while **using an external tool: a SymPy calculator**.

## Model Details

### Model Description

To offload symbolic reasoning from the stochastic language model, we train this model to use a calculator **for all applicable numeric operations**. This is achieved by training the model to construct calls to the tool's API in this format:

```html
<gadget id="calculator">100/2</gadget>
<output>50</output>
```

where the `<gadget>` segment triggers a call of the tool, which is then served by appending the tool's output, wrapped in an `<output>` segment, to the model's decoder input context.

- **Developed by:** Anonymous
- **Model type:** Autoregressive Encoder-Decoder
- **Language(s):** en
- **Finetuned from:** google/flan-t5-xl

### Model Sources

- **Repository:** https://github.com/emnlp2023sub/gadgets
- **Paper:** Stay tuned!

## Usage

In addition to conventional generation, tool-augmented generation requires (1) an implementation of the tool(s) and (2) a customization of the `generate()` method that augments the input context on demand with the outputs of the tools.

You can find these two components implemented in **gadgets/model.py** and **gadgets/gadget.py** in this model's repo and in the project's [home repo](https://github.com/emnlp2023sub/gadgets).

After adding these two scripts to your directory, you can use the model as follows:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

from gadgets.gadget import Calculator
from gadgets.model import gadget_assisted_model

# Wrap the T5 class so that generate() can serve calls to the enabled gadgets
GadgetAssistedT5 = gadget_assisted_model(T5ForConditionalGeneration)

model = GadgetAssistedT5.from_pretrained("emnlp2023/calc-flan-xl")
tokenizer = T5Tokenizer.from_pretrained("emnlp2023/calc-flan-xl")

model.prepare_for_generate(tokenizer,
                           enabled_gadgets=[Calculator()],
                           default_max_tokens=512)

query = """
The profit from a business transaction is shared among 2 business partners,
Mike and Johnson in the ratio 2:5 respectively.
If Johnson got $2500, how much will Mike have after spending
some of his share on a shirt that costs $200?
"""

inputs = tokenizer(query, return_tensors="pt")
output_ids = model.generate(**inputs)
tokenizer.decode(output_ids[0], spaces_between_special_tokens=False)
```

This returns:

```html
According to the ratio, for every 5 parts that Johnson gets, Mike gets 2 parts
Since Johnson got $2500, each part is therefore $2500/5 = $<gadget id="calculator">2500/5</gadget><output>500</output> 500
Mike will get 2*$500 = $<gadget id="calculator">2*500</gadget><output>1_000</output> 1000
After buying the shirt he will have $1000-$200 = $<gadget id="calculator">1000-200</gadget><output>800</output> 800 left.
Final result is<result>800</result>
```

### Out-of-Scope Usage

Note that given the limited complexity of the exercises seen in training, this model will not work well on tasks requiring more complex algebraic operations, including equations, variables, and operations outside the scope of (+, -, *, /).
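To make the gadget protocol above concrete, the snippet below emulates, offline, what the customized `generate()` does incrementally during decoding: detect a completed `<gadget>` call, evaluate it with SymPy, and append the result wrapped in an `<output>` segment. This is only an illustrative sketch; the names `SimpleCalculator` and `serve_gadget_calls` are hypothetical and do not appear in the repo, whose actual implementation lives in **gadgets/gadget.py** and **gadgets/model.py**.

```python
import re

import sympy


class SimpleCalculator:
    """Hypothetical stand-in for gadgets.gadget.Calculator: evaluates an
    arithmetic expression with SymPy and returns the result as a string."""

    def __call__(self, expression: str) -> str:
        try:
            return str(sympy.sympify(expression))
        except (sympy.SympifyError, TypeError):
            return "ERROR"


def serve_gadget_calls(text: str, calculator: SimpleCalculator) -> str:
    """Append an <output>...</output> segment after each <gadget> call.
    The real generate() loop instead pauses decoding at every closing
    </gadget> tag, extends the decoder context with the output, and resumes."""

    def _serve(match: re.Match) -> str:
        return f"{match.group(0)}\n<output>{calculator(match.group(1))}</output>"

    return re.sub(r'<gadget id="calculator">(.*?)</gadget>', _serve, text)


print(serve_gadget_calls('<gadget id="calculator">100/2</gadget>', SimpleCalculator()))
# <gadget id="calculator">100/2</gadget>
# <output>50</output>
```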
## Training Details

### Training Data

This model was trained on our Calculator-augmented set of datasets:

- [Calc Ape210k](https://huggingface.co/datasets/emnlp2023/Calc-ape210k) ([original Ape210k on GitHub](https://github.com/Chenny0808/ape210k))
- [Calc MathQA](https://huggingface.co/datasets/emnlp2023/Calc-math_qa) ([original MathQA on HF](https://huggingface.co/datasets/math_qa))
- [Calc GSM8K](https://huggingface.co/datasets/emnlp2023/Calc-gsm8k) ([original GSM8K on HF](https://huggingface.co/datasets/gsm8k))
- [Calc Aqua-RAT](https://huggingface.co/datasets/emnlp2023/Calc-aqua_rat) ([original Aqua-RAT on HF](https://huggingface.co/datasets/aqua_rat))

in a standard auto-regressive setup, i.e. conditional next-token prediction with a teacher-forced prefix; a minimal sketch of this setup is included after the citation below.

## Cite

Please cite the [Calcformers paper](https://arxiv.org/abs/2305.15017) as follows:

```bibtex
@inproceedings{kadlcik-etal-2023-soft,
    title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
    author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
    month = dec,
    year = "2023",
    address = "Singapore, Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2305.15017",
}
```
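The teacher-forced setup described in the Training Data section is ordinary sequence-to-sequence fine-tuning. The sketch below shows one way to reproduce it with Hugging Face `transformers`; the column names (`question`, `chain`), sequence lengths, and hyperparameters are assumptions for illustration, not the values used to train this model.

```python
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xl")

# Column names are an assumption; check the dataset card for the exact schema.
train_set = load_dataset("MU-NLPC/Calc-gsm8k", split="train")


def preprocess(example):
    # The encoder reads the question; the decoder is teacher-forced on the
    # calculator-annotated reasoning chain (the model shifts the labels
    # right internally to build the decoder inputs).
    encoded = tokenizer(example["question"], truncation=True, max_length=512)
    labels = tokenizer(text_target=example["chain"], truncation=True, max_length=512)
    encoded["labels"] = labels["input_ids"]
    return encoded


train_set = train_set.map(preprocess, remove_columns=train_set.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="./calc-flan-xl-reproduction",
        per_device_train_batch_size=1,   # illustrative values only
        gradient_accumulation_steps=32,
        learning_rate=5e-5,
    ),
    train_dataset=train_set,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```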