---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: response
    dtype: string
  - name: category
    dtype: string
  splits:
  - name: train
    num_bytes: 40388948.1573514
    num_examples: 76011
  - name: test
    num_bytes: 2125957.8426486026
    num_examples: 4001
  download_size: 19780999
  dataset_size: 42514906.0
---

## Dataset Card: MTG-Eval

### Dataset Summary

The MTG-Eval dataset is designed to enhance and evaluate large language models' understanding of Magic: The Gathering (MTG) rules and card interactions. It consists of synthetic question-answer pairs generated from MTG card descriptions, official rulings, and card interactions. This dataset aims to reduce hallucinations in language models and improve their performance on MTG-related tasks such as deck generation and in-game decision-making.

### Dataset Details

- **Name**: MTG-Eval
- **Number of Instances**: 80,032
  - Card Descriptions: 26,702
  - Rules Questions: 27,104
  - Card Interactions: 26,226
- **Source**: Data derived from MTGJSON and Commander Spellbook

### Data Generation Methodology

1. **Card Descriptions**: Reformatted rules text from MTG cards.
2. **Rules Questions**: Reformatted rulings from the MTG Rules Committee.
3. **Card Interactions**: Synthetic questions generated from combo descriptions in the Commander Spellbook database.

### Use Cases

- **Training Language Models**: Improve language models' understanding of MTG rules and card interactions.
- **Evaluation Benchmark**: Assess the performance of language models on MTG-related tasks.

### Acknowledgments

Thanks to the MTGJSON project and the team at Commander Spellbook for generously sharing their data, without which this dataset would not be possible.

All generated data is unofficial Fan Content permitted under the Fan Content Policy. Not approved/endorsed by Wizards. Portions of the materials used are property of Wizards of the Coast. ©Wizards of the Coast LLC.

### Resources

- **Fine-Tuned Model**: [MTG-Llama on HuggingFace](https://huggingface.co/jakeboggs/MTG-Llama)
- **Training Code**: [GitHub Repository](https://github.com/JakeBoggs/Large-Language-Models-for-Magic-the-Gathering)
- **Blog Post**: [boggs.tech](https://boggs.tech/posts/large-language-models-for-magic-the-gathering/)
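
### Loading the Dataset

A minimal loading sketch using the Hugging Face `datasets` library. The repository ID `jakeboggs/MTG-Eval` is an assumption inferred from the linked MTG-Llama model and may differ; the field names and split sizes come from the metadata above.

```python
from collections import Counter

from datasets import load_dataset

# Repository ID is assumed from the linked MTG-Llama model;
# adjust if the dataset is hosted under a different name.
ds = load_dataset("jakeboggs/MTG-Eval")

print(ds)  # DatasetDict with 'train' (76,011 rows) and 'test' (4,001 rows)

# Each example has three string fields: instruction, response, category.
example = ds["train"][0]
print(example["instruction"])
print(example["response"])
print(example["category"])

# Count test examples per category (card descriptions, rules questions,
# card interactions).
print(Counter(ds["test"]["category"]))
```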