---
library_name: transformers
tags: []
---
# Model Card for ChessGPT_d12

## Model Details

### Model Description
This model is a 12-layer GPT-2 architecture with 12 attention heads per layer and a hidden state dimension of 768. It was trained using Andrej Karpathy's [llm.c](https://github.com/karpathy/llm.c) library to predict chess moves in UCI notation. The training data consists of all games played on Lichess.org in January 2024, and the model was validated on games from January 2013. It was designed to assist with tasks related to chess move prediction and analysis.
- Developed by: Austin Davis
- Model type: GPT-2
- Language(s): UCI Chess Notation
- License: Apache 2.0
- Training: Pre-trained from random initialization
### Model Sources

- Repository: Lichess GPT2 Model
## Uses

### Direct Use

The model can be used directly to predict chess moves in UCI notation.

### Downstream Use

The model can be fine-tuned or adapted for chess analysis, game annotation, or training new models for chess-based tasks.
## Bias, Risks, and Limitations

While the model performs well on chess move prediction, its limitations stem from the scope of the training data. The model was trained on historical Lichess games, and its predictions may reflect common play patterns from these datasets. Users should be cautious about generalizing the model's performance to other chess platforms or styles of play.
## How to Get Started with the Model

To load and use the model:

```python
from transformers import GPT2LMHeadModel

from uci_tokenizers import UciTileTokenizer

model = GPT2LMHeadModel.from_pretrained("austindavis/ChessGPT_d12")
tokenizer = UciTileTokenizer()

# Example: predict the next chess move after 1. e4
inputs = tokenizer("e2e4", return_tensors="pt")
outputs = model.generate(inputs.input_ids)
print(tokenizer.decode(outputs[0]))
```
## Training Details

### Training Data

The model was trained on all Lichess games played in January 2024. Validation was conducted on games played in January 2013.

### Training Procedure

The model was trained for 541,548 steps, reaching a final loss of 0.8139. Training used a padded vocabulary size of 8192, which was later reduced to 72 tokens to match the chess-specific UCI vocabulary. The tokenizer is based on UCI chess moves and is implemented in `uci_tokenizers.py`.
### Preprocessing

The tokenizer follows a subword-style approach that splits UCI moves into sub-move tokens. The 72-token vocabulary comprises 64 square tokens (a1 through h8), 4 promotion tokens in uppercase (Q, B, R, N), and 4 special tokens (BOS, PAD, EOS, UNK).
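The scheme above can be sketched in plain Python. This is an illustration only, not the actual `UciTileTokenizer` implementation; the token strings and function names are assumptions.

```python
# Illustrative sketch of the tokenization scheme described above --
# NOT the actual UciTileTokenizer; token strings are assumptions.

SPECIAL = ["<bos>", "<pad>", "<eos>", "<unk>"]
SQUARES = [f"{f}{r}" for r in "12345678" for f in "abcdefgh"]  # a1 ... h8
PROMOTIONS = ["Q", "B", "R", "N"]
VOCAB = SPECIAL + SQUARES + PROMOTIONS  # 4 + 64 + 4 = 72 tokens

def tokenize_move(uci_move: str) -> list[str]:
    """Split one UCI move into two square tokens plus an optional promotion token."""
    tokens = [uci_move[0:2], uci_move[2:4]]
    if len(uci_move) == 5:  # e.g. "e7e8q" is a promotion move
        tokens.append(uci_move[4].upper())
    return tokens

print(len(VOCAB))              # 72
print(tokenize_move("e2e4"))   # ['e2', 'e4']
print(tokenize_move("e7e8q"))  # ['e7', 'e8', 'Q']
```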
### Training Hyperparameters

- Training regime: Mixed precision (fp16)
- Learning rate: 5e-5
- Batch size: 64
- Steps: 541,548
- Final eval loss: 0.8139
## Evaluation

### Testing Data, Factors & Metrics

The model was validated on Lichess games played in January 2013. The key evaluation metric was validation loss; the final validation loss at the end of training was 0.8139.
## Environmental Impact

Training was conducted on GPU infrastructure, but specific details on environmental impact, such as total carbon emissions, were not recorded.
## Technical Specifications

### Model Architecture and Objective

- Model type: GPT-2
- Layers: 12
- Attention heads: 12
- Hidden size: 768
- Vocabulary size: 72
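As a rough sanity check, the specification above implies a parameter count in the standard GPT-2 small range. The 1024-token context window below is an assumption (the card does not state it):

```python
# Back-of-the-envelope GPT-2 parameter count from the specs above.
# n_pos = 1024 is an ASSUMPTION; the card does not state the context length.
d, layers, vocab, n_pos = 768, 12, 72, 1024

per_layer = (
    4 * d * d + 4 * d    # attention: Q, K, V, output projections + biases
    + 8 * d * d + 5 * d  # MLP: d -> 4d and 4d -> d projections + biases
    + 4 * d              # two LayerNorms (scale + bias each)
)
total = layers * per_layer + vocab * d + n_pos * d + 2 * d  # + final LayerNorm

print(f"{total:,}")  # about 86M parameters
```

Because the vocabulary is only 72 tokens, the embedding table is tiny; nearly all parameters sit in the transformer blocks.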
### Compute Infrastructure

- Hardware: NVIDIA RTX 3060 Mobile GPU
- Software: Trained using Andrej Karpathy's llm.c library
## Citation

BibTeX:

```bibtex
@misc{chessgpt_d12,
  author = {Austin Davis},
  title  = {ChessGPT_d12 Model for UCI Move Prediction},
  year   = {2024},
  url    = {https://huggingface.co/austindavis/ChessGPT_d12},
}
```