austindavis committed on
Update README.md
README.md CHANGED
---
library_name: transformers
tags: []
---

# Model Card for ChessGPT_d12

## Model Details

### Model Description

This model is a 12-layer GPT-2 with 12 attention heads and a hidden size of 768. It was trained from scratch using Andrej Karpathy's `llm.c` library to predict chess moves in UCI notation. The training data consists of all games played on Lichess.org in January 2024, and the model was validated on games from January 2013. It is intended for chess move prediction and analysis tasks.

- **Developed by:** Austin Davis
- **Model type:** GPT-2
- **Language(s):** UCI Chess Notation
- **License:** Apache 2.0
- **Training:** Pre-trained from random initialization
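
These dimensions are enough to reconstruct the architecture. A minimal sketch using the standard `transformers` config API; the vocabulary size of 72 and the dimensions above are the only values taken from this card, and everything else is left at the GPT-2 defaults:

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Dimensions from this card; all other settings are GPT-2 defaults.
config = GPT2Config(
    vocab_size=72,  # 64 squares + 4 promotion pieces + 4 special tokens
    n_layer=12,     # transformer blocks
    n_head=12,      # attention heads per block
    n_embd=768,     # hidden size
)
model = GPT2LMHeadModel(config)
print(f"{model.num_parameters():,} parameters")  # randomly initialized
```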

### Model Sources

- **Repository:** [Lichess GPT2 Model](https://huggingface.co/austindavis/ChessGPT_d12)
<!-- - **Demo:** [Lichess GPT2 Demo](https://demo-url.com) -->

## Uses

### Direct Use

The model can be used directly to predict chess moves in UCI notation; see "How to Get Started with the Model" below for a worked example.

### Downstream Use

The model can be fine-tuned or adapted for chess analysis, game annotation, or training new models for chess-based tasks, as in the sketch below.
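
One possible starting point is the standard `transformers` Trainer. In this sketch, `annotated_games` is a hypothetical pre-tokenized dataset of UCI move sequences, and the hyperparameters are illustrative placeholders, not the settings used to train this model:

```python
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          Trainer, TrainingArguments)
from uci_tokenizers import UciTileTokenizer  # ships with this repository

model = GPT2LMHeadModel.from_pretrained("austindavis/ChessGPT_d12")
tokenizer = UciTileTokenizer()

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="chessgpt-d12-ft", num_train_epochs=1),
    train_dataset=annotated_games,  # hypothetical tokenized game dataset
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```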

## Bias, Risks, and Limitations

While the model performs well on chess move prediction, its limitations stem from the scope of the training data. The model was trained on historical Lichess games, and its predictions may reflect common play patterns from that dataset. Users should be cautious about generalizing the model's performance to other chess platforms or styles of play.

## How to Get Started with the Model

To load and use the model, follow the example below:

```python
from transformers import GPT2LMHeadModel
from uci_tokenizers import UciTileTokenizer  # uci_tokenizers.py ships with this repository

model = GPT2LMHeadModel.from_pretrained("austindavis/ChessGPT_d12")
tokenizer = UciTileTokenizer()

# Example: predict a continuation after the opening move e2e4
inputs = tokenizer("e2e4", return_tensors="pt")
outputs = model.generate(inputs.input_ids)
print(tokenizer.decode(outputs[0]))
```
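
Because the vocabulary is built from board squares rather than whole moves, each UCI move is encoded as two square tokens (plus an uppercase promotion token when a pawn promotes), so the decoded output reads as a stream of squares; see the Preprocessing section below.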

## Training Details

### Training Data

The model was trained on all Lichess games played in January 2024. Validation was conducted on games played in January 2013.

### Training Procedure

The model was trained for 541,548 steps, reaching a final loss of 0.8139. Training used a padded vocabulary size of 8192, which was later reduced to the 72 tokens needed for chess-specific UCI notation. The tokenizer is implemented in `uci_tokenizers.py`.
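
One way to perform such a vocabulary reduction in `transformers` is the embedding-resize API; whether the author used exactly this path is not stated. A sketch for illustration only (the published checkpoint already uses the 72-token vocabulary, so resizing it again is a no-op):

```python
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("austindavis/ChessGPT_d12")
# Shrinking keeps the first 72 rows of the input/output embedding
# matrices and drops the unused padding rows.
model.resize_token_embeddings(72)
```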

#### Preprocessing

The tokenizer splits each UCI move into its component tokens. The 72-token vocabulary consists of 64 square tokens (a1 through h8), 4 promotion tokens represented as uppercase letters (Q, B, R, N), and 4 special tokens (BOS, PAD, EOS, UNK).
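
To make the scheme concrete, here is a toy re-implementation of the square-level split. This illustrates the scheme as described above; it is not the actual `UciTileTokenizer` code:

```python
# Toy illustration; the real tokenizer lives in uci_tokenizers.py.
def split_uci_move(move: str) -> list[str]:
    """Split a UCI move such as 'e7e8q' into square/promotion tokens."""
    tokens = [move[0:2], move[2:4]]     # from-square, to-square
    if len(move) == 5:                  # fifth character signals a promotion
        tokens.append(move[4].upper())  # promotion tokens are uppercase
    return tokens

print(split_uci_move("e2e4"))   # ['e2', 'e4']
print(split_uci_move("e7e8q"))  # ['e7', 'e8', 'Q']
```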

#### Training Hyperparameters

- **Training regime:** Mixed precision (fp16)
- **Learning rate:** 5e-5
- **Batch size:** 64
- **Steps:** 541,548
- **Final eval loss:** 0.8139

## Evaluation

### Testing Data, Factors & Metrics

The model was validated on a dataset of Lichess games played in January 2013. The key evaluation metric was validation loss, which reached 0.8139 at the end of training.
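
For intuition, a cross-entropy loss of 0.8139 corresponds to a per-token perplexity of exp(0.8139) ≈ 2.26, i.e. the model is, on average, about as uncertain as a uniform choice among two to three candidate tokens at each step.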

## Environmental Impact

Training was conducted on a single consumer GPU (see Compute Infrastructure below), but specific details on the environmental impact, such as total carbon emissions, were not recorded.

## Technical Specifications

### Model Architecture and Objective

- **Model type:** GPT-2
- **Objective:** Next-token prediction over UCI move sequences
- **Layers:** 12
- **Attention heads:** 12
- **Hidden size:** 768
- **Vocabulary size:** 72

### Compute Infrastructure

- **Hardware:** NVIDIA RTX 3060 Mobile GPU
- **Software:** Trained using Andrej Karpathy's [llm.c](https://github.com/karpathy/llm.c) library

## Citation

**BibTeX:**

```bibtex
@misc{chessgpt_d12,
  author = {Austin Davis},
  title = {ChessGPT_d12 Model for UCI Move Prediction},
  year = {2024},
  url = {https://huggingface.co/austindavis/ChessGPT_d12},
}
```