Add some suggestions to flesh out the dataset description
#1
by davanstrien (HF staff) - opened

README.md CHANGED
@@ -1,4 +1,12 @@
+---
+tags:
+- game
+pretty_name: The Chess Dataset
+---
 # Chess
+
+> Recent advancements in artificial intelligence (AI) underscore the progress in reasoning and planning shown by recent generalist machine learning (ML) models. This progress can be accelerated by datasets that strengthen these generic capabilities when used to train foundation models of various kinds. This research initiative has generated extensive synthetic datasets from complex games (chess, Rubik's Cube, and mazes) to study how they can facilitate and advance these critical generic skills in AI models.
+
 This dataset contains 3.2 billion games, equating to approximately 608 billion individual moves.
 It is generated through self-play by the Stockfish engine using Fugaku, and we add varied initial moves to expand its diversity.
 
@@ -8,4 +16,12 @@ Each game has three columns: 'Moves', 'Termination' and 'Result',
 Please check this for detailed information:
 
 https://python-chess.readthedocs.io/en/latest/core.html#chess.Outcome.termination
 - 'Result': result of this game, 1-0, 1/2-1/2, 0-1.
+
+### Call for Collaboration
+
+We invite interested researchers and ML practitioners to explore these datasets' potential. Whether training GPT models from scratch or fine-tuning pre-existing ones, we encourage the exploration of various pre-training and fine-tuning strategies using these game-based datasets, either standalone or as a supplement to existing large-scale training data.
+
+Our team is prepared to assist in securing the GPU resources needed for these explorations. We are particularly interested in collaborators eager to pre-train small- to medium-scale models on our game data, subsequently transition to standard text-based training, and then perform comparative analyses against models of similar architecture trained exclusively on text data.
+
+In conclusion, this initiative marks a significant stride toward intricate problem-solving and strategic planning in AI, extending an open invitation to the research community for collaborative advancement in this domain.