---
license: odc-by
---
# FineWeb Miniseries Dataset

The FineWeb Miniseries Dataset is a collection of random subsets of the FineWeb dataset, created for training and experimenting with language models of different sizes. The subsets are generated based on target token counts, providing a range of dataset sizes suitable for various computational resources and research purposes.

## Inspiration

The FineWeb Miniseries Dataset was inspired by a [tweet](https://x.com/karpathy/status/1786504106347221498) from Andrej Karpathy ([@karpathy](https://twitter.com/karpathy)) on May 4, 2024, where he was looking for a manageable ~1GB sample of a larger dataset for debugging and mentioned the idea of having subsets at different scales. The goal of the FineWeb Miniseries Dataset is to make experimenting with large language models more approachable for the GPU & Disk Poor.

## Dataset Subsets

The dataset consists of the following subsets:

- **1B**: A subset containing approximately 1 billion GPT2 tokens.
- **10B**: A subset containing approximately 10 billion GPT2 tokens.
- **100B**: A subset containing approximately 100 billion GPT2 tokens.
- **350B**: A subset containing approximately 350 billion GPT2 tokens.

Each subset is created by shuffling the original FineWeb dataset and then selecting the number of rows that corresponds to the target token count, based on the dataset's average GPT2 tokens per row. The shuffle uses a fixed seed (42) to ensure reproducibility.
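
The row counts behind each subset follow directly from FineWeb's reported totals (roughly 15,352.9B GPT2 tokens across 22,335,106,879 rows). A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope row counts per subset, from the totals used in the
# creation script: ~15,352.9B GPT2 tokens across 22,335,106,879 rows.
TOTAL_TOKENS = 15352.9e9
TOTAL_ROWS = 22_335_106_879

average_tokens_per_row = TOTAL_TOKENS / TOTAL_ROWS  # roughly 687 tokens/row

for target_tokens in (1e9, 10e9, 100e9, 350e9):
    num_rows = int(target_tokens / average_tokens_per_row)
    print(f"{int(target_tokens / 1e9)}B subset -> ~{num_rows:,} rows")
```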

## Usage

To use the FineWeb Miniseries Dataset, you can load the desired subset using the Hugging Face Datasets library. Here's an example of how to load a subset:

```python
from datasets import load_dataset

# Load the "1B" subset
subset_1b = load_dataset("yentinglin/fineweb_miniseries", "1B")
```

Replace `"1B"` with the desired subset name (`"10B"`, `"100B"`, or `"350B"`) to load the corresponding subset.

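
When choosing a subset programmatically, it can help to validate the name before any download starts. The helper below is a hypothetical convenience (not part of the dataset itself) and simply restates the subset list above:

```python
# Hypothetical helper: map subset names to their target token counts,
# so an invalid name fails fast before calling load_dataset.
SUBSET_TOKENS = {"1B": 1e9, "10B": 10e9, "100B": 100e9, "350B": 350e9}

def subset_target_tokens(name: str) -> float:
    """Return the target token count for a known subset name."""
    if name not in SUBSET_TOKENS:
        raise ValueError(f"Unknown subset {name!r}; expected one of {sorted(SUBSET_TOKENS)}")
    return SUBSET_TOKENS[name]
```

With a validated name in hand, the `load_dataset` call above stays unchanged.
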
## Dataset Creation

The subsets of the FineWeb Miniseries Dataset are created using the following code:

```python
from datasets import load_dataset

# Load the train split of the "fineweb" dataset
fw = load_dataset("HuggingFaceFW/fineweb", split="train")

# Average GPT2 tokens per row: ~15,352.9B total tokens over 22,335,106,879 rows
average_tokens_per_row = 15352.9e9 / 22335106879

# Define the target token counts for each subset
target_token_counts = [1e9, 10e9, 100e9, 350e9]

# Shuffle once with a fixed seed so the sampling is reproducible
shuffled_dataset = fw.shuffle(seed=42)

# Create and push the subsets to the Hugging Face Hub
for target_tokens in target_token_counts:
    # Calculate the number of rows needed for the target token count
    num_rows = int(target_tokens / average_tokens_per_row)

    # Take the first num_rows rows of the shuffled dataset
    subset = shuffled_dataset.select(range(num_rows))

    # Push the subset to the Hub as its own configuration
    subset_name = f"{int(target_tokens / 1e9)}B"
    subset.push_to_hub("yentinglin/fineweb_miniseries", config_name=subset_name)

    print(f"Pushed {subset_name} subset to the Hugging Face Hub.")
```
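
One useful consequence of the script above: because the dataset is shuffled once with seed 42 and each subset takes `range(num_rows)` from the same shuffled order, the smaller subsets should be prefixes of the larger ones (the 1B rows open the 10B subset, and so on). A pure-Python sketch of this prefix property, with `random.shuffle` standing in for `datasets`' `shuffle`:

```python
import random

# Shuffle once with a fixed seed, then take prefixes of different lengths:
# the shorter selection is always a prefix of the longer one.
rows = list(range(1000))
random.Random(42).shuffle(rows)

small = rows[:10]   # analogue of the 1B subset
large = rows[:100]  # analogue of the 10B subset

assert large[:10] == small
```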

## Original Dataset

The FineWeb Miniseries Dataset is derived from the original FineWeb dataset, which can be found at [HuggingFaceFW/fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb). The original dataset contains a large collection of web pages suitable for training language models.

## License

The FineWeb Miniseries Dataset is released under the same license as the original FineWeb dataset (ODC-By). Please refer to the original dataset's license for more information.

## Citation

If you use the FineWeb Miniseries Dataset in your research or projects, please cite the original FineWeb dataset:

```
@software{penedo2024fineweb,
  author = {Penedo, Guilherme and Kydlíček, Hynek and von Werra, Leandro and Wolf, Thomas},
  title  = {FineWeb},
  month  = apr,
  year   = 2024,
  doi    = {10.57967/hf/2092},
  url    = {https://huggingface.co/datasets/HuggingFaceFW/fineweb}
}
```