Datasets: allenai/dolma
Update README.md #10
by Muennighoff - opened

README.md CHANGED
@@ -110,3 +110,24 @@ Then you can download them all in parallel using:
You can also add `-s` to increase the number of connections, e.g. `-s 10` (defaults to 5).

To get the exact file counts used for The Stack in the above script (`LANG_TO_FILES`), you can do the following.

Fetch all files without downloading their contents (so it should be fast): `GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/allenai/dolma.git`

Then run:

```python
import os

directory = "dolma/data/stack-code"
folder_dict = {}

for folder in os.listdir(directory):
    folder_path = os.path.join(directory, folder)
    if os.path.isdir(folder_path):
        file_count = len([f for f in os.listdir(folder_path) if os.path.isfile(os.path.join(folder_path, f))])
        folder_dict[folder] = file_count

print(folder_dict)
```
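As a possible alternative, here is a minimal sketch that derives the same per-language counts without cloning, by listing the repo files through `huggingface_hub`. It assumes the files sit under `data/stack-code/<language>/<file>` in the `allenai/dolma` dataset repo, matching the clone layout used above:

```python
# Minimal sketch: count files per language folder without a local clone,
# assuming the repo layout data/stack-code/<language>/<file> shown above.
from collections import Counter

from huggingface_hub import HfApi

files = HfApi().list_repo_files("allenai/dolma", repo_type="dataset")

folder_dict = dict(Counter(
    path.split("/")[2]  # the <language> folder component
    for path in files
    if path.startswith("data/stack-code/") and path.count("/") == 3
))

print(folder_dict)
```

Either way, the resulting dictionary gives the per-language file counts that `LANG_TO_FILES` in the download script expects.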