added datasets descriptions and links
README.md CHANGED
@@ -20,13 +20,25 @@ You will find below descriptions for the various input files provided, to help y

## Community provided files

-**8k_random_data**\
-
-**20k_random_data**\
-
**groups_merged**\
+_"Here is a decent general purpose imatrix calibration dataset. It should be more diverse than wikitext at ~30k tokens, as it is excerpts of a larger dataset which includes coding examples (which seems quite important!)
+This means it's generally higher entropy data compared to wikitext, and it's real data rather than pseudo-randomly generated data.
+I get lower KL div than wikitext for the same length and the outputs seem qualitatively better."_ (kalomaze)\
+https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384

**group_10_merged**\
+(superseded by groups_merged)\
+_"This is about ~50k pseudo-random tokens.
+I am getting the best balance between the maximum divergence and the other divergence statistics using this file when quantizing 7b"_ (kalomaze)\
+https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8349233
+
+**20k_random_data**\
+(superseded by group_10_merged)\
+https://github.com/ggerganov/llama.cpp/discussions/5006#discussioncomment-8163190
+
+**8k_random_data**\
+(superseded by 20k_random_data)\
+https://github.com/ggerganov/llama.cpp/discussions/5006#discussion-6087829

**ptb.train**\
Penn Treebank (PTB) is a widely used preprocessed large dataset designed for language training. Casing,

@@ -48,8 +60,11 @@ https://huggingface.co/datasets/wikitext
or "good articles".\
https://huggingface.co/datasets/asi/wikitext_fr

-**c4
-
+**c4**\
+The C4 dataset is a collection of text sourced from the public Common Crawl web scrape.
+It includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish)
+in addition to extensive deduplication. The C4 dataset was explicitly designed to be English only:
+any page that was not given a probability of at least 99% of being English by langdetect was discarded.

**code** (exllamav2)\
Programming
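
For context on the KL divergence numbers quoted for **groups_merged**: the comparison is between the next-token probability distributions of the full-precision model and of the model quantized with a given imatrix, averaged over a held-out text; lower is better. The sketch below is only a generic illustration of that metric, not llama.cpp's own tooling; `mean_kl_divergence` and the probability arrays are stand-ins.

```python
import numpy as np

def mean_kl_divergence(p_ref: np.ndarray, p_quant: np.ndarray, eps: float = 1e-10) -> float:
    """Average KL(P_ref || P_quant) over token positions.

    p_ref, p_quant: (n_tokens, vocab_size) arrays of next-token probabilities
    from the full-precision and the quantized model at the same positions.
    """
    p = np.clip(p_ref, eps, 1.0)
    q = np.clip(p_quant, eps, 1.0)
    return float(np.mean(np.sum(p * np.log(p / q), axis=-1)))

# Hypothetical comparison of two calibration choices on the same eval text:
# kl_groups_merged = mean_kl_divergence(ref_probs, probs_quant_groups_merged)
# kl_wikitext      = mean_kl_divergence(ref_probs, probs_quant_wikitext)
```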
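
The langdetect filter mentioned in the **c4** description can be approximated in a few lines. This is only a sketch of that heuristic (the 99% threshold comes from the description above; the function name and the page list are made up), not the actual C4 preprocessing pipeline.

```python
from langdetect import detect_langs
from langdetect.lang_detect_exception import LangDetectException

def is_english(page_text: str, threshold: float = 0.99) -> bool:
    """Keep a page only if langdetect rates it English with probability >= threshold."""
    try:
        candidates = detect_langs(page_text)
    except LangDetectException:
        return False  # empty or undetectable pages are dropped
    return any(lang.lang == "en" and lang.prob >= threshold for lang in candidates)

# english_pages = [page for page in crawled_pages if is_english(page)]
```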