Update README.md
README.md CHANGED
@@ -58,4 +58,69 @@ dataset_info:
---
# Dataset Card for "wikipedia-deduped"

# wikipedia - 20230901.en - deduped

> purpose: train with less data while maintaining (most) of the quality

This is really more of a _"high-quality, diverse sample"_ than a strict attempt to remove literal duplicate documents. Source dataset: [graelo/wikipedia](https://huggingface.co/datasets/graelo/wikipedia).

## default config

command:

```sh
python -m text_dedup.minhash \
    --path $ds_name \
    --name $dataset_config \
    --split $data_split \
    --cache_dir "./cache" \
    --output $out_dir \
    --column $text_column \
    --ngram 4 --threshold 0.6 \
    --hash_func xxh3 --hash_bits 16 --num_perm 64 \
    --batch_size 10000
```
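For intuition, the knobs that matter above are the n-gram size, the number of MinHash permutations, and the similarity threshold. The sketch below is illustrative only: it uses the `datasketch` library (not `text_dedup`'s internal implementation) and assumes simple word-level shingling, to show how documents whose estimated Jaccard similarity over 4-gram shingles exceeds 0.6 end up in the same duplicate cluster.

```python
# Illustration only: text_dedup has its own MinHash/LSH implementation; this
# sketch uses the `datasketch` library with assumed word-level shingling to show
# what --ngram 4, --num_perm 64, and --threshold 0.6 mean in practice.
from datasketch import MinHash, MinHashLSH

def shingles(text: str, n: int = 4) -> set[str]:
    # word 4-grams ("shingles") of the document
    tokens = text.lower().split()
    return {" ".join(tokens[i : i + n]) for i in range(max(len(tokens) - n + 1, 1))}

def minhash(text: str, num_perm: int = 64) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for s in shingles(text):
        m.update(s.encode("utf-8"))
    return m

doc_a = "wikipedia is a free online encyclopedia written and maintained by volunteers"
doc_b = "wikipedia is a free online encyclopedia written and maintained by editors"
doc_c = "the mitochondria is the powerhouse of the cell"

# estimated Jaccard similarity between the shingle sets
print(minhash(doc_a).jaccard(minhash(doc_b)))  # high: near-duplicates
print(minhash(doc_a).jaccard(minhash(doc_c)))  # ~0: unrelated

# the LSH index retrieves documents whose estimated similarity exceeds the threshold
lsh = MinHashLSH(threshold=0.6, num_perm=64)
lsh.insert("a", minhash(doc_a))
print(lsh.query(minhash(doc_b)))  # likely ['a'] -> doc_b would be dropped as a near-duplicate
print(lsh.query(minhash(doc_c)))  # []
```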

dedup:

```sh
Fingerprinting... (num_proc=40): 100% 6705754/6705754 [06:57<00:00, 16063.27 examples/s]
Iterating MinHashes...: 100% 671/671 [04:13<00:00, 2.65it/s]
Clustering...: 100% 10/10 [00:21<00:00, 2.18s/it]
Finding clusters... (num_proc=40): 100% 6705754/6705754 [06:38<00:00, 16839.42 examples/s]
Filtering clusters... (num_proc=40): 100% 6705754/6705754 [02:25<00:00, 46058.39 examples/s]
Saving the dataset (39/39 shards): 100% 5971972/5971972 [03:47<00:00, 26266.10 examples/s]
[10/23/23 02:29:41] INFO Loading : 78.82s
```
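Per the log above, deduplication keeps 5,971,972 of the 6,705,754 source rows; a quick check of the reduction:

```python
# rows before/after dedup, taken from the log above
before, after = 6_705_754, 5_971_972
removed = before - after
print(f"removed {removed:,} rows ({removed / before:.1%} of the source rows)")
# removed 733,782 rows (10.9%)
```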

result:

```python
DatasetDict({
    train: Dataset({
        features: ['id', 'url', 'title', 'text'],
        num_rows: 5673373
    })
    validation: Dataset({
        features: ['id', 'url', 'title', 'text'],
        num_rows: 149299
    })
    test: Dataset({
        features: ['id', 'url', 'title', 'text'],
        num_rows: 149300
    })
})
```
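To use this default config, something like the following should work (a minimal sketch; it assumes that omitting the config name resolves to the default config). The split names and sizes are the ones reported above.

```python
from datasets import load_dataset

# load the default config; omitting the config name is assumed to resolve to it
dataset = load_dataset("BEE-spoke-data/wikipedia-deduped")
print({split: ds.num_rows for split, ds in dataset.items()})
# expected: {'train': 5673373, 'validation': 149299, 'test': 149300}
```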

### text-only

This is the same dataset, but with every column except `text` removed.

```python
from datasets import load_dataset

# If the dataset is gated/private, make sure you have run huggingface-cli login
config_name = "text-only"
dataset = load_dataset("BEE-spoke-data/wikipedia-deduped", config_name)
```
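For reference, a roughly equivalent view can be derived from the default config by dropping the other columns; this is just a sketch of the relationship between the two configs, not necessarily how the hosted `text-only` config was built.

```python
from datasets import load_dataset

# roughly equivalent to the text-only config: keep only 'text' from the default config
full = load_dataset("BEE-spoke-data/wikipedia-deduped")
text_only = full.remove_columns(["id", "url", "title"])
print(text_only["train"].column_names)  # ['text']
```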