ali619 committed on
Commit f6d608d
1 Parent(s): 8c44a79

Update README.md

---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 2180800569
    num_examples: 384589
  download_size: 980379692
  dataset_size: 2180800569
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- fa
tags:
- farsi
- persian
- corpus
---

# Dataset Summary

The Persian data in this dataset is a collection of 400k blog posts ([RohanAiLab/persian_blog](https://huggingface.co/datasets/RohanAiLab/persian_blog/blob/main/README.md)). These posts were gathered from more than 10 websites. The dataset can be used for NLP tasks such as language modeling, tokenizer training, and text generation.

* **The data in this dataset has been normalized and unnecessary tokens have been removed.**

**Note:** If you need the Persian and English corpora together, click [here](https://huggingface.co/datasets/ali619/corpus-dataset-normalized-for-persian-and-english).