---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 2180800569
    num_examples: 384589
  download_size: 980379692
  dataset_size: 2180800569
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- fa
tags:
- farsi
- persian
- corpus
- normalized
---

# Dataset Summary

The Persian data in this dataset is a collection of about 400k blog posts ([RohanAiLab/persian_blog](https://huggingface.co/datasets/RohanAiLab/persian_blog/blob/main/README.md)). These posts were gathered from more than 10 websites. The dataset can be used for NLP tasks such as language modeling, tokenizer training, and text generation.
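
A minimal sketch of loading the corpus with the `datasets` library. The repository ID below is assumed from the companion dataset linked in the note at the end of this card; pass `streaming=True` to `load_dataset` if you want to iterate without downloading the full ~2 GB split first:

```python
from datasets import load_dataset

# Repo ID assumed from the Persian+English companion dataset linked below.
ds = load_dataset("ali619/corpus-dataset-normalized-for-persian", split="train")

print(ds.num_rows)          # 384,589 examples per the dataset_info above
print(ds[0]["text"][:200])  # each example is a single "text" string
```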

  * **The data in this dataset have been normalized and unnecessary tokens have been removed.**
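
For illustration, a hedged sketch of the kind of normalization commonly applied to Persian text; the exact pipeline used for this dataset is not documented in this card, and the `normalize_fa` helper below is hypothetical:

```python
import re

def normalize_fa(text: str) -> str:
    # Map Arabic yeh (U+064A) and kaf (U+0643) to their Persian forms,
    # a common first step when normalizing Farsi text.
    text = text.replace("\u064A", "\u06CC").replace("\u0643", "\u06A9")
    # Collapse runs of whitespace into single spaces.
    return re.sub(r"\s+", " ", text).strip()
```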

**Note:** If you need a combined Persian and English corpus, see [ali619/corpus-dataset-normalized-for-persian-and-english](https://huggingface.co/datasets/ali619/corpus-dataset-normalized-for-persian-and-english).