---
license: creativeml-openrail-m
task_categories:
- text-classification
language:
- ar
tags:
- Tunizi
- arabic
size_categories:
- 1M<n<10M
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/62bc971ad2c8a6542f45d930/7EjSbamsvYpSRFKpCOcOt.png)
# TuniziBigBench: The Largest Open-Source Tunizi (Darija) Dataset
## Overview
This dataset was created by scraping over 14,000 Tunisian YouTube videos, providing a rich repository of Tunisian language data. It covers a wide range of topics and text types, including politics, news, football, and more. The dataset is particularly valuable for training and fine-tuning natural language processing models specific to Tunisian Arabic and other local dialects.
We are also working on a 1B dataset; stay tuned.
- **Curated by:** Nehdi Taha Mustapha & Oussama Sassi.
- **Contact:** https://www.linkedin.com/in/taha-mustapha-nehdi-240585203/
## Dataset Structure
The dataset is structured in the following way:
- **Fields:** `text`: the raw Tunizi text content of each entry (see the schema check below).
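A quick way to confirm the schema is to load the split and print its features. The following is a minimal sketch using the `datasets` library; the `text` field name is taken from the usage example below.
```python
from datasets import load_dataset

# Load the training split and inspect its schema
dataset = load_dataset('Nehdi/TuniziBigBench', split='train')

print(dataset.features)   # expected to show a 'text' string column
print(dataset.num_rows)   # number of entries in the split
```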
### Licensing
This dataset is released under the [CreativeML Open RAIL-M license](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE).
### Usage
```python
from datasets import load_dataset
# Load the dataset from Hugging Face
dataset = load_dataset('Nehdi/TuniziBigBench')
# Example: Print the first 5 entries from the dataset
for entry in dataset['train'].select(range(5)):
    print(entry['text'])
```
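For a corpus in the 1M–10M range you may prefer not to download the full split up front. The snippet below is a minimal sketch using the streaming mode of `datasets`, assuming the same dataset name as above:
```python
from itertools import islice
from datasets import load_dataset

# Stream entries instead of downloading the full split to disk
streamed = load_dataset('Nehdi/TuniziBigBench', split='train', streaming=True)

# Print the first 5 entries lazily
for entry in islice(streamed, 5):
    print(entry['text'])
```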
### Citation
If you use this dataset in your research, please cite it as follows:
```
@dataset{TuniziBigBench,
  author = {Nehdi Taha Mustapha},
  title  = {Tunizi Tunisian Dialect (Derja) Dataset},
  year   = {2025},
  url    = {https://huggingface.co/datasets/Nehdi/TuniziBigBench}
}
```