|
--- |
|
license: creativeml-openrail-m |
|
task_categories: |
|
- text-classification |
|
language: |
|
- ar |
|
tags: |
|
- Tunizi |
|
- arabic |
|
size_categories: |
|
- 1M<n<10M |
|
--- |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/62bc971ad2c8a6542f45d930/7EjSbamsvYpSRFKpCOcOt.png) |
|
|
|
# TuniziBigBench: The Largest Open-Source Tunizi (Darija) Dataset |
|
|
|
## Overview |
|
|
|
This dataset was created by scraping over 14,000 Tunisian YouTube videos, providing a rich repository of Tunisian language data. It covers a wide range of topics and text types, including politics, news, football, and more. The dataset is particularly valuable for training and fine-tuning natural language processing models specific to Tunisian Arabic and other local dialects. |
|
We are currently working on a 1B-scale version of this dataset; stay tuned.
|
|
|
- **Curated by:** Nehdi Taha Mustapha & Oussama Sassi. |
|
- **Contact:** https://www.linkedin.com/in/taha-mustapha-nehdi-240585203/ |
|
|
|
## Dataset Structure |
|
|
|
The dataset is structured in the following way: |
|
- **Fields:** Each record contains a single `text` field holding the raw scraped text content.
|
|
|
### Licensing |
|
This dataset is released under the [CreativeML Open RAIL-M license](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE). |
|
|
|
### Usage |
|
```python |
|
from datasets import load_dataset |
|
|
|
# Load the dataset from Hugging Face |
|
dataset = load_dataset('Nehdi/TuniziBigBench') |
|
|
|
# Example: print the first 5 entries from the dataset
# (slicing a Dataset returns a dict of columns, so use .select() to iterate rows)
for entry in dataset['train'].select(range(5)):
    print(entry['text'])
|
``` |
|
|
|
|
|
### Citation |
|
If you use this dataset in your research, please cite it as follows: |
|
``` |
|
@dataset{TuniziBigBench, |
|
author = {Nehdi Taha Mustapha}, |
|
  title  = {Tunizi Tunisian Dialect (Derja) Dataset},
|
year = {2025}, |
|
url = {https://huggingface.co/datasets/Nehdi/TuniziBigBench} |
|
} |
|
``` |