Regarding Dataset Creation Process
Hello,
As I am also curious about Tunisian dataset creation, and you mentioned scraping over 14,000 Tunisian YouTube videos, I wanted to ask:
- how your creation process complies with YouTube's ToS regarding automated data collection?
- how copyright and intellectual property concerns were addressed?
- how GDPR or other privacy regulations were considered if personal data is included?
Hello chemouda,
Thank you for your interest and for raising these important points regarding the dataset creation process. I appreciate the opportunity to clarify.
Compliance with YouTube’s Terms of Service (ToS):
The dataset was created with careful consideration of YouTube's ToS. Rather than bypassing restrictions through unauthorized means, I ensured that all collected data adhered to YouTube's guidelines. No private or non-public data was accessed, and only publicly visible comments available through legitimate channels were included. Any tools involved were authorized for data collection in line with YouTube's API policies.
Addressing Copyright and Intellectual Property Concerns:
The comments scraped were used solely for research purposes, and all identifiable content remains publicly accessible on YouTube. To mitigate intellectual property concerns:
- No proprietary content beyond publicly visible comments was included.
- The dataset does not include video content, audio, or other media.
- Proper attribution to YouTube and individual creators was maintained.
Privacy and GDPR Considerations:
While the dataset includes publicly available data, privacy concerns are a priority:
- Personally identifiable information (PII) such as usernames or direct identifiers has been anonymized or excluded to minimize privacy risks (we are working to apply this fully in future versions).
- The dataset's intended use is for academic purposes only, without exploitation for commercial benefit.
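To make the anonymization step concrete, here is a minimal sketch of the general approach of replacing usernames with salted hashes. The record fields (`username`, `text`) and the salt value are hypothetical placeholders, not the actual pipeline:

```python
import hashlib

def anonymize_comment(comment: dict, salt: str = "dataset-salt") -> dict:
    """Replace the raw username with a salted hash so comments from the
    same author stay linkable without exposing their identity."""
    user = comment.get("username", "")
    pseudonym = hashlib.sha256((salt + user).encode("utf-8")).hexdigest()[:12]
    return {"user_id": pseudonym, "text": comment.get("text", "")}

comments = [
    {"username": "alice", "text": "barcha behi"},
    {"username": "alice", "text": "ya3tik essa7a"},
]
anonymized = [anonymize_comment(c) for c in comments]
# Same author maps to the same pseudonym; the raw username is dropped.
assert anonymized[0]["user_id"] == anonymized[1]["user_id"]
assert "username" not in anonymized[0]
```

Salted hashing preserves author-level statistics (e.g. per-user comment counts) while removing the direct identifier; dropping usernames entirely would be even stricter but loses that linkage.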
If you have further suggestions or concerns, I’d be happy to address them. I believe transparency and dialogue are essential in promoting ethical AI research.
Best regards,
Nehdiii
Seeing that the dataset is intended for academic purposes only and contains offensive language, spam, hate speech, etc.:
- are you planning to share noise ratio metrics and/or apply filters and preprocess accordingly?
- are you planning to share topic balance metrics for the dataset? This would help researchers decide whether to use it.
Thank you for your thoughtful comment! Our main aim is to construct a diverse 1B-row dataset to develop a robust language detector, while also filtering and balancing out other Darija dialects such as Algerian and Libyan, as well as offensive content. Metrics for noise ratio and topic balance will be shared post-construction to ensure transparency and utility. We are also working to retrieve Tunisian text from the C4 dataset using our language detector and aim to construct an even larger dataset.
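As a rough illustration of the kind of metrics we plan to report, here is a sketch of computing a noise ratio and per-topic shares over labeled comments. The label and topic names here are hypothetical placeholders, not our final taxonomy:

```python
from collections import Counter

# Hypothetical labels counted as "noise" for reporting purposes.
NOISE_LABELS = {"spam", "offensive", "other_dialect"}

def noise_ratio(labels: list[str]) -> float:
    """Fraction of comments flagged as noise (spam, offensive, non-Tunisian)."""
    counts = Counter(labels)
    noisy = sum(counts[label] for label in NOISE_LABELS)
    return noisy / len(labels)

def topic_balance(topics: list[str]) -> dict[str, float]:
    """Per-topic share of the dataset, to expose class imbalance."""
    counts = Counter(topics)
    total = len(topics)
    return {topic: n / total for topic, n in counts.items()}

labels = ["clean", "spam", "clean", "offensive", "clean"]
print(noise_ratio(labels))                      # 0.4
print(topic_balance(["sport", "sport", "news"]))  # {'sport': ~0.67, 'news': ~0.33}
```

Reporting both numbers alongside the dataset card would let researchers judge up front how much filtering they need for their task.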
We appreciate your insights. If you're open to collaboration or have resources to share, we'd love to connect and explore ways to work together. Let us know!