Dataset Card for "open-lid-dataset"

Dataset Summary

The OpenLID dataset covers 201 languages and is designed for training language identification models. The majority of the source datasets were derived from news sites, Wikipedia, or religious text, though some come from other domains (e.g. transcribed conversations, literature, or social media). A sample of each language in each source was manually audited to check it was in the attested language (see the paper for full details).

Supported tasks

This dataset is intended for training high-coverage language identification models (e.g. OpenLID). It is compatible with the FLORES-200 evaluation benchmark.

Languages

There are 201 languages included in the dataset with varying amounts of data: the largest class (English) contains 7.5 million lines of data, and the smallest (South Azerbaijani) contains 532 lines of data. The mean number of lines per language is 602,812. A full breakdown of lines of data per language is available on the repo.

Dataset Structure

Data Instances

Each entry in the dataset consists of a line of text, a language label including script information, and a tag indicating the source.

{
  "text": "¿Serás exaltada hasta el cielo?",
  "language": "spa_Latn",
  "dataset_source": "lti" 
}
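Each `language` label concatenates an ISO 639-3 language code with an ISO 15924 script code. A minimal sketch of working with an entry (the `split_label` helper is ours for illustration, not part of the dataset):

```python
def split_label(language: str) -> tuple[str, str]:
    """Split an OpenLID label like 'spa_Latn' into
    (ISO 639-3 language code, ISO 15924 script code)."""
    lang, script = language.split("_", 1)
    return lang, script

entry = {
    "text": "¿Serás exaltada hasta el cielo?",
    "language": "spa_Latn",
    "dataset_source": "lti",
}

lang, script = split_label(entry["language"])
print(lang, script)  # spa Latn
```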

Data Splits

Only a train split is provided. The dataset is designed to be compatible with the FLORES-200 evaluation benchmark.

Dataset Creation

Curation Rationale

Recent work has found that existing language identification algorithms perform poorly in practice compared to test performance. The problem is particularly acute for low-resource languages: Kreutzer et al. (2022) found a positive Spearman rank correlation between quality of data and size of language for all of the LID-filtered multilingual datasets they studied. In addition, for a significant fraction of the language corpora they studied, less than half of the sentences were in the correct language. They point out that such low-quality data not only leads to poor performance in downstream tasks, but that it also contributes to 'representation washing', where the community is given a false view of the actual progress of low-resource natural language processing.

There are several open language identification models offering quick classification and high language coverage (e.g. CLD3, No Language Left Behind). However, to the best of our knowledge, none of the commonly-used scalable language identification systems make their training data public.

This dataset aims to address that gap by curating and combining sources of open training data for language identification and by auditing a sample of all languages in each source to check reliability.

Source Data

The majority of the source datasets were derived from news sites, Wikipedia, or religious text, though some come from other domains (e.g. transcribed conversations, literature, or social media). We provide a full list at the end of this model card along with the licensing information for each source.

Initial Data Collection and Normalisation

Our initial aim was to cover the same languages present in the FLORES-200 Evaluation Benchmark so that we could use this dataset for evaluation. However, during the curation process, we decided to exclude three languages. Firstly, though Akan and Twi are both included as separate languages in FLORES-200, Akan is actually a macrolanguage covering a language continuum which includes Twi. Given the other languages in FLORES-200 are individual languages, we decided to exclude Akan. Secondly, FLORES-200 includes Modern Standard Arabic (MSA) written in Latin script. It is true that Arabic dialects are often written in Latin characters in informal situations (e.g. social media). However, MSA is a form of standardised Arabic which is not usually used in informal situations. Since we could not find any naturally-occurring training data, we excluded MSA from the dataset. Finally, we excluded Minangkabau in Arabic script because it is now rarely written this way, making it difficult to find useful training data.

The first step in our manual audit was to check and standardise language labels, as these are often inconsistent or idiosyncratic. We chose to copy the language codes in FLORES-200 and reassign macrolanguage or ambiguous language codes in the data sources we found to the dominant individual language. Whilst this resulted in more useful data for some languages, for other languages we had to be more conservative. For example, we originally reassigned text labelled as the macrolanguage Malay (msa_Latn) to Standard Malay, but this led to a large drop in performance as the former covers a very diverse set of languages.
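The reassignment step above amounts to a lookup from macrolanguage or ambiguous codes to a chosen individual code. A minimal sketch, with an invented two-entry mapping for illustration (the full mapping is described in the paper, and, as noted above, some reassignments such as `msa_Latn` had to be reconsidered):

```python
# Hypothetical examples only: each macrolanguage code maps to the
# dominant individual language chosen during curation.
MACRO_TO_INDIVIDUAL = {
    "msa_Latn": "zsm_Latn",  # Malay (macrolanguage) -> Standard Malay
    "aze_Latn": "azj_Latn",  # Azerbaijani -> North Azerbaijani
}

def standardise_label(label: str) -> str:
    """Map a macrolanguage/ambiguous code to its individual code;
    labels that are already individual languages pass through unchanged."""
    return MACRO_TO_INDIVIDUAL.get(label, label)

print(standardise_label("msa_Latn"))  # zsm_Latn
print(standardise_label("spa_Latn"))  # spa_Latn (unchanged)
```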

Two of the authors then carried out a manual audit of a random sample of all data sources and languages: one a native Bulgarian speaker (able to read Cyrillic and Latin scripts and Chinese characters), and the other a native English speaker (able to read Latin, Arabic and Hebrew scripts). For languages we knew, we checked the language was what we expected. For unfamiliar languages in a script we could read, we compared the sample to the Universal Declaration of Human Rights or failing that, to a sample of text on Wikipedia. We compared features of the text which are common in previous language identification algorithms and could be identified easily by humans: similar diacritics, word lengths, common words, loan words matching the right cultural background, similar suffixes and prefixes, and vowel/consonant patterns. For scripts we could not read, we checked that all lines of the sample matched the script in the Universal Declaration of Human Rights.

We kept preprocessing minimal so that the process was as language agnostic as possible. We used the scripts provided with Moses to remove non-printing characters and detokenise the data where necessary. We then filtered the data so that each line contained at least one character in the expected script (as defined by Perl) to allow for borrowings. Finally, we sampled proportionally to $ p_l^{0.3} $, where $ p_l $ is the fraction of lines in the dataset which are in language $ l $. This aims to ameliorate class skew issues.
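The sampling step above can be sketched as follows: raw line counts per language are turned into sampling weights proportional to $p_l^{0.3}$, which flattens the class skew (the counts below are taken from the card's stated extremes, English and South Azerbaijani, purely for illustration):

```python
def sampling_weights(line_counts: dict[str, int], alpha: float = 0.3) -> dict[str, float]:
    """Return per-language sampling probabilities proportional to p_l**alpha,
    where p_l is each language's fraction of all lines in the dataset."""
    total = sum(line_counts.values())
    unnorm = {lang: (n / total) ** alpha for lang, n in line_counts.items()}
    z = sum(unnorm.values())
    return {lang: w / z for lang, w in unnorm.items()}

# Largest and smallest classes from the card: 7.5M vs 532 lines.
counts = {"eng_Latn": 7_500_000, "azb_Arab": 532}
weights = sampling_weights(counts)
# The exponent upweights the low-resource language: its share of the
# sample is far larger than its share of the raw lines.
print(weights)
```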

Considerations for Using the Data

Social Impact of Dataset

This dataset covers a number of low-resourced languages. This makes it a potentially useful resource, but due to the limited amount of data and domains, care must be taken not to overclaim performance or coverage.

Discussion of Biases

Our work aims to broaden natural language processing coverage by allowing practitioners to identify relevant data in more languages. However, we note that language identification is inherently a normative activity that risks excluding minority dialects, scripts, or entire microlanguages from a macrolanguage. Choosing which languages to cover may reinforce power imbalances, as only some groups gain access to language processing technologies.

In addition, errors in language identification can have a significant impact on downstream performance, particularly (as is often the case) when a system is used as a 'black box'. The performance of our classifier is not equal across languages which could lead to worse downstream performance for particular groups. We mitigate this by providing metrics by class.

Additional information

The dataset was curated from the sources listed below by Laurie Burchell and Nikolay Bogoychev.

Licensing Information

License considerations for each source are given below. Open use for non-commercial purposes is covered by all licences.

If you view any part of this dataset as a violation of intellectual property rights, please let us know and we will remove it.

Source Description License
Arabic Dialects Dataset Dataset of Arabic dialects for Gulf, Egyptian, Levantine, and Tunisian Arabic dialects plus MSA No explicit license; website describes data as "some free and useful Arabic corpora that I have created for researchers working on Arabic Natural Language Processing, Corpus and Computational Linguistics."
BLTR Monolingual Bhojpuri corpus CC BY-NC-SA 4.0
Global Voices A parallel corpus of news stories from the web site Global Voices The website for Global Voices is licensed as Creative Commons Attribution 3.0. There is no explicit additional license accompanying the dataset.
Guaraní Parallel Set Parallel Guaraní-Spanish news corpus sourced from Paraguayan websites No explicit license
HKCanCor Transcribed conversations in Hong Kong Cantonese CC BY 4.0
IADD Arabic dialect identification dataset covering 5 regions (Maghrebi, Levantine, Egypt, Iraq, and Gulf) and 9 countries (Algeria, Morocco, Tunisia, Palestine, Jordan, Syria, Lebanon, Egypt and Iraq). It is created from five corpora: DART, SHAMI, TSAC, PADIC, and AOC. Multiple licenses: Apache License 2.0 (SHAMI); GNU Lesser General Public License v3.0 (TSAC); GNU General Public License v3 (PADIC). DART and AOC had no explicit license.
Leipzig Corpora Collection A collection of corpora in different languages with an identical format. The Terms of Usage states "Permission for use is granted free of charge solely for non-commercial personal and scientific purposes licensed under the Creative Commons License CC BY-NC."
LTI Training data for language identification From the README: "With the exception of the contents of the Europarl/, ProjectGutenberg/, and PublicDomain/ directories, all code and text in this corpus are copyrighted. However, they may be redistributed under the terms of various Creative Commons licenses and the GNU GPL. Copying the unmodified archive noncommercially is permitted by all of the licenses. For commercial redistribution or redistribution of modified versions, please consult the individual licenses."
MADAR Shared Task 2019, subtask 1 Dialectal Arabic in the travel domain The MADAR Corpus has a custom license, the text of which can be found in this repo.
EM corpus Parallel Manipuri-English sentences crawled from The Sangai Express CC BY-NC 4.0
MIZAN Parallel Persian-English corpus from literature domain CC BY 4.0
MT560 v1 A machine translation dataset for over 500 languages to English. We have filtered out data from OPUS-100, Europarl, Open Subtitles, Paracrawl, Wikimedia, Wikimatrix, Wikititles, and Common Crawl due to issues with the fidelity of the language labels. Apache License 2.0
NLLB Seed Around 6000 sentences in 39 languages sampled from Wikipedia, intended to cover languages lacking training data. CC BY-SA 4.0
SETIMES A parallel corpus of news articles in the Balkan languages CC BY-SA 3.0
Tatoeba Collaborative sentence translations CC BY 2.0 FR
Tehran English-Persian parallel corpus (TEP) Parallel Persian-English sentences sourced from subtitles GNU General Public License
Turkic Interlingua (TIL) Corpus A large-scale parallel corpus combining most of the public datasets for 22 Turkic languages CC BY-NC-SA 4.0
WiLI-2018 Wikipedia language identification benchmark containing 235K paragraphs of 235 languages Open Data Commons Open Database License (ODbL) v1.0
XL-Sum Summarisation dataset covering 44 languages, sourced from BBC News CC BY-NC-SA 4.0

Citation Information

If you use this dataset, please cite the compilers of the source datasets (listed in the citation file) as well as the OpenLID paper:

@inproceedings{burchell-etal-2023-open,
    title = "An Open Dataset and Model for Language Identification",
    author = "Burchell, Laurie  and
      Birch, Alexandra  and
      Bogoychev, Nikolay  and
      Heafield, Kenneth",
    editor = "Rogers, Anna  and
      Boyd-Graber, Jordan  and
      Okazaki, Naoaki",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-short.75",
    doi = "10.18653/v1/2023.acl-short.75",
    pages = "865--879",
    abstract = "Language identification (LID) is a fundamental step in many natural language processing pipelines. However, current LID systems are far from perfect, particularly on lower-resource languages. We present a LID model which achieves a macro-average F1 score of 0.93 and a false positive rate of 0.033{\%} across 201 languages, outperforming previous work. We achieve this by training on a curated dataset of monolingual data, which we audit manually to ensure reliability. We make both the model and the dataset available to the research community. Finally, we carry out detailed analysis into our model{'}s performance, both in comparison to existing open models and by language class.",
}

Contributions

Thanks to @hac541309 and @davanstrien for adding this dataset.
