  - split: train
    path: wikimedia_others/train-*
---

WARNING: THIS "README" IS JUST A STUB. IT WILL BE IMPROVED OVER THE NEXT FEW DAYS, GRAMMATICALLY CHECKED, AND FILLED WITH MUCH MORE INFORMATION AND DETAILED STATISTICS.

**Testimole** -- A multi-billion-token Italian text corpus

The goal of this work is to create a huge linguistic resource for the Italian language that can be used for several NLP applications, including but not limited to Large Language Models. The dataset is the result of a massive web scraping effort carried out between February 2024 and May 2024, so the resources have a cut-off date within this time span.

This is probably one of the biggest linguistic resources available for Italian at the present day.

To create the dataset, I developed several scripts using Python3 and libraries such as BeautifulSoup and Selenium; the scripts were mostly written and executed manually, making this an extremely time-consuming project. The texts span different topics and periods and contain several divergent opinions and beliefs, thus following the main ideas of the "Perspectivist Data Manifesto" [1]. It is important to note that these data alone are *not enough* to train an Italian large language model from scratch: the problem is not so much the size of the data, but the fact that, even though they span many different topics, they are far from covering the broad range of subjects, information, culture, and techniques required to train a state-of-the-art model. Also, as will be pointed out later, while it is safe to use these data under Fair Use for research purposes, users must investigate potential copyright infringement for other possible purposes. The Tiktoken BPE tokenizer with the cl100k_base model [2] was used for tokenization. This dataset is composed of several sub-datasets, each with different types of data and goals.
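
As a point of reference, token counts like the ones reported below can be reproduced with tiktoken and the cl100k_base encoding. This is only a minimal sketch; the sample sentence is an illustration, not text taken from the dataset.

```python
import tiktoken

# cl100k_base is the BPE encoding mentioned above; the sample sentence is
# just a placeholder, not a record from the corpus.
enc = tiktoken.get_encoding("cl100k_base")
sample = "Questo corpus contiene testi in italiano raccolti dal web."
tokens = enc.encode(sample)
print(len(tokens), tokens[:10])
```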

**Conversational (~85 billion tokens)**

**UsenetArchiveIT**

This is the project that started the entire work: the goal was to collect the largest possible amount of Usenet posts published in the it.* and italia.* hierarchies [3], as they were listed on "[www.eternal-september.org](http://www.eternal-september.org/)" [4], gathered mainly from the Google Groups archive.

This split contains 19,395,579,455 tokens. Texts were not checked for language, but it is a safe assumption that most of the text is in Italian, as the selected Usenet hierarchies target only Italian users.

Detailed statistics, already computed, will follow very soon. For now, here are general stats about this part of the dataset:

    {
      "char_count": 59389804791,
      "token_count": 19395579455,
      "sent_count": 519535427,
      "post_count": 89499446,
      "thread_count": 14521548,
      "author_count": 3626546
    }

83 GB of JSONL files before the conversion to the HuggingFace dataset format.

**Forum**

The second part of the project is the one that produced the largest amount of data: 62,415,825,978 tokens. A list of Italian message boards based on different platforms (phpBB, vBulletin, Simple Machines, Invision, Snitz, XenForo...) was created using both manual and semi-automatic web searches. Then, for each forum, a generic script (forum_scraper.py) using Python3 and BeautifulSoup was adapted to fit the characteristics of that forum (such as the correct div classes for the different fields and the pagination mechanism); the script then ran over the entire range of available pages and output a JSONL file with one post per line (see the sketch after the stats below). Detailed statistics, already computed, will follow very soon. For now, here are general stats about this part of the dataset:

    {
      "char_count": 199436329709,
      "token_count": 62415825978,
      "text_bytes": 201359756617,
      "sent_count": 1673025712,
      "post_count": 468391746,
      "thread_count": 25280745,
      "author_count": 37426524,
      "hasImage": 46071
    }

303 GB of JSONL files before the conversion to the HuggingFace dataset format.
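
The original forum_scraper.py is not published here; the following is only a minimal sketch of the approach described above (one JSONL record per post, pagination handled through an offset parameter). The base URL, the CSS class names and the output fields are hypothetical placeholders that would have to be adapted to each forum.

```python
import json
import time

import requests
from bs4 import BeautifulSoup

# Hypothetical values: every real forum needs its own URL template,
# page-offset step and CSS classes, as explained in the text above.
BASE_URL = "https://example-forum.it/viewtopic.php?t=123&start={offset}"
POST_DIV_CLASS = "post"        # container of a single post
AUTHOR_CLASS = "post-author"   # username field
BODY_CLASS = "post-body"       # message text field
POSTS_PER_PAGE = 20


def scrape_thread(max_pages: int, out_path: str) -> None:
    """Walk a paginated thread and append one JSON object per post."""
    with open(out_path, "a", encoding="utf-8") as out:
        for page in range(max_pages):
            url = BASE_URL.format(offset=page * POSTS_PER_PAGE)
            html = requests.get(url, timeout=30).text
            soup = BeautifulSoup(html, "html.parser")
            posts = soup.find_all("div", class_=POST_DIV_CLASS)
            if not posts:  # ran past the last page
                break
            for post in posts:
                author = post.find(class_=AUTHOR_CLASS)
                body = post.find(class_=BODY_CLASS)
                record = {
                    "url": url,
                    "author": author.get_text(strip=True) if author else None,
                    "text": body.get_text("\n", strip=True) if body else None,
                }
                out.write(json.dumps(record, ensure_ascii=False) + "\n")
            time.sleep(1)  # be polite to the server


if __name__ == "__main__":
    scrape_thread(max_pages=50, out_path="forum_posts.jsonl")
```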

Regarding multimodality, in short: this feature is not very well implemented. More details will follow, but do not expect too much on this point.

**General notes on the conversational datasets**

The data contained in the "usenet" and "forums" splits were generated by Italian users of the Internet between 1995 and 2024. For this reason, they may contain biases, ethically problematic stances, grammatically incorrect sentences and non-factual information. On the other hand, this kind of data can be considered safer than a random crawl of the Internet, in particular the "forum" subset, because many forums have a strict moderation system that prevents posts from going beyond a certain threshold of acceptability (different from forum to forum) with regard to language and topics. Because the name of the forum/newsgroup is always present in the dataset, users of this dataset can filter the sources of data according to their needs, as in the sketch below.
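
For example, source-based filtering could look like the following. The repository id ("mrinaldi/Testimole"), the config name ("forums") and the column names ("forum", "text") are assumptions made for illustration; check the actual configs and schema of this dataset before running it.

```python
from datasets import load_dataset

# Hypothetical repository id, config and column names -- adapt them to the
# actual dataset card before running.
ds = load_dataset("mrinaldi/Testimole", "forums", split="train", streaming=True)

# Keep only posts whose source name suggests a cooking forum.
cooking_posts = (row for row in ds if "cucina" in (row.get("forum") or "").lower())

for _, row in zip(range(3), cooking_posts):
    print(row["forum"], row["text"][:80])
```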

It is also important to note, for people less accustomed to Internet conversations, that data coming from forums are not just generic conversations: they are often a real goldmine of detailed and extremely specific information about several topics, written by people who are often passionate and very knowledgeable about what they are discussing. This is especially true for forums that discuss technical and scientific topics.

This collection of conversational data is useful not only for general language modelling but also for many NLP tasks that could take advantage of a very large amount of conversational data, such as sentiment analysis, hate/misogyny speech detection, parsing and so on. On the other hand, the diachronic nature of the data permits interesting analyses of diachronic phenomena, such as how the Italian language used on the Internet has changed over the years, or which topics were most discussed in each historical period, just to mention a couple of examples.

The posts should not contain personal information, as the internal rules of all the forums asked users not to share personal information, since it would have been publicly available on the web.

**OJS**

This split of the dataset contains articles published as Open Access on the OJS platform. It comprises mainly academic journals from Italian universities, so it can be considered a very high-quality dataset. All the articles are published under Creative Commons licenses, and the license used for each article can be retrieved from the metadata.

**Blogs**

This resource was gathered by scraping blogs written in Italian. The project started with a collection of blogs about left-wing activism, in order to help another researcher with a project that is still in progress. The list of these blogs was obtained from a blog aggregator. The blogs that fall under this category are labelled with the category "pol/ant" (Politics/Antagonism). Because a quick analysis suggests that data coming from the "forum" category are mainly biased toward right-wing political stances (data supporting this statement will follow in the next weeks), it could be useful to integrate these data in a general language-modelling task in the spirit of the "Perspectivist Data Manifesto" [1]. The other two categories are "let/litblog", containing blogs about literature (the list was obtained from another aggregator), and "inf/linux", a very small category containing blog posts from Italian Linux User Groups. The rest of the data is not categorized. A breakdown of the number of tokens per category will follow.

This sub-project started with the goal of collecting only blogs released under a Public Domain or Creative Commons license. However, due to the automatic nature of the list-creation process, I noticed that some blogs with an "All rights reserved" license were scraped too. Some of these licenses permit reuse of the information with the sole obligation of mentioning the URL, and the URL is always present in the rows of the dataset. I created a simple script that tries to guess the license from the home page of the blog, but the results are not optimal and a better pipeline should be implemented. This means that direct use of this resource is fine under Fair Use for research purposes, but anyone who wants to use this dataset for other purposes, especially commercial ones, should check whether such use is possible.

This resource can be considered a "medium-high" quality dataset, because it mostly contains blog posts, often from good sources with very informative content. It is not possible to guarantee a total absence of undesired content inside the resource, but such content, depending on the use case, probably constitutes a minority.

As with the conversational data splits, this split is diachronically annotated, so it could be used for interesting diachronic analyses.

Finally, the blog split also contains an annotation for the language used, as identified by the FastText library; a minimal sketch of that kind of language identification is shown below.
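
The sketch below uses the publicly available lid.176.bin model and is not necessarily the exact pipeline used to annotate the split.

```python
import fasttext

# lid.176.bin can be downloaded from
# https://fasttext.cc/docs/en/language-identification.html
model = fasttext.load_model("lid.176.bin")


def detect_language(text: str) -> tuple[str, float]:
    """Return the most likely language code and its probability."""
    labels, probs = model.predict(text.replace("\n", " "), k=1)
    return labels[0].replace("__label__", ""), float(probs[0])


print(detect_language("Questo è un esempio di testo scritto in italiano."))
# expected: something close to ('it', 0.99)
```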

**Wikimedia**

This split doesn't need much explanation, as it is simply a dump of the Wikimedia resources in Italian (Wikipedia, Wikibooks, Wikinews, Wikiquote, Wikisource, Wikiversity, Wikivoyage and Wiktionary). It can be very important to include this resource in the training data of a language model because it contains information, presented in mostly neutral language, about many possible subjects and topics that are not covered by the rest of the dataset.

I also decided to create a category called "wikimedia_others", containing data from the Wikimedia projects of other regional languages related to Italian and spoken in Italy, as well as Latin, for its historical importance for Italian language and culture. The language codes included in this split are: eml (emilian e rumagno), fur (furlan), la (latin), lij (ligure), lld (ladin), lmo (lombarda), nap (napolitan), scn (sicilianu), sc (sardu) and vec (veneto). Using this data, depending on the goal of the project, could produce very interesting results.

**Books**

This collection contains mainly the books coming from LiberLiber's project "Manuzio" [2]. The books were downloaded from the website in many formats and converted to text. Liber Liber is a project akin to Project Gutenberg, as it contains many books whose copyright has expired and which are thus in the Public Domain. Many of these books are considered cornerstones of Italian culture.

The collection also contains a smaller amount of data coming from other sources, such as the Creative Commons licensed school books of "Matematicamente" [3] and Oilproject-Weschool [4], as well as some other CC- and PD-licensed books found online.

**Websites**

I created a very generic script that is able to extract all the text of a website, as well as the text contained in Office, PDF and TeX documents. At the moment, the websites section is mainly composed of three very high-quality and freely licensed websites: ArchivioAntimafia [5], which contains many official documents about the prosecution of the Mafia in Italy; Peacelink [6], a historical Italian website about peace activism; and HomoLaicus [7], a big collection of texts about various topics (mainly history and politics) released under a CC license. Other smaller, randomly selected websites are also included in this collection. This section has to be considered experimental for two reasons: (1) it contains only a very small subset of the entire high-quality Italian web landscape and could be increased and improved "ad libitum"; (2) it is the only section that can have some bigger issues with deduplication, which we will discuss in the appropriate section.

Despite these two points, users are encouraged to use this section, as it is composed of medium-high and high-quality content.

**Reddit**

This split contains a small subset (4,192,672 messages) of conversations from some Italian subreddits.

**DEDUPLICATION**

The presence of duplicate text can be, depending on the use case, a big problem for several machine learning tasks. I tried to avoid the presence of duplicate text in the dataset as much as possible, but there are still some potential issues to be taken into consideration. We will distinguish between two kinds of duplication: (A) full-document duplication, for example when the same forum post is present more than once; (B) string duplication, when some strings (often garbage) recur several times in the data. A minimal sketch of a check for A-type duplicates is given below.
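
The following is only an illustrative sketch of an A-type (full-document) deduplication pass over a JSONL split; the field name "text" and the file names are assumptions, not the actual schema or scripts used to build the dataset.

```python
import hashlib
import json


def deduplicate(in_path: str, out_path: str) -> None:
    """Write each post only once, keyed by a hash of its normalized text."""
    seen: set[str] = set()
    with open(in_path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            record = json.loads(line)
            normalized = " ".join(record["text"].split()).lower()
            digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
            if digest in seen:
                continue  # exact duplicate of a post already written
            seen.add(digest)
            dst.write(line)


deduplicate("forum_posts.jsonl", "forum_posts.dedup.jsonl")
```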

* Usenet: safe regarding A-type duplications; it could contain B-type duplications, for example users' signatures and headers such as "reply to message posted by X at Y".

* Forums: safe regarding A-type duplications. The most problematic forums in this respect were deduplicated using an ad-hoc script. It shares the same potential B-type duplication problems as Usenet.

* OJS: it should be safe regarding both A-type and B-type duplications.

* Blogs: safe regarding A-type duplications and mostly safe regarding B-type duplications. However, I noticed that some blogs were scraped along with some HTML garbage at the beginning or end of the text blob, which should be identified and removed.

* Wikimedia: it should be mostly safe, with the exception of the recurrence of some Wikipedia-specific phrases such as "this page is a stub", "this page needs references" and so on.

* Books: it should be safe regarding A-type duplications, but there is a very easy-to-identify B-type duplication, namely the header of Liber Liber books with a short presentation of the community-driven project.

* Websites: in this case A-type duplication could in theory be present if some pages share the same content, but it should be rare (with the exception of Archivio Antimafia, where the files to download are often available in both PDF and word-processing formats, so they were downloaded twice). B-type duplication could be an issue here, as it is very present in the form of (1) the header of the website, (2) lists of links and (3) the footer of the website. All the HTML was converted using html2text, so the data should not contain HTML code (see the sketch below).
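
The HTML-to-text conversion mentioned above can be reproduced with the html2text library; this is only a generic sketch, and the exact options used for the dataset are not documented here.

```python
import html2text

converter = html2text.HTML2Text()
converter.ignore_links = True   # drop link targets, keep anchor text
converter.ignore_images = True  # skip image markup entirely
converter.body_width = 0        # do not hard-wrap lines

html = "<h1>Titolo</h1><p>Un <b>esempio</b> di pagina.</p>"
print(converter.handle(html))
```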

**References**

* [1] <https://pdai.info/>

* [2] <https://github.com/openai/tiktoken>

* [3] <https://xmau.com/usenet/>