"cc_project_author","post_title","cc_project_url","cc_project_category","post_date","keywords","abstract","cc_author_affiliation","cc_class","cc_snippet","cc_dataset_used","cc_derived_dataset_about","cc_derived_dataset_used","cc_derived_dataset_cited" "Ahad Rana – Common Crawl","Common Crawl – Building an open web-scale crawl using Hadoop","https://www.slideshare.net/hadoopusergroup/common-crawlpresentation","papers","20100101Z00:00:00","","","Common Crawl","web-crawling, big data, Hadoop","","","","","" "Hannes Mühleisen, Christian Bizer – Freie Universität, Berlin, Germany","Web Data Commons – Extracting Structured Data from Two Large Web Corpora","http://ceur-ws.org/Vol-937/ldow2012-inv-paper-2.pdf","papers","20120101Z00:00:00","","","Freie Universität, Berlin, Germany","","","","","","" "Alexandra Birch, Nadir Durrani, Philipp Koehn – School of Informatics, University of Edinburgh","Edinburgh SLT and MT System Description for the IWSLT 2013","http://workshop2013.iwslt.org/downloads/Edinburgh_SLT_and_MT_System_Description_for_the_IWSLT_2013_Evaluation.pdf","papers","20130101Z00:00:00","","","School of Informatics, University of Edinburgh","","","","","","" "Jason R. 
Smith, Herve Saint-Amand, Magdalena Plamada, Philipp Koehn, Chris Callison-Burch, Adam Lopez – Johns Hopkins University, University of Edinburgh, University of Zurich, University of Pennsylvania","Dirt Cheap Web-Scale Parallel Text from the Common Crawl","http://www.cs.jhu.edu/~ccb/publications/bitexts-from-common-crawl.pdf","papers","20130101Z00:00:00","","","Johns Hopkins University, University of Edinburgh, University of Zurich, University of Pennsylvania","","","","","","" "Sara Stymne, Christian Hardmeier, Jörg Tiedemann, Joakim Nivre – Uppsala University: Department of Linguistics and Philology","Tunable Distortion Limits and Corpus Cleaning for SMT","http://statmt.org/wmt13/pdf/WMT29.pdf","papers","20130101Z00:00:00","","","Uppsala University: Department of Linguistics and Philology","","","","","","" "Thanh-Le Ha, Teresa Herrmann, Jan Niehues, Mohammed Mediani, Eunah Cho, Yuqi Zhang, Isabel Slawik, Alex Waibel – Institute for Anthropomatics","The KIT Translation Systems for IWSLT 2013","http://workshop2013.iwslt.org/downloads/The_KIT_Translation_Systems_for_IWSLT_2013.pdf","papers","20130101Z00:00:00","","","Institute for Anthropomatics","","","","","","" "Wanno Drijfhout, Oliver Jundt, Lesley Wevers, Djoerd Hiemstra – University of Twente","Traitor: Associating Concepts using the World Wide Web","http://doc.utwente.nl/88328/","papers","20130101Z00:00:00","","","University of Twente","","","","","","" "Christian Bizer, Kai Eckert, Robert Meusel, Hannes Mühleisen, Michael Schuhmacher, Johanna Völker – Data and Web Science Group – University of Mannheim, Database Architectures Group, Centrum Wiskunde & Informatica, Netherlands","Deployment of RDFa, Microdata, and Microformats on the Web – A Quantitative Analysis","http://hannes.muehleisen.org/Bizer-etal-DeploymentRDFaMicrodataMicroformats-ISWC-InUse-2013.pdf","papers","20130101Z00:00:00","","","Data and Web Science Group – University of Mannheim, Database Architectures Group, Centrum Wiskunde & Informatica, 
Netherlands","","","","","","" "Jeffrey Pennington, Richard Socher, Christopher D. Manning – Stanford University, California, USA","GloVe: Global vectors for word representation","https://aclanthology.org/D14-1162.pdf","papers","20140101Z00:00:00","","","Stanford University, California, USA","nlp/word-embeddings","We trained our model on five corpora of varying sizes: [...] and on 42 billion tokens of web data, from Common Crawl⁵ [⁵ To demonstrate the scalability of the model, we also trained it on a much larger sixth corpus, containing 840 billion tokens of web data, but in this case we did not lowercase the vocabulary, so the results are not directly comparable.].","","","","" "Mohammed Mediani, Joshua Winebarger, Alexander Waibel – Karlsruhe Institute of Technology, Germany","Improving In-Domain Data Selection For Small In-Domain Sets","http://www.statmt.org/OSMOSES/IWSLT-36.pdf","papers","20140101Z00:00:00","","","Karlsruhe Institute of Technology, Germany","","","","","","" "Junfei Guo, Juan Liu, Qi Han, Andreas Maletti – School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany","A Tunable Language Model for Statistical Machine Translation","http://www.ims.uni-stuttgart.de/institut/mitarbeiter/maletti/pub/guoliuhanmal14.pdf","papers","20140101Z00:00:00","","","School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany","","","","","","" "Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, Andrew Y. 
Ng – Baidu Research – Silicon Valley AI Lab","Deep Speech: Scaling up end-to-end speech recognition","http://arxiv.org/pdf/1412.5567v2.pdf","papers","20140101Z00:00:00","","","Baidu Research – Silicon Valley AI Lab","","","","","","" "Eva Hasler, Philipp Koehn, Barry Haddow, Phil Blunsom – University of Edinburgh; University of Oxford","Dynamic Topic Adaptation for Phrase-based MT","http://www.aclweb.org/anthology/E/E14/E14-1035.pdf","papers","20140101Z00:00:00","","","University of Edinburgh; University of Oxford","","","","","","" "Michele Tortelli – Politecnico di Bari","Bloom filter-based Routing in NDN","http://www.poliba.it/Didattica/docs/scorepoliba2014_submission_179.pdf","papers","20140101Z00:00:00","","","Politecnico di Bari","","","","","","" "Filip Ginter, Jenna Kanerva – University of Turku","Fast Training of word2vec Representations Using N-gram Corpora","http://www2.lingfil.uu.se/SLTC2014/abstracts/sltc2014_submission_27.pdf","papers","20140101Z00:00:00","","","University of Turku","","","","","","" "Petar Petrovski, Volha Bryl, Christian Bizer – University of Mannheim, Germany- Research Group Data and Web Science","Learning Regular Expressions for the Extraction of Product Attributes from E-commerce Microdata","http://ceur-ws.org/Vol-1267/LD4IE2014_Petrovski.pdf","papers","20140101Z00:00:00","","","University of Mannheim, Germany- Research Group Data and Web Science","","","","","","" "Robert Meusel, Petar Petrovski, Christian Bizer – University of Mannheim, Germany- Research Group Data and Web Science","The Web Data Commons Microdata, RDFa and Microformat Dataset Series","http://link.springer.com/chapter/10.1007/978-3-319-11964-9_18#page-1","papers","20140101Z00:00:00","","","University of Mannheim, Germany- Research Group Data and Web Science","","","","","","" "Robert Meusel, Peter Mika, Roi Blanco – University of Mannheim; Yahoo Labs- Barcelona","Focused Crawling for Structured 
Data","http://dl.acm.org/citation.cfm?id=2661902","papers","20140101Z00:00:00","","","University of Mannheim; Yahoo Labs- Barcelona","","","","","","" "Chenchen Ding, Masao Utiyama, Eiichiro Sumita – National Institute of Information and Communications Technology Japan","Document-level Re-ranking with Soft Lexical and Semantic Features for Statistical Machine Translation","http://www.mibel.cs.tsukuba.ac.jp/~tei/AMTA2014.pdf","papers","20140101Z00:00:00","","","National Institute of Information and Communications Technology Japan","","","","","","" "Masumi Shirakawa, Kotaro Nakayama, Eiji Aramaki, Takahiro Hara, Shojiro Nishio – Osaka University","Collecting Conceptualized Relations from Terabytes of Web Texts for Understanding Unknown Terms","http://dl.acm.org/citation.cfm?id=2682777","papers","20140101Z00:00:00","","","Osaka University","","","","","","" "Jenna Kanerva, Juhani Luotolahti, Veronika Laippala, Filip Ginter – University of Turku","Syntactic N-gram Collection from a Large-Scale Corpus of Internet Finnish","http://ebooks.iospress.nl/volumearticle/38025","papers","20140101Z00:00:00","","","University of Turku","","","","","","" "Willem Robert van Hage, Thomas Ploeger, Jesper Hoeksema – SynerScope B.V., VU University Amsterdam","Number frequency on the web","http://dl.acm.org/citation.cfm?id=2576962","papers","20140101Z00:00:00","","","SynerScope B.V., VU University Amsterdam","","","","","","" "Christian Buck, Kenneth Heafield, Bas van Ooyen – University of Edinburgh, Stanford University, Owlin BV","N-gram Counts and Language Models from the Common Crawl","http://statmt.org/ngrams/BuckEtAl_LREC2014_CommonCrawlLM.pdf","papers","20140101Z00:00:00","","","University of Edinburgh, Stanford University, Owlin BV","","","","","","" "Christian Hardmeier, Sara Stymne, Jörg Tiedemann, Aaron Smith, Joakim Nivre – Uppsala University: Department of Linguistics and Philology","Anaphora Models and Reordering for Phrase-Based 
SMT","http://acl2014.org/acl2014/W14-33/pdf/W14-3312.pdf","papers","20140101Z00:00:00","","","Uppsala University: Department of Linguistics and Philology","","","","","","" "Lane O. B. Schwartz, Timothy Anderson, Jeremy Gwinnup, Katherine M. Young – Air Force Research Laboratory, SRA International, N-Space Analysis LLC","Machine Translation and Monolingual Postediting: The AFRL WMT-14 System","http://www.ling.uni-potsdam.de/~koller/aclpub/W14-33/cdrom/pdf/W14-3321.pdf","papers","20140101Z00:00:00","","","Air Force Research Laboratory, SRA International, N-Space Analysis LLC","","","","","","" "Hoang Cuong, Khalil Sima’an – University of Amsterdam - Institute for Logic, Language and Computation","Latent Domain Translation Models in Mix-of-Domains Haystack","http://www.aclweb.org/anthology/C/C14/C14-1182.pdf","papers","20140101Z00:00:00","","","University of Amsterdam - Institute for Logic, Language and Computation","","","","","","" "Thomas Steiner, Hannes Mühleisen, Ruben Verborgh, Pierre-Antoine Champin, Benoît Encelle, Yannick Prié – Université de Lyon, Database Architectures Group; Multimedia Lab, Ghent University; iMinds, Université de Nantes","Weaving the Web(VTT) of Data","http://telemedicina.unifesp.br/pub/Events/2013-05%20-%20WWW2013/www2013/www2013.org/companion/p1399.pdf","papers","20140101Z00:00:00","","","Université de Lyon, Database Architectures Group; Multimedia Lab, Ghent University; iMinds, Université de Nantes","","","","","","" "Marcin Wylot, Philippe Cudré-Mauroux, Paul Groth – eXascale Infolab, University of Fribourg; VU University Amsterdam","TripleProv: Efficient Processing of Lineage Queries in a Native RDF Store","http://exascale.info/sites/default/files/TipleProv.pdf","papers","20140101Z00:00:00","","","eXascale Infolab, University of Fribourg; VU University Amsterdam","","","","","","" "Robert Meusel, Sebastiano Vigna, Oliver Lehmberg, Christian Bizer – Data and Web Science Group - University of Mannheim, Laboratory for Web - Algorithmics 
Università degli Studi di Milano","Graph Structure in the Web — Revisited","http://vigna.di.unimi.it/ftp/papers/GraphStructureRevisited.pdf","papers","20140101Z00:00:00","","","Data and Web Science Group - University of Mannheim, Laboratory for Web - Algorithmics Università degli Studi di Milano","","","","","","" "Calvin Ardi, John Heidemann – USC/Information Sciences Institute","Web-scale Content Reuse Detection","ftp://ftp.isi.edu/isi-pubs/tr-692.pdf","papers","20140101Z00:00:00","","","USC/Information Sciences Institute","","","","","","" "Yuta Tsuboi – IBM Research","Neural Networks Leverage Corpus-wide Information for Part-of-speech Tagging","http://2boy.org/~yuta/publications/neuraltagger-emnlp2014-tsuboi.pdf","papers","20140101Z00:00:00","","","IBM Research","","","","","","" "Mauro Cettolo, Nicola Bertoldi, Marcello Federico, Holger Schwenk, Loïc Barrault, Christophe Servan – Fondazione Bruno Kessler, University of Le Mans, Xerox Research Centre Europe","Translation project adaptation for MT-enhanced computer assisted translation","http://link.springer.com/article/10.1007/s10590-014-9152-1","papers","20140101Z00:00:00","","","Fondazione Bruno Kessler, University of Le Mans, Xerox Research Centre Europe","","","","","","" "Germán Sanchis-Trilles, Daniel Ortiz-Martínez, Francisco Casacuberta – PRHLT Centre - Universidad Politécnica de Valencia","Efficient Wordgraph Pruning for Interactive Translation Prediction","http://www.casmacat.eu/uploads/Main/2eamt2014.pdf","papers","20140101Z00:00:00","","","PRHLT Centre - Universidad Politécnica de Valencia","","","","","","" "Vasilis Kolias, Ioannis Anagnostopoulos, Eleftherios Kayafas – National Technical University of Athens, University of Thessaly","Exploratory Analysis of a Terabyte Scale Web Corpus","http://arxiv.org/abs/1409.5443","papers","20140101Z00:00:00","","","National Technical University of Athens, University of Thessaly","","","","","","" "Masahiro Mizukami, Graham Neubig, Sakriani Sakti, Tomoki Toda, 
Satoshi Nakamura – Nara Institute of Science and Technology","Building a Free General-Domain Paraphrase Database for Japanese","http://isw3.naist.jp/~masahiro-mi/paper/ma14cocosda.pdf","papers","20140101Z00:00:00","","","Nara Institute of Science and Technology","","","","","","" "Robert Meusel, Sebastiano Vigna, Oliver Lehmberg, Christian Bizer – University of Mannheim, Germany; Università degli Studi di Milano, Italy","The Graph Structure in the Web – Analyzed on Different Aggregation Levels","https://pdfs.semanticscholar.org/b5d5/88298e6845b4bfd40ea779ce21e628239ef3.pdf","papers","20150101Z00:00:00","","","University of Mannheim, Germany; Università degli Studi di Milano, Italy","web-science/hyperlinkgraph","","","","","" "Alex Stolz, Martin Hepp – Universitaet der Bundeswehr Munich, Germany","Towards Crawling the Web for Structured Data: Pitfalls of Common Crawl for E-Commerce","http://ceur-ws.org/Vol-1426/paper-04.pdf","papers","20150101Z00:00:00","","","Universitaet der Bundeswehr Munich, Germany","nlp/corpus-representativeness, semantic web, microdata, e-commerce","","","","","" "Julian Eberius, Maik Thiele, Katrin Braunschweig, Wolfgang Lehner – Technische Universität Dresden, Germany","Top-k Entity Augmentation Using Consistent Set Covering","https://www.semanticscholar.org/paper/Top-k-entity-augmentation-using-consistent-set-Eberius-Thiele/a554fe7c49837e2d2d995e00fd3b62a6ca5650f2","papers","20150101Z00:00:00","","","Technische Universität Dresden, Germany","semantic web, web tables, web mining","To enable repeatability we publish the implementation², but also include the web table corpus used for the evaluation³. 
This corpus contains 100M Web tables extracted from a publicly available Web crawl⁴ [4: http://commoncrawl.org]","","{DresdenWebTableCorpus}","","" "Matthew Malensek, Sangmi Lee Pallickara, Shrideep Pallickara – Colorado State University","Alleviation of Disk I/O Contention in Virtualized Settings for Data-Intensive Computing","http://galileo.cs.colostate.edu/papers/DiskInterference-BDC.pdf","papers","20150101Z00:00:00","","","Colorado State University","","","","","","" "Titus Barik, Kevin Lubick, Justin Smith, John Slankas, Emerson Murphy-Hill – ABB Corporate Research and North Carolina State University","FUSE: A Reproducible, Extendable, Internet-scale Corpus of Spreadsheets","http://kjlubick.github.io/pubs/MSR2015-Fuse_spreadsheet_corpus.pdf","papers","20150101Z00:00:00","","","ABB Corporate Research and North Carolina State University","","","","","","" "Joachim Daiber, Lautaro Quiroz, Roger Wechsler, Stella Frank – University of Amsterdam","Splitting Compounds by Semantic Analogy","https://ufal.mff.cuni.cz/~rosa/2015/docs/dmtw2015.pdf#page=26","papers","20150101Z00:00:00","","","University of Amsterdam","","","","","","" "Mikhail Galkin, Dmitry Mouromtsev, Sören Auer – ITMO University- St. Petersburg, Russia, University of Bonn- Germany","Identifying Web Tables – Supporting a Neglected Type of Content on the Web","http://arxiv.org/pdf/1503.06598.pdf","papers","20150101Z00:00:00","","","ITMO University- St. Petersburg, Russia, University of Bonn- Germany","","","","","","" "Brendan Juba – Washington University in St. Louis","Principled Sampling for Anomaly Detection","http://www.cse.wustl.edu/~bjuba/papers/anomaly_detection.pdf","papers","20150101Z00:00:00","","","Washington University in St. 
Louis","","","","","","" "Ewa Kowalczuk, Jędrzej Potoniec, Agnieszka Ławrynowicz – Institute of Computing Science, Poznan University of Technology, Poland","Extracting Usage Patterns of Ontologies on the Web: a Case Study on GoodRelations Vocabulary in RDFa","http://ceur-ws.org/Vol-1265/owled2014_submission_14.pdf","papers","20150101Z00:00:00","","","Institute of Computing Science, Poznan University of Technology, Poland","","","","","","" "Junfei Guo, Juan Liu, Qi Han, Andreas Maletti – School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany","A Tunable Language Model for Statistical Machine Translation","http://www.ims.uni-stuttgart.de/institut/mitarbeiter/maletti/pub/guoliuhanmal14.pdf","papers","20150101Z00:00:00","","","School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany","","","","","","" "Kay Ousterhout, Ryan Rasti, Sylvia Ratnasamy, Scott Shenker, Byung-Gon Chun – UC Berkeley, ICSI, VMware, Seoul National University","Making Sense of Performance in Data Analytics Frameworks","http://www.eecs.berkeley.edu/~keo/publications/nsdi15-final147.pdf","papers","20150101Z00:00:00","","","UC Berkeeley, ICSI, VMware, Seoul National University","","","","","","" "Evan Jaffe, Lifeng Jin, David King, Marten van Schijndel – Dept. of Linguistics, Ohio State University","Azmat: Sentence Similarity using Associative Matrices","http://www.ling.ohio-state.edu/~vanschm/resources/uploads/jaffe_etal-2015-semeval.pdf","papers","20150101Z00:00:00","","","Dept. of Linguistics, Ohio State University","","","","","","" "Alexander A Alemi, Paul Ginsparg – Dept. 
of Physics, Cornell University, Dept. of Physics and Information Science, Cornell University","Text Segmentation based on Semantic Word Embeddings","http://arxiv.org/pdf/1503.05543.pdf","papers","20150101Z00:00:00","","","Dept. of Physics, Cornell University, Dept. of Physics and Information Science, Cornell University","","","","","","" "Ivan Habernal, Omnia Zayed, Iryna Gurevych – University of Darmstadt, Germany","C4Corpus: Multilingual Web-Size Corpus with Free License","http://www.lrec-conf.org/proceedings/lrec2016/pdf/388_Paper.pdf","papers","20160101Z00:00:00","","Large Web corpora containing full documents with permissive licenses are crucial for many NLP tasks. In this article we present the construction of 12 million-pages Web corpus (over 10 billion tokens) licensed under CreativeCommons license family in 50+ languages that has been extracted from CommonCrawl, the largest publicly available general Web crawl to date with about 2 billion crawled URLs. Our highly-scalable Hadoop-based framework is able to process the full CommonCrawl corpus on 2000+ CPU cluster on the Amazon Elastic Map/Reduce infrastructure. The processing pipeline includes license identification, state-of-the-art boilerplate removal, exact duplicate and near-duplicate document removal, and language detection. 
The construction of the corpus is highly configurable and fully reproducible, and we provide both the framework (DKPro C4CorpusTools) and the resulting data (C4Corpus) to the research community.","University of Darmstadt, Germany","nlp/corpus-construction, legal/copyright, license/creative-commons, nlp/boilerplate-removal, ir/duplicate-detection","","CC-MAIN-2016-07","{DKPro-C4}","","" "Roland Schäfer – Freie Universität Berlin, Germany","CommonCOW: Massively Huge Web Corpora from CommonCrawl Data and a Method to Distribute them Freely under Restrictive EU Copyright Laws","http://rolandschaefer.net/?p=994","papers","20160101Z00:00:00","","In this paper, I describe a method of creating massively huge web corpora from the CommonCrawl data sets and redistributing the resulting annotations in a stand-off format. Current EU (and especially German) copyright legislation categorically forbids the redistribution of downloaded material without express prior permission by the authors. Therefore, stand-off annotations or other derivates are the only format in which European researchers (like myself) are allowed to re-distribute the respective corpora. In order to make the full corpora available to the public despite such restrictions, the stand-off format presented here allows anybody to locally reconstruct the full corpora with the least possible computational effort.","Freie Universität Berlin, Germany","nlp/corpus-construction, legal/copyright","","","{CommonCOW}","","" "Roland Schäfer – Freie Universität Berlin, Germany","Accurate and Efficient General-purpose Boilerplate Detection for Crawled Web Corpora","https://doi.org/10.1007/s10579-016-9359-2","papers","20170101Z00:00:00","Boilerplate, Corpus construction, Non-destructive corpus normalization, Web corpora","Removal of boilerplate is one of the essential tasks in web corpus construction and web indexing. 
Boilerplate (redundant and automatically inserted material like menus, copyright notices, navigational elements, etc.) is usually considered to be linguistically unattractive for inclusion in a web corpus. Also, search engines should not index such material because it can lead to spurious results for search terms if these terms appear in boilerplate regions of the web page. The size of large web corpora necessitates the use of efficient algorithms while a high accuracy directly improves the quality of the final corpus. In this paper, I present and evaluate a supervised machine learning approach to general-purpose boilerplate detection for languages based on Latin alphabets which is both very efficient and very accurate. Using a Multilayer Perceptron and a high number of carefully engineered features, I achieve between 95\% and 99\% correct classifications (depending on the input language) with precision and recall over 0.95. Since the perceptrons are trained on language-specific data, I also evaluate how well perceptrons trained on one language perform on other languages. The single features are also evaluated for the merit they contribute to the classification. I show that the accuracy of the Multilayer Perceptron is on a par with that of other classifiers such as Support Vector Machines. I conclude that the quality of general-purpose boilerplate detectors depends mainly on the availability of many well-engineered features and which are highly language-independent. 
The method has been implemented in the open-source texrex web page cleaning software, and large corpora constructed using it are available from the COW initiative, including the CommonCOW corpora created from CommonCrawl data sets.","Freie Universität Berlin, Germany","nlp/boilerplate-removal, nlp/web-as-corpus, nlp/corpus-construction","","","","","" "Daniel Zeman, Martin Popel, Milan Straka, Jan Hajic, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinkova, Jan Hajic jr., Jaroslava Hlavacova, Václava Kettnerová, Zdenka Uresova, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria dePaiva, Kira Droganova, Héctor Martínez Alonso, Çağrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonca, Tatiana Lando, Rattima Nitisaroj, Josie Li – Charles University, Czech Republic; Uppsala University, Sweden; University of Turku, Finland; University of Cambridge; Google; Bauhaus-Universität Weimar, Germany; UiT The Arctic University of Norway; University of the Basque Country, Spain; Istanbul Technical University, Turkey; Stanford University; New York University Abu Dhabi; City University of Hong Kong; Ohio State University, USA; University of Turin, Italy; University of Pisa, Italy; IBM Research; Nuance Communications; INRIA – Paris 7, France; University of Tübingen, Germany; DFKI, Germany; text & form, Germany","CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text 
to Universal Dependencies","http://www.aclweb.org/anthology/K/K17/K17-3001.pdf","papers","20170101Z00:00:00","","The Conference on Computational Natural Language Learning (CoNLL) features a shared task, in which participants train and test their learning systems on the same data sets. In 2017, the task was devoted to learning dependency parsers for a large number of languages, in a real-world setting without any gold-standard annotation on input. All test sets followed a unified annotation scheme, namely that of Universal Dependencies. In this paper, we define the task and evaluation methodology, describe how the data sets were prepared, report and analyze the main results, and provide a brief categorization of the different approaches of the participating systems.","Charles University, Czech Republic; Uppsala University, Sweden; University of Turku, Finland; University of Cambridge; Google; Bauhaus-Universität Weimar, Germany; UiT The Arctic University of Norway; University of the Basque Country, Spain; Istanbul Technical University, Turkey; Stanford University; New York University Abu Dhabi; City University of Hong Kong; Ohio State University, USA; University of Turin, Italy; University of Pisa, Italy; IBM Research; Nuance Communications; INRIA – Paris 7, France; University of Tübingen, Germany; DFKI, Germany; text & form, Germany","nlp/dependency-parsing, nlp/dependency-treebank, nlp/corpus-construction","The supporting raw data was gathered from CommonCrawl, which is a publicly available web crawl created and maintained by the non-profit CommonCrawl foundation.² The data is publicly available in the Amazon cloud both as raw HTML and as plain text. It is collected from a number of independent crawls from 2008 to 2017, and totals petabytes in size. We used cld2³ as the language detection engine because of its speed, available Python bindings and large coverage of languages. Language detection was carried out on the first 1024 bytes of each plaintext document. 
Deduplication was carried out using hashed document URLs, a simple strategy found in our tests to be effective for coarse duplicate removal. The data for each language was capped at 100,000 tokens per a single input file.","","conll-2017-shared-task","","" "Abu Bakr Soliman, Kareem Eissa, Samhaa El-Beltagy – Nile University, Egypt","AraVec: A set of Arabic Word Embedding Models for use in Arabic NLP","https://www.researchgate.net/publication/319880027_AraVec_A_set_of_Arabic_Word_Embedding_Models_for_use_in_Arabic_NLP","papers","20170101Z00:00:00","","","Nile University, Egypt","nlp/word-embeddings","we have used a subset of the January 2017 crawl dump. The dump contains more than 3.14 billion web pages and about 250 Terabytes of uncompressed content. [...] We used WET files as we were only interested in plain text for building the distributed word representation models. Due to the size of the dump, which requires massive processing power and time for handling, we only used 30\% of the data contained in it. As this subset comprises about one billion web pages (written in multiple language), we believed that it was large enough to provide sufficient Arabic Web pages from which we can build a representative word embeddings model. Here it is important to note that the Common Crawl project does not provide any technique for identifying or selecting the language of web pages to download. So, we had to download data first, and then discard pages that were not written in Arabic. The Arabic detection phase was performed using some regex commands and some NLP techniques to distinguish Arabic from other languages. After the completion of this phase we succeeded in obtaining 4,379,697 Arabic web pages which were then segmented into more than 180,000,000 paragraphs/documents for building our models.","","","","" "Tommy Dean, Ali Pasha, Brian Clarke, Casey J. 
Butenhoff – Virginia Polytechnic Institute and State University, USA; Eastman Chemical Company; USA","Common Crawl Mining","http://hdl.handle.net/10919/77629","papers","20170101Z00:00:00","","","Virginia Polytechnic Institute and State University, USA; Eastman Chemical Company; USA","information retrieval, market research, business intelligence","The main goal behind the Common Crawl Mining system is to improve Eastman Chemical Company’s ability to use timely knowledge of public concerns to inform key business decisions. It provides information to Eastman Chemical Company that is valuable for consumer chemical product marketing and strategy development. Eastman desired a system that provides insight into the current chemical landscape. Information about trends and sentiment towards chemicals over time is beneficial to their marketing and strategy departments. They wanted to be able to drill down to a particular time period and look at what people were writing about certain keywords. [...] The final Common Crawl Mining system is a search engine implemented using Elasticsearch. Relevant records are identified by first analyzing Common Crawl for Web Archive (WARC) files that have a high frequency of records from interesting domains.","","","","" "Yuheng Du, Alexander Herzog, Andre Luckow, Ramu Nerella, Christopher Gropp, Amy Apon – Clemson University, USA","Representativeness of latent dirichlet allocation topics estimated from data samples with application to common crawl","http://alexherzog.net/files/IEEE_BigData_2017_Representativeness_of_LDA.pdf","papers","20170101Z00:00:00","","","Clemson University, USA","nlp/topic-modeling, nlp/corpus-representativeness","Common Crawl is a massive multi-petabyte dataset hosted by Amazon. It contains archived HTML web page data from 2008 to date. Common Crawl has been widely used for text mining purposes. 
Using data extracted from Common Crawl has several advantages over a direct crawl of web data, among which is removing the likelihood of a user’s home IP address becoming blacklisted for accessing a given web site too frequently. However, Common Crawl is a data sample, and so questions arise about the quality of Common Crawl as a representative sample of the original data. We perform systematic tests on the similarity of topics estimated from Common Crawl compared to topics estimated from the full data of online forums. Our target is online discussions from a user forum for automotive enthusiasts, but our research strategy can be applied to other domains and samples to evaluate the representativeness of topic models. We show that topic proportions estimated from Common Crawl are not significantly different than those estimated on the full data. We also show that topics are similar in terms of their word compositions, and not worse than topic similarity estimated under true random sampling, which we simulate through a series of experiments. Our research will be of interest to analysts who wish to use Common Crawl to study topics of interest in user forum data, and analysts applying topic models to other data samples.","","","","" "Shalini Ghosh, Phillip Porras, Vinod Yegneswaran, Ken Nitz, Ariyam Das – CSL, SRI International, Menlo Park","ATOL: A Framework for Automated Analysis and Categorization of the Darkweb Ecosystem","https://www.aaai.org/ocs/index.php/WS/AAAIW17/paper/download/15205/14661","papers","20170101Z00:00:00","","","CSL, SRI International, Menlo Park","web-science, information retrieval, nlp/text-classification",".onion references from [...] 
and an open repository of (non-onion) Web crawling data, called Common Crawl (Common Crawl Foundation 2016).","","","","" "Filip Ginter, Jan Hajič, Juhani Luotolahti, Milan Straka, Daniel Zeman – Charles University, Czech Republic; University of Turku, Finland","CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings","http://hdl.handle.net/11234/1-1989","papers","20170101Z00:00:00","","","Charles University, Czech Republic; University of Turku, Finland","nlp/corpus-construction, nlp/word-embeddings, nlp/syntactic-annotations, nlp/dependency-parsing","Automatic segmentation, tokenization and morphological and syntactic annotations of raw texts in 45 languages, generated by UDPipe (http://ufal.mff.cuni.cz/udpipe), together with word embeddings of dimension 100 computed from lowercased texts by word2vec (https://code.google.com/archive/p/word2vec/). [...] Note that the CC BY-SA-NC 4.0 license applies to the automatically generated annotations and word embeddings, not to the underlying data, which may have different license and impose additional restrictions.","","conll-2017-shared-task","","" "Jakub Kúdela, Irena Holubová, Ondřej Bojar – Charles University, Czech Republic","Extracting Parallel Paragraphs from Common Crawl","https://ufal.mff.cuni.cz/pbml/107/art-kudela-holubova-bojar.pdf","papers","20170101Z00:00:00","","Most of the current methods for mining parallel texts from the web assume that web pages of web sites share the same structure across languages. We believe that there still exists a non-negligible amount of parallel data spread across sources not satisfying this assumption. We propose an approach based on a combination of bivec (a bilingual extension of word2vec) and locality-sensitive hashing which allows us to efficiently identify pairs of parallel segments located anywhere on pages of a given web domain, regardless of their structure. We validate our method on realigning segments from a large parallel corpus. 
Another experiment with real-world data provided by Common Crawl Foundation confirms that our solution scales to hundreds of terabytes of web-crawled data.","Charles University, Czech Republic","nlp/machine-translation, nlp/corpus-construction","","","","","" "Amir Mehmood, Hafiz Muhammad Shafiq, Abdul Waheed – UET, Lahore, Pakistan","Understanding Regional Context of World Wide Web using Common Crawl Corpus","https://www.researchgate.net/publication/321489200_Understanding_Regional_Context_of_World_Wide_Web_using_Common_Crawl_Corpus","papers","20170101Z00:00:00","","","UET, Lahore, Pakistan","web-science, webometrics","","CC-MAIN-2016-50","","","" "Alexander Panchenko, Eugen Ruppert, Stefano Faralli, Simone Paolo Ponzetto, Chris Biemann – University of Hamburg, Germany; University of Mannheim, Germany","Building a Web-Scale Dependency-Parsed Corpus from CommonCrawl","http://arxiv.org/abs/1710.01779","papers","20170101Z00:00:00","","","University of Hamburg, Germany; University of Mannheim, Germany","nlp/dependency-parsing, nlp/corpus-construction","","CC-MAIN-2016-07","depcc","","" "Ajinkya Kale, Thrivikrama Taula, Sanjika Hewavitharana, Amit Srivastava – eBay Inc.","Towards semantic query segmentation","https://arxiv.org/abs/1707.07835","papers","20170101Z00:00:00","","","eBay Inc.","ir/query-segmentation, nlp/word-embeddings, patent","","","","","GloVe-word-embeddings" "Kjetil Bugge Kristoffersen – University of Oslo, Norway","Common crawled web corpora: constructing corpora from large amounts of web data","http://urn.nb.no/URN:NBN:no-60569","papers","20170101Z00:00:00","","Efforts to use web data as corpora seek to provide solutions to problems traditional corpora suffer from, by taking advantage of the web's huge size and diverse type of content. This thesis will discuss the several sub-tasks that make up the web corpus construction process, like HTML markup removal, language identification, boilerplate removal, duplication detection, etc. 
Additionally, by using data provided by the Common Crawl Foundation, I develop a new very large English corpus with more than 135 billion tokens. Finally, I evaluate the corpus by training word embeddings and show that the trained model largely outperforms models trained on other corpora in a word analogy and word similarity task.","University of Oslo, Norway","nlp/corpus-construction, nlp/web-as-corpus","","","","","" "David Stuart – University of Wolverhampton, Wolverhampton, UK","Open bibliometrics and undiscovered public knowledge","https://doi.org/10.1108/OIR-07-2017-0209","papers","20170101Z00:00:00","","","University of Wolverhampton, Wolverhampton, UK","web-science/webometrics","Whether altmetrics is really any more open than traditional citation analysis is a matter of debate, although services such as Common Crawl (http://commoncrawl.org), an open repository of web crawl data, provides the opportunity for more open webometrics, [...]","","","","" "Mostafa Abdou, Artur Kulmizev, Vinit Ravishankar, Lasha Abzianidze, Johan Bos – University of Groningen, The Netherlands; University of Copenhagen, Denmark; University of Oslo, Norway;","What can we learn from Semantic Tagging?","https://arxiv.org/abs/1808.09716","papers","20180101Z00:00:00","","","University of Groningen, The Netherlands; University of Copenhagen, Denmark; University of Oslo, Norway;","nlp/semantics, nlp/word-embeddings, nlp/semantic-tagging","","","","GloVe-word-embeddings","" "Ameeta Agrawal, Aijun An, Manos Papagelis – York University, Toronto, Canada","Learning emotion-enriched word representations","https://www.aclweb.org/anthology/C18-1081","papers","20180101Z00:00:00","","Most word representation learning methods are based on the distributional hypothesis in linguistics, according to which words that are used and occur in the same contexts tend to possess similar meanings. 
As a consequence, emotionally dissimilar words, such as “happy” and “sad” occurring in similar contexts would purport more similar meaning than emotionally similar words, such as “happy” and “joy”. This complication leads to rather undesirable outcome in predictive tasks that relate to affect (emotional state), such as emotion classification and emotion similarity. In order to address this limitation, we propose a novel method of obtaining emotion-enriched word representations, which projects emotionally similar words into neighboring spaces and emotionally dissimilar ones far apart. The proposed approach leverages distant supervision to automatically obtain a large training dataset of text documents and two recurrent neural network architectures for learning the emotion-enriched representations. Through extensive evaluation on two tasks, including emotion classification and emotion similarity, we demonstrate that the proposed representations outperform several competitive general-purpose and affective word representations.","York University, Toronto, Canada","nlp/word-embeddings, nlp/emotion-detection, nlp/sentiment-analysis","","","","GloVe-word-embeddings","" "Manar Alohaly, Hassan Takabi, Eduardo Blanco – University of North Texas, USA","A Deep Learning Approach for Extracting Attributes of ABAC Policies","http://doi.acm.org/10.1145/3205977.3205984","papers","20180101Z00:00:00","access control policy, attribute-based access control, deep learning, natural language processing, policy authoring, relation extraction","","University of North Texas, USA","nlp/machine-translation, computer-security/access-restrictions","","","","","" "Milad Alshomary, Michael Völske, Tristan Licht, Henning Wachsmuth, Benno Stein, Matthias Hagen, Martin Potthast – Paderborn University, Germany; Bauhaus-Universität Weimar, Germany; Martin-Luther-Universität Halle-Wittenberg, Germany; Leipzig University, Germany","Wikipedia text reuse: within and 
without","https://link.springer.com/chapter/10.1007/978-3-030-15712-8_49","papers","20180101Z00:00:00","","We study text reuse related to Wikipedia at scale by compiling the first corpus of text reuse cases within Wikipedia as well as without (i.e., reuse of Wikipedia text in a sample of the Common Crawl). To discover reuse beyond verbatim copy and paste, we employ state-of-the-art text reuse detection technology, scaling it for the first time to process the entire Wikipedia as part of a distributed retrieval pipeline. We further report on a pilot analysis of the 100 million reuse cases inside, and the 1.6 million reuse cases outside Wikipedia that we discovered. Text reuse inside Wikipedia gives rise to new tasks such as article template induction, fixing quality flaws, or complementing Wikipedia’s ontology. Text reuse outside Wikipedia yields a tangible metric for the emerging field of quantifying Wikipedia’s influence on the web. To foster future research into these tasks, and for reproducibility’s sake, the Wikipedia text reuse corpus and the retrieval pipeline are made freely available.","Paderborn University, Germany; Bauhaus-Universität Weimar, Germany; Martin-Luther-Universität Halle-Wittenberg, Germany; Leipzig University, Germany","web-mining, ir/duplicate-detection","To foster research into Wikipedia text reuse, we compiled the first Wikipedia text reuse corpus, obtained from comparing the entire Wikipedia to itself as well as to a 10\%-sample of the Common Crawl.","","","","" "Andrei Amatuni, Estelle He, Elika Bergelson – Duke University","Preserved Structure Across Vector Space Representations","https://arxiv.org/abs/1802.00840","papers","20180101Z00:00:00","","","Duke University","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Khaled Ammar, Frank McSherry, Semih Salihoglu, Manas Joglekar – University of Waterloo, Canada; ETH Zürich, Switzerland; Google, Inc.","Distributed Evaluation of Subgraph Queries Using Worst-Case Optimal 
Low-Memory Dataflows","https://arxiv.org/pdf/1802.03760.pdf","papers","20180101Z00:00:00","","","University of Waterloo, Canada; ETH Zürich, Switzerland; Google, Inc.","graph-processing","","","","WDC-hyperlinkgraph","" "Khaled Ammar, Frank McSherry, Semih Salihoglu, Manas Joglekar – University of Waterloo, Canada; ETH Zürich, Switzerland; Google, Inc.","Distributed evaluation of subgraph queries using worst-case optimal low-memory dataflows","https://dl.acm.org/citation.cfm?id=3199520","papers","20180101Z00:00:00","","","University of Waterloo, Canada; ETH Zürich, Switzerland; Google, Inc.","graph-processing","","","","WDC-hyperlinkgraph","" "Rohan Anil, Gabriel Pereyra, Alexandre Passos, Robert Ormandi, George E. Dahl, Geoffrey E. Hinton – Google; Google Brain; Google DeepMind","Large scale distributed neural network training through online distillation","https://arxiv.org/abs/1804.03235","papers","20180101Z00:00:00","","","Google; Google Brain; Google DeepMind","nlp/neural-networks","","CC-MAIN-2017-26","","","" "Sajjad Arshad, Seyed Ali Mirheidari, Tobias Lauinger, Bruno Crispo, Engin Kirda, William Robertson – Northeastern University, Boston, MA, USA; University of Trento, Trento, Italy","Large-Scale Analysis of Style Injection by Relative Path Overwrite","https://doi.org/10.1145/3178876.3186090","papers","20180101Z00:00:00","relative path overwrite, scriptless attack, style injection","","Northeastern University, Boston, MA, USA; University of Trento, Trento, Italy","web-science, computer-security/web-application-security","We extract pages using relative-path stylesheets from the Common Crawl dataset [9], automatically test if style directives can be injected using RPO, and determine whether they are interpreted by the browser. [...] For finding the initial seed set of candidate pages with relative-path stylesheets, we leverage the Common Crawl from August 2016, which contains more than 1.6 billion pages. 
By using an existing dataset, we can quickly identify candidate pages without creating any web crawl traffic. We use a Java HTML parser to filter any pages containing only inline CSS or stylesheets referenced by absolute URLs, leaving us with over 203 million pages on nearly 6 million sites.","CC-MAIN-2016-36","","","" "Mikel Artetxe, Gorka Labaka, Eneko Agirre – University of the Basque Country, Spain","Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations","https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16935/16781","papers","20180101Z00:00:00","","","University of the Basque Country, Spain","nlp/semantics, nlp/word-embeddings, nlp/bilingual-word-embeddings","","","","","" "Mikel Artetxe, Gorka Labaka, Eneko Agirre – University of the Basque Country, Spain","A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings","https://arxiv.org/abs/1805.06297","papers","20180101Z00:00:00","","","University of the Basque Country, Spain","nlp/semantics, nlp/word-embeddings, nlp/bilingual-word-embeddings","","","","WMT-16-translation-task-common-crawl-corpus","" "Mikel Artetxe, Gorka Labaka, Iñigo Lopez-Gazpio, Eneko Agirre – University of the Basque Country, Spain","Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation","https://arxiv.org/abs/1809.02094","papers","20180101Z00:00:00","","","University of the Basque Country, Spain","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings, fastText-word-embeddings","" "Mikel Artetxe, Holger Schwenk – University of the Basque Country, Spain; Facebook AI Research","Margin-based parallel corpus mining with multilingual sentence embeddings","https://arxiv.org/abs/1811.01136","papers","20180101Z00:00:00","","","University of the Basque Country, Spain; Facebook AI Research","cc-cited-not-used, nlp/word-embeddings, nlp/sentence-embeddings, 
nlp/parallel-corpus","","","","" "Parnia Bahar, Christopher Brix, Hermann Ney – RWTH Aachen University, Germany","Towards two-dimensional sequence to sequence model in neural machine translation","https://arxiv.org/abs/1810.03975","papers","20180101Z00:00:00","","","RWTH Aachen University, Germany","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Krisztian Balog – University of Stavanger, Norway","Entity-oriented search","https://link.springer.com/content/pdf/10.1007/978-3-319-93935-3.pdf","papers","20180101Z00:00:00","","","University of Stavanger, Norway","information-retrieval, nlp/named-entity-recognition, linked data","Common Crawl⁵ is a nonprofit organization that regularly crawls the Web and makes the data publicly available. The datasets are hosted on Amazon S3 as part of the Amazon Public Datasets program.⁶ As of May 2017, the crawl contains 2.96 billion web pages and over 250 TB of uncompressed content (in WARC format). The Web Data Commons project⁷ extracts structured data from the Common Crawl and makes those publicly available (e.g., the Hyperlink Graph Dataset and the Web Table Corpus).","CC-MAIN-2017-22","","","" "Luciano Barbosa, Valter Crescenzi, Xin Luna Dong, Paolo Merialdo, Federico Piai, Disheng Qiu, Yanyan Shen, Divesh Srivastava – Universidade Federal de Pernambuco, Brazil; Roma Tre University, Italy; Amazon; Wanderio; Shanghai Jiao Tong University; AT&T Labs – Research","Big Data Integration for Product Specifications.","http://sites.computer.org/debull/A18june/A18JUN-CD.pdf#page=73","papers","20180101Z00:00:00","","","Universidade Federal de Pernambuco, Brazil; Roma Tre University, Italy; Amazon; Wanderio; Shanghai Jiao Tong University; AT&T Labs – Research","ir/information-extraction, ir/data-integration","About 68\% of the sources discovered by our approach were not present in Common Crawl. 
Only 20\% of our sources contained fewer pages than the same sources in Common Crawl, and a very small fraction of the pages in these sources were product pages: on a sample set of 12 websites where Common Crawl presented more pages than in our dataset, we evaluated that only 0.8\% of the pages were product pages.","","","","" "Luciano Barbosa, Valter Crescenzi, Xin Luna Dong, Paolo Merialdo, Federico Piai, Disheng Qiu, Yanyan Shen, Divesh Srivastava – Universidade Federal de Pernambuco, Brazil; Roma Tre University, Italy; Amazon; Wanderio; Shanghai Jiao Tong University; AT&T Labs – Research","Lessons Learned and Research Agenda for Big Data Integration of Product Specifications (Discussion Paper)","http://ceur-ws.org/Vol-2161/paper29.pdf","papers","20180101Z00:00:00","","","Universidade Federal de Pernambuco, Brazil; Roma Tre University, Italy; Amazon; Wanderio; Shanghai Jiao Tong University; AT&T Labs – Research","ir/information-extraction, ir/data-integration","Building a Benchmark Product Dataset – We compared the contents of our dataset with pages in Common Crawl, an open repository of web crawl data. About 68\% of the sources discovered by our approach were not present in Common Crawl. 
Only 20\% of our sources contained fewer pages than the same sources in Common Crawl, and a very small fraction of the pages in these sources were product pages: on a sample set of 12 websites where Common Crawl presented more pages than in our dataset, we evaluated that only 0.8\% of the pages were product pages.","","","","" "Michail Batikas, Jörg Claussen, Christian Peukert – LMU Munich, Germany; UCP – Católica Lisbon School of Business and Economics, Lisboa, Portugal","Follow The Money: Online Piracy and Self-Regulation in the Advertising Industry","http://www.cesifo-group.de/DocDL/cesifo1_wp6852.pdf","papers","20180101Z00:00:00","","","LMU Munich, Germany; UCP – Católica Lisbon School of Business and Economics, Lisboa, Portugal","web-science","We obtain archived versions of the HTML source code of all URLs for each domain in our gross sample from Common Crawl, a project that has crawled billions of webpages periodically since summer 2013.","","","","" "Leilani Battle, Peitong Duan, Zachery Miranda, Dana Mukusheva, Remco Chang, Michael Stonebraker – University of Washington, Seattle, WA, USA; Massachusetts Institute of Technology, Cambridge, MA, USA; Tufts University, Medford, MA, USA","Beagle: Automated Extraction and Interpretation of Visualizations from the Web","https://dl.acm.org/citation.cfm?id=3174168","papers","20180101Z00:00:00","","``How common is interactive visualization on the web?'' ``What is the most popular visualization design?'' ``How prevalent are pie charts really?'' These questions intimate the role of interactive visualization in the real (online) world. In this paper, we present our approach (and findings) to answering these questions. First, we introduce Beagle, which mines the web for SVG-based visualizations and automatically classifies them by type (i.e., bar, pie, etc.). With Beagle, we extract over 41,000 visualizations across five different tools and repositories, and classify them with 85\% accuracy, across 24 visualization types. 
Given this visualization collection, we study usage across tools. We find that most visualizations fall under four types: bar charts, line charts, scatter charts, and geographic maps. Though controversial, pie charts are relatively rare for the visualization tools that were studied. Our findings also suggest that the total visualization types supported by a given tool could factor into its ease of use. However this effect appears to be mitigated by providing a variety of diverse expert visualization examples to users.","University of Washington, Seattle, WA, USA; Massachusetts Institute of Technology, Cambridge, MA, USA; Tufts University, Medford, MA, USA","web-science, web-crawling","As found with other web crawling projects, such as the Common Crawl¹, our web crawls represent a specific point in time for the websites [...]","","","","" "Luigi Bellomarini, Ruslan R Fayzrakhmanov, Georg Gottlob, Andrey Kravchenko, Eleonora Laurenza, Yavor Nenov, Stephane Reissfelder, Emanuel Sallinger, Evgeny Sherkhonov, Lianlong Wu – University of Oxford, United Kingdom; Banca d’Italia, Italy; TU Wien, Austria","Data Science with Vadalog: Bridging Machine Learning and Reasoning","https://arxiv.org/abs/1807.08712","papers","20180101Z00:00:00","","","University of Oxford, United Kingdom; Banca d’Italia, Italy; TU Wien, Austria","ai/semantic-reasoning, ai/machine-learning","Enterprises increasingly depend on intelligent information systems that operationalise corporate knowledge as a unified source across system boundaries. [...] To maintain their competitive edge, companies need to incorporate multiple heterogeneous sources of information, including [...] 
external streams of unstructured data (e.g., news and social media feeds, and Common Crawl¹), [...]","","","","" "Luisa Bentivogli, Mauro Cettolo, Marcello Federico, Federmann Christian – FBK, Trento, Italy; Amazon AI, East Palo Alto, CA, USA, Microsoft Cloud+AI, Redmond, WA, USA","Machine Translation Human Evaluation: an investigation of evaluation based on Post-Editing and its relation with Direct Assessment","https://workshop2018.iwslt.org/downloads/Proceedings_IWSLT_2018.pdf#page=77","papers","20180101Z00:00:00","","","FBK, Trento, Italy; Amazon AI, East Palo Alto, CA, USA, Microsoft Cloud+AI, Redmond, WA, USA","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Janek Bevendorff, Benno Stein, Matthias Hagen, Martin Potthast – Bauhaus-Universität Weimar, Germany; Leipzig University, Germany","Elastic ChatNoir: Search Engine for the ClueWeb and the Common Crawl","https://doi.org/10.1007/978-3-319-76941-7_83","papers","20180101Z00:00:00","","","Bauhaus-Universität Weimar, Germany; Leipzig University, Germany","information-retrieval/search-engine","","CC-MAIN-2015-11","","","" "Paolo Boldi, Andrea Marino, Massimo Santini, Sebastiano Vigna – Università degli Studi di Milano, Italy","BUbiNG: Massive crawling for the masses","https://dl.acm.org/citation.cfm?id=3160017","papers","20180101Z00:00:00","","","Università degli Studi di Milano, Italy","web-crawling, web-science/hyperlinkgraph","","","","","WDC-hyperlinkgraph" "Fabienne Braune, Alex Fraser, Barry Haddow – University of Edinburgh","D1.2: Report on Improving Translation with Monolingual Data","http://www.himl.eu/files/D1.2_Using_Non_Parallel.pdf","papers","20180101Z00:00:00","","","University of Edinburgh","nlp/machine-translation","","","","","" "Tomáš Brychcín, Tomáš Hercig, Josef Steinberger, Michal Konkol – University of West Bohemia, Czech Republic","UWB at SemEval-2018 Task 10: Capturing Discriminative Attributes from Word Distributions","http://www.aclweb.org/anthology/S18-1153","papers","20180101Z00:00:00","","","University of West Bohemia, Czech Republic","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Michael Cafarella, Alon Halevy, Hongrae Lee, Jayant Madhavan, Cong Yu, Daisy Zhe Wang, Eugene Wu – Google Inc.; University of Michigan, USA; Megagon Labs; University of Florida, USA; Columbia University, USA","Ten years of webtables","https://dl.acm.org/citation.cfm?id=3275614","papers","20180101Z00:00:00","","","Google Inc.; University of Michigan, USA; Megagon Labs; University of Florida, USA; Columbia University, USA","semantic web, web tables, web-mining","Several researchers produced web tables from the public Common Crawl [1, 24, 15], thereby making them available to a broad audience outside the large Web companies.","","","","WDCWebTables, DresdenWebTableCorpus" "Casey Casalnuovo, Kenji Sagae, Prem Devanbu – University of California, Davis, USA","Studying the Difference Between Natural and Programming Language Corpora","https://link.springer.com/article/10.1007/s10664-018-9669-7","papers","20180101Z00:00:00","","","University of California, Davis, USA","nlp/corpus-construction, nlp/text-corpora, programming-languages, nlp/syntax","The German and Spanish corpora were selected from a sample of files from the unlabeled datasets from the CoNLL 2017 Shared Task (Ginter et al, 2017), which consist of web text obtained from CommonCrawl.⁸ Like the 1 billion token English corpus, we selected a random subsample to make these corpora size comparable with our other 
corpora. In this sample, we excluded files from the Wikipedia translations, as we observed Wikipedia formatting mixed in with some of the files.","","","conll-2017-shared-task","" "Xinghan Chen, Mingxing Zhang, Zheng Wang, Lin Zuo, Bo Li, Yang Yang – University of Electronic Science and Technology of China (UESTC), Chengdu, PR China","Leveraging Unpaired Out-of-Domain Data for Image Captioning","https://www.sciencedirect.com/science/article/abs/pii/S0167865518309358","papers","20180101Z00:00:00","","","University of Electronic Science and Technology of China (UESTC), Chengdu, PR China","nlp/text-generation, ai/image-classification, nlp/image-captioning, ai/deep-learning","","","","","" "Zewen Chi, Heyan Huang, Jiangui Chen, Hao Wu, Ran Wei – Beijing Institute of Technology, China","Zewen at SemEval-2018 Task 1: An Ensemble Model for Affect Prediction in Tweets","http://www.aclweb.org/anthology/S18-1046","papers","20180101Z00:00:00","","","Beijing Institute of Technology, China","nlp, nlp/sentiment-analysis, nlp/emotion-detection, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Mara Chinea-Rios, Alvaro Peris, Francisco Casacuberta – Universitat d'Alacant, Spain","Are Automatic Metrics Robust and Reliable in Specific Machine Translation Tasks?","http://rua.ua.es/dspace/handle/10045/76022","papers","20180101Z00:00:00","","","Universitat d'Alacant, Spain","nlp/machine-translation","In our setup, we trained a PB-SMT and a NMT system on the same data, from a general corpus extracted from websites (Common Crawl).","","","","" "Shamil Chollampatt, Hwee Tou Ng – NUS Graduate School for Integrative Sciences and Engineering; Department of Computer Science, National University of Singapore","A multilayer convolutional encoder-decoder neural network for grammatical error correction","https://arxiv.org/abs/1801.08831","papers","20180101Z00:00:00","","","NUS Graduate School for Integrative Sciences and Engineering; Department of Computer Science, National University of 
Singapore","nlp/grammatical-error-correction, nlp/word-embeddings, nlp/language-model","We also make use of the larger English corpora from Wikipedia (1.78B words) for pre-training the word embeddings, and a subset of the Common Crawl corpus (94B words) for training the language model for rescoring.","","","","" "Kenneth Clarkson, Anna Lisa Gentile, Daniel Gruhl, Petar Ristoski, Joseph Terdiman, Steve Welch – IBM Research Almaden, San Jose, USA","User-Centric Ontology Population","https://link.springer.com/chapter/10.1007/978-3-319-93417-4_8","papers","20180101Z00:00:00","","","IBM Research Almaden, San Jose, USA","semantic web, cc-cited-not-used, ontology extraction","","","","","" "Trevor Cohen, Dominic Widdows – University of Washington, Seattle, USA; Grab, Inc., Seattle, WA, USA","Bringing Order to Neural Word Embeddings with Embeddings Augmented by Random Permutations (EARP)","http://www.aclweb.org/anthology/K18-1045","papers","20180101Z00:00:00","","","University of Washington, Seattle, USA; Grab, Inc., Seattle, WA, USA","nlp/word-embeddings, cc-cited-not-used","","","","","" "Alexis Conneau, Douwe Kiela – Facebook Artificial Intelligence Research","SentEval: An evaluation toolkit for universal sentence representations","https://arxiv.org/abs/1803.05449","papers","20180101Z00:00:00","","","Facebook Artificial Intelligence Research","nlp/word-embeddings, nlp/sentence-embeddings, nlp/evaluation","","","","GloVe-word-embeddings, fastText-word-embeddings","" "Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R Bowman, Holger Schwenk, Veselin Stoyanov – Facebook AI Research, USA; New York University, USA","XNLI: Evaluating Cross-lingual Sentence Representations","https://arxiv.org/abs/1809.05053","papers","20180101Z00:00:00","","","Facebook AI Research, USA; New York University, USA","nlp/word-embeddings, nlp/sentence-embeddings","","","","fasttext-word-embeddings","" "Michael Conover, Matthew Hayes, Scott Blackburn, Pete Skomoroch, Sam Shah – 
Workday, Inc., San Francisco, CA, USA","Pangloss: Fast Entity Linking in Noisy Text Environments","https://dl.acm.org/citation.cfm?id=3219899","papers","20180101Z00:00:00","","","Workday, Inc., San Francisco, CA, USA","ir/information-extraction","The Common Crawl dataset represents a sample of web crawl data containing raw web page data, metadata and text extracts, overseen by a 501(c)(3) nonprofit of the same name. Facilitating ease of access for industrial practitioners, the dataset is hosted for free on Amazon Web Services’ Public Data Set repository in addition to academic hosts the world over. As part of a batch Hadoop job run on a monthly basis we filter the Common Crawl data (∼70TB) down to records which contain at least one hyperlink that points to English Wikipedia. This corpus has proven particularly valuable as a source of signal for associating tokens with knowledge base entries in the context of domain-specific, messy natural language.","","","","" "Andreiwid Sheffer Correa, Pär-Ola Zander, Flavio Soares Correa da Silva – University of Sao Paulo, Sao Paulo, Brazil; Aalborg University, Aalborg, Denmark","Investigating open data portals automatically: a methodology and some illustrations","https://dl.acm.org/citation.cfm?id=3209292","papers","20180101Z00:00:00","","","University of Sao Paulo, Sao Paulo, Brazil; Aalborg University, Aalborg, Denmark","open data, information retrieval","","","","","" "J Shane Culpepper, Fernando Diaz, Mark D. 
Smucker – ACM","Research Frontiers in Information Retrieval: Report from the Third Strategic Workshop on Information Retrieval in Lorne (SWIRL 2018)","http://doi.acm.org/10.1145/3274784.3274788","papers","20180101Z00:00:00","","","ACM","cc-cited-not-used, information-retrieval","","","","","" "Alexander Czech – TU Wien, Austria","An Approach to Geotag a Web Sized Corpus of Documents with Addresses in Randstad, Netherlands","https://doi.org/10.3929/ethz-b-000225615","papers","20180101Z00:00:00","","","TU Wien, Austria","ir/geotagging","Common Crawl is a non-profit organization that provides raw web crawling data on a monthly basis. Their archives contain over 3.16 billion URLs with over 260 TiB of uncompressed content.","","","","" "Berkan Demirel, Ramazan Gokberk Cinbis, Nazli Ikizler-Cinbis – HAVELSAN Inc. Ankara, Turkey; Middle East Technical University Ankara, Turkey; Hacettepe University Ankara, Turkey","Zero-Shot Object Detection by Hybrid Region Embedding","https://arxiv.org/abs/1805.06157","papers","20180101Z00:00:00","","","HAVELSAN Inc. 
Ankara, Turkey; Middle East Technical University Ankara, Turkey; Hacettepe University Ankara, Turkey","ai/computer-vision, ai/pattern-recognition, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Pavel Denisov, Ngoc Thang Vu, Marc Ferras Font – University of Stuttgart, Germany","Unsupervised Domain Adaptation by Adversarial Learning for Robust Speech Recognition","https://arxiv.org/abs/1807.11284","papers","20180101Z00:00:00","","","University of Stuttgart, Germany","nlp, speech-recognition","..., 197 million words of the Italian Deduplicated CommonCrawl Text are used to build the Italian language model.","","","","" "Sunipa Dev, Safia Hassan, Jeff M Phillips – University of Utah","Absolute Orientation for Word Embedding Alignment","https://arxiv.org/abs/1806.01330","papers","20180101Z00:00:00","","","University of Utah","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Sergey Edunov, Myle Ott, Michael Auli, David Grangier – Facebook AI Research, USA; Google Brain, Mountain View, CA, USA","Understanding Back-Translation at Scale","https://arxiv.org/abs/1808.09381","papers","20180101Z00:00:00","","","Facebook AI Research, USA; Google Brain, Mountain View, CA, USA","nlp/machine-translation","","","","","" "Julia Efremova, Ian Endres, Isaac Vidas, Ofer Melnik – HERE Technologies, Amsterdam, The Netherlands","A Geo-Tagging Framework for Address Extraction from Web Pages","https://link.springer.com/chapter/10.1007/978-3-319-95786-9_22","papers","20180101Z00:00:00","","","HERE Technologies, Amsterdam, The Netherlands","semantic-web/microformats","Common Crawl is a public corpus, mostly stored on Amazon Web Services³. A subset of the CommonCrawl dataset has schema information in the microdata format","","","","" "Samer El Zant, Katia Jaffrès-Runser, Klaus M. Frahm, Dima L. 
Shepelyansky – Université de Toulouse, France","Interactions and influence of world painters from the reduced Google matrix of Wikipedia networks","https://ieeexplore.ieee.org/abstract/document/8449078","papers","20180101Z00:00:00","","This paper concentrates on extracting painting art history knowledge from the network structure of Wikipedia. Therefore, we construct theoretical networks of webpages representing the hyper-linked structure of articles of seven Wikipedia language editions. These seven networks are analyzed to extract the most influential painters in each edition using Google matrix theory. Importance of webpages of over 3000 painters is measured using the PageRank algorithm. The most influential painters are enlisted and their ties are studied with the reduced Google matrix analysis. The reduced Google matrix is a powerful method that captures both direct and hidden interactions between a subset of selected nodes taking into account the indirect links between these nodes via the remaining part of large global network. This method originates from the scattering theory of nuclear and mesoscopic physics and field of quantum chaos. In this paper, we show that it is possible to extract from the components of the reduced Google matrix meaningful information on the ties between these painters. For instance, our analysis groups together painters that belong to the same painting movement and shows meaningful ties between painters of different movements. We also determine the influence of painters on world countries using link sensitivity between Wikipedia articles of painters and countries. The reduced Google matrix approach allows to obtain a balanced view of various cultural opinions of Wikipedia language editions. The world countries with the largest number of top painters of selected seven Wikipedia editions are found to be Italy, France, and Russia. 
We argue that this approach gives meaningful information about art and that it could be a part of extensive network analysis on human knowledge and cultures.","Université de Toulouse, France","web-science/hyperlinkgraph, graph-processing, cc-cited-not-used","","","","","" "Cristina Espana-Bonet, Juliane Stiller, Sophie Henning – Universität des Saarlandes, Germany; Humboldt-Universität zu Berlin, Germany","M1.2--Corpora for the Machine Translation Engines","https://www.clubs-project.eu/assets/publications/project/M1.2_MTcorpora_v4.0.pdf","papers","20180101Z00:00:00","","","Universität des Saarlandes, Germany; Humboldt-Universität zu Berlin, Germany","nlp/machine-translation, nlp/corpora","","","","","WMT-13-translation-task-common-crawl-corpus" "Diego Esteves, Aniketh Janardhan Reddy, Piyush Chawla, Jens Lehmann – University of Bonn, Germany; University of Ohio, USA; Carnegie Mellon University, Pittsburgh, USA;","Belittling the Source: Trustworthiness Indicators to Obfuscate Fake News on the Web","https://arxiv.org/abs/1809.00494","papers","20180101Z00:00:00","","","University of Bonn, Germany; University of Ohio, USA; Carnegie Mellon University, Pittsburgh, USA;","nlp, text classification, content credibility, information retrieval","PageRankCC: PageRank information computed through the CommonCrawl Corpus","","","","" "Stefano Faralli, Els Lefever, Simone Paolo Ponzetto – University of Mannheim, Germany; Ghent University, Belgium","MIsA: Multilingual IsA Extraction from Corpora","https://biblio.ugent.be/publication/8562721","papers","20180101Z00:00:00","","","University of Mannheim, Germany; Ghent University, Belgium","nlp/semantics, data-mining, hypernymy","","","","","WDC-WebIsADb" "Ruslan R. 
Fayzrakhmanov, Emanuel Sallinger, Ben Spencer, Tim Furche, Georg Gottlob – University of Oxford, Oxford, United Kingdom","Browserless web data extraction: challenges and opportunities","https://dl.acm.org/citation.cfm?id=3186008","papers","20180101Z00:00:00","","","University of Oxford, Oxford, United Kingdom","information retrieval, web-crawling, web-scraping, web-mining","The random sites were chosen by randomly sampling URLs from the Common Crawl [10] search index dataset, which includes around 3 billion web pages.","","","","" "Agostino Funel – ENEA, Italy","Analysis of the Web Graph Aggregated by Host and Pay-Level Domain","https://arxiv.org/abs/1802.05435","papers","20180101Z00:00:00","","","ENEA, Italy","web-science/hyperlinkgraph","","hyperlinkgraph/cc-main-2017-aug-sep-oct/hostgraph, hyperlinkgraph/cc-main-2017-aug-sep-oct/domaingraph","","","" "Andres Garcia, Jose Manuel Gomez-Perez – expertsystem.com, Madrid, Spain","Not just about size-A Study on the Role of Distributed Word Representations in the Analysis of Scientific Publications","https://arxiv.org/abs/1804.01772","papers","20180101Z00:00:00","","","expertsystem.com, Madrid, Spain","nlp/word-embeddings","","","","fastText-word-embeddings, GloVe-word-embeddings","" "Andres Garcia, Jose Manuel Gomez-Perez – expertsystem.com, Madrid, Spain","Not just about size-A Study on the Role of Distributed Word Representations in the Analysis of Scientific Publications","http://ceur-ws.org/Vol-2106/paper3.pdf","papers","20180101Z00:00:00","","","expertsystem.com, Madrid, Spain","nlp/word-embeddings","","","","fastText-word-embeddings, GloVe-word-embeddings","" "Nikhil Garg, Londa Schiebinger, Dan Jurafsky, James Zou – Stanford University, USA; Chan Zuckerberg Biohub, San Francisco, CA, USA","Word embeddings quantify 100 years of gender and ethnic stereotypes","https://www.pnas.org/content/115/16/E3635.short","papers","20180101Z00:00:00","","","Stanford University, USA; Chan Zuckerberg Biohub, San Francisco, CA, 
USA","nlp/semantics, nlp/word-embeddings, ai/ethics-of-machine-learning, ai/machine-learning","","","","GloVe-word-embeddings","" "Majid Ghasemi-Gol, Pedro Szekely – University of Southern California; Information Science Institute","TabVec: Table Vectors for Classification of Web Tables","https://arxiv.org/abs/1802.06290","papers","20180101Z00:00:00","","","University of Southern California; Information Science Institute","web-tables, information-extraction","[...] we use a random sample of July 2015 Common Crawl (WCC) as a generic domain to compare our system with the state of the art systems","CC-MAIN-2015-32","","","WDCWebTables, DresdenWebTableCorpus" "Michael Glass, Alfio Gliozzo – IBM Research AI","Discovering Implicit Knowledge with Unary Relations","http://www.aclweb.org/anthology/P18-1147","papers","20180101Z00:00:00","","","IBM Research AI","ai/knowledge-base","","","","","" "Michael Glass, Alfio Gliozzo – Knowledge Induction and Reasoning Group, IBM Research AI, New York, USA","A Dataset for Web-Scale Knowledge Base Population","https://link.springer.com/chapter/10.1007/978-3-319-93417-4_17","papers","20180101Z00:00:00","","","Knowledge Induction and Reasoning Group, IBM Research AI, New York, USA","ai/semantic-reasoning, ai/knowledge-base","We introduce and release CC-DBP, a web-scale dataset for training and benchmarking KBP systems. 
The dataset is based on Common Crawl as the corpus and DBpedia as the target knowledge base [...]","CC-MAIN-2017-26","CC-DBP","","" "Michael Glass, Alfio Gliozzo, Oktie Hassanzadeh, Nandana Mihindukulasooriya, Gaetano Rossiello – IBM Research AI, New York, USA; Universidad Politécnica de Madrid, Spain; University of Bari, Italy","Inducing implicit relations from text using distantly supervised deep nets","https://link.springer.com/chapter/10.1007/978-3-030-00671-6_3","papers","20180101Z00:00:00","","","IBM Research AI, New York, USA; Universidad Politécnica de Madrid, Spain; University of Bari, Italy","ai/knowledge-base, ai/deep-learning, semantic web","","","","CC-DBP","" "Pranav Goel, Yoichi Matsuyama, Michael Madaio, Justine Cassell – Indian Institute of Technology (BHU), India; Carnegie Mellon University","“I think it might help if we multiply, and not add”: Detecting Indirectness in Conversation","http://articulab.hcii.cs.cmu.edu/wordpress/wp-content/uploads/2018/04/Goel-IWSDS2018_camera-ready_13Mar.pdf","papers","20180101Z00:00:00","","","Indian Institute of Technology (BHU), India; Carnegie Mellon University","nlp/dialogue-systems, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Viktor Golem, Mladen Karan, Jan Šnajder – University of Zagreb, Croatia","Combining Shallow and Deep Learning for Aggressive Text Detection","www.aclweb.org/anthology/W18-4422","papers","20180101Z00:00:00","","","University of Zagreb, Croatia","nlp/text-classification, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Paul Gooding, Melissa Terras, Linda Berube – University of East Anglia, United Kingdom; University of Edinburgh, United Kingdom","Legal Deposit Web Archives and the Digital Humanities: A Universe of Lost Opportunity?","http://eprints.gla.ac.uk/168229/","papers","20180101Z00:00:00","","","University of East Anglia, United Kingdom; University of Edinburgh, United Kingdom","web-archiving/legal-aspects","Restricted deposit library access requires researchers 
to look elsewhere for portable web data: by undertaking their own web crawls, or by utilising datasets from Common Crawl (http://commoncrawl.org/) and the Internet Archive (https://archive.org). Both organisations provide vital services to researchers, and both innovate in areas that would traditionally fall under the deposit libraries’ purview. They support their mission by exploring the boundaries of copyright, including exceptions for non-commercial text and data mining (Intellectual Property Office, 2014). This contrast between risk-enabled independent organisations and deposit libraries, described by interviewees as risk averse, challenges library/DH collaboration models such as BL Labs (http://labs.bl.uk) and Library of Congress Labs (https://labs.loc.gov).","","","","" "Rajendra Banjade, Nabin Maharjan, Dipesh Gautam, Frank Adrasik, Arthur C. Graesser, Vasile Rus – University of Memphis, USA","Pooling Word Vector Representations Across Models","https://www.springer.com/de/book/9783319771151","papers","20180101Z00:00:00","","","University of Memphis, USA","nlp/word-embeddings, nlp/semantics","","","","GloVe-word-embeddings","" "Gabriel Grand, Idan Asher Blank, Francisco Pereira, Evelina Fedorenko – Harvard University; Massachusetts Institute of Technology; Siemens Healthineers; Massachusetts General Hospital; Harvard Medical School","Semantic projection: recovering human knowledge of multiple, distinct object features from word embeddings","https://arxiv.org/abs/1802.01241","papers","20180101Z00:00:00","","","Harvard University; Massachusetts Institute of Technology; Siemens Healthineers; Massachusetts General Hospital; Harvard Medical School","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, Tomas Mikolov – Facebook AI Research; École polytechnique fédérale de Lausanne EPFL, Switzerland","Learning word vectors for 157 
languages","https://www.aclweb.org/anthology/L18-1550","papers","20180101Z00:00:00","","Distributed word representations, or word vectors, have recently been applied to many tasks in natural language processing, leading to state-of-the-art performance. A key ingredient to the successful application of these representations is to train them on very large corpora, and use these pre-trained models in downstream tasks. In this paper, we describe how we trained such high quality word representations for 157 languages. We used two sources of data to train these models: the free online encyclopedia Wikipedia and data from the common crawl project. We also introduce three new word analogy datasets to evaluate these word vectors, for French, Hindi and Polish. Finally, we evaluate our pre-trained word vectors on 10 languages for which evaluation datasets exist, showing very strong performance compared to previous models.","Facebook AI Research; École polytechnique fédérale de Lausanne EPFL, Switzerland","nlp/word-embeddings","The common crawl is a non profit organization which crawls the web and makes the resulting data publicly available. This large scale corpus was previously used to estimate n-gram language models (Buck et al., 2014) or to learn English word vectors (Pennington et al., 2014). To the best of our knowledge, it was not used yet to learn word vectors for a large set of languages. The data is distributed either as raw HTML pages, or as WET files which contain the extracted text data, converted to UTF-8. We decided to use the extracted text data, as it is much smaller in size, and easier to process (no need to remove HTML). 
We downloaded the May 2017 crawl, corresponding to roughly 24 terabytes of raw text data.","CC-MAIN-2017-22 (WET)","fastText-word-embeddings","","" "Roman Grundkiewicz, Marcin Junczys-Dowmunt – University of Edinburgh, United Kingdom; Microsoft","Near Human-Level Performance in Grammatical Error Correction with Hybrid Machine Translation","https://arxiv.org/abs/1804.05945","papers","20180101Z00:00:00","","","University of Edinburgh, United Kingdom; Microsoft","nlp/machine-translation, nlp/grammatical-error-correction","","","","Ngrams-LMs-2013","" "Amir Hazem, Emmanuel Morin – Université de Nantes, France","Leveraging Meta-Embeddings for Bilingual Lexicon Extraction from Specialized Comparable Corpora","http://www.aclweb.org/anthology/C18-1080","papers","20180101Z00:00:00","","","Université de Nantes, France","nlp/machine-translation, nlp/lexikon, nlp/dictionary-creation","","","","","" "Michael A. Hedderich, Dietrich Klakow – Saarland University, Saarbrücken, Germany","Training a Neural Network in a Low-Resource Setting on Automatically Annotated Noisy Data","https://arxiv.org/abs/1807.00745","papers","20180101Z00:00:00","","","Saarland University, Saarbrücken, Germany","nlp/word-embeddings, ai/neural-networks","","","","GloVe-word-embeddings","" "Lena Hettinger, Alexander Dallmann, Albin Zehe, Thomas Niebler, Andreas Hotho – University of Würzburg, Germany","ClaiRE at SemEval-2018 Task 7: Classification of Relations using Embeddings","http://www.aclweb.org/anthology/S18-1134","papers","20180101Z00:00:00","","","University of Würzburg, Germany","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Lena Hettinger, Alexander Dallmann, Albin Zehe, Thomas Niebler, Andreas Hotho – University of Würzburg, Germany","ClaiRE at SemEval-2018 Task 7-Extended Version","https://arxiv.org/abs/1804.05825","papers","20180101Z00:00:00","","","University of Würzburg, Germany","nlp/semantics, nlp/word-embeddings","we employ a publicly available set of 
300-dimensional word embeddings trained with GloVe (Pennington et al., 2014) on the Common Crawl data","","","","" "Jiaji Huang, Yi Li, Wei Ping, Liang Huang – Baidu Research, Sunnyvale, CA, USA; School of EECS, Oregon State University, Corvallis, OR, USA","Large Margin Neural Language Model","https://arxiv.org/abs/1808.08987","papers","20180101Z00:00:00","","","Baidu Research, Sunnyvale, CA, USA; School of EECS, Oregon State University, Corvallis, OR, USA","nlp/language-model, nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Balázs Indig – MTA-PPKE Magyar Nyelvtechnológiai Kutatócsoport, Hungaria","Közös crawlnak is egy korpusz a vége-Korpuszépítés a CommonCrawl .hu domainjából","http://real.mtak.hu/73329/1/crawl.pdf","papers","20180101Z00:00:00","","","MTA-PPKE Magyar Nyelvtechnológiai Kutatócsoport, Hungaria","web-science","","CC-MAIN-2017-47","","","" "Mohit Iyyer, John Wieting, Kevin Gimpel, Luke Zettlemoyer – Allen Institute of Artificial Intelligence, Seattle, United States; UMass Amherst, United States; Carnegie Mellon University, Pittsburgh, PA, USA; Toyota Technological Institute at Chicago, IL, USA; University of Washington, Seattle, WA, USA","Adversarial example generation with syntactically controlled paraphrase networks","https://arxiv.org/abs/1804.06059","papers","20180101Z00:00:00","","","Allen Institute of Artificial Intelligence, Seattle, United States; UMass Amherst, United States; Carnegie Mellon University, Pittsburgh, PA, USA; Toyota Technological Institute at Chicago, IL, USA; University of Washington, Seattle, WA, USA","nlp/machine-translation, nlp/sentence-paraphrase, nlp/sentence-embeddings","","","WMT-16-translation-task-common-crawl-corpus, patent","","" "Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Hervé Jégou, Edouard Grave – Facebook AI Research","Loss in translation: Learning bilingual word mapping with a retrieval 
criterion","https://www.aclweb.org/anthology/papers/D/D18/D18-1330/","papers","20180101Z00:00:00","","","Facebook AI Research","nlp/word-embeddings, nlp/bilingual-word-embeddings","","","","fastText-word-embeddings","" "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, Kenneth Heafield – University of Edinburgh, United Kingdom; Microsoft","Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task","https://arxiv.org/abs/1804.05940","papers","20180101Z00:00:00","","","University of Edinburgh, United Kingdom; Microsoft","nlp/machine-translation, nlp/grammatical-error-correction","","","","","" "David Jurgens, Srijan Kumar, Raine Hoover, Dan McFarland, Dan Jurafsky – University of Michigan, USA; Stanford University, USA","Measuring the evolution of a scientific field through citation frames","https://doi.org/10.1162/tacl_a_00028","papers","20180101Z00:00:00","","","University of Michigan, USA; Stanford University, USA","nlp/word-embeddings, nlp/text-analysis, nlp/citation-analysis","","","","","GloVe-word-embeddings" "Tomer Kaftan, Magdalena Balazinska, Alvin Cheung, Johannes Gehrke – University of Washington; Microsoft","Cuttlefish: A Lightweight Primitive for Adaptive Query Processing","https://arxiv.org/abs/1802.09180","papers","20180101Z00:00:00","","","University of Washington; Microsoft","information retrieval, regular expression matching, query planning, SQL processing","... 
to search through a contiguously-stored sample of approximately 256 thousand internet web pages collected by the Common Crawl project.","","","","" "Alexander Kagoshima, Kai Londenberg, Fang Xu – Searchmetrics GmbH","Determination of content score","https://patents.google.com/patent/US20180121430A1/en","papers","20180101Z00:00:00","","","Searchmetrics GmbH","patent, cc-cited-not-used","The crawler module [310] may automatically crawl a network and acquire contents from one or more resources in the network, acquire the contents from an open repository of web crawl data such as CommonCrawl.org.","","","","" "Ajinkya Gorakhnath Kale, Thrivikrama Taula, Amit Srivastava, Sanjika Hewavitharana – eBay Inc.","Methods and systems for query segmentation","https://patents.google.com/patent/US20180329999A1/en","papers","20180101Z00:00:00","","","eBay Inc.","ir/query-segmentation, nlp/word-embeddings, patent","","","","","GloVe-word-embeddings" "Kokas Károly, Drótos László – Országos Széchényi Könyvtár, Hungary; SZTE Klebelsberg Könyvtár, Hungary","Webarchiválás és a történeti kutatások / Web Archiving and Historical Research","http://ojs.elte.hu/index.php/digitalisbolcseszet/article/view/129","papers","20180101Z00:00:00","","","Országos Széchényi Könyvtár, Hungary; SZTE Klebelsberg Könyvtár, Hungary","web-archiving, cc-cited-not-used","","","","","" "Issa M. 
Khalil, Bei Guan, Mohamed Nabeel, Ting Yu – Qatar Computing Research Institute, Doha, Qatar","A domain is only as good as its buddies: detecting stealthy malicious domains via graph inference","https://dl.acm.org/citation.cfm?id=3176329","papers","20180101Z00:00:00","","","Qatar Computing Research Institute, Doha, Qatar","computer-security/malicious-domain-detection, computer-security/internet-security, graph-processing","","","","","" "Huda Khayrallah, Brian Thompson, Kevin Duh, Philipp Koehn – Johns Hopkins University, USA","Regularized Training Objective for Continued Training for Domain Adaptation in Neural Machine Translation","https://www.aclweb.org/anthology/papers/W/W18/W18-2705/","papers","20180101Z00:00:00","","","Johns Hopkins University, USA","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Douwe Kiela, Changhan Wang, Kyunghyun Cho – Facebook AI Research, USA; New York University, USA; CIFAR Global Scholar, Canada","Dynamic meta-embeddings for improved sentence representations","https://www.aclweb.org/anthology/D18-1176","papers","20180101Z00:00:00","","While one of the first steps in many NLP systems is selecting what pre-trained word embeddings to use, we argue that such a step is better left for neural networks to figure out by themselves. To that end, we introduce dynamic meta-embeddings, a simple yet effective method for the supervised learning of embedding ensembles, which leads to state-of-the-art performance within the same model class on a variety of tasks. 
We subsequently show how the technique can be used to shed new light on the usage of word embeddings in NLP systems.","Facebook AI Research, USA; New York University, USA; CIFAR Global Scholar, Canada","nlp/sentence-embeddings, nlp/word-embeddings","","","","GloVe-word-embeddings, fastText-word-embeddings","" "Johannes Kiesel, Florian Kneist, Milad Alshomary, Benno Stein, Matthias Hagen, Martin Potthast – Paderborn University, Germany; Bauhaus-Universität Weimar, Germany; Martin-Luther-Universität Halle-Wittenberg, Germany; Leipzig University, Germany; Ulm University, Germany","Reproducible Web Corpora: Interactive Archiving with Automatic Quality Assessment","https://dl.acm.org/citation.cfm?id=3239574","papers","20180101Z00:00:00","","","Paderborn University, Germany; Bauhaus-Universität Weimar, Germany; Martin-Luther-Universität Halle-Wittenberg, Germany; Leipzig University, Germany; Ulm University, Germany","web-mining, nlp/web-as-corpus","To build a solid benchmark dataset for web reproduction quality assessment, we carefully sampled web pages with the goal of representing a wide cross-section of the different types and genres of web pages found on the web. As a population of web pages to draw a sample from, we resort to the recent billion-page Common Crawl 2017-04 [36]. From there, we primarily sampled pages from most of the well-known sites—as defined by the website’s Alexa traffic rank [1]⁶—to ensure that our sample encompasses pages using the most recent web technologies and design standards. Moreover, pages from a number of less well-known sites have been included. 
Altogether, the Webis Web Archive 17 comprises 10,000 web pages.","CC-MAIN-2017-04","","","" "Daesik Kim, Seonhoon Kim, Nojun Kwak – Seoul National University, South Korea; V.DO Inc., South Korea; Naver Corporation, South Korea","Textbook Question Answering with Knowledge Graph Understanding and Unsupervised Open-set Text Comprehension","https://arxiv.org/abs/1811.00232","papers","20180101Z00:00:00","","","Seoul National University, South Korea; V.DO Inc., South Korea; Naver Corporation, South Korea","nlp/question-answering, nlp/word-embeddings, nlp/knowledge-graph, nlp/text-comprehension","","","","GloVe","" "Shun Kiyono, Jun Suzuki, Kentaro Inui – Tohoku University, Japan; Center for Advanced Intelligence Project, Japan","Mixture of Expert/Imitator Networks: Scalable Semi-supervised Learning Framework","https://arxiv.org/abs/1810.05788","papers","20180101Z00:00:00","","","Tohoku University, Japan; Center for Advanced Intelligence Project, Japan","cc-cited-not-used, nlp/text-classification, ai/deep-learning, ai/neural-networks","","","","","" "Rebecca Knowles, Philipp Koehn – Johns Hopkins University, USA","Context and Copying in Neural Machine Translation","http://www.aclweb.org/anthology/D18-1339","papers","20180101Z00:00:00","","","Johns Hopkins University, USA","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Jacob Krantz, Jugal Kalita – Gonzaga University, USA; University of Colorado, USA","Abstractive Summarization Using Attentive Neural Techniques","https://arxiv.org/abs/1810.08838","papers","20180101Z00:00:00","","","Gonzaga University, USA; University of Colorado, USA","nlp/text-summarization, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Dmitry Kravchenko, Lidia Pivovarova – Ben-Gurion University of the Negev, Israel; University of Helsinki, Finland","DL Team at SemEval-2018 Task 1: Tweet Affect Detection using Sentiment Lexicons and 
Embeddings","http://www.aclweb.org/anthology/S18-1025","papers","20180101Z00:00:00","","","Ben-Gurion University of the Negev, Israel; University of Helsinki, Finland","nlp/sentiment-analysis","","","","GloVe-word-embeddings","" "Artur Kulmizev – University of Groningen, The Netherlands","Multilingual word embeddings and their utility in cross-lingual learning","http://hdl.handle.net/10810/29083","papers","20180101Z00:00:00","","","University of Groningen, The Netherlands","nlp/semantics, nlp/word-embeddings, cc-cited-not-used","","","","","" "Artur Kulmizev, Mostafa Abdou, Vinit Ravishankar, Malvina Nissim – University of Groningen, The Netherlands; Institute of Formal and Applied Linguistics Charles University in Prague, Czech Republic","Discriminator at SemEval-2018 Task 10: Minimally Supervised Discrimination","http://www.aclweb.org/anthology/S18-1167","papers","20180101Z00:00:00","","","University of Groningen, The Netherlands; Institute of Formal and Applied Linguistics Charles University in Prague, Czech Republic","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "José Lages, Dima L. Shepelyansky, Andrei Zinovyev – Université de Franche-Comté, Besançon, France","Inferring hidden causal relations between pathway members using reduced Google matrix of directed biological networks","https://doi.org/10.1371/journal.pone.0190812","papers","20180101Z00:00:00","","","Université de Franche-Comté, Besançon, France","cc-cited-not-used, graph-processing, web-science/hyperlinkgraph, network analysis, biochemistry, protein structure","At present directed networks of real systems can be very large (about 4.2 millions for the English Wikipedia edition in 2013 [18] or 3.5 billion web pages for a publicly accessible web crawl that was gathered by the Common Crawl Foundation in 2012 [53: Meusel R, Vigna S, Lehmberg O, Bizer C. The graph structure in the web—analyzed on different aggregation levels. J. Web Sci. 
2015;1:33.]).","","","","" "Oliver Lehmberg, Oktie Hassanzadeh – University of Mannheim, Germany; IBM Research, Yorktown Heights, New York, USA","Ontology Augmentation Through Matching with Web Tables","http://disi.unitn.it/~pavel/om2018/papers/om2018_LTpaper4.pdf","papers","20180101Z00:00:00","","","University of Mannheim, Germany; IBM Research, Yorktown Heights, New York, USA","semantic web, ontology extraction, web tables","We perform an empirical study of the performance of this approach in using Web Tables extracted from the Common Crawl to augment the properties in DBpedia ontology.","","","WDCWebTables","" "Tao Li, Lei Lin, Minsoo Choi, Kaiming Fu, Siyuan Gong, Jian Wang – Purdue University, Indiana, USA","Youtube av 50k: an annotated corpus for comments in autonomous vehicles","https://arxiv.org/abs/1807.11227","papers","20180101Z00:00:00","","","Purdue University, Indiana, USA","cc-cited-not-used, nlp/corpus-construction, nlp/opinion-mining, nlp/sentiment-analysis","","","","","" "Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency – Carnegie Mellon University","Multimodal Language Analysis with Recurrent Multistage Fusion: Supplementary Material","https://arxiv.org/abs/1808.03920","papers","20180101Z00:00:00","","","Carnegie Mellon University","nlp/multi-modality, nlp/language-model","We used 300 dimensional Glove word embeddings trained on 840 billion tokens from the common crawl dataset (Pennington et al., 2014).","","","GloVe-word-embeddings","" "Xiaojing Liao, Sumayah Alrwais, Kan Yuan, Luyi Xing, XiaoFeng Wang, Shuang Hao, Raheem Beyah – Indiana University Bloomington, USA; King Saud University, Saudi Arabia; University of Texas at Dallas, USA; Georgia Institute of Technology, USA","Cloud repository as a malicious service: challenge, identification and implication","https://cybersecurity.springeropen.com/articles/10.1186/s42400-018-0015-6","papers","20180101Z00:00:00","","","Indiana University Bloomington, USA; King Saud University, Saudi 
Arabia; University of Texas at Dallas, USA; Georgia Institute of Technology, USA","computer-security/malicious-hosting-service, computer-security/internet-security","[...], we developed BarFinder, a scanner that automatically detects Bars through inspecting the topological relations between websites and the cloud bucket they use, in an attempt to capture Bars based on the external features of the websites they serve. [...] Running the scanner over all the data collected by the Common Crawl (Crawl 2015), which indexed five billion web pages, for those associated with all major cloud storage providers (including Amazon S3, Cloudfront, Google Drive, etc.), we found around 1 million sites utilizing 6885 repositories hosted on these clouds. [...] We built the site list with the help of Common Crawl (Crawl 2015), a public big data project that crawls about 5 billion webpages each month through a large-scale Hadoop-based crawler and maintains lists of the crawled websites and their embedded links. Searching the Common Crawl (Crawl 2015) dataset, collected in February 2015, for the websites loading content from the 400 clean and malicious buckets identified above, we found 141,149 websites, were used by our crawler. [...] We further developed a tool in Python to recover cloud URLs from the web content gathered by Common Crawl.","CC-MAIN-2015-11","","","" "Dan Liu, Junhua Liu, Wu Guo, Shifu Xiong, Zhiqiang Ma, Rui Song, Chongliang Wu, Quan Liu – University of Science and Technology of China, China; IFLYTEK Co. LTD.","The USTC-NEL Speech Translation system at IWSLT 2018","https://arxiv.org/abs/1812.02455","papers","20180101Z00:00:00","","","University of Science and Technology of China, China; IFLYTEK Co. 
LTD.","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Bingbin Liu, Serena Yeung, Edward Chou, De-An Huang, Li Fei-Fei, Juan Carlos Niebles – Stanford University, USA; Google Cloud AI, Mountain View, USA","Temporal Modular Networks for Retrieving Complex Compositional Activities in Videos","http://openaccess.thecvf.com/content_ECCV_2018/html/Bingbin_Liu_Temporal_Modular_Networks_ECCV_2018_paper.html","papers","20180101Z00:00:00","","","Stanford University, USA; Google Cloud AI, Mountain View, USA","ai/computer-vision, ir/video-retrieval, ai/action-recognition, nlp/word-embeddings","","","","","" "Chi-kiu Lo, Michel Simard, Darlene Stewart, Samuel Larkin, Cyril Goutte, Patrick Littell – National Research Council, Canada","Accurate semantic textual similarity for cleaning noisy parallel corpora using semantic machine translation evaluation metric: The NRC supervised submissions to the Parallel Corpus Filtering task","http://www.aclweb.org/anthology/W18-6481","papers","20180101Z00:00:00","","","National Research Council, Canada","cc-cited-not-used, nlp/machine-translation, nlp/corpus-construction","","","","","" "Colin Lockard, Xin Luna Dong, Arash Einolghozati, Prashant Shiralkar – amazon.com","CERES: Distantly Supervised Relation Extraction from the Semi-Structured Web","https://arxiv.org/abs/1804.04635","papers","20180101Z00:00:00","","","amazon.com","ir/information-extraction, ir/relation-extraction","The CommonCrawl corpus consists of monthly snapshots of pages from millions of websites [1] on the Web. We started with a few well-known sites, including rottentomatoes.com, boxofficemojo.com, and themoviedb.org. Based on a Wikipedia list of the largest global film industries by admissions, box office, and number of productions⁸, we then issued Google searches for terms corresponding to these countries, such as “Nigerian film database” and recorded resulting sites that had detail pages related to movies. 
We also issued a few additional searches related to specific genres we thought may not be well-represented in mainstream sites, including “animated film database” and “documentary film database”. After compiling our list of sites, we then checked CommonCrawl⁹ and kept all sites with more than one hundred pages available. Our final list contains a broad mix of movie sites, including sites based around national film industries, genres, film music, and screen size. Most are in English, but the set also includes sites in Czech, Danish, Icelandic, Italian, Indonesian, and Slovak. ⁸https://en.wikipedia.org/wiki/Film_industry ⁹For each site, we scanned the CommonCrawl indices for all monthly scrapes prior to January 2018 and downloaded all pages for the site from the scrape with the largest number of unique webpages. Note that these scrapes do not necessarily obtain all pages present on a site, so the retrieved pages represent only a subset of the full site.","CC-MAIN-201[3-7]-*","","","" "Gaurav Maheshwari, Priyansh Trivedi, Denis Lukovnikov, Nilesh Chakraborty, Asja Fischer, Jens Lehmann – University of Bonn, Germany; Ruhr University, Bochum, Germany","Learning to Rank Query Graphs for Complex Question Answering over Knowledge Graphs","https://arxiv.org/abs/1811.01118","papers","20180101Z00:00:00","","","University of Bonn, Germany; Ruhr University, Bochum, Germany","information retrieval, nlp/question-answering, nlp/knowledge-graph, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Jose L. 
Martinez-Rodriguez, Aidan Hogan, Ivan Lopez-Arevalo – Cinvestav Tamaulipas, Ciudad Victoria, Mexico; University of Chile, Chile","Information extraction meets the Semantic Web: A survey","https://content.iospress.com/articles/semantic-web/sw180333","papers","20180101Z00:00:00","","","Cinvestav Tamaulipas, Ciudad Victoria, Mexico; University of Chile, Chile","cc-cited-not-used, semantic web, linked data, information extraction","","","","","" "Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher – Salesforce Research","The natural language decathlon: Multitask learning as question answering","https://arxiv.org/abs/1806.08730","papers","20180101Z00:00:00","","","Salesforce Research","nlp/question-answering, nlp/machine-translation, nlp/text-summarization, nlp/sentiment-analysis, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Bryan McCann, Caiming Xiong, Richard Socher – Salesforce.com, Inc.","Natural language processing using context-specific word vectors","https://patents.google.com/patent/US20180373682A1/en","papers","20180101Z00:00:00","","","Salesforce.com, Inc.","nlp/word-embeddings, patent","","","","","GloVe-word-embeddings" "Bryan McCann, Caiming Xiong, Richard Socher – Salesforce.com, Inc.","Natural language processing using a neural network","https://patents.google.com/patent/US20180349359A1/en","papers","20180101Z00:00:00","","","Salesforce.com, Inc.","nlp/word-embeddings, patent","","","","","GloVe-word-embeddings" "Evert Meijers, Antoine Peris – Delft University of Technology, The Netherlands","Using toponym co-occurrences to measure relationships between places: review, application and evaluation","https://www.tandfonline.com/doi/abs/10.1080/12265934.2018.1497526","papers","20180101Z00:00:00","","","Delft University of Technology, The Netherlands","nlp, coocurrences, toponymy, urban system, place name disambiguation, semantic relatedness","We innovate by exploiting a so far unparalleled amount of data, namely the billions of web 
pages contained in the commoncrawl web archive, and by applying the method also to small places that tend to be ignored by other methods. [...] we use the March 2017 data. The Common Crawl data comes in three formats, of which the WET format is most useful for the co-occurrence method as it only contains extracted plain text.","","","","" "Hardik Meisheri, Lipika Dey – TCS Research, New Delhi, India","TCS Research at SemEval-2018 Task 1: Learning Robust Representations using Multi-Attention Architecture","http://www.aclweb.org/anthology/S18-1043","papers","20180101Z00:00:00","","","TCS Research, New Delhi, India","nlp/sentiment-analysis","","","","GloVe-word-embeddings","" "Todor Mihaylov, Peter Clark, Tushar Khot, Ashish Sabharwal – Allen Institute for Artificial Intelligence, Seattle, USA; Heidelberg University, Germany","Can a suit of armor conduct electricity? a new dataset for open book question answering","https://www.aclweb.org/anthology/D18-1260","papers","20180101Z00:00:00","","We present a new kind of question answering dataset, OpenBookQA, modeled after open book exams for assessing human understanding of a subject. The open book that comes with our questions is a set of 1326 elementary level science facts. Roughly 6000 questions probe an understanding of these facts and their application to novel situations. This requires combining an open book fact (e.g., metals conduct electricity) with broad common knowledge (e.g., a suit of armor is made of metal) obtained from other sources. While existing QA datasets over documents or knowledge bases, being generally self-contained, focus on linguistic understanding, OpenBookQA probes a deeper understanding of both the topic{---}in the context of common knowledge{---}and the language it is expressed in. Human performance on OpenBookQA is close to 92{\%}, but many state-of-the-art pre-trained QA methods perform surprisingly poorly, worse than several simple neural baselines we develop. 
Our oracle experiments designed to circumvent the knowledge retrieval bottleneck demonstrate the value of both the open book and additional facts. We leave it as a challenge to solve the retrieval problem in this multi-hop setting and to close the large gap to human performance.","Allen Institute for Artificial Intelligence, Seattle, USA; Heidelberg University, Germany","nlp/question-answering, nlp/word-embeddings, nlp/corpus-construction","For all experiments we used 300-dimensional GloVe (Pennington et al., 2014) embeddings pre-trained on 840B tokens from Common Crawl (https://nlp.stanford.edu/projects/glove/).","","","GloVe-word-embeddings","" "Sewon Min, Victor Zhong, Richard Socher, Caiming Xiong – Seoul National University, South Korea; Salesforce Research","Efficient and Robust Question Answering from Minimal Context over Documents","https://arxiv.org/abs/1805.08092","papers","20180101Z00:00:00","","","Seoul National University, South Korea; Salesforce Research","nlp/question-answering, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Bahman Mirheidari, Daniel Blackburn, Traci Walker, Annalena Venneri, Markus Reuber, Heidi Christensen – University of Sheffield, United Kingdom; Royal Hallamshire Hospital, United Kingdom","Detecting signs of dementia using word vector representations","https://www.isca-speech.org/archive/Interspeech_2018/pdfs/1764.pdf","papers","20180101Z00:00:00","","","University of Sheffield, United Kingdom; Royal Hallamshire Hospital, United Kingdom","nlp/word-embeddings, nlp/speech-recognition, nlp/clinical-application, dementia detection","","","","GloVe-word-embeddings","" "Alistair Moffat, Matthias Petri – University of Melbourne, Australia","Index compression using byte-aligned ANS coding and two-dimensional contexts","https://dl.acm.org/citation.cfm?id=3159663","papers","20180101Z00:00:00","","We examine approaches used for block-based inverted index compression, such as the OptPFOR mechanism, in which fixed-length blocks of postings data
are compressed independently of each other. Building on previous work in which asymmetric numeral systems (ANS) entropy coding is used to represent each block, we explore a number of enhancements: (i) the use of two-dimensional conditioning contexts, with two aggregate parameters used in each block to categorize the distribution of symbol values that underlies the ANS approach, rather than just one; (ii) the use of a byte-friendly strategic mapping from symbols to ANS codeword buckets; and (iii) the use of a context merging process to combine similar probability distributions. Collectively, these improvements yield superior compression for index data, outperforming the reference point set by the Interp mechanism, and hence representing a significant step forward. We describe experiments using the 426 GiB gov2 collection and a new large collection of publicly-available news articles to demonstrate that claim, and provide query evaluation throughput rates compared to other block-based mechanisms.","University of Melbourne, Australia","information-retrieval/search-engine, information-retrieval/inverted-index","The second pair of test files are derived from publicly available web-sourced news articles² [²http://commoncrawl.org/2016/10/news-dataset-available/], taking English language news sources (as identified by Apache Tika) from 01/09/2016 up until and including 28/02/2017, that is, a six month crawl period that contains 7,508,082 documents.","CC-NEWS","","","" "Nkwebi Motlogelwa, Edwin Thuma, Tebo Leburu-Dingalo – University of Botswana, Botswana","Merging search results generated by multiple query variants using data fusion","http://ceur-ws.org/Vol-2125/paper_194.pdf","papers","20180101Z00:00:00","","","University of Botswana, Botswana","ir/multilingual-information-retrieval, ir/biomedical-information-extraction, ir/query-expansion","","","","CLEF-eHealth-2018-IR-task","" "Mathieu Nassif, Christoph Treude, Martin Robillard – McGill University School of Computer 
Science, Montreal, Quebec, Canada","Automatically Categorizing Software Technologies","https://ieeexplore.ieee.org/abstract/document/8359344","papers","20180101Z00:00:00","","Informal language and the absence of a standard taxonomy for software technologies make it difficult to reliably analyze technology trends on discussion forums and other on-line venues. We propose an automated approach called Witt for the categorization of software technology (an expanded version of the hypernym discovery problem). Witt takes as input a phrase describing a software technology or concept and returns a general category that describes it (e.g., integrated development environment), along with attributes that further qualify it (commercial, php, etc.). By extension, the approach enables the dynamic creation of lists of all technologies of a given type (e.g., web application frameworks). Our approach relies on Stack Overflow and Wikipedia, and involves numerous original domain adaptations and a new solution to the problem of normalizing automatically-detected hypernyms. We compared Witt with six independent taxonomy tools and found that, when applied to software terms, Witt demonstrated better coverage than all evaluated alternate solutions, without a corresponding degradation in false positive rate.","McGill University School of Computer Science, Montreal, Quebec, Canada","nlp/semantics, ontology extraction, ir/information-extraction","All these approaches work by mining large text corpora. Among the latest such techniques is the WebIsA Database [32] from the Web Data Commons project, which extracts hypernyms from CommonCrawl,¹ a corpus of over 2.1 billion web pages. In contrast to these previous works, our method only requires Stack Overflow tag information data and targeted Wikipedia searches.
It creates a structure that links a single term to an attributed category that describes the term.","","","","WDC-WebIsADb" "Rosa Navarrete, Sergio Luján Mora – Universidad de Alicante, Spain","A Quantitative Analysis of the Use of Microdata for Semantic Annotations on Educational Resources","http://rua.ua.es/dspace/handle/10045/73711","papers","20180101Z00:00:00","","","Universidad de Alicante, Spain","semantic web, structured data, microdata","This quantitative analysis was conducted on datasets extracted from the Common Crawl Corpus [17], as it is the largest corpus of web crawl. The datasets containing structured data were extracted by the Web Data Commons (WDC) project [18] and are available for public use. Two datasets were considered: the first, from December 2014, with 2.01 billion pages, of which 620 million pages correspond to structured data; and the second, from November 2015, with 1.77 billion pages, of which 541 million pages correspond to structured data.","","","","WebDataCommons" "Matteo Negri, Marco Turchi, Rajen Chatterjee, Nicola Bertoldi – Fondazione Bruno Kessler, Trento, Italy; University of Trento, Italy","eSCAPE: a Large-scale Synthetic Corpus for Automatic Post-Editing","https://arxiv.org/abs/1803.07274","papers","20180101Z00:00:00","","","Fondazione Bruno Kessler, Trento, Italy; University of Trento, Italy","nlp/machine-translation","A widely used resource, described in (Junczys-Dowmunt and Grundkiewicz, 2016), was included in the training set of the winning (and almost all) submissions to the last two English–German rounds of the APE task at WMT (IT domain). 
It consists of 4.3 million instances created by first filtering a subset of IT-related sentences from the German Common Crawl corpus⁶, and then by using two English–German and German–English PBMT systems trained on in-domain IT corpora for a round-trip translation of the selected sentences (De → En → De).","","","WMT-13-translation-task-common-crawl-corpus","" "Dávid Márk Nemeskey, András Kornai – HAS Institute of Computer Science, Budapest, Hungary","Emergency vocabulary","https://link.springer.com/article/10.1007%2Fs10796-018-9843-x","papers","20180101Z00:00:00","","","HAS Institute of Computer Science, Budapest, Hungary","nlp/vocabulary-extraction, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Phuc Nguyen, Khai Nguyen, Ryutaro Ichise, Hideaki Takeda – SOKENDAI (The Graduate University for Advanced Studies) Shonan Village, Hayama, Kanagawa, Japan; National Institute of Informatics, Tokyo, Japan","EmbNum: Semantic labeling for numerical values with deep metric learning","https://arxiv.org/abs/1807.01367","papers","20180101Z00:00:00","","","SOKENDAI (The Graduate University for Advanced Studies) Shonan Village, Hayama, Kanagawa, Japan; National Institute of Informatics, Tokyo, Japan","","In a study of Lehmberg et al., 233 million tables were extracted from the July 2015 version of the Common Crawl [...]","","","","WDCWebTables" "Xing Niu, Michael Denkowski, Marine Carpuat – University of Maryland; Amazon.com, Inc.","Bi-Directional Neural Machine Translation with Synthetic Parallel Data","https://arxiv.org/pdf/1805.11213.pdf","papers","20180101Z00:00:00","","","University of Maryland; Amazon.com, Inc.","nlp/machine-translation","","","","","" "Takuya Ohshima, Motomichi Toyama – Keio University, Yokohama, Kanagawa, Japan","SDC: structured data collection by yourself","https://dl.acm.org/citation.cfm?id=3200849","papers","20180101Z00:00:00","","","Keio University, Yokohama, Kanagawa, Japan","web-crawling, semantic web, structured 
data","","","","","WebDataCommons" "Myle Ott, Michael Auli, David Granger, Marc'Aurelio Ranzato – Facebook AI Research, USA","Analyzing uncertainty in neural machine translation","https://arxiv.org/abs/1803.00047","papers","20180101Z00:00:00","","","Facebook AI Research, USA","cc-cited-not-used, nlp/machine-translation","","","","","" "Abel L. Peirson Peirson, E. Meltem Tolunay – Stanford University, USA","Dank Learning: Generating Memes Using Deep Neural Networks","https://arxiv.org/abs/1806.04510","papers","20180101Z00:00:00","","","Stanford University, USA","nlp/text-generation, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Christian S. Perone, Roberto Silveira, Thomas S. Paula – Universitat Politècnica de Catalunya, Barcelona, Spain","Evaluation of sentence embeddings in downstream and linguistic probing tasks","https://arxiv.org/abs/1806.06259","papers","20180101Z00:00:00","","","Universitat Politècnica de Catalunya, Barcelona, Spain","nlp/word-embeddings, nlp/sentence-embeddings","","","","fasttext-word-embeddings, GloVe-word-embeddings","" "Matthias Petri, Alistair Moffat – University of Melbourne, Australia","Compact inverted index storage using general-purpose compression libraries","http://dx.doi.org/10.1002/spe.2556","papers","20180101Z00:00:00","index compression, inverted index, web search","Efficient storage of large inverted indexes is one of the key technologies that support current web search services. Here we re-examine mechanisms for representing document-level inverted indexes and within-document term frequencies, including comparing specialized methods developed for this task against recent fast implementations of general-purpose adaptive compression techniques. 
Experiments with the Gov2-URL collection and a large collection of crawled news stories show that standard compression libraries can provide compression effectiveness as good as or better than previous methods, with decoding rates only moderately slower than reference implementations of those tailored approaches. This surprising outcome means that high-performance index compression can be achieved without requiring the use of specialized implementations.","University of Melbourne, Australia","information-retrieval/search-engine, information-retrieval/inverted-index","We also develop (and make freely available) a new IR test collection based on the News sub-collection of the Common Crawl∗∗. The News sub-collection provides daily crawls of news websites in many languages. We refer to this collection as CC-NEWS-URL. We provide all scripts to download the freely available source WARC files from Amazon AWS and process them using Apache Tika and Apache Lucene in a consistent manner. The resulting consistency enables researchers to perform experiments on exactly the collection in their experiments, and improves comparability of results between different rounds of experimentation. For example, the number of terms reported for the GOV2-URL collection ranges from 18 million up to 48 million, preventing fair and direct comparison between results reported in different papers. The number of WARC files in CC-NEWS-URL increases each day, and hence we specify the collection using: (1) a date range; and (2) a language filter. For example, in this work, we utilize the CC-NEWS-20160901-2017028-EN collection which uses all English language news sources (as identified by Apache Tika) from 01/09/2016 up until and including 28/02/2017, that is, a six month crawl period that contains 7,508,082 documents, 26,240,031 unique terms and 4,457,492,131 postings. Currently the CC-NEWS-URL collection grows by roughly 50,000 English documents per day. 
This exact parsing can be reproduced by the scripts provided at https://github.com/mpetri/rlz-invidx and https://github.com/mpetri/TikaLuceneWarc, with raw postings lists stored in the popular “ds2i” format††. Document identifiers are again reassigned in URL order. We also explored a date-ordered collection based on the same source data, and obtained – method-for-method – uniformly weaker compression outcomes than for URL-sorted, in part because many of the URLs contain dates encoded in them anyway.","CC-NEWS","","","" "Mohammad Taher Pilehvar, Dimitri Kartsaklis, Victor Prokhorov, Nigel Collier – University of Cambridge, United Kingdom","Card-660: Cambridge Rare Word Dataset-a Reliable Benchmark for Infrequent Word Representation Models","https://arxiv.org/abs/1808.09308","papers","20180101Z00:00:00","","","University of Cambridge, United Kingdom","linguistics, nlp/semantics, nlp/word-embeddings, lexicography","","","","GloVe-word-embeddings","" "Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, Alan W Black – Carnegie Mellon University, Pittsburgh, PA, USA","Style Transfer Through Back-Translation","https://arxiv.org/abs/1804.09000","papers","20180101Z00:00:00","","","Carnegie Mellon University, Pittsburgh, PA, USA","nlp/machine-translation","","","","WMT-13-translation-task-common-crawl-corpus","" "Roy Raanani, Russell Levy, Micha Yochanan Beakstone, Dominik Facher – Affectlayer Inc","Analyzing conversations to automatically identify product feature requests","https://patents.google.com/patent/US20180183930A1/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Micha Yochanan Breadstone – Affectlayer Inc","Automatic 
generation of playlists from conversations","https://patents.google.com/patent/US20180046710A1/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Micha Yochanan Breakstone – Affectlayer Inc","Coordinating voice calls between representatives and customers to influence an outcome of the call","https://patents.google.com/patent/US9900436B2/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Micha Yochanan Breakstone – Affectlayer Inc","Modeling voice calls to improve an outcome of a call between a representative and a customer","https://patents.google.com/patent/US20180309873A1/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and study world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Micha Yochanan Breakstone, Dominik Facher – Affectlayer Inc","Analyzing conversations to automatically identify action items","https://patents.google.com/patent/US20180122383A1/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing 
(NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Micha Yochanan Breakstone, Dominik Facher – Affectlayer Inc","Analyzing conversations to automatically identify customer pain points","https://patents.google.com/patent/US20180181561A1/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Micha Yochanan Breakstone, Dominik Facher – Affectlayer Inc","Analyzing conversations to automatically identify product features that resonate with customers","https://patents.google.com/patent/US20180183930A1/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Dominik Facher, Micha Yochanan Breakstone – Affectlayer Inc","Automatic pattern recognition in conversations","http://www.freepatentsonline.com/10110743.html","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Dominik Facher, 
Micha Yochanan Breakstone – Affectlayer Inc","Analyzing conversations to automatically identify deals at risk","https://patents.google.com/patent/US10133999B2/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Jonathan Raiman, John Miller – Baidu USA LLC","Global normalized reader systems and methods","https://patents.google.com/patent/US20180300312A1/en","papers","20180101Z00:00:00","","","Baidu USA LLC","nlp/question-answering, nlp/word-embeddings, patent","In embodiments, the 300 dimensional 8.4B token Common Crawl GloVe vectors were used. Words missing from the Common Crawl vocabulary were set to zero.","","","GloVe-word-embeddings","" "Martin Raison, Pierre-Emmanuel Mazaré, Rajarshi Das, Antoine Bordes – Facebook AI Research, Paris, France; University of Massachusetts, Amherst, USA","Weaver: Deep Co-Encoding of Questions and Documents for Machine Reading","https://arxiv.org/abs/1804.10490","papers","20180101Z00:00:00","","","Facebook AI Research, Paris, France; University of Massachusetts, Amherst, USA","nlp/question-answering, nlp/word-embeddings, information retrieval","","","","fastText-word-embeddings","" "Petar Ristoski, Petar Petrovski, Peter Mika, Heiko Paulheim – University of Mannheim, Germany; Yahoo Labs, London, United Kingdom","A machine learning approach for product matching and categorization","https://content.iospress.com/articles/semantic-web/sw300","papers","20180101Z00:00:00","","","University of Mannheim, Germany; Yahoo Labs, London, United Kingdom","semantic web, information extraction, microdata, linked data, data integration","","","","WDC-triples","" "Alexey Romanov, Chaitanya Shivade – University of Massachusetts Lowell, USA; 
IBM Almaden Research Center, San Jose, CA, USA","Lessons from Natural Language Inference in the Clinical Domain","https://arxiv.org/abs/1808.06752","papers","20180101Z00:00:00","","","University of Massachusetts Lowell, USA; IBM Almaden Research Center, San Jose, CA, USA","nlp, natural language inference","","","","GloVe-word-embeddings, fastText-word-embeddings","" "Amir Rosenfeld, Shimon Ullman – Weizmann Institute of Science, Rehovot, Israel","Action Classification via Concepts and Attributes","https://ieeexplore.ieee.org/abstract/document/8546184","papers","20180101Z00:00:00","","","Weizmann Institute of Science, Rehovot, Israel","nlp/word-embeddings, ai/computer-vision, image-classification","","","","GloVe-word-embeddings","" "Nick Rossenbach, Jan Rosendahl, Yunsu Kim, Miguel Graça, Aman Gokrani, Hermann Ney – RWTH Aachen University, Germany","The RWTH Aachen University filtering system for the WMT 2018 parallel corpus filtering task","https://www.aclweb.org/anthology/W18-6487","papers","20180101Z00:00:00","","","RWTH Aachen University, Germany","nlp/machine-translation, nlp/corpus-construction","","","","WMT-16-translation-task-common-crawl-corpus","" "Dwaipayan Roy, Debasis Ganguly, Sumit Bhatia, Srikanta Bedathur, Mandar Mitra – Indian Statistical Institute, Kolkata, India; IBM Research, Dublin, Ireland, Dublin, Ireland; IBM Research, Delhi, India, Delhi, India; Indian Institute of Technology, Delhi, Delhi, India","Using Word Embeddings for Information Retrieval: How Collection and Term Normalization Choices Affect Performance","https://dl.acm.org/citation.cfm?id=3269277","papers","20180101Z00:00:00","","","Indian Statistical Institute, Kolkata, India; IBM Research, Dublin, Ireland, Dublin, Ireland; IBM Research, Delhi, India, Delhi, India; Indian Institute of Technology, Delhi, Delhi, India","cc-cited-not-used, nlp/word-embeddings, information-retrieval/term-normalization","In future, we plan to solidify these observations [...] 
as well as experiment using large datasets (e.g. Common Crawl).","","","","" "Ethan M. Rudd, Richard Harang, Joshua Saxe – Sophos Group PLC, VA, USA","MEADE: Towards a Malicious Email Attachment Detection Engine","https://arxiv.org/abs/1804.08162","papers","20180101Z00:00:00","","Malicious email attachments are a growing delivery vector for malware. While machine learning has been successfully applied to portable executable (PE) malware detection, we ask, can we extend similar approaches to detect malware across heterogeneous file types commonly found in email attachments? In this paper, we explore the feasibility of applying machine learning as a static countermeasure to detect several types of malicious email attachments including Microsoft Office documents and Zip archives. To this end, we collected a dataset of over 5 million malicious/benign Microsoft Office documents from VirusTotal for evaluation as well as a dataset of benign Microsoft Office documents from the Common Crawl corpus, which we use to provide more realistic estimates of thresholds for false positive rates on in-the-wild data. We also collected a dataset of approximately 500k malicious/benign Zip archives, which we scraped using the VirusTotal service, on which we performed a separate evaluation. We analyze predictive performance of several classifiers on each of the VirusTotal datasets using a 70/30 train/test split on first seen time, evaluating feature and classifier types that have been applied successfully in commercial antimalware products and R&D contexts. Using deep neural networks and gradient boosted decision trees, we are able to obtain ROC curves with >0.99 AUC on both Microsoft Office document and Zip archive datasets. Discussion of deployment viability in various antimalware contexts is provided.","Sophos Group PLC, VA, USA","web-science, computer-security/email-security","","","","","" "Maciej Rybinski, William Miller, Javier Del Ser, Miren Nekane Bilbao, José F.
Aldana-Montes – University of Málaga, Spain; Anami Precision, San Sebastián, Spain; TECNALIA, Bizkaia, Spain; Basque Center for Applied Mathematics (BCAM), Bizkaia, Spain; University of the Basque Country (UPV/EHU), Bilbao, Spain","On the Design and Tuning of Machine Learning Models for Language Toxicity Classification in Online Platforms","https://link.springer.com/chapter/10.1007/978-3-319-99626-4_29","papers","20180101Z00:00:00","","","University of Málaga, Spain; Anami Precision, San Sebastián, Spain; TECNALIA, Bizkaia, Spain; Basque Center for Applied Mathematics (BCAM), Bizkaia, Spain; University of the Basque Country (UPV/EHU), Bilbao, Spain","nlp/text-classification, nlp/sentiment-analysis, nlp/word-embeddings, ai/deep-learning","","","","GloVe-word-embeddings","" "Shadi Saleh, Pavel Pecina – Charles University, Czech Republic","CUNI team: CLEF eHealth Consumer Health Search Task 2018","http://ceur-ws.org/Vol-2125/paper_201.pdf","papers","20180101Z00:00:00","","","Charles University, Czech Republic","ir/multilingual-information-retrieval, ir/biomedical-information-extraction, nlp/machine-translation","Document collection in the CLEF 2018 consumer health search task is created using CommonCrawl platform¹. First, the query set (described in Section 2.2) is submitted to Microsoft Bing APIs, and a list of domains is extracted from the top retrieved results. This list is extended by adding reliable health websites, at the end clefehealth2018_B (which we use in this work) contained 1,653 sites, after excluding non-medical websites such as news websites. 
After preparing the domain list, these domains are crawled and provided as an indexed collection to the participants.","","","CLEF-eHealth-2018-IR-task","" "Enrico Santus, Chris Biemann, Emmanuele Chersoni – Massachusetts Institute of Technology, USA; Universität Hamburg, Germany; Aix-Marseille University, France","BomJi at SemEval-2018 Task 10: Combining Vector-, Pattern- and Graph-based Information to Identify Discriminative Attributes","https://arxiv.org/abs/1804.11251","papers","20180101Z00:00:00","","","Massachusetts Institute of Technology, USA; Universität Hamburg, Germany; Aix-Marseille University, France","nlp/semantics","Thirteen features related to word and word-feature frequency were calculated on the basis of the information extracted from a corpus of 3.2B words, corresponding to about 20% of the Common Crawl.","??","","GloVe-word-embeddings","" "Prathusha Kameswara Sarma – University of Wisconsin-Madison","Learning Word Embeddings for Data Sparse and Sentiment Rich Data Sets","http://www.aclweb.org/anthology/N18-4007","papers","20180101Z00:00:00","","","University of Wisconsin-Madison","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Prathusha K Sarma, Yingyu Liang, William A Sethares – University of Wisconsin-Madison","Domain Adapted Word Embeddings for Improved Sentiment Classification","https://arxiv.org/abs/1805.04576","papers","20180101Z00:00:00","","","University of Wisconsin-Madison","nlp/sentiment-analysis, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Prathusha K Sarma, William Sethares – University of Wisconsin-Madison","Simple Algorithms For Sentiment Analysis On Sentiment Rich, Data Poor Domains.","http://www.aclweb.org/anthology/C18-1290","papers","20180101Z00:00:00","","","University of Wisconsin-Madison","nlp/sentiment-analysis","","","","","GloVe-word-embeddings" "Shigehiko Schamoni, Julian Hitschler, Stefan Riezler – Heidelberg University, Germany","A dataset and reranking method for multimodal
MT of user-generated image captions","https://amtaweb.org/wp-content/uploads/2018/03/AMTA_2018_Proceedings_Research_Track.pdf#page=146","papers","20180101Z00:00:00","","","Heidelberg University, Germany","nlp/machine-translation","","","","WMT-13-translation-task-common-crawl-corpus","" "Julian Schamper, Jan Rosendahl, Parnia Bahar, Yunsu Kim, Arne Nix, Hermann Ney – RWTH Aachen University, Germany","The RWTH Aachen University supervised machine translation systems for WMT 2018","https://www.aclweb.org/anthology/W18-6426","papers","20180101Z00:00:00","","","RWTH Aachen University, Germany","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Sebastian Schelter, Jérôme Kunegis – Technical University Berlin, Germany; University of Namur, Belgium","On the Ubiquity of Web Tracking: Insights from a Billion-Page Web Crawl","http://dx.doi.org/10.1561/106.00000014","papers","20180101Z00:00:00","","","Technical University Berlin, Germany; University of Namur, Belgium","web-science/tracking","","","tracking-the-trackers","","" "Holger Schwenk – Facebook AI Research","Filtering and Mining Parallel Data in a Joint Multilingual Space","http://arxiv.org/abs/1805.09822","papers","20180101Z00:00:00","","","Facebook AI Research","nlp/machine-translation","","","","WMT-13-translation-task-common-crawl-corpus","" "Jurica Ševa, Mario Sänger, Ulf Leser – Humboldt-Universität zu Berlin, Germany","WBI at CLEF eHealth 2018 Task 1: Language-independent ICD-10 coding using multi-lingual embeddings and recurrent neural networks","http://ceur-ws.org/Vol-2125/paper_118.pdf","papers","20180101Z00:00:00","","","Humboldt-Universität zu Berlin, Germany","ir/multilingual-information-retrieval, ir/biomedical-information-extraction, nlp/machine-translation, nlp/word-embeddings","","","","CLEF-eHealth-2018-IR-task","" "Cory Shain, Richard Futrell, Marten van Schijndel, Edward Gibson, William Schuler – Ohio State University; MIT; Johns Hopkins University","Evidence of 
semantic processing difficulty in naturalistic reading","https://vansky.github.io/assets/pdf/shain_etal-2018-cuny.pdf","papers","20180101Z00:00:00","","","Ohio State University; MIT; Johns Hopkins University","nlp, psycholinguistics","[...] using GloVe vectors [20] pretrained on the 840B word Common Crawl dataset [...]","","","GloVe-word-embeddings","" "Gabi Shalev, Yossi Adi, Joseph Keshet – Bar-Ilan University, Israel","Out-of-distribution detection using multiple semantic label representations","http://papers.nips.cc/paper/7967-out-of-distribution-detection-using-multiple-semantic-label-representations","papers","20180101Z00:00:00","","","Bar-Ilan University, Israel","nlp/semantics, nlp/word-embeddings, ai/neural-networks, ai/computer-vision, nlp/speech-recognition","","","","GloVe-word-embeddings","" "Sistla Sai Shravani, Niraj Kumar Jha, Rajlaksmi Guha – IIT Kharagpur, India","A Machine Learning Approach to Correlate Emotional Intelligence and Happiness Based on Twitter Data","http://hci2018.bcs.org/prelim_proceedings/papers/Work-in-Progress%20Track/BHCI-2018_paper_115.pdf","papers","20180101Z00:00:00","","","IIT Kharagpur, India","nlp/sentiment-analysis, nlp/word-embeddings","","","","fastText-word-embeddings","" "Umutcan Şimşek, Dieter Fensel – University of Innsbruck, Austria","Intent Generation for Goal-Oriented Dialogue Systems based on Schema.org Annotations","https://arxiv.org/abs/1807.01292","papers","20180101Z00:00:00","","","University of Innsbruck, Austria","nlp/dialogue-systems, semantic web, microformats","","","","GloVe-word-embeddings","" "Ravinder Singh, Marina Levina, Nelson Jiao, Asha Saini – DELL EMC","Using open data to predict market movements","https://education.emc.com/content/dam/dell-emc/documents/en-us/2017KS_Ravinder-Using_Open_Data_to_Predict_Market_Movements.pdf","papers","20180101Z00:00:00","","","DELL EMC","market research, nlp, information retrieval","We found that The Register articles for specific vendors extracted from the
common crawl data set are highly correlated with our reading of General Purpose Magic Quadrant position movements in time. [...] The Figure 11: Common Crawl Data Processing Flow Diagram shows a broad overview of the steps involved in the analysis of common crawl data. Going from the bottom up it shows how the data is extracted, processed and visualized. The amount of data in each phase becomes more streamlined and, hence, the reduction in size of the data being worked on. We start with the crawl data, extract the pages of interest into a private storage bucket, and then process it to remove unwanted words/tags. At the end, visualization tools are used to graphically display the results. These can be used to publish standard reports or customized by users to support their own analysis.","","","","" "Peter Andrew Miller Smith, Samuel Leeman-Munk, Angi Shelton, Bradford W Mott, Eric Wiebe, James Lester – North Carolina State University, Raleigh, NC, USA; SAS Institute Inc., Cary, NC, USA","A multimodal assessment framework for integrating student writing and drawing in elementary science learning","https://ieeexplore.ieee.org/abstract/document/8274912/","papers","20180101Z00:00:00","","","North Carolina State University, Raleigh, NC, USA; SAS Institute Inc., Cary, NC, USA","nlp/word-embeddings, nlp/semantics, education, tutoring systems, student writing","","","","","" "Luca Soldaini – Georgetown University, USA","The Knowledge and Language Gap in Medical Information Seeking","https://search.proquest.com/openview/e669cd1478b33d52fa4cc71e8393c639/1","papers","20180101Z00:00:00","","","Georgetown University, USA","ir/multilingual-information-retrieval, ir/biomedical-information-retrieval","","","","","CLEF-eHealth-2018-IR-task" "Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, Daniel Gildea – University of Rochester, Rochester, NY, USA; IBM T.J.
Watson Research Center, Yorktown Heights, NY, USA; School of Engineering, Westlake University, China","Exploring graph-structured passage representation for multi-hop reading comprehension with graph neural networks","https://arxiv.org/abs/1809.02040","papers","20180101Z00:00:00","","","University of Rochester, Rochester, NY, USA; IBM T.J. Watson Research Center, Yorktown Heights, NY, USA; School of Engineering, Westlake University, China","nlp/word-embeddings, nlp/machine-reading, nlp/coreference-resolution, nlp/question-answering","","","","GloVe-word-embeddings","" "Samuel Spaulding, Huili Chen, Safinah Ali, Michael Kulinski, Cynthia Breazeal – Massachusetts Institute of Technology, Cambridge, MA, USA","A social robot system for modeling children's word pronunciation: socially interactive agents track","https://dl.acm.org/citation.cfm?id=3237946","papers","20180101Z00:00:00","","","Massachusetts Institute of Technology, Cambridge, MA, USA","computer-vision, nlp/word-embeddings","","","","","GloVe-word-embeddings" "Christian Stab, Johannes Daxenberger, Chris Stahlhut, Tristan Miller, Benjamin Schiller, Christopher Tauchmann, Steffen Eger, Iryna Gurevych – Ubiquitous Knowledge Processing Lab, Department of Computer Science, Technische Universität Darmstadt, Germany","ArgumenText: Searching for Arguments in Heterogeneous Sources","http://www.aclweb.org/anthology/N18-5005","papers","20180101Z00:00:00","","","Ubiquitous Knowledge Processing Lab, Department of Computer Science, Technische Universität Darmstadt, Germany","nlp/argument-mining","we build upon the English part of CommonCrawl, [...] we followed Habernal et al. 
(2016) for de-duplication, boiler-plate removal using jusText (Pomikálek, 2011), and language detection.² This left us with 400 million heterogeneous plain-text documents in English, with an overall size of 683 GiB.","","","","" "Felix Stahlberg, Adria de Gispert, Bill Byrne – University of Cambridge, United Kingdom; SDL Research, Cambridge, United Kingdom","The University of Cambridge's Machine Translation Systems for WMT18","https://arxiv.org/abs/1808.09465","papers","20180101Z00:00:00","","","University of Cambridge, United Kingdom; SDL Research, Cambridge, United Kingdom","nlp/machine-translation","","","","WMT-13-translation-task-common-crawl-corpus","" "Chris Stahlhut – Ubiquitous Knowledge Processing Lab TU Darmstadt, Germany","Searching Arguments in German with ArgumenText","http://ceur-ws.org/Vol-2167/short7.pdf","papers","20180101Z00:00:00","","","Ubiquitous Knowledge Processing Lab TU Darmstadt, Germany","nlp/argument-mining","","","","","" "Stergios Stergiou, Dipen Rughwani, Kostas Tsioutsiouliklis – Yahoo Research, Sunnyvale, CA, USA; Google & Yahoo Research, Mountain View, CA, USA","Shortcutting Label Propagation for Distributed Connected Components","https://dl.acm.org/citation.cfm?id=3159696","papers","20180101Z00:00:00","","","Yahoo Research, Sunnyvale, CA, USA; Google & Yahoo Research, Mountain View, CA, USA","graph processing","","","","","" "Hanna Suominen, Liadh Kelly, Lorraine Goeuriot, Aurélie Névéol, Lionel Ramadier, Aude Robert, Evangelos Kanoulas, Rene Spijker, Leif Azzopardi, Dan Li, others – University of Turku, Turku, Finland; The Australian National University (ANU), Australia; Commonwealth Scientific and Industrial Research Organisation (CSIRO), University of Canberra, Canberra, Australia; Maynooth University, Maynooth, Ireland; Univ.
Grenoble Alpes, CNRS, Grenoble, France; Université Paris-Saclay, Orsay, France; INSERM, France; University of Amsterdam, Amsterdam, Netherlands; Cochrane Netherlands and UMC Utrecht; Julius Center for Health Sciences and Primary Care, Utrecht, Netherlands; University of Strathclyde, Glasgow, UK; Queensland University of Technology, Brisbane, Australia; Vienna University of Technology, Vienna, Austria; Qatar Computing Research Institute, Doha, Qatar","Overview of the CLEF ehealth evaluation lab 2018","https://link.springer.com/chapter/10.1007/978-3-319-98932-7_26","papers","20180101Z00:00:00","","","University of Turku, Turku, Finland; The Australian National University (ANU), Australia; Commonwealth Scientific and Industrial Research Organisation (CSIRO), University of Canberra, Canberra, Australia; Maynooth University, Maynooth, Ireland; Univ. Grenoble Alpes, CNRS, Grenoble, France; Université Paris-Saclay, Orsay, France; INSERM, France; University of Amsterdam, Amsterdam, Netherlands; Cochrane Netherlands and UMC Utrecht; Julius Center for Health Sciences and Primary Care, Utrecht, Netherlands; University of Strathclyde, Glasgow, UK; Queensland University of Technology, Brisbane, Australia; Vienna University of Technology, Vienna, Austria; Qatar Computing Research Institute, Doha, Qatar","ir/search-engine-evaluation, nlp/corpus-construction","This year we introduced the clefehealth2018 corpus. This was created by compiling Web pages of selected domains acquired from the CommonCrawl¹¹. An initial list of Websites was identified for acquisition. The list was built by submitting the CLEF 2018 base queries to the Microsoft Bing APIs (through the Azure Cognitive Services) repeatedly over a period of a few weeks¹², and acquiring the URLs of the retrieved results. The domains of the URLs were then included in the list, except some domains that were excluded for decency reasons (e.g. pornhub.com).
The list was further augmented by including a number of known reliable health Websites and other known unreliable health Websites, from lists previously compiled by health institutions and agencies. The corpus was divided into folders, by domain name. Each folder contained a file for each Webpage from the domain available in the CommonCrawl dump. In total, 2,021 domains were requested from the CommonCrawl dump of 2018-09¹³. Of the 2,021 domains in total, 1,903 were successfully acquired. The remaining domains were discarded due to errors, corrupted or incomplete data returned by the CommonCrawl API (a total of ten retries were attempted for each domain before giving up on a domain). Of the 1,903 crawled domains, 84 were not available in the CommonCrawl dump, and for these, a folder in the corpus exists and represents the domain that was requested; however, the folder is empty, meaning that it was not available in the dump. Note that .pdf documents were excluded from the data acquired from CommonCrawl. A complete list of domains and size of the crawl data for each domain is available at https://github.com/CLEFeHealth/CLEFeHealth2018IRtask/blob/master/clef2018collection_listofdomains.txt. The full collection, clefehealth2018¹⁴, contains 5,535,120 Web pages and its uncompressed size is about 480GB. In addition to the full collection, an alternative corpus named clefehealth2018_B¹⁵ was created by manually removing a number of domains that were not strictly health-related (e.g., news Websites).
This subset contains 1,653 domains and its size is about 294GB, uncompressed.","CC-MAIN-2018-09","CLEF-eHealth-2018-IR-task","","" "Shabnam Tafreshi, Mona Diab – George Washington University","Emotion Detection and Classification in a Multigenre Corpus with Joint Multi-Task Deep Learning","http://www.aclweb.org/anthology/C18-1246","papers","20180101Z00:00:00","","","George Washington University","nlp/emotion-detection, nlp/word-embeddings","Our results indicate that common crawl corpus with 2 million words, trained using fastText model has the most word coverage among these genres.","","","GloVe-word-embeddings, fastText-word-embeddings","" "Nicolas Tempelmeier, Elena Demidova, Stefan Dietze – Leibniz Universität Hannover, Germany","Inferring missing categorical information in noisy and sparse web markup","https://arxiv.org/abs/1803.00446","papers","20180101Z00:00:00","","","Leibniz Universität Hannover, Germany","semantic web, linked data","","","","WDC-triples","" "Brian Thompson, Huda Khayrallah, Antonios Anastasopoulos, Arya McCarthy, Kevin Duh, Rebecca Marvin, Paul McNamee, Jeremy Gwinnup, Tim Anderson, Philipp Koehn – Johns Hopkins University, USA; University of Notre Dame, France; Air Force Research Laboratory, USA","Freezing Subnetworks to Analyze Domain Adaptation in Neural Machine Translation","https://arxiv.org/abs/1809.05218","papers","20180101Z00:00:00","","","Johns Hopkins University, USA; University of Notre Dame, France; Air Force Research Laboratory, USA","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Henry S. 
Thompson, Jian Tong – University of Edinburgh, United Kingdom","Can Common Crawl reliably track persistent identifier (PID) use over time?","https://arxiv.org/abs/1802.01424","papers","20180101Z00:00:00","","","University of Edinburgh, United Kingdom","web-science","","","","","" "Swapna Buccapatnam Tirumala, Ashish Jagmohan, Elham Khabiri, Ta-Hsin Li, Matthew Daniel Riemer, Vadim Sheinin, Aditya Vempaty – International Business Machines Corp.","Facilitating mapping of control policies to regulatory documents","https://patents.google.com/patent/US20180137107A1/en","papers","20180101Z00:00:00","","","International Business Machines Corp.","patent, cc-cited-not-used","The global corpora [203] can comprise a general internet-based collection of texts derived from various sources (e.g., GUTENBERG®, REUTERS®, COMMON CRAWL®, and/or GOOGLE NEWS®).","","","","" "Maksim Tkachenko, Chong Cher Chia, Hady Lauw – Singapore Management University, Singapore","Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings","http://www.aclweb.org/anthology/P18-1112","papers","20180101Z00:00:00","","","Singapore Management University, Singapore","nlp/sentiment-analysis, nlp/word-embeddings, cc-cited-not-used","","","","","" "Marcus Tober, Daniela Neumann – Searchmetrics GmbH","Creation and optimization of resource contents","https://patents.google.com/patent/US20180096067A1/en","papers","20180101Z00:00:00","","","Searchmetrics GmbH","patent, cc-cited-not-used","The crawler module [310] may automatically crawl a network and acquire contents from one or more resources in the network, acquire the contents from an open repository of web crawl data such as CommonCrawl.org.","","","","" "Melanie Tosik, Antonio Mallia, Kedar Gangopadhyay – New York University","Debunking Fake News One Feature at a Time","https://arxiv.org/abs/1808.02831","papers","20180101Z00:00:00","","","New York University","nlp, text classification","Cosine similarity between averaged headline/body 
Common Crawl vectors","","","?? GloVe-word-embeddings","" "Ke Tran, Yonatan Bisk – University of Amsterdam; University of Washington","Inducing Grammars with and for Neural Machine Translation","https://arxiv.org/abs/1805.10850","papers","20180101Z00:00:00","","","University of Amsterdam; University of Washington","nlp/machine-translation, nlp/syntax, nlp/grammar-learning, nlp/dependency-grammar","","","","","" "Trieu H Trinh, Quoc V Le – Google Brain","A Simple Method for Commonsense Reasoning","https://arxiv.org/abs/1806.02847","papers","20180101Z00:00:00","","Commonsense reasoning is a long-standing challenge for deep learning. For example, it is difficult to use neural networks to tackle the Winograd Schema dataset [1]. In this paper, we present a simple method for commonsense reasoning with neural networks, using unsupervised learning. Key to our method is the use of language models, trained on a massive amount of unlabeled data, to score multiple choice questions posed by commonsense reasoning tests. On both Pronoun Disambiguation and Winograd Schema challenges, our models outperform previous state-of-the-art methods by a large margin, without using expensive annotated knowledge bases or hand-engineered features. We train an array of large RNN language models that operate at word or character level on LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and a customized corpus for this task and show that diversity of training data plays an important role in test performance. Further analysis also shows that our system successfully discovers important features of the context that decide the correct answer, indicating a good grasp of commonsense knowledge.","Google Brain","ai/deep-learning, nlp/language-model","In particular, we aggregate documents from the CommonCrawl dataset that has the most overlapping n-grams with the questions. [...]
We name this dataset STORIES since most of the constituent documents take the form of a story with a long chain of coherent events.","","CC-Stories","","" "Dmitry Ustalov, Alexander Panchenko, Chris Biemann, Simone Paolo Ponzetto – University of Mannheim, Germany; University of Hamburg, Germany; Skolkovo Institute of Science and Technology, Moskva, Russia","Watset: local-global graph clustering with applications in sense and frame induction","https://arxiv.org/abs/1808.06696","papers","20180101Z00:00:00","","","University of Mannheim, Germany; University of Hamburg, Germany; Skolkovo Institute of Science and Technology, Moskva, Russia","nlp/dependency-parsing, nlp/semantics, nlp/synonymy, nlp/frames-semantics, graph-clustering, web-mining","For the evaluation purposes, we operate on the intersection of triples from DepCC and FrameNet.","","","depcc","" "Dmitry Ustalov, Alexander Panchenko, Chris Biemann, Simone Paolo Ponzetto – University of Mannheim, Germany; University of Hamburg, Germany","Unsupervised sense-aware hypernymy extraction","https://arxiv.org/abs/1809.06223","papers","20180101Z00:00:00","","","University of Mannheim, Germany; University of Hamburg, Germany","nlp/semantics, nlp/hypernymy, web-mining","","","","","WDC-WebIsADb" "Dmitry Ustalov, Alexander Panchenko, Andrei Kutuzov, Chris Biemann, Simone Paolo Ponzetto – University of Mannheim, Germany; University of Hamburg, Germany; University of Oslo, Norway","Unsupervised semantic frame induction using triclustering","https://arxiv.org/abs/1805.04715","papers","20180101Z00:00:00","","","University of Mannheim, Germany; University of Hamburg, Germany; University of Oslo, Norway","nlp/dependency-parsing, nlp/semantics, nlp/synonymy, nlp/frames-semantics, graph-clustering, web-mining","In our evaluation, we use triple frequencies from the DepCC dataset (Panchenko et al., 2018), which is a dependency-parsed version of the Common Crawl corpus, and the standard 300-dimensional word embeddings model trained
on the Google News corpus (Mikolov et al., 2013). [...] For the evaluation purposes, we operate on the intersection of triples from DepCC and FrameNet.","","","depcc","" "Hal Varian – National Bureau of Economic Research, Cambridge, MA, USA","Artificial intelligence, economics, and industrial organization","https://www.nber.org/papers/w24839","papers","20180101Z00:00:00","","Machine learning (ML) and artificial intelligence (AI) have been around for many years. However, in the last 5 years, remarkable progress has been made using multilayered neural networks in diverse areas such as image recognition, speech recognition, and machine translation. AI is a general purpose technology that is likely to impact many industries. In this chapter I consider how machine learning availability might affect the industrial organization of both firms that provide AI services and industries that adopt AI technology. My intent is not to provide an extensive overview of this rapidly-evolving area, but instead to provide a short summary of some of the forces at work and to describe some possible areas for future research.","National Bureau of Economic Research, Cambridge, MA, USA","economy","","","","","" "Vivek Vinayan, Kumar M Anand, K P Soman – Amrita School of Engineering, India","AmritaNLP at SemEval-2018 Task 10: Capturing discriminative attributes using convolution neural network over global vector representation.","http://www.aclweb.org/anthology/S18-1166","papers","20180101Z00:00:00","","","Amrita School of Engineering, India","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Yogarshi Vyas, Xing Niu, Marine Carpuat – Department of Computer Science, University of Maryland","Identifying Semantic Divergences in Parallel Text without Annotations","https://arxiv.org/abs/1803.11112","papers","20180101Z00:00:00","","","Department of Computer Science, University of Maryland","nlp/machine-translation","","","","{?Ngrams-LMs-2013}","" "Changhan Wang, Kyunghyun
Cho, Douwe Kiela – Facebook AI Research; New York University","Code-Switched Named Entity Recognition with Embedding Attention","http://www.aclweb.org/anthology/W18-3221","papers","20180101Z00:00:00","","","Facebook AI Research; New York University","nlp/named-entity-recognition, nlp/word-embeddings","","","","fastText-word-embeddings","" "Renzhi Wang, Mizuho Iwaihara – Graduate School of Information, Production and Systems, Waseda University Japan","Detection of mergeable Wikipedia articles based on overlapping topics","db-event.jpn.org/deim2018/data/papers/157.pdf","papers","20180101Z00:00:00","","","Graduate School of Information, Production and Systems, Waseda University Japan","nlp/word-embeddings, ir/duplicate-detection","","","","GloVe-word-embeddings","" "Mingxuan Wang, Jun Xie, Zhixing Tan, Jinsong Su, Deyi Xiong, Chao Bian – Mobile Internet Group, Tencent Technology Co., Ltd; Xiamen University, China; Soochow University, China","Neural Machine Translation with Decoding History Enhanced Attention","https://www.aclweb.org/anthology/C18-1124","papers","20180101Z00:00:00","","","Mobile Internet Group, Tencent Technology Co., Ltd; Xiamen University, China; Soochow University, China","nlp/machine-translation, cc-cited-not-used","","","","","" "Zhuxiaona Wei, Thuan Nguyen, Iat Chan, Kenny M Liou, Helin Wang, Houchang Lu – Baidu USA LLC","Systems and methods for improved user interface","https://patents.google.com/patent/US20180011688A1/en","papers","20180101Z00:00:00","","","Baidu USA LLC","patent, ir/user-interface","For English, in embodiments, the language model is a Kneser-Ney smoothed 5-gram model with pruning that is trained using the KenLM toolkit on cleaned text from the Common Crawl Repository. 
The vocabulary is the most frequently used 400,000 words from 250 million lines of text, which produces a language model with about 850 million n-grams.","","","","" "John Wieting, Kevin Gimpel – Carnegie Mellon University, Pittsburgh, PA, USA; Toyota Technological Institute at Chicago, IL, USA","Paranmt-50m: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations","http://www.aclweb.org/anthology/P18-1042","papers","20180101Z00:00:00","","","Carnegie Mellon University, Pittsburgh, PA, USA; Toyota Technological Institute at Chicago, IL, USA","nlp/machine-translation, nlp/sentence-paraphrase, nlp/sentence-embeddings","","","WMT-16-translation-task-common-crawl-corpus","","" "Genta Indra Winata, Chien-Sheng Wu, Andrea Madotto, Pascale Fung – Hong Kong University of Science and Technology, Hong Kong","Bilingual Character Representation for Efficiently Addressing Out-of-Vocabulary Words in Code-Switching Named Entity Recognition","https://arxiv.org/abs/1805.12061","papers","20180101Z00:00:00","","","Hong Kong University of Science and Technology, Hong Kong","nlp/named-entity-recognition, nlp/word-embeddings","","","","fastText-word-embeddings","" "Ziang Xie, Guillaume Genthial, Stanley Xie, Andrew Ng, Dan Jurafsky – Stanford University, USA","Noising and Denoising Natural Language: Diverse Backtranslation for Grammar Correction","http://www.aclweb.org/anthology/N18-1057","papers","20180101Z00:00:00","","","Stanford University, USA","nlp/machine-translation, nlp/grammatical-error-correction","","","","Ngrams-LMs-2013","" "Hao Xiong, Zhongjun He, Xiaoguang Hu, Hua Wu – Baidu Inc., China","Multi-channel encoder for neural machine translation","https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewPaper/16788","papers","20180101Z00:00:00","","","Baidu Inc., China","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Steven Xu, Andrew Bennett, Doris Hoogeveen, Jey Han Lau, Timothy Baldwin – University 
of Melbourne, Australia","Preferred Answer Selection in Stack Overflow: Better Text Representations... and Metadata, Metadata, Metadata","https://www.aclweb.org/anthology/W18-6119","papers","20180101Z00:00:00","","","University of Melbourne, Australia","information retrieval, nlp/question-answering, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Hua Yang, Teresa Gonçalves – University of Évora, Portugal; ZhongYuan University of Technology, Zhengzhou, China","Improving personalized consumer health search: notebook for ehealth at clef 2018","http://ceur-ws.org/Vol-2125/paper_195.pdf","papers","20180101Z00:00:00","","","University of Évora, Portugal; ZhongYuan University of Technology, Zhengzhou, China","ir/multilingual-information-retrieval, ir/biomedical-information-extraction, ir/query-expansion, ir/learning-to-rank, nlp/word-embeddings","","","","CLEF-eHealth-2018-IR-task","" "Thanos Yannakis, Pavlos Fafalios, Yannis Tzitzikas – University of Crete, Greece; Leibniz University of Hannover, Germany","Heuristics-based Query Reordering for Federated Queries in SPARQL 1.1 and SPARQL-LD","http://ceur-ws.org/Vol-2110/paper7.pdf","papers","20180101Z00:00:00","","","University of Crete, Greece; Leibniz University of Hannover, Germany","semantic web, linked data, SparQL","","","","WebDataCommons","" "Evi Yulianti, Ruey-Cheng Chen, Falk Scholer, W Bruce Croft, Mark Sanderson – RMIT University, Melbourne, Australia; SEEK Ltd., Melbourne, Australia","Ranking Documents by Answer-Passage Quality","http://marksanderson.org/publications/my_papers/SIGIR2018a.pdf","papers","20180101Z00:00:00","","","RMIT University, Melbourne, Australia; SEEK Ltd., Melbourne, Australia","information retrieval, nlp/question-answering, cc-cited-not-used","","","","","" "Siwar Zayani, Nesrine Ksentini, Mohamed Tmar, Faiez Gargouri – University of Sfax, Tunisia","Miracl at clef 2018: Consumer health search
task","http://ceur-ws.org/Vol-2125/paper_141.pdf","papers","20180101Z00:00:00","","","University of Sfax, Tunisia","ir/multilingual-information-retrieval, ir/biomedical-information-extraction, ir/query-expansion","","","","CLEF-eHealth-2018-IR-task","" "Neil Zeghidour, Qiantong Xu, Vitaliy Liptchinsky, Nicolas Usunier, Gabriel Synnaeve, Ronan Collobert – Facebook A.I. Research, Paris, France; Facebook A.I. Research, New York & Menlo Park, USA; CoML, ENS/CNRS/EHESS/INRIA/PSL Research University, Paris, France","Fully convolutional speech recognition","https://arxiv.org/abs/1812.06864","papers","20180101Z00:00:00","","","Facebook A.I. Research, Paris, France; Facebook A.I. Research, New York & Menlo Park, USA; CoML, ENS/CNRS/EHESS/INRIA/PSL Research University, Paris, France","nlp/speech-recognition","(12k training hours AM, common crawl LM)","","","??","" "Rowan Zellers, Yonatan Bisk, Roy Schwartz, Yejin Choi – University of Washington, USA","Swag: A large-scale adversarial dataset for grounded commonsense inference","https://arxiv.org/abs/1808.05326","papers","20180101Z00:00:00","","","University of Washington, USA","ai/reasoning, nlp/text-generation, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Meilin Zhan, Roger Levy – Massachusetts Institute of Technology, USA","Comparing Theories of Speaker Choice Using a Model of Classifier Production in Mandarin Chinese","http://www.aclweb.org/anthology/N18-1181","papers","20180101Z00:00:00","","","Massachusetts Institute of Technology, USA","nlp/syntax, nlp/corpus-lingustics, nlp/paraphrasing","","","","","WMT-13-translation-task-common-crawl-corpus" "Yunming Zhang, Mengjiao Yang, Riyadh Baghdadi, Shoaib Kamil, Julian Shun, Saman P. 
Amarasinghe – MIT CSAIL; Adobe Research","GraphIt - A High-Performance DSL for Graph Analytics","http://arxiv.org/abs/1805.00923","papers","20180101Z00:00:00","","","MIT CSAIL; Adobe Research","graph-processing","","","","WDC-hyperlinkgraph","" "Pengqing Zhang, Yuexian Hou, Zhan Su, Yi Su – Tianjin University, China","Two-Step Multi-factor Attention Neural Network for Answer Selection","https://link.springer.com/chapter/10.1007/978-3-319-97304-3_50","papers","20180101Z00:00:00","","","Tianjin University, China","nlp/answer-selection, ai/neural-networks, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Ji Zhang, Leonard Tan, Xiaohui Tao, Xiaoyao Zheng, Yonglong Luo, Jerry Chun-Wei Lin – University of Southern Queensland, Australia; Anhui Normal University, Wuhu, China; Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, China","SLIND: Identifying Stable Links in Online Social Networks","https://link.springer.com/chapter/10.1007/978-3-319-91458-9_54","papers","20180101Z00:00:00","","","University of Southern Queensland, Australia; Anhui Normal University, Wuhu, China; Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, China","web-science/hyperlinkgraph, web-science/social-networks","The dataset chosen for this study, as well as for the demo, was crawled from Facebook and obtained from the repositories of the Common Crawl (August 2016).","CC-MAIN-2016-36","","","" "Ji Zhang, Xiaohui Tao, Leonard Tan, Jerry Chun-Wei Lin, Hongzhou Li, Liang Chang – University of Southern Queensland, Toowoomba, Australia; Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, China; Guilin University of Electronic Technology, Guilin, China; Guilin University of Electronic Technology, Guilin, China","On Link Stability Detection for Online Social Networks","https://link.springer.com/chapter/10.1007/978-3-319-98809-2_20","papers","20180101Z00:00:00","link stability, graph theory, online social networks","","University of Southern 
Queensland, Toowoomba, Australia; Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, China; Guilin University of Electronic Technology, Guilin, China; Guilin University of Electronic Technology, Guilin, China","graph-processing, social networks","Since the social network we obtain from the repositories of common crawl contains missing links and partial information, stochastic estimations are …","","","","" "Biao Zhang, Deyi Xiong, Jinsong Su – Xiamen University, China; Soochow University, China","Neural Machine Translation with Deep Attention","https://ieeexplore.ieee.org/abstract/document/8493282","papers","20180101Z00:00:00","","","Xiamen University, China; Soochow University, China","nlp/machine-translation","","","","","" "Biao Zhang, Deyi Xiong, Jinsong Su, Qian Lin, Huiji Zhang – Xiamen University, China; Soochow University, China; Xiamen Meiya Pico information Co., Ltd. Xiamen, China","Simplifying Neural Machine Translation with Addition-Subtraction Twin-Gated Recurrent Networks","https://arxiv.org/abs/1810.12546","papers","20180101Z00:00:00","","","Xiamen University, China; Soochow University, China; Xiamen Meiya Pico information Co., Ltd. Xiamen, China","nlp/machine-translation, cc-cited-not-used","","","","","" "Nils Brügger, Ian Milligan – Aarhus University, Denmark; University of Waterloo, Canada","The SAGE Handbook of Web History","https://us.sagepub.com/en-us/nam/the-sage-handbook-of-web-history/book252251","papers","20190101Z00:00:00","","","Aarhus University, Denmark; University of Waterloo, Canada","web-science, web history","","","","","" "Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, Edouard Grave – Facebook AI","CCNet: Extracting high quality monolingual datasets from web crawl data","https://arxiv.org/abs/1911.00359","papers","20190101Z00:00:00","","Pre-training text representations have led to significant improvements in many areas of natural language processing. 
The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.","Facebook AI","nlp/corpus-construction, nlp/web-as-corpus, nlp/low-resource-language","[about https://github.com/facebookresearch/cc_net] In this paper, we present a data collection pipeline that allows to gather massive monolingual corpora of high quality in a variety of languages, including many low-resource ones. The principles of our pipeline are general and we show the results of its application to data collected by the Common Crawl project.¹ Common Crawl is a massive non-curated dataset of webpages in many languages, mixed together in temporal snapshots of the web.","","CCNet","","" "A. Radford, Jeffrey Wu, R. Child, David Luan, Dario Amodei, Ilya Sutskever – OpenAI, San Francisco, California, United States","Language models are unsupervised multitask learners","https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe","papers","20190101Z00:00:00","","","OpenAI, San Francisco, California, United States","cc-cited-not-used","A promising source of diverse and nearly unlimited text is web scrapes such as Common Crawl. While these archives are many orders of magnitude larger than current language modeling datasets, they have significant data quality issues. Trinh & Le (2018) used Common Crawl in their work on commonsense reasoning but noted a large amount of documents “whose content are mostly unintelligible”. 
We observed similar data issues in our initial experiments with Common Crawl. Trinh & Le (2018)’s best results were achieved using a small subsample of Common Crawl which included only documents most similar to their target dataset, the Winograd Schema Challenge. While this is a pragmatic approach to improve performance on a specific task, we want to avoid making assumptions about the tasks to be performed ahead of time. Instead, we created a new web scrape which emphasizes document quality. To do this we only scraped web pages which have been curated/filtered by humans. Manually filtering a full web scrape would be exceptionally expensive so as a starting point, we scraped all outbound links from Reddit, a social media platform, which received at least 3 karma. This can be thought of as a heuristic indicator for whether other users found the link interesting, educational, or just funny. The resulting dataset, WebText, contains the text subset of these 45 million links.","","","","" "Pedro Javier Ortiz Suárez, Benoît Sagot, Laurent Romary – Inria, Paris, France; Sorbonne University, Paris, France","Asynchronous pipeline for processing huge corpora on medium to low resource infrastructures","https://hal.inria.fr/hal-02148693","papers","20190101Z00:00:00","","","Inria, Paris, France; Sorbonne University, Paris, France","nlp/corpus-construction","We use the November 2018 snapshot which surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files where each file consists of the plain text from multiple websites along its metadata header.
From now on, when we mention the “Common Crawl” corpus, we refer to this particular November 2018 snapshot.","CC-MAIN-2018-47 (WET)","OSCAR","","" "Dominik Mottl – Hochschule Darmstadt, Germany","Multi-Label Branchenklassifikation von Web-Texten","https://fbmn.h-da.de/uploads/Themen/WS18_thesis_mottl.pdf","papers","20190101Z00:00:00","","","Hochschule Darmstadt, Germany","nlp/NER, entity-linking","NER of company names and linking to DBpedia performed on English texts in 712 WET files of November 2018 crawl (CC-MAIN-2018-47) using cc-pyspark.","","","","" "Sebastian Nagel – Common Crawl, USA","Accessing WARC files via SQL","https://digital.library.unt.edu/ark:/67531/metadc1608961/","papers","20190101Z00:00:00","","","Common Crawl, USA","web-archiving, SQL, Parquet","","cc-index-table","","","" "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov – Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA; Facebook AI","RoBERTa: A Robustly Optimized BERT Pretraining Approach","https://arxiv.org/abs/1907.11692","papers","20190101Z00:00:00","","","Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA; Facebook AI","nlp/corpus-construction, nlp/language-model","We find that BERT was significantly undertrained and propose an improved recipe for training BERT models, which we call RoBERTa, that can match or exceed the performance of all of the post-BERT methods. Our modifications are simple, they include: (1) training the model longer, with bigger batches, over more data; (2) removing the next sentence prediction objective; (3) training on longer sequences; and (4) dynamically changing the masking pattern applied to the training data. We also collect a large new dataset (CC-NEWS) of comparable size to other privately used datasets, to better control for training set size effects. [...] 
CC-NEWS, which we collected from the English portion of the CommonCrawl News dataset (Nagel, 2016). The data contains 63 million English news articles crawled between September 2016 and February 2019. (76GB after filtering).⁴ [⁴ We use news-please (Hamborg et al.,2017) to collect and extract CC-NEWS. CC-NEWS is similar to the REALNEWS dataset described in Zellers et al. (2019).]","CC-NEWS","CC-NEWS-RoBERTa","","" "Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, Yejin Choi – University of Washington, USA; Allen Institute for Artificial Intelligence, USA","Defending against neural fake news","http://papers.nips.cc/paper/9106-defending-against-neural-fake-news.pdf","papers","20190101Z00:00:00","","","University of Washington, USA; Allen Institute for Artificial Intelligence, USA","nlp/language-model, nlp/fake-news-detection, nlp/text-classification, misinformation, disinformation","Dataset. We present RealNews, a large corpus of news articles from Common Crawl. Training Grover requires a large corpus of news articles with metadata, but none currently exists. Thus, we construct one by scraping dumps from Common Crawl, limiting ourselves to the 5000 news domains indexed by Google News. We used the Newspaper Python library to extract the body and meta-data from each article. News from Common Crawl dumps from December 2016 through March 2019 were used as training data; articles published in April 2019 from the April 2019 dump were used for evaluation. After deduplication, RealNews is 120 gigabytes without compression. [...] Obtaining the data required through Common Crawl cost \$10k in AWS credits and can be massively parallelized over many CPUs. 
[...]","","Grover-RealNews","","" "Giulio Ermanno Pibiri, Matthias Petri, Alistair Moffat – University of Melbourne, Australia; University of Pisa, Italy; ISTI-CNR, Pisa, Italy","Fast Dictionary-Based Compression for Inverted Indexes","https://dl.acm.org/citation.cfm?id=3290962","papers","20190101Z00:00:00","","Dictionary-based compression schemes provide fast decoding operation, typically at the expense of reduced compression effectiveness compared to statistical or probability-based approaches. In this work, we apply dictionary-based techniques to the compression of inverted lists, showing that the high degree of regularity that these integer sequences exhibit is a good match for certain types of dictionary methods, and that an important new trade-off balance between compression effectiveness and compression efficiency can be achieved. Our observations are supported by experiments using the document-level inverted index data for two large text collections, and a wide range of other index compression implementations as reference points. 
Those experiments demonstrate that the gap between efficiency and effectiveness can be substantially narrowed.","University of Melbourne, Australia; University of Pisa, Italy; ISTI-CNR, Pisa, Italy","information-retrieval/search-engine, information-retrieval/inverted-index","We use the standard Gov2 collection containing 426 GiB of text; and CCNEWS, an English subset of the freely available NEWS subset of the CommonCrawl¹ [¹http://commoncrawl.org/2016/10/news-dataset-available/], consisting of news articles in the period 09/01/16 to 30/03/18, following the methodology of Petri and Moffat [28].","CC-NEWS","","","" "Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin – Facebook AI","CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB","https://arxiv.org/abs/1911.04944","papers","20190101Z00:00:00","Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences","We show that margin-based bitext mining in a multilingual sentence space can be applied to monolingual corpora of billions of sentences. We are using ten snapshots of a curated common crawl corpus (Wenzek et al., 2019), totalling 32.7 billion unique sentences. Using one unified approach for 38 languages, we were able to mine 4.5 billion parallel sentences, out of which 661 million are aligned with English. 20 language pairs have more than 30 million parallel sentences, 112 more than 10 million, and most more than one million, including direct alignments between many European or Asian languages.","Facebook AI","nlp/corpus-construction, nlp/parallel-corpus, nlp/machine-translation","The curated Common Crawl corpus¶ In this work, we propose to mine parallel sentences from the Web, by using the data released by the Common Crawl project.[⁵https://commoncrawl.org/] Each month, a snapshot of the Web containing terabytes of web pages in various languages is obtained by randomly exploring URLs.
We start by applying some preprocessing steps to the raw text data, following the pipeline introduced by Wenzek et al. (2019) and leading to the CCNet dataset. The first step is to deduplicate the data at the paragraph level, as the original crawls contain up to 70% of duplicated data. This preprocessing removes low quality content, such as boilerplate, navigation menus or cookie warnings. The second step of the pipeline is to identify the language of each document, using fastText⁶ (Grave et al., 2018). This language identifier uses a linear classifier with character n-gram features, and can recognize up to 176 languages. Finally, the last step of the preprocessing is to filter low quality content by training a language model on Wikipedia, and only keeping documents with a low perplexity score. We refer the reader to Wenzek et al. (2019) for more details about this pre-processing pipeline. In Figure 1, we report the number of unique sentences obtained after preprocessing ten snapshots from Common Crawl. We currently process 38 languages. The English Web content is abundant and we used only one snapshot.","","CCMatrix","","" "Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc'Aurelio Ranzato, Arthur Szlam – Facebook AI Research; Harvard University, USA","Real or Fake? Learning to Discriminate Machine from Human Generated Text","https://arxiv.org/abs/1906.03351","papers","20190101Z00:00:00","","","Facebook AI Research; Harvard University, USA","nlp/text-classification","CCNews: We collect a de-duplicated subset of the English portion of the CommonCrawl news dataset (Nagel, 2016) [Sebastian Nagel. Cc-news. http://web.archive.org/save/http://commoncrawl.org/2016/10/news-dataset-available/, 2016.], which totals around 16 Billion words.","CC-NEWS","CCNews (Bakhtin, et al. 2019)","","" "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V.
Le – Carnegie Mellon University, Google AI Brain Team","XLNet: Generalized Autoregressive Pretraining for Language Understanding","https://arxiv.org/abs/1906.08237","papers","20190101Z00:00:00","","","Carnegie Mellon University, Google AI Brain Team","nlp/transformer-language-model","Following BERT [10], we use the BooksCorpus [40] and English Wikipedia as part of our pretraining data, which have 13GB plain text combined. In addition, we include Giga5 (16GB text) [26], ClueWeb 2012-B (extended from [5]), and Common Crawl [6] for pretraining. We use heuristics to aggressively filter out short or low-quality articles for ClueWeb 2012-B and Common Crawl, which results in 19GB and 110GB text respectively.","","","","" "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov – Facebook AI","Unsupervised Cross-lingual Representation Learning at Scale","https://arxiv.org/abs/1911.02116","papers","20190101Z00:00:00","Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences","This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6% average accuracy on XNLI, +13% average F1 score on MLQA, and +2.4% F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7% in XNLI accuracy for Swahili and 11.4% for Urdu over previous XLM models.
We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code, data and models publicly available.","Facebook AI","nlp/corpus-construction, nlp/web-as-corpus, nlp/language-model","Following Wenzek et al. (2019)², we build a clean CommonCrawl Corpus in 100 languages. [...] In this work, we introduced XLM-R, our new state of the art multilingual masked language model trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages.","","CC-100","CCNet","" "Joel Mackenzie, Rodger Benham, Matthias Petri, Johanne R. Trippas, J. Shane Culpepper, Alistair Moffat – The University of Melbourne, Melbourne, Australia; RMIT University, Melbourne, Australia; Amazon Alexa, Manhattan Beach, CA, USA","CC-News-En: A large English news corpus","https://doi.org/10.1145/3340531.3412762","papers","20200101Z00:00:00","corpus, user query variations, collection, news search, crowdsourcing","We describe a static, open-access news corpus using data from the Common Crawl Foundation, who provide free, publicly available web archives, including a continuous crawl of international news articles published in multiple languages. Our derived corpus, CC-News-En, contains 44 million English documents collected between September 2016 and March 2018. The collection is comparable in size with the number of documents typically found in a single shard of a large-scale, distributed search engine, and is four times larger than the news collections previously used in offline information retrieval experiments.
To complement the corpus, 173 topics were curated using titles from Reddit threads, forming a temporally representative sampling of relevant news topics over the 583 day collection window. Information needs were then generated using automatic summarization tools to produce textual and audio representations, and used to elicit query variations from crowdworkers, with a total of 10,437 queries collected against the 173 topics. Of these, 10,089 include key-stroke level instrumentation that captures the timings of character insertions and deletions made by the workers while typing their queries. These new resources support a wide variety of experiments, including large-scale efficiency exercises and query auto-completion synthesis, with scope for future addition of relevance judgments to support offline effectiveness experiments and hence batch evaluation campaigns.","The University of Melbourne, Melbourne, Australia; RMIT University, Melbourne, Australia; Amazon Alexa, Manhattan Beach, CA, USA","nlp/text-corpora, nlp/corpus-construction, ir/information-extraction","Our derived corpus, CC-News-En, contains 44 million English documents collected between September 2016 and March 2018. [...] One such example is the CommonCrawl Foundation,[¹ ] who generate large-scale crawls of the web at regular intervals. A key philosophy behind the Common Crawl is to democratize data, allowing open access with no fees. In late 2016, the Common Crawl Foundation announced a news-specific crawl (CC-News), [² ] with documents being added on a daily basis, and covering sources from a wide range of countries and languages. Here we derive a static, English segment of the CC-News crawl that we refer to as CC-News-En. Due to the storage and computation costs involved in filtering out non-English documents, we make the complete corpus available as a free resource, along with a suite of tools which can be used to replicate corpus extraction from the original source CC-News data.
We also provide a set of 10,437 user query variations over 173 query topics, including keystroke-level data collected from a novel crowdworking experiment. Our goal is to encourage reproducible and replicable experimentation, with greatly reduced barriers to entry. [...] A total of 2,291 CC-News WARC files were processed to build CC-News-En, covering the period 26 August 2016 to 31 March 2018, inclusive. The first and last WARC files in this collection are as follows: • CC-NEWS-20160826124520-00000.warc.gz • CC-NEWS-20180331191315-00143.warc.gz The resulting subset of compressed WARC files occupies 2.14 TiB of disk space, and contains a total of 102.5 million documents in over 100 languages. [...] Missing Documents and Temporal Gaps. During the creation of the collection, the CC-NEWS-20170812163812-00038.warc.gz file was not processed correctly by our pipeline, and was subsequently dropped from the CC-News-En corpus. In addition, there are six days within the 583 day period where no WARC files were added to the original CC-News crawl: 22/09/2016 – 25/09/2016 inclusive, 18/12/2017, and 22/12/2017. These gaps typically correspond to hardware and software upgrades on the crawl servers.[¹⁸ Private correspondence with Common Crawl Engineers.] It is also important to note that both CC-News and CC-News-En are not intended to be complete crawls of their sources, but rather, to provide a reproducible sample of these sites.","CC-NEWS","","","" "Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, Philipp Koehn – Facebook AI; Johns Hopkins University","CCAligned: A Massive collection of cross-lingual web-document pairs","https://www.aclweb.org/anthology/2020.emnlp-main.480","papers","20200101Z00:00:00","","Cross-lingual document alignment aims to identify pairs of documents in two distinct languages that are of comparable content or translations of each other.
In this paper, we exploit the signals embedded in URLs to label web documents at scale with an average precision of 94.5% across different language pairs. We mine sixty-eight snapshots of the Common Crawl corpus and identify web document pairs that are translations of each other. We release a new web dataset consisting of over 392 million URL pairs from Common Crawl covering documents in 8144 language pairs of which 137 pairs include English. In addition to curating this massive dataset, we introduce baseline methods that leverage cross-lingual representations to identify aligned documents based on their textual content. Finally, we demonstrate the value of this parallel documents dataset through a downstream task of mining parallel sentences and measuring the quality of machine translations from models trained on this mined data. Our objective in releasing this dataset is to foster new research in cross-lingual NLP across a variety of low, medium, and high-resource languages.","Facebook AI; Johns Hopkins University","nlp/machine-translation, nlp/text-corpora, nlp/parallel-corpus, nlp/cross-lingual-document-alignment","[...] we exploit the signals embedded in URLs to label web documents at scale with an average precision of 94.5% across different language pairs. We mine sixty-eight snapshots of the Common Crawl corpus and identify web document pairs that are translations of each other. We release a new web dataset consisting of over 392 million URL pairs from Common Crawl covering documents in 8144 language pairs of which 137 pairs include English. [...] Starting from 68 Common Crawl snapshots with a raw document count of 169.4 billion documents, upon deduplication, the resultant corpus is approximately 29.6 billion web documents from 107.8 million distinct web domains – an 83% reduction from the raw corpus.","","CCAligned-2020","","" "Tom B.
Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei – Johns Hopkins University; OpenAI","Language models are few-shot learners","https://arxiv.org/abs/2005.14165","papers","20200101Z00:00:00","","","Johns Hopkins University; OpenAI","nlp/language-model, ai/deep-learning, nlp/autoregressive-transformer-language-model, nlp/question-answering, nlp/machine-translation, nlp/text-generation","Datasets for language models have rapidly expanded, culminating in the Common Crawl dataset [...] constituting nearly a trillion words. [...] However, we have found that unfiltered or lightly filtered versions of Common Crawl tend to have lower quality than more curated datasets. Therefore, we took 3 steps to improve the average quality of our datasets: (1) we downloaded and filtered a version of CommonCrawl based on similarity to a range of high-quality reference corpora, (2) we performed fuzzy deduplication at the document level, within and across datasets, to prevent redundancy and preserve the integrity of our held-out validation set as an accurate measure of overfitting, and (3) we also added known high-quality reference corpora to the training mix to augment CommonCrawl and increase its diversity. Details of the first two points (processing of Common Crawl) are described in Appendix A.","","","","" "Metod Jazbec, Barna Pásztor, Felix Faltings, Nino Antulov-Fantulin, Petter N. 
Kolm – ETH Zurich, Switzerland; New York University, New York, USA","On the impact of publicly available news and information transfer to financial markets","https://arxiv.org/abs/2010.12002","papers","20200101Z00:00:00","","We quantify the propagation and absorption of large-scale publicly available news articles from the World Wide Web to financial markets. To extract publicly available information, we use the news archives from the Common Crawl, a nonprofit organization that crawls a large part of the web. We develop a processing pipeline to identify news articles associated with the constituent companies in the S&P 500 index, an equity market index that measures the stock performance of U.S. companies. Using machine learning techniques, we extract sentiment scores from the Common Crawl News data and employ tools from information theory to quantify the information transfer from public news articles to the U.S. stock market. Furthermore, we analyze and quantify the economic significance of the news-based information with a simple sentiment-based portfolio trading strategy. Our findings provide support that information in publicly available news on the World Wide Web has a statistically and economically significant impact on events in financial markets.","ETH Zurich, Switzerland; New York University, New York, USA","statistical-finance, ai/machine-learning, nlp/sentiment-analysis","In this article, we use news articles from the Common Crawl News, a subset of the Common Crawl’s petabytes of publicly available World Wide Web archives, to measure the impact of the arrival of new information about the constituent stocks in the S&P 500 index at the time of publishing. To the best of our knowledge, our study is the first one to use the Common Crawl in this way. We develop a cloud-based processing pipeline that identifies news articles in the Common Crawl News data that are related to the companies in the S&P 500.
As the Common Crawl public data archives are getting bigger, they are opening doors for many real-world “data-hungry” applications such as transformer models GPT⁴⁹ and BERT⁵⁰, a recent class of deep learning language models. We believe that public sources of news data is important not only for natural language processing (NLP) and finance communities but also for more general studies in complex systems and computational social sciences that are aiming to characterize (mis)information propagation and dynamics in techno-socio-economic systems. The abundance of high-frequency data around the financial systems enables complex systems researchers to have microscopic observables that allow verification of different models, theories, and hypotheses.","CC-NEWS","","","" "Marco Squarcina, Mauro Tempesta, Lorenzo Veronese, Stefano Calzavara, Matteo Maffei – TU Wien, Austria; Università Ca’ Foscari Venezia, Italy","Can I take your subdomain? Exploring related-domain attacks in the modern web","https://arxiv.org/abs/2012.01946","papers","20200101Z00:00:00","","","TU Wien, Austria; Università Ca’ Foscari Venezia, Italy","computer-security/internet-security, related-domain attacks","Our web security analysis aims at quantifying the number of domains hosting web applications that can be exploited by taking over the vulnerable domains discovered by RDScan. In particular, for every apex domain with at least one vulnerable subdomain, we selected from the CommonCrawl dataset [¹⁹ Common Crawl. Host- and domain-level webgraphs feb/mar/may 2020. https://commoncrawl.org/2020/06/host-and-domain-level-web-graphs-febmarmay-2020/, 2020.] the list of 200 most popular related-domains according to the Pagerank score [11].
From the homepage of these domains, we extracted the same-origin links that appear in the HTML code.","hyperlinkgraph/cc-main-2020-feb-mar-may/hostgraph","","","" "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu – Google, Mountain View, CA, USA","Exploring the limits of transfer learning with a unified text-to-text transformer","http://jmlr.org/papers/v21/20-074.html","papers","20200101Z00:00:00","","Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.","Google, Mountain View, CA, USA","nlp/corpus-construction, nlp/language-model","We also introduce our approach for treating every problem as a text-to-text task and describe our “Colossal Clean Crawled Corpus” (C4), the Common Crawl-based data set we created as a source of unlabeled text data. [...] Common Crawl is a publicly-available web archive that provides “web extracted text” by removing markup and other non-text content from the scraped HTML files.
This process produces around 20TB of scraped text data each month. Unfortunately, the majority of the resulting text is not natural language. Instead, it largely comprises gibberish or boilerplate text like menus, error messages, or duplicate text. Furthermore, a good deal of the scraped text contains content that is unlikely to be helpful for any of the tasks we consider (offensive language, placeholder text, source code, etc.). To address these issues, we used the following heuristics for cleaning up Common Crawl’s web extracted text: [...] To assemble our base data set, we downloaded the web extracted text from April 2019 and applied the aforementioned filtering. This produces a collection of text that is not only orders of magnitude larger than most data sets used for pre-training (about 750 GB) but also comprises reasonably clean and natural English text. We dub this data set the “Colossal Clean Crawled Corpus” (or C4 for short) and release it as part of TensorFlow Datasets.⁸ [⁸https://www.tensorflow.org/datasets/catalog/c4]","CC-MAIN-2019-18 (WET)","Tensorflow-C4","","" "Jay M. Patel – Specrom Analytics, Ahmedabad, India","Getting structured data from the internet","https://www.apress.com/gp/book/9781484265758","papers","20200101Z00:00:00","","","Specrom Analytics, Ahmedabad, India","web-mining","[Chapter 6: Introduction to Common Crawl Datasets + Chapter 7: Web Crawl Processing on Big Data Scale]","","","","" "Jonathan Dunn – University of Canterbury, Christchurch, New Zealand","Mapping languages: The Corpus of Global Language Use","https://doi.org/10.1007/s10579-020-09489-2","papers","20200101Z00:00:00","","This paper describes a web-based corpus of global language use with a focus on how this corpus can be used for data-driven language mapping. First, the corpus provides a representation of where national varieties of major languages are used (e.g., English, Arabic, Russian) together with consistently collected data for each variety.
Second, the paper evaluates a language identification model that supports more local languages with smaller sample sizes than alternative off-the-shelf models. Improved language identification is essential for moving beyond majority languages. Given the focus on language mapping, the paper analyzes how well this digital language data represents actual populations by (i) systematically comparing the corpus with demographic ground-truth data and (ii) triangulating the corpus with an alternate Twitter-based dataset. In total, the corpus contains 423 billion words representing 148 languages (with over 1 million words from each language) and 158 countries (again with over 1 million words from each country), all distilled from Common Crawl web data. The main contribution of this paper, in addition to describing this publicly-available corpus, is to provide a comprehensive analysis of the relationship between two sources of digital data (the web and Twitter) as well as their connection to underlying populations.","University of Canterbury, Christchurch, New Zealand","nlp/corpus-construction, nlp/language-identification","The raw portions of the Common Crawl dataset used to build the corpus are shown in Table 2. The corpus uses every portion of the crawl from March 2014 to June 2019, totaling 147 billion web pages.
No temporal divisions are included in the corpus because these dates represent the time of collection rather than the time of production: web data does not expire and there is a long-tail in which the same samples are observed multiple times across different periods.","64 monthly crawls: March 2014 (CC-MAIN-2014-10) -- June 2019 (CC-MAIN-2019-29) (WET)","earthlings.io/CGLU","","" "Liang Xu, Xuanwei Zhang, Qianqian Dong – CLUE Organization","CLUECorpus2020: A large-scale Chinese corpus for pre-training language model","https://arxiv.org/abs/2003.01355","papers","20200101Z00:00:00","","","CLUE Organization","nlp/corpus-construction","we introduce the Chinese corpus from the CLUE organization, CLUECorpus2020, a large-scale corpus that can be used directly for self-supervised learning such as pre-training of a language model, or language generation. It has 100G raw corpus with 35 billion Chinese characters, which is retrieved from Common Crawl¹. [...] We download the corpus from July to December 2019 from Common Crawl. After the aforementioned filtering method, we extract the corpus of 100GB.","July to December 2019 (WARC)","","","" "Andreas Giannakoulopoulos, Minas Pergantis, Nikos Konstantinou, Aristeidis Lamprogeorgos, Laida Limniati, Iraklis Varlamis – Ionian University, Corfu, Greece; Harokopio University of Athens, Athens, Greece","Exploring the Dominance of the English Language on the Websites of EU Countries","http://dx.doi.org/10.3390/fi12040076","papers","20200101Z00:00:00","","The English language is the most dominant language in the Western world and its influence can be noticed in every aspect of human communication. Its increasing diffusion, especially since the turn of the century, is hard to measure with conventional means.
The present research studies the use of language in websites of European Union (EU) member states, in order to collect data about the prevalence of the English language in the different countries and regions of the European Union. To achieve a realistic representation of today’s landscape of the European Web, this study uses a vast population of websites and a representative sampling size and methodology. By analyzing and processing the findings from over 100,000 websites from every country in the EU, a solid foundation is set that is used to explore the dominance of the English language in the European World Wide Web in general. This is the first study that examines the presence of English content in the websites of all EU member countries and provides statistical evidence regarding the ratio of English content availability for each country. Conclusively, the results of the research demonstrate that the English language is available on more than one quarter of all websites of non-English speaking EU member states. Moreover, it is available in the vast majority of multilingual and bilingual websites, while at the same time being the only language that is available in a number of monolingual websites. In addition, it is given preference over the national language in a significant number of cases. A moderate negative correlation is found between a member state’s population and the availability of English in these countries’ websites and the same holds true for a member state’s Gross Domestic Product (GDP). Both these correlations indicate that smaller countries tend to provide more content in English in order to establish a stronger presence in the international environment.
Taking into account the role of language in the expression of national identity, this study provides data and insights which may contribute to the discussion about the changes underway in the national identity of EU member states.","Ionian University, Corfu, Greece; Harokopio University of Athens, Athens, Greece","nlp/corpus-construction, web-science, socio-linguistics","The nature of the present research required as many websites as possible, so that both our total population and our sampling pool were as close a representation of reality as possible. For this purpose, we used information obtained from Common Crawl, a “repository of web crawl data that is universally accessible and analyzable” [34]. Among the data Common Crawl offers is an index of every available webpage for all member states of the EU amongst other countries. A process was developed in PHP: Hypertext Preprocessor (PHP) that used the CompounD indeX (CDX) server Application Program Interface (API) [35] to access Common Crawl’s Uniform Resource Locator (URL) index [36] and created a MariaDB database with information about websites from every member state of the EU. Although Common Crawl’s index provides all available crawled pages, our process of data collecting only focused on recording the landing page of one website per domain.
The complete pipeline from the Common Crawl URL dump to the gold standard privacy policy corpus is shown in Figure 1. [...] The Common Crawl Foundation is a non-profit which has been releasing large monthly internet web crawls since 2008. Monthly crawl archives provide a “snapshot of the web” by including re-crawls of popular domains (re-crawls from previous archives) and crawls of new domains. Common Crawl has also been releasing a domain-level webgraph from which the harmonic centrality of the crawled domains is calculated. This webgraph is used to sample popular domains that need to be re-crawled and to obtain new uncrawled domains. We downloaded the URL dump of the May 2019 archive. Common Crawl reports that the archive contains 2.65 billion web pages or 220 TB of uncompressed content which were crawled between 19th and 27th of May, 2019. They also report that this archive contains 825 million URLs which were not contained in any previously released crawl archives. We applied selection criteria on the downloaded URL dump to filter the URLs of likely privacy policy pages.
Common Crawl data contains over 25 billion web pages (Batikas, Claussen, and Peukert, 2018) and is widely used in hundreds of research projects (Batikas, Claussen, and Peukert, 2018; Cafarella et al., 2018). Since we were only interested in the content from Indeed.com, we only examined a very small fraction of the Common Crawl corpus.","","","","" "Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel – Google; Stanford University; UC Berkeley; Northeastern University; OpenAI; Harvard University; Apple","Extracting training data from large language models","https://arxiv.org/abs/2012.07805","papers","20200101Z00:00:00","","","Google; Stanford University; UC Berkeley; Northeastern University; OpenAI; Harvard University; Apple","ai/ethical-concerns, nlp/language-models","We follow a different data collection process as used in GPT-2 (which follows Reddit links) in order to reduce the likelihood that our dataset has any intersection with the model’s training data. In particular, we select samples from a subset of Common Crawl⁶ [⁶http://commoncrawl.org/] to feed as context to the model.⁷ [⁷It is possible there is some intersection between these two datasets, effectively allowing this strategy to “cheat”. We believe this does not considerably affect results. First, any overlap between the two datasets is rare on average. Second, because we only use the first 5 or 10 tokens of each sample, any possible overlap will be small in absolute terms.]","","","","" "Thaer Sammar, Hadi Khalilia – Palestine Technical University, Tulkarm, West Bank","Going Back in Time to Find What Existed on the Web and How much has been Preserved: How much of Palestinian Web has been Archived?","http://proceedings.sriweb.org/akn/index.php/art/article/view/410","papers","20200101Z00:00:00","","The web is an important resource for publishing and sharing content. 
The main characteristic of the web is its volatility. Content is added, updated, and deleted all the time. Therefore, many national and international institutes started crawling and archiving the content of the web. The main focus of national institutes is to archive the web related to their country heritage; for example, the National Library of the Netherlands is focusing on archiving websites that are of value to the Dutch heritage. However, there are still countries that haven’t taken the action to archive their web, which will result in losing content and having a gap in the knowledge. In this research, we focus on shedding light on the Palestinian web. Precisely, how much of the Palestinian web has been archived. First, we create a list of Palestinian hosts that were on the web. For that we queried the Google index exploiting the time-range filter in order to get hosts over time. We collected on average 98 hosts per 5-year granularity from 1990 to 2019. We also obtained Palestinian hosts from the DMOZ directory. We collected 188 hosts. Second, we investigate the coverage of collected hosts in the Internet Archive and the Common-Crawl. We found that coverage of Google hosts in the Internet Archive ranges from 0% to 89% from oldest to newest time-granularity. The coverage of DMOZ hosts was 96%. The coverage of Google hosts in the Common-Crawl ranges from 57.1% to 74.3%, while the coverage of DMOZ hosts in the Common-Crawl was on average 25% in all crawls. We found that even when a host is covered in the Internet Archive and Common-Crawl, the lifespan and the number of archived versions are low.","Palestine Technical University, Tulkarm, West Bank","web-archiving/regional-coverage","","CDX index","","","" "Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, Noah A. Smith – Paul G.
Allen School of Computer Science & Engineering, University of Washington, USA; Allen Institute for Artificial Intelligence, Seattle, USA","RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models","https://arxiv.org/abs/2009.11462","papers","20200101Z00:00:00","","","Paul G. Allen School of Computer Science & Engineering, University of Washington, USA; Allen Institute for Artificial Intelligence, Seattle, USA","no-citation-misclassified, ai/ethics-of-machine-learning, ai/machine-learning, nlp/language-model","","","","","" "Xinyue Wang, Zhiwu Xie – Virginia Polytechnic Institute and State University, Blacksburg, VA, USA","The Case For Alternative Web Archival Formats To Expedite The Data-To-Insight Cycle","https://doi.org/10.1145/3383583.3398542","papers","20200101Z00:00:00","storage management, big data analysis, web archiving, file format","The WARC file format is widely used by web archives to preserve collected web content for future use. With the rapid growth of web archives and the increasing interest to reuse these archives as big data sources for statistical and analytical research, the speed to turn these data into insights becomes critical. In this paper we show that the WARC format carries significant performance penalties for batch processing workload. We trace the root cause of these penalties to its data structure, encoding, and addressing method. We then run controlled experiments to illustrate how severe these problems can be. Indeed, performance gain of one to two orders of magnitude can be achieved simply by reformatting WARC files into Parquet or Avro formats. 
While these results do not necessarily constitute an endorsement for Avro or Parquet, the time has come for the web archiving community to consider replacing WARC with more efficient web archival formats.","Virginia Polytechnic Institute and State University, Blacksburg, VA, USA","web-archiving, data formats, big data, data processing, WARC, Parquet","","","","","" "Srdjan Matic, Costas Iordanou, Georgios Smaragdakis, Nikolaos Laoutaris – TU Berlin, Germany; Cyprus University of Technology, Cyprus; IMDEA Networks Institute","Identifying Sensitive URLs at Web-Scale","https://do.tu-berlin.de/handle/11303/13215","papers","20200101Z00:00:00","","Several data protection laws include special provisions for protecting personal data relating to religion, health, sexual orientation, and other sensitive categories. Having a well-defined list of sensitive categories is sufficient for filing complaints manually, conducting investigations, and prosecuting cases in courts of law. Data protection laws, however, do not define explicitly what type of content falls under each sensitive category. Therefore, it is unclear how to implement proactive measures such as informing users, blocking trackers, and filing complaints automatically when users visit sensitive domains. To empower such use cases we turn to the Curlie.org crowdsourced taxonomy project for drawing training data to build a text classifier for sensitive URLs. We demonstrate that our classifier can identify sensitive URLs with accuracy above 88%, and even recognize specific sensitive categories with accuracy above 90%. We then use our classifier to search for sensitive URLs in a corpus of 1 Billion URLs collected by the Common Crawl project. We identify more than 155 million sensitive URLs in more than 4 million domains. Despite their sensitive nature, more than 30% of these URLs belong to domains that fail to use HTTPS.
Also, in sensitive web pages with third-party cookies, 87% of the third-parties set at least one persistent cookie.","TU Berlin, Germany; Cyprus University of Technology, Cyprus; IMDEA Networks Institute","computer-security/internet-security, privacy, GDPR, general data protection regulation","When it comes to detecting specific sensitive categories, such as those defined by GDPR: Health, Politics, Religion, Sexual Orientation, Ethnicity, our classifier achieves a high classification accuracy as well. For specific categories, such as Health (98%), Politics (92%), Religion (97%), our classifier achieves an accuracy that exceeds the basic classification accuracy between sensitive and non-sensitive URLs (88%).¶ • Applying our classifier on a Common Crawl snapshot of the English speaking Web (around 1 Billion URLs), we identify 155 million sensitive URLs in more than 4 million domains. Health, Religion, and Political Beliefs are the most popular categories with around 70 million, 35 million, and 32 million URLs respectively.¶ • Looking among the identified sensitive URLs we reach the conclusion that sensitive URLs are handled as any other URL, without any special provision for the privacy of users. For example, we show that 30% of sensitive URLs are hosted in domains that fail to use HTTPS. Also, in sensitive web pages with third-party cookies, 87% of the third-parties set at least one persistent cookie.
a longitudinal analysis of deployed content security policies","https://par.nsf.gov/biblio/10173479","papers","20200101Z00:00:00","","The Content Security Policy (CSP) mechanism was developed as a mitigation against script injection attacks in 2010. In this paper, we leverage the unique vantage point of the Internet Archive to conduct a historical and longitudinal analysis of how CSP deployment has evolved for a set of 10,000 highly ranked domains. In doing so, we document the long-term struggle site operators face when trying to roll out CSP for content restriction and highlight that even seemingly secure whitelists can be bypassed through expired or typo domains. Next to these new insights, we also shed light on the usage of CSP for other use cases, in particular, TLS enforcement and framing control. Here, we find that CSP can be easily deployed to fit those security scenarios, but both lack wide-spread adoption. Specifically, while the underspecified and thus inconsistently implemented X-Frame-Options header is increasingly used on the Web, CSP’s well-specified and secure alternative cannot keep up. To understand the reasons behind this, we run a notification campaign and subsequent survey, concluding that operators have often experienced the complexity of CSP (and given up), utterly unaware of the easy-to-deploy components of CSP. Hence, we find the complexity of secure, yet functional content restriction gives CSP a bad reputation, resulting in operators not leveraging its potential to secure a site against the non-original attack vectors.","CISPA Helmholtz Center for Information Security, Germany; Stony Brook University, USA; Università Ca’ Foscari, Venezia, Italy","computer-security/internet-security, web-science","To determine this IA-specific influence, we chose a second archive service to corroborate the IA’s data. In particular, Common Crawl (CC) [10] has been collecting snapshots of popular sites since 2013.
For each date on which we found a CSP in the IA, we queried the CC API for a matching snapshot. Overall, we found 38,129 overlapping snapshots for 940 sites. Out of these, 729 (1.9%) on 127 sites were inconsistent between the two archives. For 96 cases the difference was the lack of block-all-mixed-content or upgrade-insecure-requests in the CC data. Further investigation showed that in the IA, these directives were separated from the remaining CSP with a comma instead of a semicolon. This likely relates to the IA joining headers with the same name with a comma. For those pages, we could always only find a single CSP header in the CC response. Moreover, starting from August 2018, these sites still used the aforementioned directives in the IA data, but CC returned two CSP headers (one including only those directives). Hence, we speculate this relates to a bug in CC, which was fixed around August 2018.","","","","" "Frankie Robertson, Jarkko Lagus, Kaisla Kajava – University of Jyväskylä, Finland; University of Helsinki, Finland","A COVID-19 news coverage mood map of Europe","https://www.aclweb.org/anthology/2021.hackashop-1.15","papers","20210101Z00:00:00","","We present a COVID-19 news dashboard which visualizes sentiment in pandemic news coverage in different languages across Europe. The dashboard shows analyses for positive/neutral/negative sentiment and moral sentiment for news articles across countries and languages. First we extract news articles from news-crawl. Then we use a pre-trained multilingual BERT model for sentiment analysis of news article headlines and a dictionary and word vectors -based method for moral sentiment analysis of news articles. 
The resulting dashboard gives a unified overview of news events on COVID-19 news overall sentiment, and the region and language of publication from the period starting from the beginning of January 2020 to the end of January 2021.","University of Jyväskylä, Finland; University of Helsinki, Finland","nlp/corpus-construction, nlp/sentiment-analysis","","CC-NEWS","","","" "Jesse Dodge, Maarten Sap, Ana Marasovic, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Matt Gardner – Paul G. Allen School of Computer Science & Engineering, University of Washington, USA; Allen Institute for Artificial Intelligence, USA","Documenting large webtext corpora: a case study on the Colossal Clean Crawled Corpus","https://arxiv.org/abs/2104.08758","papers","20210101Z00:00:00","","Large language models have led to remarkable progress on many NLP tasks, and researchers are turning to ever-larger text corpora to train them. Some of the largest corpora available are made by scraping significant portions of the internet, and are frequently introduced with only minimal documentation. In this work we provide some of the first documentation for the Colossal Clean Crawled Corpus (C4; Raffel et al., 2020), a dataset created by applying a set of filters to a single snapshot of Common Crawl. We begin by investigating where the data came from, and find a significant amount of text from unexpected sources like patents and US military websites. Then we explore the content of the text itself, and find machine-generated text (e.g., from machine translation systems) and evaluation examples from other benchmark NLP datasets. To understand the impact of the filters applied to create this dataset, we evaluate the text that was removed, and show that blocklist filtering disproportionately removes text from and about minority individuals. Finally, we conclude with some recommendations for how to created and document web-scale datasets from a scrape of the internet.","Paul G. 
Allen School of Computer Science & Engineering, University of Washington, USA; Allen Institute for Artificial Intelligence, USA","nlp/corpus-construction, nlp/language-model","","CC-MAIN-2019-18 (WET)","Tensorflow-C4, Huggingface-Allenai-C4-English","","" "Isaac Caswell, Julia Kreutzer, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey Osei, Pedro Javier Ortiz Suárez, Iroro Orife, Kelechi Ogueji, Rubungo Andre Niyongabo, Toan Q. Nguyen, Mathias Müller, André Müller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine Çabuk Ballı, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, Mofetoluwa Adeyemi – Google Research; Masakhane NLP; Turkic Interlingua; Haverford College; RobotsMali; Intel Labs; University of Zambia; Google; AIMS-AMMI; Inria; University of Zurich; Stanford University; Kwame Nkrumah University of Science and Technology; Sorbonne Université; Niger-Volta LTI; University of Waterloo; University of Electronic Science and Technology of China; University of Notre Dame; Bayero University Kano; University of South Florida; Hugging Face; Jacobs University Bremen; University of Moratuwa; EleutherAI; Obafemi Awolowo University; University of Ibadan; Instadeep; University of Maryland; Defence Space Administration Abuja","Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets","https://arxiv.org/abs/2103.12028","papers","20210101Z00:00:00","","With the success of large-scale pre-training and multilingual modeling 
in Natural Language Processing (NLP), recent years have seen a proliferation of large, web-mined text datasets covering hundreds of languages. However, to date there has been no systematic analysis of the quality of these publicly available datasets, or whether the datasets actually contain content in the languages they claim to represent. In this work, we manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4), and audit the correctness of language codes in a sixth (JW300). We find that lower-resource corpora have systematic issues: at least 15 corpora are completely erroneous, and a significant fraction contains less than 50% sentences of acceptable quality. Similarly, we find 82 corpora that are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-speakers of the languages in question, and supplement the human judgements with automatic analyses.
Inspired by our analysis, we recommend techniques to evaluate and improve multilingual corpora and discuss the risks that come with low-quality data releases.","Google Research; Masakhane NLP; Turkic Interlingua; Haverford College; RobotsMali; Intel Labs; University of Zambia; Google; AIMS-AMMI; Inria; University of Zurich; Stanford University; Kwame Nkrumah University of Science and Technology; Sorbonne Université; Niger-Volta LTI; University of Waterloo; University of Electronic Science and Technology of China; University of Notre Dame; Bayero University Kano; University of South Florida; Hugging Face; Jacobs University Bremen; University of Moratuwa; EleutherAI; Obafemi Awolowo University; University of Ibadan; Instadeep; University of Maryland; Defence Space Administration Abuja","nlp/corpus-construction, nlp/web-as-corpus, nlp/parallel-corpus, nlp/low-resource-language","We selected the corpora for their multilinguality and the inclusion of understudied languages in NLP. With the exception of WikiMatrix and Paracrawl, all corpora are derived from CommonCrawl, and distinguish themselves by the choice of filtering methods, LangID and automatic alignment technology.","","CCAligned-2020, Tensorflow-C4-Multilingual, OSCAR","","" "P. Kalaharsha, B. M. Mehtre – Institute for Development and Research in Banking Technology (IDRBT), Hyderabad, Indiab; School of Computer Science and Information Sciences (SCIS), University of Hyderabad, Hyderabad, India","Detecting Phishing Sites -- An Overview","https://arxiv.org/abs/2103.12739","papers","20210101Z00:00:00","","","Institute for Development and Research in Banking Technology (IDRBT), Hyderabad, Indiab; School of Computer Science and Information Sciences (SCIS), University of Hyderabad, Hyderabad, India","computer-security/internet-security, computer-security/malicious-domain-detection","Alexa and Common crawl contains names of the legitimate sites which are likely to be used for phishing [62][63]. 
[63:http://index.commoncrawl.org]","","","","" "Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel – Google Research","mT5: A massively multilingual pre-trained text-to-text transformer","https://arxiv.org/abs/2010.11934","papers","20210101Z00:00:00","","","Google Research","nlp/corpus-construction, nlp/web-as-corpus, nlp/language-model","[...] we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages.","CC-MAIN-2019-18 (WET)","Tensorflow-C4-Multilingual (mC4)","","" "Bilal Tahir, Muhammad Amir Mehmood – University of Engineering and Technology, Lahore, Pakistan","Corpulyzer: A Novel Framework for Building Low Resource Language Corpora","https://ieeexplore.ieee.org/document/9316706","papers","20210101Z00:00:00","","","University of Engineering and Technology, Lahore, Pakistan","nlp/corpus-construction, nlp/web-as-corpus, nlp/low-resource-language","Leveraging dataset from Common Crawl Corpus (CCC), first, we prepare a list of seed URLs by filtering the Urdu language webpages. Next, we use Corpulyzer to crawl the World-Wide-Web (WWW) over a period of four years (2016-2020). We build Urdu web corpus “UrduWeb20” that consists of 8.0 million Urdu webpages crawled from 6,590 websites. [...] building a corpus of a low-resource language from CCC is a challenging task due to: i) sampling techniques, ii) filtering of webpages of target languages, and iii) full parsing of CCC. [...] we build upon our previous approach [40] where we developed a dataset consisting of 1.28 million Urdu webpages from CCC 2016 dataset. [...] In general, CCC releases meta-data as well as the crawled content, where the former is lightweight and easier to analyze and the latter requires huge bandwidth to download and store the data. As an alternate strategy, we build three datasets using CC released data: i) CC-meta, ii) CC-Urdu-meta, and iii) CC-Urdu-crawl.
First, we build the CC-meta dataset to explore the impact of URL selection and crawling strategies of Common Crawl in general. This dataset consists of meta-information of 29.1 billion URLs in 11 common crawl releases from September 2018 – June 2019. This meta-information of each release is available in the form of compressed files (>200GB size) with information of webpage URL, MIME-type, and charset etc [94]. Next, we build the CC-Urdu-meta dataset by filtering out Urdu webpages. We note that from August 2018 onward releases [95], CC also provides the ISO-639-3 language codes of the top three languages present in each webpage, obtained from CLD2 after parsing the HTML.","","","","" "Alexandra Sasha Luccioni, Joseph D. Viviano – Université de Montréal, Canada; Mila Québec AI Institute, Canada","What's in the Box? An Analysis of Undesirable Content in the Common Crawl Corpus","https://arxiv.org/abs/2105.02732","papers","20210101Z00:00:00","","","Université de Montréal, Canada; Mila Québec AI Institute, Canada","ai/ethics-of-machine-learning, nlp/corpus-construction, nlp/text-corpora","Given its size, both downloading and analyzing the Common Crawl are time-consuming and costly endeavors. The most recent version of the Common Crawl [https://commoncrawl.org/2020/12/nov-dec-2020-crawl-archive-now-available/], dating from November/December 2020, has 2.6 billion web pages in raw text format, saved in ‘shards’ each containing tens of thousands of pages. Given our hardware constraints, we chose to focus on a subset of the corpus, randomly sampling 1% of the files it contains, amounting to roughly 81 GB of textual content or 5,835,339 webpages in total, which we analyzed in terms of hate speech, adult content, and efficacy of perplexity-based filtering. All code used in these analyses is publicly available¹ [¹https://github.com/josephdviviano/whatsinthebox]. [...]
We found that the three approaches compared suggest similar proportions of websites containing hate speech: 5.24% of websites from our sample were flagged by DELIMIT, 4.02% by HateSonar, and 6.38% by the n-gram approach². [²We are conscious of the high false positive rate of n-gram approaches and therefore only consider sites to be flagged if they contain 3 or more n-grams from the list.] Qualitative analysis of a sample of sites flagged by each approach showed that while n-grams picked up on racial slurs, HateSonar picked up on debates about racial supremacy and conspiracy theories. Many of the sites that DELIMIT flagged were adult content with mentions of violent acts towards specific ethnic groups, illustrating the fine line between sexual violence and hate speech. [...] While it can be argued that the Common Crawl corpus is an accurate portrayal of the discourse of modern society – which includes sexual content, hate speech, racial biases, and gender biases – we believe that it is up for debate whether this discourse is the one that we, as a community, want to use to train the models that translate our texts, influence our search results and answer our questions. 
Notably, the Common Crawl overrepresents those populations that are avid users of the internet: younger, English-speaking individuals from developed countries, [...]","","","","" "Maik Fröbe, Janek Bevendorff, Lukas Gienapp, Michael Völske, Benno Stein, Martin Potthast, Matthias Hagen – Martin-Luther-Universität Halle-Wittenberg, Germany; Bauhaus-Universität Weimar, Germany; Leipzig University, Germany","CopyCat: Near-Duplicates within and between the ClueWeb and the Common Crawl","https://dl.acm.org/doi/10.1145/3404835.3463246","papers","20210101Z00:00:00","","","Martin-Luther-Universität Halle-Wittenberg, Germany; Bauhaus-Universität Weimar, Germany; Leipzig University, Germany","ir/duplicate-detection","","CC-MAIN-2015-11, CC-MAIN-2017-04","","","" "Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy – EleutherAI","The Pile: An 800GB Dataset of Diverse Text for Language Modeling","https://arxiv.org/abs/2101.00027","papers","20210101Z00:00:00","","Recent work has demonstrated that increased training dataset diversity improves general cross-domain knowledge and downstream generalization capability for large-scale language models. With this in mind, we present the Pile: an 825 GiB English text corpus targeted at training large-scale language models. The Pile is constructed from 22 diverse high-quality subsets—both existing and newly constructed—many of which derive from academic or professional sources. Our evaluation of the untuned performance of GPT-2 and GPT-3 on the Pile shows that these models struggle on many of its components, such as academic writing. Conversely, models trained on the Pile improve significantly over both Raw CC and CC-100 on all components of the Pile, while improving performance on downstream evaluations. Through an in-depth exploratory analysis, we document potentially concerning aspects of the data for prospective users. 
We make publicly available the code used in its construction.¹ [¹https://pile.eleuther.ai/]","EleutherAI","nlp/corpus-construction, nlp/text-corpora, nlp/language-model, nlp/text-corpora/legal-aspects","The growing need for data in language modeling has caused most existing large-scale language models to turn to the Common Crawl for most or all of their data (Brown et al., 2020; Raffel et al., 2019). While training on the Common Crawl has been effective, recent work has shown that dataset diversity leads to better downstream generalization capability (Rosset, 2019). [...] we also introduce a new filtered subset of Common Crawl, Pile-CC, with improved extraction quality. [...] 2.1 Pile-CC Common Crawl is a collection of website crawls from 2008 onwards, including raw web pages, metadata and text extractions. Due to the raw nature of the dataset, Common Crawl has the advantage of including text from diverse domains, but at the cost of varying quality data. Due to this, use of Common Crawl typically necessitates well-designed extraction and filtering. Our Common Crawl-based dataset, Pile-CC, uses jusText (Endrédy and Novák, 2013) on Web Archive files (raw HTTP responses including page HTML) for extraction, which yields higher quality output than directly using the WET files (extracted plain-text). [...] Surprisingly, raw Common Crawl performs better on the Pile BPB than CC-100, despite losing by a significant margin on LAMBADA and WikiText. We hypothesize that this is due to the perplexity based filtering used in CC-100, where a language model is trained on Wikipedia and all data with a perplexity too high or too low is discarded. This effectively discards any data too similar to or too different from Wikipedia, which severely limits the diversity of the collected data. This result suggests that future work using Common Crawl should take caution with filtering to preserve its diversity.","69 monthly crawls (WARC): CC-MAIN-2013-20 - CC-MAIN-2020-24, cf. 
https://github.com/leogao2/commoncrawl_downloader/blob/3a7a4a7c33aaee2a45f320f7bc57d0dcd3f3a220/indexes_20200607105929","The-Pile-English","","" "Leon Derczynski, Manuel R. Ciosici, Rebekah Baglini, Morten H. Christiansen, Jacob Aarup Dalsgaard, Riccardo Fusaroli, Peter Juel Henrichsen, Rasmus Hvingelby, Andreas Kirkedal, Alex Speed Kjeldsen, Claus Ladefoged, Finn Årup Nielsen, Jens Madsen, Malte Lau Petersen, Jonathan Hvithamar Rystrøm, Daniel Varab – ITU Copenhagen, Denmark; Aarhus University, Denmark; Danish Language Council, Denmark; TV2 Regionerne, Denmark; Karnov Group, Denmark; USC Information Sciences Institute, USA; Alexandra Institute, Denmark; University of Copenhagen, Denmark; Technical University of Denmark; Novo Nordisk, Denmark","The Danish Gigaword Corpus","https://gigaword.dk/","papers","20210101Z00:00:00","","","ITU Copenhagen, Denmark; Aarhus University, Denmark; Danish Language Council, Denmark; TV2 Regionerne, Denmark; Karnov Group, Denmark; USC Information Sciences Institute, USA; Alexandra Institute, Denmark; University of Copenhagen, Denmark; Technical University of Denmark; Novo Nordisk, Denmark","nlp/corpus-construction, nlp/text-corpora","[...] the Danish section of Common Crawl is plagued by significant amounts of non-Danish content, in part due to the pervasive confusion between Danish and Norwegian Bokmål by highly multilingual language ID classifiers (Haas and Derczynski, 2021). Datasets derived exclusively from Common Crawl also have a bias toward webspeak and content from recent years, leaving models built over them sub-optimally prepared to process older Danish. 
Common Crawl’s undirected collection of content often overrepresents some dialects at the expense of other dialects.","","","","" "Patrick Dinklage, Jonas Ellert, Johannes Fischer, Florian Kurpicz, Marvin Löbel – TU Dortmund University, Germany","Practical Wavelet Tree Construction","https://doi.org/10.1145/3457197","papers","20210101Z00:00:00","text indexing, shared memory, external memory, distributed memory, data structures","We present new sequential and parallel algorithms for wavelet tree construction based on a new bottom-up technique. This technique makes use of the structure of the wavelet trees—refining the characters represented in a node of the tree with increasing depth—in an opposite way, by first computing the leaves (most refined), and then propagating this information upwards to the root of the tree. We first describe new sequential algorithms, both in RAM and external memory. Based on these results, we adapt these algorithms to parallel computers, where we address both shared memory and distributed memory settings.In practice, all our algorithms outperform previous ones in both time and memory efficiency, because we can compute all auxiliary information solely based on the information we obtained from computing the leaves. Most of our algorithms are also adapted to the wavelet matrix, a variant that is particularly suited for large alphabets.","TU Dortmund University, Germany","data-structures, text-indexing","Common Crawl. The Common Crawl corpus contains websites that are crawled by the Common Crawl Project. We use the WET files, which contain only the textual data of the crawled websites, i. e., no HTML tags. We also removed the meta information added by the Commoncrawl corpus. To be more precise, we used the following WET files: crawl-data/CC-MAIN-2019-09/segments/1550247479101.30/wet/CC-MAIN-20190215183319-20190215205319-#ID.warc.wet, where #ID is in the range from 00000 to 00600. 
As we only care for the text, we removed the WARC meta information, i. e., each line consisting of WARC/1.0 and the following eight lines. CommonCrawl is the concatenation of all files sorted in ascending order by their ID.","CC-MAIN-2019-09 (600 WET files)","","","" "Jay A. Olson, Johnny Nahas, Denis Chmoulevitch, Simon J. Cropper, Margaret E. Webb – Department of Psychology, Harvard University, Cambridge, MA, USA; Department of Psychology, McGill University, Montreal, QC, Canada; Melbourne School of Psychological Sciences, University of Melbourne, Australia","Naming unrelated words predicts creativity","https://www.pnas.org/content/118/25/e2022340118","papers","20210101Z00:00:00","","Many traditional measures of creativity require time-intensive and subjective scoring procedures. Their scores are relative to the specific sample, which makes multicultural or international assessments difficult. Our results show that a shorter and simpler task with automatic and objective scoring may be at least as reliable at measuring verbal creativity. This finding enables assessments across larger and more diverse samples with less bias.Several theories posit that creative people are able to generate more divergent ideas. If this is correct, simply naming unrelated words and then measuring the semantic distance between them could serve as an objective measure of divergent thinking. To test this hypothesis, we asked 8,914 participants to name 10 words that are as different from each other as possible. A computational algorithm then estimated the average semantic distance between the words; related words (e.g., cat and dog) have shorter distances than unrelated ones (e.g., cat and thimble). We predicted that people producing greater semantic distances would also score higher on traditional creativity measures. 
In Study 1, we found moderate to strong correlations between semantic distance and two widely used creativity measures (the Alternative Uses Task and the Bridge-the-Associative-Gap Task). In Study 2, with participants from 98 countries, semantic distances varied only slightly by basic demographic variables. There was also a positive correlation between semantic distance and performance on a range of problems known to predict creativity. Overall, semantic distance correlated at least as strongly with established creativity measures as those measures did with each other. Naming unrelated words in what we call the Divergent Association Task can thus serve as a brief, reliable, and objective measure of divergent thinking.The data and algorithm code have been deposited in the Open Science Framework (https://osf.io/vjazn/).","Department of Psychology, Harvard University, Cambridge, MA, USA; Department of Psychology, McGill University, Montreal, QC, Canada; Melbourne School of Psychological Sciences, University of Melbourne, Australia","psychology/creativity, psychology/computational-scoring, nlp/word-embeddings","We chose the GloVe algorithm and the Common Crawl corpus [...]","","","GloVe-word-embeddings","" "Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer – Facebook AI; University of Washington, USA","HTLM: Hyper-Text Pre-Training and Prompting of Language Models","https://arxiv.org/abs/2107.06955","papers","20210101Z00:00:00","","We introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advan- tages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-task- adjacent supervision (e.g. class and id at- tributes often encode document category information), and (3) it allows for new structured prompting that follows the established seman- tics of HTML (e.g. to do zero-shot summarization by infilling