cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited
"Roland Schäfer – Freie Universität Berlin, Germany",Accurate and Efficient General-purpose Boilerplate Detection for Crawled Web Corpora,https://doi.org/10.1007/s10579-016-9359-2,papers,20170101Z00:00:00,"Boilerplate, Corpus construction, Non-destructive corpus normalization, Web corpora","Removal of boilerplate is one of the essential tasks in web corpus construction and web indexing. Boilerplate (redundant and automatically inserted material like menus, copyright notices, navigational elements, etc.) is usually considered to be linguistically unattractive for inclusion in a web corpus. Also, search engines should not index such material because it can lead to spurious results for search terms if these terms appear in boilerplate regions of the web page. The size of large web corpora necessitates the use of efficient algorithms while a high accuracy directly improves the quality of the final corpus. In this paper, I present and evaluate a supervised machine learning approach to general-purpose boilerplate detection for languages based on Latin alphabets which is both very efficient and very accurate. Using a Multilayer Perceptron and a high number of carefully engineered features, I achieve between 95\% and 99\% correct classifications (depending on the input language) with precision and recall over 0.95. Since the perceptrons are trained on language-specific data, I also evaluate how well perceptrons trained on one language perform on other languages. The single features are also evaluated for the merit they contribute to the classification. I show that the accuracy of the Multilayer Perceptron is on a par with that of other classifiers such as Support Vector Machines. I conclude that the quality of general-purpose boilerplate detectors depends mainly on the availability of many well-engineered features and which are highly language-independent. The method has been implemented in the open-source texrex web page cleaning software, and large corpora constructed using it are available from the COW initiative, including the CommonCOW corpora created from CommonCrawl data sets.","Freie Universität Berlin, Germany","nlp/boilerplate-removal, nlp/web-as-corpus, nlp/corpus-construction",,,,, | |
"Daniel Zeman, Martin Popel, Milan Straka, Jan Hajic, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinkova, Jan Hajic jr., Jaroslava Hlavacova, Václava Kettnerová, Zdenka Uresova, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria dePaiva, Kira Droganova, Héctor Martínez Alonso, Çağrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonca, Tatiana Lando, Rattima Nitisaroj, Josie Li – Charles University, Czech Republic; Uppsala University, Sweden; University of Turku, Finland; University of Cambridge; Google; Bauhaus-Universität Weimar, Germany; UiT The Arctic University of Norway; University of the Basque Country, Spain; Istanbul Technical University, Turkey; Stanford University; New York University Abu Dhabi; City University of Hong Kong; Ohio State University, USA; University of Turin, Italy; University of Pisa, Italy; IBM Research; Nuance Communications; INRIA – Paris 7, France; University of Tübingen, Germany; DFKI, Germany; text & form, Germany",CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies,http://www.aclweb.org/anthology/K/K17/K17-3001.pdf,papers,20170101Z00:00:00,,"The Conference on Computational Natural Language Learning (CoNLL) features a shared task, in which participants train and test their learning systems on the same data sets. In 2017, the task was devoted to learning dependency parsers for a large number of languages, in a real-world setting without any gold-standard annotation on input. All test sets followed a unified annotation scheme, namely that of Universal Dependencies. In this paper, we define the task and evaluation methodology, describe how the data sets were prepared, report and analyze the main results, and provide a brief categorization of the different approaches of the participating systems.","Charles University, Czech Republic; Uppsala University, Sweden; University of Turku, Finland; University of Cambridge; Google; Bauhaus-Universität Weimar, Germany; UiT The Arctic University of Norway; University of the Basque Country, Spain; Istanbul Technical University, Turkey; Stanford University; New York University Abu Dhabi; City University of Hong Kong; Ohio State University, USA; University of Turin, Italy; University of Pisa, Italy; IBM Research; Nuance Communications; INRIA – Paris 7, France; University of Tübingen, Germany; DFKI, Germany; text & form, Germany","nlp/dependency-parsing, nlp/dependency-treebank, nlp/corpus-construction","The supporting raw data was gathered from CommonCrawl, which is a publicly available web crawl created and maintained by the non-profit CommonCrawl foundation.² The data is publicly available in the Amazon cloud both as raw HTML and as plain text. It is collected from a number of independent crawls from 2008 to 2017, and totals petabytes in size. We used cld2³ as the language detection engine because of its speed, available Python bindings and large coverage of languages. 
Language detection was carried out on the first 1024 bytes of each plaintext document. Deduplication was carried out using hashed document URLs, a simple strategy found in our tests to be effective for coarse duplicate removal. The data for each language was capped at 100,000 tokens per input file.",,conll-2017-shared-task,,
"Abu Bakr Soliman, Kareem Eissa, Samhaa El-Beltagy – Nile University, Egypt",AraVec: A set of Arabic Word Embedding Models for use in Arabic NLP,https://www.researchgate.net/publication/319880027_AraVec_A_set_of_Arabic_Word_Embedding_Models_for_use_in_Arabic_NLP,papers,20170101Z00:00:00,,,"Nile University, Egypt",nlp/word-embeddings,"we have used a subset of the January 2017 crawl dump. The dump contains more than 3.14 billion web pages and about 250 Terabytes of uncompressed content. [...] We used WET files as we were only interested in plain text for building the distributed word representation models. Due to the size of the dump, which requires massive processing power and time for handling, we only used 30\% of the data contained in it. As this subset comprises about one billion web pages (written in multiple language), we believed that it was large enough to provide sufficient Arabic Web pages from which we can build a representative word embeddings model. Here it is important to note that the Common Crawl project does not provide any technique for identifying or selecting the language of web pages to download. So, we had to download data first, and then discard pages that were not written in Arabic. The Arabic detection phase was performed using some regex commands and some NLP techniques to distinguish Arabic from other languages. After the completion of this phase we succeeded in obtaining 4,379,697 Arabic web pages which were then segmented into more than 180,000,000 paragraphs/documents for building our models.",,,, | |
"Tommy Dean, Ali Pasha, Brian Clarke, Casey J. Butenhoff – Virginia Polytechnic Institute and State University, USA; Eastman Chemical Company; USA",Common Crawl Mining,http://hdl.handle.net/10919/77629,papers,20170101Z00:00:00,,,"Virginia Polytechnic Institute and State University, USA; Eastman Chemical Company; USA","information retrieval, market research, business intelligence",The main goal behind the Common Crawl Mining system is to improve Eastman Chemical Company’s ability to use timely knowledge of public concerns to inform key business decisions. It provides information to Eastman Chemical Company that is valuable for consumer chemical product marketing and strategy development. Eastman desired a system that provides insight into the current chemical landscape. Information about trends and sentiment towards chemicals over time is beneficial to their marketing and strategy departments. They wanted to be able to drill down to a particular time period and look at what people were writing about certain keywords. [...] The final Common Crawl Mining system is a search engine implemented using Elasticsearch. Relevant records are identified by first analyzing Common Crawl for Web Archive (WARC) files that have a high frequency of records from interesting domains.,,,, | |
"Yuheng Du, Alexander Herzog, Andre Luckow, Ramu Nerella, Christopher Gropp, Amy Apon – Clemson University, USA",Representativeness of latent dirichlet allocation topics estimated from data samples with application to common crawl,http://alexherzog.net/files/IEEE_BigData_2017_Representativeness_of_LDA.pdf,papers,20170101Z00:00:00,,,"Clemson University, USA","nlp/topic-modeling, nlp/corpus-representativeness","Common Crawl is a massive multi-petabyte dataset hosted by Amazon. It contains archived HTML web page data from 2008 to date. Common Crawl has been widely used for text mining purposes. Using data extracted from Common Crawl has several advantages over a direct crawl of web data, among which is removing the likelihood of a user’s home IP address becoming blacklisted for accessing a given web site too frequently. However, Common Crawl is a data sample, and so questions arise about the quality of Common Crawl as a representative sample of the original data. We perform systematic tests on the similarity of topics estimated from Common Crawl compared to topics estimated from the full data of online forums. Our target is online discussions from a user forum for automotive enthusiasts, but our research strategy can be applied to other domains and samples to evaluate the representativeness of topic models. We show that topic proportions estimated from Common Crawl are not significantly different than those estimated on the full data. We also show that topics are similar in terms of their word compositions, and not worse than topic similarity estimated under true random sampling, which we simulate through a series of experiments. Our research will be of interest to analysts who wish to use Common Crawl to study topics of interest in user forum data, and analysts applying topic models to other data samples.",,,, | |
"Shalini Ghosh, Phillip Porras, Vinod Yegneswaran, Ken Nitz, Ariyam Das – CSL, SRI International, Menlo Park",ATOL: A Framework for Automated Analysis and Categorization of the Darkweb Ecosystem,https://www.aaai.org/ocs/index.php/WS/AAAIW17/paper/download/15205/14661,papers,20170101Z00:00:00,,,"CSL, SRI International, Menlo Park","web-science, information retrieval, nlp/text-classification",".onion references from [...] and an open repository of (non-onion) Web crawling data, called Common Crawl (Common Crawl Foundation 2016).",,,, | |
"Filip Ginter, Jan Hajič, Juhani Luotolahti, Milan Straka, Daniel Zeman – Charles University, Czech Republic; University of Turku, Finland",CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings,http://hdl.handle.net/11234/1-1989,papers,20170101Z00:00:00,,,"Charles University, Czech Republic; University of Turku, Finland","nlp/corpus-construction, nlp/word-embeddings, nlp/syntactic-annotations, nlp/dependency-parsing","Automatic segmentation, tokenization and morphological and syntactic annotations of raw texts in 45 languages, generated by UDPipe (http://ufal.mff.cuni.cz/udpipe), together with word embeddings of dimension 100 computed from lowercased texts by word2vec (https://code.google.com/archive/p/word2vec/). [...] Note that the CC BY-SA-NC 4.0 license applies to the automatically generated annotations and word embeddings, not to the underlying data, which may have different license and impose additional restrictions.",,conll-2017-shared-task,, | |
"Jakub Kúdela, Irena Holubová, Ondřej Bojar – Charles University, Czech Republic",Extracting Parallel Paragraphs from Common Crawl,https://ufal.mff.cuni.cz/pbml/107/art-kudela-holubova-bojar.pdf,papers,20170101Z00:00:00,,"Most of the current methods for mining parallel texts from the web assume that web pages of web sites share same structure across languages. We believe that there still exists a non-negligible amount of parallel data spread across sources not satisfying this assumption. We propose an approach based on a combination of bivec (a bilingual extension of word2vec) and locality-sensitive hashing which allows us to efficiently identify pairs of parallel segments located anywhere on pages of a given web domain, regardless their structure. We validate our method on realigning segments from a large parallel corpus. Another experiment with real-world data provided by Common Crawl Foundation confirms that our solution scales to hundreds of terabytes large set of web-crawled data.","Charles University, Czech Republic","nlp/machine-translation, nlp/corpus-construction",,,,, | |
"Amir Mehmood, Hafiz Muhammad Shafiq, Abdul Waheed – UET, Lahore, Pakistan",Understanding Regional Context of World Wide Web using Common Crawl Corpus,https://www.researchgate.net/publication/321489200_Understanding_Regional_Context_of_World_Wide_Web_using_Common_Crawl_Corpus,papers,20170101Z00:00:00,,,"UET, Lahore, Pakistan","web-science, webometrics",,CC-MAIN-2016-50,,, | |
"Alexander Panchenko, Eugen Ruppert, Stefano Faralli, Simone Paolo Ponzetto, Chris Biemann – University of Hamburg, Germany; University of Mannheim, Germany",Building a Web-Scale Dependency-Parsed Corpus from CommonCrawl,http://arxiv.org/abs/1710.01779,papers,20170101Z00:00:00,,,"University of Hamburg, Germany; University of Mannheim, Germany","nlp/dependency-parsing, nlp/corpus-construction",,CC-MAIN-2016-07,depcc,, | |
"Ajinkya Kale, Thrivikrama Taula, Sanjika Hewavitharana, Amit Srivastava – eBay Inc.",Towards semantic query segmentation,https://arxiv.org/abs/1707.07835,papers,20170101Z00:00:00,,,eBay Inc.,"ir/query-segmentation, nlp/word-embeddings, patent",,,,,GloVe-word-embeddings | |
"Kjetil Bugge Kristoffersen – University of Oslo, Norway",Common crawled web corpora: constructing corpora from large amounts of web data,http://urn.nb.no/URN:NBN:no-60569,papers,20170101Z00:00:00,,"Efforts to use web data as corpora seek to provide solutions to problems traditional corpora suffer from, by taking advantage of the web's huge size and diverse type of content. This thesis will discuss the several sub-tasks that make up the web corpus construction process, like HTML markup removal, language identification, boilerplate removal, duplication detection, etc. Additionally, by using data provided by the Common Crawl Foundation, I develop a new very large English corpus with more than 135 billion tokens. Finally, I evaluate the corpus by training word embeddings and show that the trained model largely outperforms models trained on other corpora in a word analogy and word similarity task.","University of Oslo, Norway","nlp/corpus-construction, nlp/web-as-corpus",,,,, | |
"David Stuart – University of Wolverhampton, Wolverhampton, UK",Open bibliometrics and undiscovered public knowledge,https://doi.org/10.1108/OIR-07-2017-0209,papers,20170101Z00:00:00,,,"University of Wolverhampton, Wolverhampton, UK",web-science/webometrics,"Whether altmetrics is really any more open than traditional citation analysis is a matter of debate, although services such as Common Crawl (http://commoncrawl.org), an open repository of web crawl data, provides the opportunity for more open webometrics, [...]",,,, | |