year | title | authors | snippet | url
---|---|---|---|---|
2016 | Learning to refine text based recommendations | [
"Y Gu, T Lei, R Barzilay, T Jaakkola"
] | ... Word Vectors: For the ingredient/product prediction task, we used the GloVe pre-trained vectors (Common Crawl, 42 billion tokens, 300-dimensional) (Pennington et al., 2014). The word vectors for the AskUbuntu vectors are pre-trained using the AskUbuntu and Wikipedia ... | [
"https://people.csail.mit.edu/taolei/papers/emnlp16_recommendation.pdf"
] |
2016 | Learning to translate from graded and negative relevance information | [
"L Jehl, S Riezler"
] | Page 1. Learning to translate from graded and negative relevance information Laura Jehl Computational Linguistics Heidelberg University 69120 Heidelberg, Germany jehl@cl.uni-heidelberg.de Stefan Riezler Computational ... | [
"https://pdfs.semanticscholar.org/79ee/9b20f0776affab912a3528d604e152cc1217.pdf"
] |
2016 | Lexical Coherence Graph Modeling Using Word Embeddings | [
"M Mesgar, M Strube - Proceedings of NAACL-HLT, 2016"
] | ... 1971). We use a pretrained model of GloVe for word embeddings. This model is trained on Common Crawl with 840B tokens, 2.2M vocabulary. We represent each word by a vector with length 300 (Pennington et al., 2014). For ... | [
"http://www.aclweb.org/anthology/N/N16/N16-1167.pdf"
] |
2016 | LIMSI@ WMT'16: Machine translation of news | [
"A Allauzen, L Aufrant, F Burlot, E Knyazeva… - Proc. of the ACL 2016 First …, 2016"
] | ... Having noticed many sentence alignment errors and out-of-domain parts in the Russian common-crawl parallel corpus, we have used a bilingual sentence aligner3 and proceeded to a domain adaptation filtering using the same procedure as for monolingual data (see ... | [
"http://www.aclweb.org/anthology/W/W16/W16-2304.pdf"
] |
2016 | Log-linear Combinations of Monolingual and Bilingual Neural Machine Translation Models for Automatic Post-Editing | [
"M Junczys-Dowmunt, R Grundkiewicz - arXiv preprint arXiv:1605.04800, 2016"
] | ... 4. The German monolingual common crawl corpus — a very large resource of raw German text from the Common Crawl project — admissible for the WMT-16 news translation and IT translation tasks. 3.2 Pre- and post-processing ... | [
"http://arxiv.org/pdf/1605.04800"
] |
2016 | Lurking Malice in the Cloud: Understanding and Detecting Cloud Repository as a Malicious Service | [
"X Liao, S Alrwais, K Yuan, L Xing, XF Wang, S Hao… - Proceedings of the 2016 …, 2016"
] | ... Running the scanner over all the data collected by the Common Crawl [?], which indexed five billion web pages, for those associated with all major cloud storage providers (including Amazon S3, Cloudfront, Google Drive, etc.), we found around 1 million sites utilizing 6,885 ... | [
"http://dl.acm.org/citation.cfm?id=2978349"
] |
2016 | Machine Translation Quality and Post-Editor Productivity | [
"M Sanchez-Torron, P Koehn - AMTA 2016, Vol., 2016"
] | ... corresponding Spanish human reference translations. We trained nine MT systems with training data from the European Parliament proceedings, News Commentary, Common Crawl, and United Nations. The systems are phrase ... | [
"https://www.researchgate.net/profile/John_Ortega3/publication/309765044_Fuzzy-match_repair_using_black-box_machine_translation_systems_what_can_be_expected/links/5822496f08ae7ea5be6af317.pdf#page=22"
] |
2016 | Machine Translation Through Learning From a Communication Game | [
"D He, Y Xia, T Qin, L Wang, N Yu, T Liu, WY Ma - Advances In Neural Information …, 2016"
] | ... In detail, we used the same bilingual corpora from WMT'14 as used in [1, 5], which contains 12M sentence pairs extracting from five datasets: Europarl v7, Common Crawl corpus, UN corpus, News Commentary, and 10^9 French-English corpus. ... | [
"http://papers.nips.cc/paper/6468-machine-translation-through-learning-from-a-communication-game.pdf"
] |
2016 | Measuring semantic similarity of words using concept networks | [
"G Recski, E Iklódi, K Pajkossy, A Kornai"
] | ... We extend this set of models with GloVe vectors4 (Pennington et al., 2014), trained on 840 billion tokens of Common Crawl data5, and the two word embeddings mentioned in Section 1 that have recently been evaluated on the SimLex dataset: the 500-dimension SP model6 ... | [
"http://www.kornai.com/Papers/wordsim.pdf"
] |
2016 | Models and Inference for Prefix-Constrained Machine Translation | [
"J Wuebker, S Green, J DeNero, S Hasan, MT Luong"
] | ... The English-French bilingual training data consists of 4.9M sentence pairs from the Common Crawl and Europarl corpora from WMT 2015 (Bojar et al., 2015). The LM was estimated from the target side of the bitext. For English-German we run large-scale experiments. ... | [
"http://nlp.stanford.edu/pubs/wuebker2016acl_prefix.pdf"
] |
2016 | Multi-cultural Wikipedia mining of geopolitics interactions leveraging reduced Google matrix analysis | [
"KM Frahm, SE Zant, K Jaffrès-Runser… - arXiv preprint arXiv: …, 2016"
] | ... At present directed networks of real systems can be very large (about 4.2 million articles for the English Wikipedia edition in 2013 [13] or 3.5 billion web pages for a publicly accessible web crawl that was gathered by the Common Crawl Foundation in 2012 [18]). ... | [
"https://arxiv.org/pdf/1612.07920"
] |
2016 | Multi-Perspective Context Matching for Machine Comprehension | [
"Z Wang, H Mi, W Hamza, R Florian - arXiv preprint arXiv:1612.04211, 2016"
] | ... (Rajpurkar et al., 2016). To initialize the word embeddings in the word representation layer, we use the 300-dimensional GloVe word vectors pre-trained from the 840B Common Crawl corpus (Pennington et al., 2014). For the out ... | [
"https://arxiv.org/pdf/1612.04211"
] |
2016 | N-gram language models for massively parallel devices | [
"N Bogoychev, A Lopez"
] | ... benchmark task computes perplexity on data extracted from the Common Crawl dataset used for the 2013 Workshop on Machine Translation, which ... statmt.org/moses/RELEASE-3.0/models/fr-en/lm/europarl.lm.1 7http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz ... | [
"http://homepages.inf.ed.ac.uk/s1031254/publications/n-gram-language.pdf"
] |
2016 | Neural Architectures for Fine-grained Entity Type Classification | [
"S Shimaoka, P Stenetorp, K Inui, S Riedel - arXiv preprint arXiv:1606.01341, 2016"
] | ... Rocktäschel et al., 2015). For this purpose, we used the freely available 300-dimensional cased word embeddings trained on 840 billion tokens from the Common Crawl supplied by Pennington et al. (2014). For words not present ... | [
"http://arxiv.org/pdf/1606.01341"
] |
2016 | Neural Interactive Translation Prediction | [
"R Knowles, P Koehn - AMTA 2016, Vol., 2016"
] | ... The data consists of a 115 million word parallel corpus (Europarl, News Commentary, CommonCrawl), 3http://www.statmt. ... and about 75 billion words of additional English monolingual data (LDC Gigaword, monolingual news, monolingual CommonCrawl). ... | [
"https://www.researchgate.net/profile/John_Ortega3/publication/309765044_Fuzzy-match_repair_using_black-box_machine_translation_systems_what_can_be_expected/links/5822496f08ae7ea5be6af317.pdf#page=113"
] |
2016 | Neural Machine Translation with Pivot Languages | [
"Y Cheng, Y Liu, Q Yang, M Sun, W Xu - arXiv preprint arXiv:1611.04928, 2016"
] | ... We use the statistical significance test with paired bootstrap resampling [Koehn, 2004]. Table 1 shows the Spanish-English and English-French corpora from WMT which include Common Crawl, News Commentary, Europarl v7 and UN. ... | [
"https://arxiv.org/pdf/1611.04928"
] |
2016 | Neural Machine Translation with Recurrent Attention Modeling | [
"Z Yang, Z Hu, Y Deng, C Dyer, A Smola - arXiv preprint arXiv:1607.05108, 2016"
] | ... 3 Experiments & Results 3.1 Data sets We experiment with two data sets: WMT English-German and NIST Chinese-English. • English-German The German-English data set contains Europarl, Common Crawl and News Commentary corpus. ... | [
"http://arxiv.org/pdf/1607.05108"
] |
2016 | Neural Network-based Word Alignment through Score Aggregation | [
"J Legrand, M Auli, R Collobert - arXiv preprint arXiv:1606.09560, 2016"
] | ... For LSE, we set r = 1 in (4). We initialize the word embeddings with a simple PCA computed over the matrix of word co-occurrence counts (Lebret and Collobert, 2014). The co-occurrence counts were computed over the common crawl corpus provided by WMT16. ... | [
"http://arxiv.org/pdf/1606.09560"
] |
2016 | Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision | [
"C Liang, J Berant, Q Le, KD Forbus, N Lao - arXiv preprint arXiv:1611.00020, 2016"
] | ... All the weight matrices are initialized with a uniform distribution in [−√(3/d), √(3/d)] where d is the input dimension. For pretrained word embeddings, we used the 300 dimension GloVe word embeddings trained on 840B common crawl corpus [? ]. ... | [
"https://arxiv.org/pdf/1611.00020"
] |
2016 | NewsQA: A Machine Comprehension Dataset | [
"A Trischler, T Wang, X Yuan, J Harris, A Sordoni… - arXiv preprint arXiv: …, 2016"
] | ... Both mLSTM and BARB are implemented with the Keras framework (Chollet, 2015) using the Theano (Bergstra et al., 2010) backend. Word embeddings are initialized using GloVe vectors (Pennington et al., 2014) pre-trained on the 840-billion Common Crawl corpus. ... | [
"https://arxiv.org/pdf/1611.09830"
] |
2016 | Normalized Log-Linear Interpolation of Backoff Language Models is Efficient | [
"K Heafield, C Geigle, S Massung, L Schwartz - Urbana"
] | Page 1. Normalized Log-Linear Interpolation of Backoff Language Models is Efficient Kenneth Heafield University of Edinburgh 10 Crichton Street Edinburgh EH8 9AB United Kingdom kheafiel@inf.ed.ac.uk Chase Geigle Sean ... | [
"https://kheafield.com/professional/edinburgh/interpolate_paper.pdf"
] |
2016 | NRC Russian-English Machine Translation System for WMT 2016 | [
"C Lo, C Cherry, G Foster, D Stewart, R Islam… - Proceedings of the First …, 2016"
] | ... They include the CommonCrawl corpus, the NewsCommentary v11 corpus, the Yandex corpus and the Wikipedia headlines corpus. ... Due to resource limits, we have not used the newly released 3 billion sentence CommonCrawl monolingual English corpus. ... | [
"http://www.aclweb.org/anthology/W/W16/W16-2317.pdf"
] |
2016 | Multimedia Linking and Mining | [
"K Andreadou, S Papadopoulos, M Zampoglou… - 2016"
] | Page 1. D3.2 – Multimedia Linking and Mining Version: v1.4 – Final, Date: 02/03/2016 PROJECT TITLE: REVEAL CONTRACT NO. FP7-610928 PROJECT COORDINATOR: INTRASOFT INTERNATIONAL SA WWW.REVEALPROJECT.EU PAGE 1 OF 139 REVEAL FP7-610928 ... | [
"http://revealproject.eu/wp-content/uploads/D3.2Multimedia-Linking-and-Mining.pdf"
] |
2016 | On Approximately Searching for Similar Word Embeddings | [
"K Sugawara, H Kobayashi, M Iwasaki"
] | ... GV 300-dimensional embeddings (Pennington et al., 2014a) learned by the global vectors for word representation (GloVe) model (Pennington et al., 2014b) using Common Crawl corpora, which contain about 2 million words and 42 billion tokens. ... | [
"http://www.aclweb.org/anthology/P/P16/P16-1214.pdf"
] |
2016 | On Bias-free Crawling and Representative Web Corpora | [
"R Schäfer, H Allee - ACL 2016, 2016"
] | ... Language Resources and Evaluation. Online first: DOI 10.1007/s10579-016-9359-2. Roland Schäfer. 2016b. CommonCOW: Massively huge web corpora from CommonCrawl data and a method to distribute them freely under restrictive EU copyright laws. ... | [
"http://iiegn.eu/assets/outputs/WAC-X:2016.pdf#page=81"
] |
2016 | On the Ubiquity of Web Tracking: Insights from a Billion-Page Web Crawl | [
"S Schelter, J Kunegis - arXiv preprint arXiv:1607.07403, 2016"
] | ... We extract third-party embeddings from more than 3.5 billion web pages of the CommonCrawl 2012 corpus, and aggregate those to a dataset containing more than 140 million third-party embeddings in over 41 million domains. ... | [
"http://arxiv.org/pdf/1607.07403"
] |
2016 | Online tracking: A 1-million-site measurement and analysis | [
"S Englehardt, A Narayanan - 2016"
] | ... AdFisher builds on similar technologies as OpenWPM (Selenium, xvfb), but is not intended for tracking measurements. Common Crawl4 uses an Apache Nutch based crawler. The Common Crawl dataset is the largest publicly available web crawl5, with billions of page visits. ... | [
"http://senglehardt.com/papers/ccs16_online_tracking.pdf"
] |
2016 | Optimizing Interactive Development of Data-Intensive Applications | [
"M Interlandi, SD Tetali, MA Gulzar, J Noor, T Condie… - Proceedings of the Seventh …, 2016"
] | ... 1. 311 service requests dataset. https://data.cityofnewyork.us/Social-Services/311-ServiceRequests-from-2010-to-Present/erm2-nwe9. 2. Common crawl dataset. http://commoncrawl.org. 3. Hadoop. http://hadoop.apache.org. 4. Spark. http://spark.apache.org. 5. WikiReverse. ... | [
"http://dl.acm.org/citation.cfm?id=2987565"
] |
2016 | Paragraph Vector for Data Selection in Statistical Machine Translation | [
"MS Duma, W Menzel"
] | ... As general domain data we chose the Commoncrawl corpus (1: http://commoncrawl.org/) as it is a relatively large corpus and contains crawled data from a variety of domains as well as texts having different discourse types (including spoken discourse). ... | [
"https://www.linguistics.rub.de/konvens16/pub/11_konvensproc.pdf"
] |
2016 | Parallel Graph Processing on Modern Multi-Core Servers: New Findings and Remaining Challenges | [
"A Eisenman, L Cherkasova, G Magalhaes, Q Cai…"
] | Page 1. Parallel Graph Processing on Modern Multi-Core Servers: New Findings and Remaining Challenges Assaf Eisenman1,2, Ludmila Cherkasova2, Guilherme Magalhaes3, Qiong Cai2, Sachin Katti1 1Stanford University ... | [
"http://www.labs.hpe.com/people/lucy_cherkasova/papers/main-mascots16.pdf"
] |
2016 | ParFDA for Instance Selection for Statistical Machine Translation | [
"E Biçici - Proceedings of the First Conference on Machine …, 2016"
] | ... Compared with last year, this year we do not use Common Crawl parallel corpus except for en-ru. We use Common Crawl monolingual corpus fi, ro, and tr datasets and we extended the LM corpora with previous years' corpora. ... | [
"http://www.aclweb.org/anthology/W/W16/W16-2306.pdf"
] |
2016 | Partitioning Trillion-edge Graphs in Minutes | [
"GM Slota, S Rajamanickam, K Devine, K Madduri - arXiv preprint arXiv:1610.07220, 2016"
] | Page 1. Partitioning Trillion-edge Graphs in Minutes George M. Slota Computer Science Department Rensselaer Polytechnic Institute Troy, NY slotag@rpi.edu Sivasankaran Rajamanickam & Karen Devine Scalable Algorithms ... | [
"https://arxiv.org/pdf/1610.07220"
] |
2016 | Performance Optimization Techniques and Tools for Distributed Graph Processing | [
"V Kalavri - 2016"
] | Page 1. Performance Optimization Techniques and Tools for Distributed Graph Processing VASILIKI KALAVRI School of Information and Communication Technology KTH Royal Institute of Technology Stockholm, Sweden 2016 ... | [
"http://www.diva-portal.org/smash/get/diva2:968786/FULLTEXT02"
] |
2016 | Phishing Classification using Lexical and Statistical Frequencies of URLs | [
"S Villegas, AC Bahnsen, J Vargas"
] | ... We used a sample of 1.2 million phishing URLs extracted from Phishtank and 1.2 million ham URLs from the CommonCrawl corpus to train the model. Classification based on URLs facilitates a defense against all phishing attacks due to the feature they all share, a URL. ... | [
"http://albahnsen.com/files/Phishing%20Classification%20using%20Lexical%20and%20Statistical%20Frequencies%20of%20URLs.pdf"
] |
2016 | Phrase-based Machine Translation is State-of-the-Art for Automatic Grammatical Error Correction | [
"M Junczys-Dowmunt, R Grundkiewicz - arXiv preprint arXiv:1605.06353, 2016"
] | ... Their method relies on a character-level encoder-decoder recurrent neural network with an attention mechanism. They use data from the public Lang-8 corpus and combine their model with an n-gram language model trained on web-scale Common Crawl data. ... | [
"http://arxiv.org/pdf/1605.06353"
] |
2016 | Phrase-Based SMT for Finnish with More Data, Better Models and Alternative Alignment and Translation Tools | [
"J Tiedemann, F Cap, J Kanerva, F Ginter, S Stymne… - Proceedings of the First …, 2016"
] | ... The English language model based on the provided Common Crawl data is limited to trigrams. ... The data is obtained from a large-scale Internet crawl, seeded from all Finnish pages in CommonCrawl.3 However, actual CommonCrawl data is only a small fraction of the total ... | [
"http://www.aclweb.org/anthology/W/W16/W16-2326.pdf"
] |
2016 | PJAIT Systems for the WMT 2016 | [
"K Wołk, K Marasek - Proceedings of the First Conference on Machine …, 2016"
] | ... “BASE” in the tables represents the baseline SMT system. “EXT” indicates results for the baseline system, using the baseline settings but extended with additional permissible data (limited to parallel Europarl v7, Common Crawl, ... | [
"http://www.aclweb.org/anthology/W/W16/W16-2328.pdf"
] |
2016 | Porting an Open Information Extraction System from English to German | [
"T Falke, G Stanovsky, I Gurevych, I Dagan"
] | ... For this purpose, we created a new dataset consisting of 300 German sentences, randomly sampled from three sources of different genres: news articles from TIGER (Brants et al., 2004), German web pages from CommonCrawl (Habernal et al., 2016) and featured Wikipedia ... | [
"https://www.ukp.tu-darmstadt.de/fileadmin/user_upload/Group_UKP/publikationen/2016/EMNLP_2016_PropsDE_cr.pdf"
] |
2016 | Practical Variable Length Gap Pattern Matching | [
"J Bader, S Gog, M Petri - Experimental Algorithms, 2016"
] | ... implemented on top of SDSL [7] data structures. We use three datasets from different application domains: The CC data set is a 371 GiB prefix of a recent 145 TiB web crawl from commoncrawl.org. ... | [
"http://link.springer.com/chapter/10.1007/978-3-319-38851-9_1"
] |
2016 | Pre-Translation for Neural Machine Translation | [
"J Niehues, E Cho, TL Ha, A Waibel - arXiv preprint arXiv:1610.05243, 2016"
] | ... The systems were trained on all parallel data available for the WMT 2016. The news commentary corpus, the European parliament proceedings and the common crawl corpus sum up to 3.7M sentences and around 90M words. ... | [
"https://arxiv.org/pdf/1610.05243"
] |
2016 | Predicting Motivations of Actions by Leveraging Text | [
"C Vondrick, D Oktay, H Pirsiavash, A Torralba - … of the IEEE Conference on Computer …, 2016"
] | ... In ECCV. 2012. [5] C. Buck, K. Heafield, and B. van Ooyen. N-gram counts and language models from the common crawl. LREC, 2014. [6] X. Chen, A. Shrivastava, and A. Gupta. Neil: Extracting visual knowledge from web data. In ICCV, 2013. ... | [
"http://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Vondrick_Predicting_Motivations_of_CVPR_2016_paper.html"
] |
2016 | Privacy issues in online machine translation services–European perspective | [
"P Kamocki, J O'Regan - 2016"
] | ... 1/11/2014. Retrieved from http://itre.cis.upenn.edu/~myl/languagelog/archives/005492.html Smith, JR et al. (2013). Dirt cheap web-scale parallel text from the Common Crawl. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. ... | [
"https://ids-pub.bsz-bw.de/files/5043/Kamocki-ORegan_Privacy_issues_in_online_machine_translation_2016.pdf"
] |
2016 | Query Answering to IQ Test Questions Using Word Embedding | [
"M Frąckowiak, J Dutkiewicz, C Jędrzejek, M Retinger… - Multimedia and Network …, 2017"
] | ... The pre-trained model based on Google News [16]. Embedding vector size 300, the negative sampling count as 3. 8. Glove Small. Pre-trained model based on Glove approach [20] using common crawl data, accessible on [15]. Embedding vector size 300. 9. Glove Large. ... | [
"http://link.springer.com/chapter/10.1007/978-3-319-43982-2_25"
] |
2016 | Query Expansion with Locally-Trained Word Embeddings | [
"F Diaz, B Mitra, N Craswell - arXiv preprint arXiv:1605.07891, 2016"
] | ... the entire corpus. Instead of training a global embedding on the large web collection, we use a GloVe embedding trained on Common Crawl data.4 We train local embeddings using one of three retrieval sources. First, we consider ... | [
"http://arxiv.org/pdf/1605.07891"
] |
2016 | Real-Time Presentation Tracking Using Semantic Keyword Spotting | [
"R Asadi, HJ Fell, T Bickmore, H Trinh"
] | ... gathered from a large corpus. We use a pre-trained vector representation with 1.9 million uncased words and vectors with 300 elements. It was trained using 42 billion tokens of web data from Common Crawl. We will use both the ... | [
"http://relationalagents.com/publications/Interspeech2016.pdf"
] |
2016 | Recurrent versus Recursive Approaches Towards Compositionality in Semantic Vector Spaces | [
"A Nayebi, H Blundell"
] | ... They were in fact trained on 840 billion tokens of Common Crawl data, as in http://nlp.stanford.edu/projects/glove/. ... Adagrad (with the default learning rate of 0.01) as our optimizer, with a minibatch size of 300. ... | [
"http://web.stanford.edu/~anayebi/projects/CS_224U_Final_Project_Writeup.pdf"
] |
2016 | Relatedness | [
"C Barrière - Natural Language Understanding in a Semantic Web …, 2016"
] | ... GloVe has some datasets trained on Wikipedia 2014 + Gigaword 5 (large news corpus) for a total of 6 billion tokens, covering a 400K vocabulary. It has other datasets based on an even larger corpus, the Common Crawl. To ... | [
"http://link.springer.com/chapter/10.1007/978-3-319-41337-2_10"
] |
2016 | Reordering space design in statistical machine translation | [
"N Pécheux, A Allauzen, J Niehues, F Yvon - Language Resources and Evaluation"
] | ... in (Allauzen et al. 2013), and, for English-Czech, the Europarl and CommonCrawl parallel WMT'12 corpora. For each task, a 4-gram language model is estimated using the target side of the training data. We use Ncode with ... | [
"http://link.springer.com/article/10.1007/s10579-016-9353-8"
] |
2016 | Richer Interpolative Smoothing Based on Modified Kneser-Ney Language Modeling | [
"E Shareghi, T Cohn, G Haffari"
] | ... Interdependency of m, data size, and discounts To explore the correlation between these factors we selected the German and investigated this correlation on two different training data sizes: Europarl (61M words), and CommonCrawl 2014 (984M words). ... | [
"http://people.eng.unimelb.edu.au/tcohn/papers/shareghi16emnlp.pdf"
] |
2016 | Scaling Up Word Clustering | [
"J Dehdari, L Tan, J van Genabith"
] | ... The parallel data comes from the WMT-2015 Common Crawl Corpus, News Commentary, Yandex 1M Corpus, and the Wiki Headlines Corpus.7 The monolingual data consists of 2007– 2014 News Commentary and News Crawl articles. ... | [
"http://anthology.aclweb.org/N/N16/N16-3009.pdf"
] |
2016 | Selecting Domain-Specific Concepts for Question Generation With Lightly-Supervised Methods | [
"Y Jin, PTV Le"
] | ... 3 Datasets We make use of two datasets obtained from the Internet. One is 200k company profiles from CrunchBase. Another is 57k common crawl business news articles. We refer to these two corpora as “Company Profile Corpus” and “News Corpus”. ... | [
"https://www.researchgate.net/profile/Yiping_Jin2/publication/304751113_Selecting_Domain-Specific_Concepts_for_Question_Generation_With_Lightly-Supervised_Methods/links/57cfc28208ae057987ac127c.pdf"
] |
2016 | Semantic Snippets via Query-Biased Ranking of Linked Data Entities | [
"M Alsarem - 2016"
] | Page 1. Semantic Snippets via Query-Biased Ranking of Linked Data Entities Mazen Alsarem To cite this version: Mazen Alsarem. Semantic Snippets via Query-Biased Ranking of Linked Data Entities. Information Retrieval [cs.IR]. ... | [
"https://hal.archives-ouvertes.fr/tel-01327769/document"
] |
2016 | Semantic word embedding neural network language models for automatic speech recognition | [
"K Audhkhasi, A Sethy, B Ramabhadran - 2016 IEEE International Conference on …, 2016"
] | ... The Gigaword corpus was a suitable choice because of its focus on news domain data instead of generic data sets such as Wikipedia or Common Crawl. We used a symmetric window size of 10 words for constructing the word co-occurrence matrix. ... | [
"http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7472828"
] |
2016 | Semantics derived automatically from language corpora necessarily contain human biases | [
"A Caliskan-Islam, JJ Bryson, A Narayanan - arXiv preprint arXiv:1608.07187, 2016"
] | ... Page 9. GloVe authors provide trained embeddings, which is a “Common Crawl” corpus obtained from a large-scale crawl of the web, containing 840 billion tokens (roughly, words). Tokens in this corpus are case-sensitive and ... | [
"http://arxiv.org/pdf/1608.07187"
] |
2016 | Session: P44-Corpus Creation and Querying (1) | [
"MK Bingel, P Banski, A Witt, C Data, F Lefevre, M Diab…"
] | ... 960 Roland Schäfer CommonCOW: Massively Huge Web Corpora from CommonCrawl Data and a Method to Distribute them Freely under Restrictive EU Copyright Laws 990 Ioannis Manousos Katakis, Georgios Petasis and Vangelis Karkaletsis ... | [
"https://pdfs.semanticscholar.org/f62d/6d9b67532ccd66915481b7cb4047ba03a1f2.pdf"
] |
2016 | Shared Task on Quality Assessment for Text Simplification | [
"S Štajner, M Popovic, H Saggion, L Specia, M Fishel - Training"
] | ... The parameters for the ensemble were obtained using particle swarm optimisation under multiple cross-validation scenarios. 2. Treelstm – The metric uses GloVe word vectors8 trained on the Common Crawl corpus and dependency parse trees. ... | [
"https://www.researchgate.net/profile/Maja_Popovic7/publication/301229567_Shared_Task_on_Quality_Assessment_for_Text_Simplification/links/570e179e08ae3199889d4eb5.pdf"
] |
2016 | Sheffield Systems for the English-Romanian Translation Task | [
"F Blain, X Song, L Specia"
] | ... For the two last, we use subsets of both the News Commentary (93%) and the Common Crawl (13%), selected using XenC- v2.12 (Rousseau, 2013) in mode 23 with the parallel corpora (Europarl7, SETimes2) as in-domain data. ... | [
"http://www.aclweb.org/anthology/W/W16/W16-2307.pdf"
] |
2016 | SoK: Applying Machine Learning in Security-A Survey | [
"H Jiang, J Nagra, P Ahammad - arXiv preprint arXiv:1611.03186, 2016"
] | Page 1. SoK: Applying Machine Learning in Security - A Survey Heju Jiang* , Jasvir Nagra, Parvez Ahammad ∗ Instart Logic, Inc. {hjiang, jnagra, pahammad }@instartlogic.com ABSTRACT The idea of applying machine learning ... | [
"https://arxiv.org/pdf/1611.03186"
] |
2016 | Source Sentence Simplification for Statistical Machine Translation | [
"E Hasler, A de Gispert, F Stahlberg, A Waite, B Byrne - Computer Speech & Language, 2016"
] | ... translation lattices. We trained an English-German system on the WMT 2015 training data (Bojar et al., 2015) comprising 4.2M parallel sentences from the Europarl, News Commentary v10 and Commoncrawl corpora. We word ... | [
"http://www.sciencedirect.com/science/article/pii/S0885230816301711"
] |
2016 | Syntactically Guided Neural Machine Translation | [
"F Stahlberg, E Hasler, A Waite, B Byrne - arXiv preprint arXiv:1605.04569, 2016"
] | ... The En-De training set includes Europarl v7, Common Crawl, and News Commentary v10. Sentence pairs with sentences longer than 80 words or length ratios exceeding 2.4:1 were deleted, as were Common Crawl sentences from other languages (Shuyo, 2010). ... | [
"http://arxiv.org/pdf/1605.04569"
] |
2016 | SYSTEMS AND METHODS FOR SPEECH TRANSCRIPTION | [
"A Hannun, C Case, J Casper, B Catanzaro, G Diamos… - US Patent 20,160,171,974, 2016"
] | ... the decoding. The language model was trained on 220 million phrases of the Common Crawl (available at commoncrawl.org), selected such that at least 95% of the characters of each phrase were in the alphabet. Only the most ... | [
"http://www.freepatentsonline.com/y2016/0171974.html"
] |
2016 | TAIPAN: Automatic Property Mapping for Tabular Data | [
"I Ermilov, ACN Ngomo"
] | ... RAM. Gold Standard We aimed to use T2D entity-level Gold Standard (T2D), a reference dataset which consists of 1 748 tables and reflects the actual distribution of the data in the Common Crawl,5 to evaluate our algorithms. ... | [
"http://svn.aksw.org/papers/2016/EKAW_Taipan/public.pdf"
] |
2016 | Target-Side Context for Discriminative Models in Statistical Machine Translation | [
"A Tamchyna, A Fraser, O Bojar, M Junczys-Dowmunt - arXiv preprint arXiv: …, 2016"
] | ... Our English-German system is trained on the data available for the WMT14 translation task: Europarl (Koehn, 2005) and the Common Crawl corpus,3 roughly 4.3 million sentence pairs altogether. We tune the system on the WMT13 test set and we test on the WMT14 set. ... | [
"http://arxiv.org/pdf/1607.01149"
] |
2016 | TAXI at SemEval-2016 Task 13: a Taxonomy Induction Method based on Lexico-Syntactic Patterns, Substrings and Focused Crawling | [
"A Panchenko, S Faralli, E Ruppert, S Remus, H Naets…"
] | ... [corpus size table omitted] ... WebISA. In addition to PattaMaika and PatternSim, we used a publicly available database of English hypernym relations extracted from the CommonCrawl corpus (Seitner et al., 2016). ... | [
"http://web.informatik.uni-mannheim.de/ponzetto/pubs/panchenko16.pdf"
] |
2016 | Temporal Attention-Gated Model for Robust Sequence Classification | [
"W Pei, T Baltrušaitis, DMJ Tax, LP Morency - arXiv preprint arXiv:1612.00385, 2016"
] | ... 4.2.2 Experimental Setup We utilize 300-d Glove word vectors pretrained over the Common Crawl [27] as the features for each word of the sentences. Our model is well suitable to perform sentiment analysis using sentence-level labels. ... | [
"https://arxiv.org/pdf/1612.00385"
] |
2016 | The 2016 KIT IWSLT Speech-to-Text Systems for English and German | [
"TS Nguyen, M Müller, M Sperber, T Zenkel, K Kilgour…"
] | ... Table 3: English language modeling data (text corpus / # words): TED 3.6m; Fisher 10.4m; Switchboard 1.4m; TEDLIUM data selection 155m; News + News-commentary + -crawl 4,478m; Commoncrawl 185m; GIGA 2,323m. ... | [
"http://workshop2016.iwslt.org/downloads/IWSLT_2016_paper_24.pdf"
] |
2016 | The AFRL-MITLL WMT16 News-Translation Task Systems | [
"J Gwinnup, T Anderson, G Erdmann, K Young, M Kazi… - Proceedings of the First …, 2016"
] | ... to build a monolithic language model from the following sources: Yandex4, Commoncrawl (Smith et al., 2013), LDC Gigaword English v5 (Parker et al., 2011) and News Commentary. Submission system 1 included the data selected from the large Commoncrawl corpus as ... | [
"http://www.aclweb.org/anthology/W/W16/W16-2313.pdf"
] |
2016 | The CogALex-V Shared Task on the Corpus-Based Identification of Semantic Relations | [
"E Santus, A Gladkova, S Evert, A Lenci - COLING 2016, 2016"
] | ... [table excerpt] Team: GHHH; Method(s): word analogies, linear regression and multi-task CNN; Corpus size: 100B, 6B, 840B; Corpus: Google News (pre-trained word2vec embeddings, 300 dim.); Wikipedia + Gigaword 5 (pre-trained GloVe embeddings, 300 dim.); Common Crawl (pre-trained ... | [
"https://sites.google.com/site/cogalex2016/home/accepted-papers/CogALex-V_Proceedings.pdf#page=83"
] |
2016 | The Edinburgh/LMU Hierarchical Machine Translation System for WMT 2016 | [
"M Huck, A Fraser, B Haddow - Proc. of the ACL 2016 First Conf. on Machine …, 2016"
] | ... CommonCrawl LM training data in background LM ... Utilizing a larger amount of target-side monolingual resources by appending the CommonCrawl corpus to the background LM's training data is very beneficial and increases the BLEU scores by around one point. ... | [
"http://www.aclweb.org/anthology/W/W16/W16-2315.pdf"
] |
2016 | The Edit Distance Transducer in Action: The University of Cambridge English-German System at WMT16 | [
"F Stahlberg, E Hasler, B Byrne - arXiv preprint arXiv:1606.04963, 2016",
"FSEHB Byrne"
] | This paper presents the University of Cambridge submission to WMT16. Motivated by the complementary nature of syntactical machine translation and neural machine translation (NMT), we exploit the synergies of Hiero and NMT in different … | [
"http://arxiv.org/pdf/1606.04963",
"https://ar5iv.labs.arxiv.org/html/1606.04963"
] |
2016 | The ILSP/ARC submission to the WMT 2016 Bilingual Document Alignment Shared Task | [
"V Papavassiliou, P Prokopidis, S Piperidis - Proceedings of the First Conference on …, 2016"
] | ... 1http://commoncrawl.org/ 2http://nlp.ilsp.gr/redmine/ilsp-fc/ 3Including modules for metadata extraction, language identification, boilerplate removal, document clean-up, text classification and sentence alignment ... Dirt cheap web-scale parallel text from the common crawl... | [
"http://www.aclweb.org/anthology/W/W16/W16-2375.pdf"
] |
2016 | The JHU Machine Translation Systems for WMT 2016 | [
"S Ding, K Duh, H Khayrallah, P Koehn, M Post - … of the First Conference on Machine …, 2016"
] | ... In addition, we included a large language model based on the CommonCrawl monolingual data ... of the language model trained on the monolingual corpora extracted from Common Crawl... year, large corpora of monolingual data were extracted from Common Crawl (Buck et ... | [
"http://www.aclweb.org/anthology/W/W16/W16-2310.pdf"
] |
2016 | The Karlsruhe Institute of Technology Systems for the News Translation Task in WMT 2016 | [
"TL Ha, E Cho, J Niehues, M Mediani, M Sperber… - Proceedings of the First …, 2016"
] | ... To improve the quality of the Common Crawl corpus being used in training, we filtered out noisy sentence pairs using an SVM classifier as described in (Mediani et al., 2011). All of our translation systems are basically phrase-based. ... | [
"http://www.aclweb.org/anthology/W/W16/W16-2314.pdf"
] |
2016 | The NTNU-YZU System in the AESW Shared Task: Automated Evaluation of Scientific Writing Using a Convolutional Neural Network | [
"LH Lee, BL Lin, LC Yu, YH Tseng"
] | ... For the GloVe representation, we adopted 4 different datasets for training the vectors including one from Wikipedia 2014 and Gigaword 5 (400K vocabulary), two common crawl datasets (uncased 1.9M vocabulary, and cased 2.2M vocabulary) and one Twitter dataset (1.2M ... | [
"http://anthology.aclweb.org/W/W16/W16-0513.pdf"
] |
2016 | The RWTH Aachen Machine Translation System for IWSLT 2016 | [
"JT Peter, A Guta, N Rossenbach, M Graça, H Ney"
] | ... ich war fünf Mal dort oben . . Figure 1: An example of multiple phrasal segmentations taken from the common crawl corpus. The JTR sequence is indicated by blue arcs. The distinct phrasal segmentations are shown in red and shaded green colour. log-linear framework. ... | [
"http://workshop2016.iwslt.org/downloads/IWSLT_2016_paper_23.pdf"
] |
2016 | Topics of Controversy: An Empirical Analysis of Web Censorship Lists | [
"Z Weinberg, M Sharif, J Szurdi, N Christin - Proceedings on Privacy Enhancing …, 2017"
] | ... Common Crawl Finally, this is the closest available ap- proximation to an unbiased sample of the entire Web. The Common Crawl Foundation continuously operates a large-scale Web crawl and publishes the results [27]. Each crawl contains at least a billion pages. ... | [
"https://www.andrew.cmu.edu/user/nicolasc/publications/Weinberg-PETS17.pdf"
] |
2016 | Toward Multilingual Neural Machine Translation with Universal Encoder and Decoder | [
"TL Ha, J Niehues, A Waibel - arXiv preprint arXiv:1611.04798, 2016"
] | ... translation and the web-crawled parallel data (CommonCrawl). ... network. We mix the TED parallel corpus and the substantial monolingual corpus (EPPS+NC+ CommonCrawl) and train a mix-source NMT system from those data. ... | [
"https://arxiv.org/pdf/1611.04798"
] |
2016 | Towards a Complete View of the Certificate Ecosystem | [
"B VanderSloot, J Amann, M Bernhard, Z Durumeric… - 2016"
] | ... In 36th IEEE Symposium on Security and Privacy, May 2015. [5] Certificate Transparency: Extended validation in Chrome. https://www.certificate-transparency.org/ev-ct-plan. [6] Common Crawl. https://commoncrawl.org/. [7] The DROWN attack. https://drownattack.com/. ... | [
"https://jhalderm.com/pub/papers/https-perspectives-imc16.pdf"
] |
2016 | Towards More Accurate Statistical Profiling of Deployed schema. org Microdata | [
"R Meusel, D Ritze, H Paulheim - Journal of Data and Information Quality (JDIQ), 2016"
] | ... Springer International Publishing. 44. Alex Stolz and Martin Hepp. 2015. Towards crawling the web for structured data: Pitfalls of common crawl for e-commerce. In Proceedings of the 6th International Workshop on Consuming Linked Data (COLD ISWC'15). ... | [
"http://dl.acm.org/citation.cfm?id=2992788"
] |
2016 | Translation of Unknown Words in Low Resource Languages | [
"B Gujral, H Khayrallah, P Koehn"
] | ... worthy trade-off. Word Embedding: For this technique, we collect Hindi monolingual data from Wikipedia dump (Al-Rfou et al., 2013) and Commoncrawl (Buck et al., 2014),2 with a total of about 29 million tokens. For Uzbek, the ... | [
"https://pdfs.semanticscholar.org/f130/2e20b4dabb48b8442f857426c28b205287f1.pdf"
] |
2016 | TripleSent: a triple store of events associated with their prototypical sentiment | [
"V Hoste, E Lefever, S van der Waart van Gulik… - eKNOW 2016: The Eighth …, 2016"
] | ... These events will be obtained by extracting patterns for highly explicit sentiment expressions (eg, “I hate” or “I love”) or from large web data crawls (eg, commoncrawl.org), which will subsequently be syntactically and semantically parsed to extract events and sentiment triples. ... | [
"https://biblio.ugent.be/publication/8071695/file/8071708"
] |
2016 | Undercounting File Downloads from Institutional Repositories | [
"P OBrien, K Arlitsch, L Sterman, J Mixter, J Wheeler… - Journal of Library …, 2016"
] | Page 1. © Patrick OBrien, Kenning Arlitsch, Leila Sterman, Jeff Mixter, Jonathan Wheeler, and Susan Borda Address correspondence to Patrick OBrien, Semantic Web Research Director, Montana State University, PO Box 173320, Bozeman, MT 59717-3320, USA. ... | [
"http://scholarworks.montana.edu/xmlui/bitstream/handle/1/9943/IR-Undercounting-preprint_2016-07.pdf?sequence=3&isAllowed=y"
] |
2016 | User Modeling in Language Learning with Macaronic Texts | [
"A Renduchintala, R Knowles, P Koehn, J Eisner - Proceedings of ACL, 2016"
] | ... We translated each German sentence using the Moses Statistical Machine Translation (SMT) toolkit (Koehn et al., 2007). The SMT system was trained on the German-English Commoncrawl parallel text used in WMT 2015 (Bojar et al., 2015). ... | [
"https://www.cs.jhu.edu/~jason/papers/renduchintala+al.acl16-macmodel.pdf"
] |
2016 | Using Feedforward and Recurrent Neural Networks to Predict a Blogger's Age | [
"T Moon, E Liu"
] | ... The embedding matrix L ∈ RV ×d is initialized for d = 300 with GloVe word vectors trained on the Common Crawl data set [9]. If a token does not correspond to any pre-trained word vector, a random word vector is generated with Xavier initialization [2]. The unembedded vector ... | [
"http://cs224d.stanford.edu/reports/tym1.pdf"
] |
2016 | Vive la petite différence! Exploiting small differences for gender attribution of short texts | [
"F Gralinski, R Jaworski, Ł Borchmann, P Wierzchon"
] | ... The procedure of preparing the HSSS corpus was to take Common Crawl-based Web corpus1 of Polish [4] and grep for lines ... Classification with Deep Learning (2015) 4. Buck, C., Heafield, K., van Ooyen, B.: N-gram counts and language models from the common crawl... | [
"http://www.staff.amu.edu.pl/~rjawor/tsd-article.pdf"
] |
2016 | Vive la Petite Différence! | [
"F Graliński, R Jaworski, Ł Borchmann, P Wierzchoń - International Conference on …, 2016"
] | ... The research was conducted on the publicly available corpus called “He Said She Said”, consisting of a large number of short texts from the Polish version of Common Crawl... Keywords. Gender attribution Text classification Corpus Common Crawl Research reproducibility. ... | [
"http://link.springer.com/chapter/10.1007/978-3-319-45510-5_7"
] |
2016 | VoldemortKG: Mapping schema. org and Web Entities to Linked Open Data | [
"A Tonon, V Felder, DE Difallah, P Cudré-Mauroux"
] | ... that apply to the Common Crawl corpus.12 4 The VoldemortKG Knowledge Graph To demonstrate the potential of the dataset we release, we built a proof of concept knowledge graph called VoldemortKG. VoldemortKG integrates schema.org 12 http://commoncrawl.org/terms ... | [
"http://daplab.ch/wp-content/uploads/2016/08/voldemort.pdf"
] |
2016 | What does the Web remember of its deleted past? An archival reconstruction of the former Yugoslav top-level domain | [
"A Ben-David - New Media & Society, 2016"
] | ... The completeness of the reconstruction effort could have been aided by consulting other large repositories of temporal Web data, such as Common Crawl, or by simply contacting the Internet Archive and requesting for all domains in the .yu domain. ... | [
"http://nms.sagepub.com/content/early/2016/04/27/1461444816643790.abstract"
] |
2016 | What Makes Word-level Neural Machine Translation Hard: A Case Study on English-German Translation | [
"F Hirschmann, J Nam, J Fürnkranz"
] | ... 5.1 Dataset & Preprocessing Our models were trained on the data provided by the 2014 Workshop on Machine Translation (WMT). Specifically, we used the Europarl v7, Common Crawl, and News Commentary corpora. Our ... | [
"http://www.aclweb.org/anthology/C/C16/C16-1301.pdf"
] |
2016 | WHAT: A Big Data Approach for Accounting of Modern Web Services | [
"M Trevisan, I Drago, M Mellia, HH Song, M Baldi - 2016"
] | ... [5] D. Plonka and P. Barford, “Flexible Traffic and Host Profiling via DNS Rendezvous,” in Proc. of the SATIN, 2011, pp. 1–8. [6] “Common Crawl,” http://commoncrawl.org/. [7] A. Finamore et al., “Experiences of Internet Traffic Monitoring with Tstat,” IEEE Netw., vol. 25, no. 3, pp. ... | [
"http://www.tlc-networks.polito.it/mellia/papers/BMLIT_web_meter.pdf"
] |
2016 | Which argument is more convincing? Analyzing and predicting convincingness of Web arguments using bidirectional LSTM | [
"I Habernal, I Gurevych"
] | ... Memory (BLSTM) neural network for end-to-end processing.9 The input layer relies on pre-trained word embeddings, in particular GloVe (Pennington et al., 2014) trained on 840B tokens from Common Crawl;10 the embedding weights are further updated during training. ... | [
"https://www.informatik.tu-darmstadt.de/fileadmin/user_upload/Group_UKP/publikationen/2016/acl2016-convincing-arguments-camera-ready.pdf"
] |
2016 | Wikipedia mining of hidden links between political leaders | [
"KM Frahm, K Jaffrès-Runser, DL Shepelyansky - arXiv preprint arXiv:1609.01948, 2016"
] | ... At present directed networks of real systems can be very large (about 4.2 million articles for the English Wikipedia edition in 2013 [10] or 3.5 billion web pages for a publicly accessible web crawl that was gathered by the Common Crawl Foundation in 2012 [28]). ... | [
"http://arxiv.org/pdf/1609.01948"
] |
2016 | WOLVESAAR at SemEval-2016 Task 1: Replicating the Success of Monolingual Word Alignment and Neural Embeddings for Semantic Textual Similarity | [
"H Bechara, R Gupta, L Tan, C Orasan, R Mitkov… - Proceedings of SemEval, 2016"
] | ... 2We use the 300 dimensions vectors from the GloVe model trained on the Commoncrawl Corpus with 840B tokens, 2.2M vocabulary. ... distributions p and pθ using regularised Kullback-Leibler (KL) divergence: J(θ) = (1/n) Σ_{i=1}^{n} KL(p^{(i)} ‖ p_θ^{(i)}) + (λ/2) ‖θ‖_2^2 (8) ... | [
"http://www.anthology.aclweb.org/S/S16/S16-1096.pdf"
] |
2016 | Word Representation on Small Background Texts | [
"L Li, Z Jiang, Y Liu, D Huang - Chinese National Conference on Social Media …, 2016"
] | ... For example, Pennington et al. (2014) used Wikipedia, Gigaword 5 and Common Crawl to learn word representations, each of which contained billions of tokens. There was not always a monotonic increase in performance as the amount of background texts increased. ... | [
"http://link.springer.com/chapter/10.1007/978-981-10-2993-6_12"
] |
2016 | Word2Vec vs DBnary: Augmenting METEOR using Vector Representations or Lexical Resources? | [
"C Servan, A Berard, Z Elloumi, H Blanchon, L Besacier - arXiv preprint arXiv: …, 2016",
"C Servan, A Bérard, Z Elloumi, H Blanchon, L Besacier"
] | ... German–English Europarl V7 + news commentary V10 2.1 M 57.2 M 59.7 M Russian–English Common Crawl + news commentary V10 + Yandex 2.0 M 47.2 M 50.3 M Table 2: Bilingual corpora used to train the word embeddings for each language pair. ... | [
"http://www.aclweb.org/anthology/C/C16/C16-1110.pdf",
"https://arxiv.org/pdf/1610.01291"
] |
2016 | Yandex School of Data Analysis approach to English-Turkish translation at WMT16 News Translation Task | [
"A Dvorkovich, S Gubanov, I Galinskaya - Proceedings of the First Conference on …, 2016"
] | ... 2.7 Data For training translation model, language models, and NMT reranker, we used only the provided constrained data (SETIMES 2 parallel Turkish-English corpus, and monolingual Turkish and English Common Crawl corpora). ... | [
"http://www.aclweb.org/anthology/W/W16/W16-2311.pdf"
] |