|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:31:25.476984Z" |
|
}, |
|
"title": "Learning to Lemmatize in the Word Representation Space", |
|
"authors": [ |
|
{ |
|
"first": "Jarkko", |
|
"middle": [], |
|
"last": "Lagus", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Helsinki", |
|
"location": {} |
|
}, |
|
"email": "jarkko.lagus@helsinki.fi" |
|
}, |
|
{ |
|
"first": "Arto", |
|
"middle": [], |
|
"last": "Klami", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Helsinki", |
|
"location": {} |
|
}, |
|
"email": "arto.klami@helsinki.fi" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Lemmatization is often used with morphologically rich languages to address issues caused by morphological complexity, performed by grammar-based lemmatizers. We propose an alternative for this, in form of a tool that performs lemmatization in the space of word embeddings. Word embeddings as distributed representations natively encode some information about the relationship between the base and inflected forms, and we show that it is possible to learn a transformation that approximately maps the embeddings of inflected forms to the embeddings of the corresponding lemmas. This facilitates an alternative processing pipeline that replaces traditional lemmatization with the lemmatizing transformation in downstream processing for any application. We demonstrate the method in the Finnish language, outperforming traditional lemmatizers in an example task of document similarity comparison, but the approach is language independent and can be trained for new languages with mild requirements.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Lemmatization is often used with morphologically rich languages to address issues caused by morphological complexity, performed by grammar-based lemmatizers. We propose an alternative for this, in form of a tool that performs lemmatization in the space of word embeddings. Word embeddings as distributed representations natively encode some information about the relationship between the base and inflected forms, and we show that it is possible to learn a transformation that approximately maps the embeddings of inflected forms to the embeddings of the corresponding lemmas. This facilitates an alternative processing pipeline that replaces traditional lemmatization with the lemmatizing transformation in downstream processing for any application. We demonstrate the method in the Finnish language, outperforming traditional lemmatizers in an example task of document similarity comparison, but the approach is language independent and can be trained for new languages with mild requirements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Morphologically rich languages (MRLs) encode more information (such as case, gender, and tense) into single word units, compared to analytical languages like English. For example, Finnish has 15 different word cases for nouns and adjectives. The different cases generate new words from the syntactical point of view, and in combination with plural forms Finnish ends up having 30 different word forms for each noun and adjective.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A rich morphology results in extremely large vocabulary and hence low frequency for most word forms in corpora of reasonable size, causing problems, e.g., when learning distributed representations -word embeddings -today widely used in most language processing tasks. While embeddings can be trained for MRLs using the traditional methods, such as fastText (Bojanowski et al., 2016) , Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) , their quality still leaves a lot to desire. For example, the results on standard word embedding tests are often worse for MRLs (Cotterell et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 357, |
|
"end": 382, |
|
"text": "(Bojanowski et al., 2016)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 416, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 452, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 582, |
|
"end": 606, |
|
"text": "(Cotterell et al., 2018)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The natural solution for addressing morphological complexity is lemmatizing, often used as preprocessing before analysis. Even though lemmatization loses information by completely ignoring the case, it typically improves performance in various language processing tasks. Transformers and other flexible language models (Devlin et al., 2019; Brown et al., 2020) , as well as advanced tokenization methods (Schuster and Nakajima, 2012; Kudo and Richardson, 2018) , may have reduced the need for lemmatization in general, but it still remains vital for MRLs for many tasks (Ebert et al., 2016; Cotterell et al., 2018; Kutuzov and Kuzmenko, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 319, |
|
"end": 340, |
|
"text": "(Devlin et al., 2019;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 341, |
|
"end": 360, |
|
"text": "Brown et al., 2020)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 433, |
|
"text": "(Schuster and Nakajima, 2012;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 460, |
|
"text": "Kudo and Richardson, 2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 570, |
|
"end": 590, |
|
"text": "(Ebert et al., 2016;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 614, |
|
"text": "Cotterell et al., 2018;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 615, |
|
"end": 642, |
|
"text": "Kutuzov and Kuzmenko, 2019)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Traditional lemmatization does not, however, resolve all issues caused by rich morphology, especially as part of a pipeline that uses word embeddings. The embeddings themselves are difficult to estimate for MRLs and the embedding methods are typically not transparent about their uncertainty. For instance, the lemma itself may be rare in a typical training corpus and hence we may even switch to using a less reliable embedding, without knowing it. Ebert et al. (2016) proposed a possible resolution of training the embeddings on a lemmatized corpus, but this prevents the use of high-quality pretrained embeddings available for many languages and may otherwise hurt embedding quality. The standard processing pipeline also requires access to a good lemmatizer, which may not be available for rare languages, and some-Embedding layer ...", |
|
"cite_spans": [ |
|
{ |
|
"start": 450, |
|
"end": 469, |
|
"text": "Ebert et al. (2016)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Embedding layer Lemmatizer Task specific model We use embeddings of all word forms but normalize them in the embedding space, integrating naturally into the task model. times they do not work ideally for specialized vocabularies (e.g. medical language). The creation of such a lemmatizer often requires expert knowledge of the target language. We propose a novel approach for addressing rich morphology, illustrated in Figure 1 . Instead of using a traditional lemmatizer to find the lemmas and using the embeddings for those to represent the content, we do the opposite: We start with the embeddings for all original word forms and then perform lemmatization in the embedding space. This is carried out by a neural network that approximately maps the embeddings of inflected forms into the embeddings of the lemmas. We believe that this may provide embeddings that are better for downstream processing tasks compared to the ones available for the lemmas, for instance when the lemma itself is rare since the model is implicitly able to leverage information across multiple words and cases. Another advantage of lemmatization in the embedding space is easy integration as part of the standard modeling workflow that often builds on neural networks anyway, instead of requiring a separate lemmatizer.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 419, |
|
"end": 427, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task specific model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Traditional lemmatization is basically a character-level operation, where grammar rules are used to backtrack the basic form that could have generated the inflected form. We, however, consider word inflections as \"bias\" in the embedding space, so that the embedding for the inflected word combines (in some unknown way) the semantic meaning of the word and the case information. Consequently, our formulation resembles conceptually the problem of bias removal widely studied in the word embedding literature (Bolukbasi et al., 2016; Brunet et al., 2019) . The task in bias removal is to transform the embeddings of individual words such that unwanted systematic biases related to gender etc. disappear. Our approach can be interpreted in this context as a method of removing undesired morphological information while retaining the semantic meaning of the word.", |
|
"cite_spans": [ |
|
{ |
|
"start": 508, |
|
"end": 532, |
|
"text": "(Bolukbasi et al., 2016;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 533, |
|
"end": 553, |
|
"text": "Brunet et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task specific model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We demonstrate the approach on the Finnish language, restricting the analysis for nouns and adjectives that often contain the most important content words for tasks like document similarity comparison or information retrieval. We use pretrained fastText embeddings (Bojanowski et al., 2016) that use subword-level information to provide embeddings for all possible word forms and train a model for mapping them for embeddings of the lemmas using on the dataset extracted from Wiktionary by Durrett and DeNero (2013) . The approach is, however, directly applicable to other word classes and languages. Besides the pretrained embeddings, it requires only access to (a) existing list of pairs of lemmas and inflected words as in our case, (b) dictionary and morphological generator, or (c) existing traditional lemmatizer for the language. For instance, fastText provides such embeddings for 157 languages, and morphological analyzers or generators exist for most of these.", |
|
"cite_spans": [ |
|
{ |
|
"start": 265, |
|
"end": 290, |
|
"text": "(Bojanowski et al., 2016)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 490, |
|
"end": 515, |
|
"text": "Durrett and DeNero (2013)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task specific model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Besides the core concept of lemmatizing in the embedding space, our main contributions are in the specification of practical details for learning the lemmatizers. We specify four alternative neural network architectures, define a suitable objective function and quality metric, and propose a novel idempotency regularization technique to prevent the models from doing anything else besides the lemmatization. We evaluate the approach in document comparison, outperforming the standard pipeline using traditional lemmatizers, and demonstrate it additionally in the task of word list generation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task specific model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "An open-source implementation of the method in Python is made available at https://github.com/jalagus/ embedding-level-lemmatization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task specific model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Even though we are the first to directly consider the task of transforming embeddings to lemmatize words, the general question of addressing rich morphology in distributed representations has been studied from various perspectives. Cotterell et al. (2018) studied the effect of morphological complexity for task performance over multiple languages. They showed that morphological complexity correlates with poor performance but that lemmatization helps to cope with the complexity. Kutuzov and Kuzmenko (2019) showed a similar effect to hold even with more complex language models, at least for the Russian language. Ebert et al. (2016) , in turn, showed that for MRLs we can improve word similarity comparisons by learning Word2Vec embeddings from a lemmatized corpus, rather than training them on all data and lemmatizing while learning the task model. Kondratyuk et al. (2018) studied supervised lemmatization and morphological tagging using bidirectional RNNs with character and word-level embeddings in MRLs. They showed that a combination of lemma information and morphological tags improve lemmatization and tagging, but may hurt for English. Along similar lines, Rosa and\u017dabokrtsk\u1ef3 (2019) suggested using wordembedding clustering to improve lemmatization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 255, |
|
"text": "Cotterell et al. (2018)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 482, |
|
"end": 509, |
|
"text": "Kutuzov and Kuzmenko (2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 617, |
|
"end": 636, |
|
"text": "Ebert et al. (2016)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 855, |
|
"end": 879, |
|
"text": "Kondratyuk et al. (2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task specific model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As we consider lemmatization from the perspective of bias removal, our work relates to methods for removal of gender bias (Bolukbasi et al., 2016; Zhao et al., 2017) . In this line of work, the embedding space is assumed to encode gender information in specific dimensions, so that bias can be minimized by removing them. The main difference to our work is that their goal is primarily in removing the bias, whereas we look for embeddings that retain the semantic meaning of the word well and that are good for downstream task performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 146, |
|
"text": "(Bolukbasi et al., 2016;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 147, |
|
"end": 165, |
|
"text": "Zhao et al., 2017)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task specific model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A traditional lemmatizer either returns the true lemma or not, but when operating in the embedding space of continuous vector representations the question of correctness needs more attention. We start by discussing the evaluation before proceeding to explain the approach itself that builds on these insights.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of Lemmatization in Embedding Spaces", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "First of all, we note that we can use task performance in any downstream task to evaluate the quality -our ultimate goal is in solving the task well, not in learning the embeddings. We will demonstrate this later in the task of document similarity comparison. However, it is highly useful to also have a generic task-independent metric directly measuring the lemmatization accuracy, which can also be used for motivating the objective for training. We want a good word embedding space lemmatizer M (e w ) to simultaneously satisfy two different criteria:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of Lemmatization in Embedding Spaces", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "1. Ability to transform any embedding e w to the embedding of its lemma w , and 2. Ability to retain embeddings of lemmas or lemmatized embeddings as is.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of Lemmatization in Embedding Spaces", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The first criterion is intuitive, matching our goal, but we need to decide how to measure the similarity. For high-dimensional spaces, it is not reasonable to expect a perfect recovery of the embedding e w itself, but instead, we should count all embeddings that are close enough as correct. To determine 'close enough', we use a simple definition based on neighborhoods: Lemmatization is correct if the closest neighbor for the transformed embedding of a word w is the embedding of its lemma w . We denote by ACC LEM the accuracy of nearest-neighbor (rank-1) retrieval accuracy for w in the neighborhood of M (e w ), using Euclidean distance for similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of Lemmatization in Embedding Spaces", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The second criterion is imposed as we want to consider the lemmatization step as a black box for which we can feed in arbitrary words, including those that are already lemmas. The lemmatizer should not alter them in any way. We measure this by an indirect metric of ACC IDEM , which corresponds to the rank-1 retrieval accuracy for w in the neighborhood of M (M (e w )), the output for an embedding e w passed twice through the model. For a more detailed discussion and justification, see Section 4.3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of Lemmatization in Embedding Spaces", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Together the metrics ACC LEM and ACC IDEM characterize the general ability of any embedding-space lemmatizer in a modelindependent way; both are based on retrieval accuracy and can be evaluated without additional assumptions besides the distance measure. We will later use them also to motivate our objective function, a differentiable approximation for their weighted combination.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of Lemmatization in Embedding Spaces", |
|
"sec_num": "3" |
|
}, |
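
{

"text": "Both metrics reduce to rank-1 retrieval in the embedding space and can be computed in a few lines. A minimal sketch, assuming NumPy arrays: lemma_vocab is a hypothetical (V, d) matrix of candidate lemma embeddings, lemma_idx gives the row of the correct lemma w' for each test word, and M is a trained lemmatizer applied row-wise (all names hypothetical).\n\nimport numpy as np\n\ndef rank1_accuracy(queries, lemma_vocab, lemma_idx):\n    # A prediction counts as correct when the Euclidean nearest neighbour\n    # of the query is the embedding of the target lemma.\n    d2 = ((queries[:, None, :] - lemma_vocab[None, :, :]) ** 2).sum(-1)\n    return float((d2.argmin(axis=1) == lemma_idx).mean())\n\ndef acc_lem(M, e_w, lemma_vocab, lemma_idx):\n    # ACC_LEM: the nearest neighbour of M(e_w) should be the lemma w'.\n    return rank1_accuracy(M(e_w), lemma_vocab, lemma_idx)\n\ndef acc_idem(M, e_w, lemma_vocab, lemma_idx):\n    # ACC_IDEM: passing the embedding through the model twice should still\n    # land in the neighbourhood of the lemma.\n    return rank1_accuracy(M(M(e_w)), lemma_vocab, lemma_idx)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation of Lemmatization in Embedding Spaces",

"sec_num": "3"

},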
|
{ |
|
"text": "Denoting an arbitrary inflected form word embedding by e w and the related lemma word embedding by e w , we wish to learn some mapping M (\u2022) such that M (e w |\u03b8) \u2248 e w . We do this by assuming a parametric model family, a neural network, and learning its parameters \u03b8 based on a collection of (e w , e w ) pairs of pretrained embeddings in a supervised fashion. For simplicity of notation, we omit the parameters and simply write M (e w ) instead of M (e w |\u03b8) for the rest of the paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of Lemmatization in Embedding Spaces", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We hypothesize that inflected forms lie on a specific subspace of the embedding space (see Figure 2) and that we can retrieve the lemmatized forms by a simple, but a possibly nonlinear, transformation in the embedding space. This can be interpreted as the removal of \"bias\" caused by the inflection. We want this mapping to be lightweight so that it can be integrated as part of a task model with a small computational overhead. Complex transformations are discouraged also because they would increase the risk of altering the semantic content captured by the embedding.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 97, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of Lemmatization in Embedding Spaces", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We discuss two alternative ways of lemmatizing in the embedding space. The first approach learns a separate model M c (e w ) for each word case c so that e.g. partitives and genitives are processed with different models. This allows using simple models even if all of the rich morphology was not constrained in low-dimensional subspaces, and also allows reversing the model for morphological generation (see Section 7).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of Lemmatization in Embedding Spaces", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For the processing of arbitrary words with an unknown case, we can make a function composite of multiple models, so that the output of one model is always fed as input for the next one. For instance, to lemmatize both partitives and genitives we can compute (M p \u2022 M g )(e w ) = M partitive (M genitive (e w )), in either order. Assuming the models do nothing else besides remove the effect of the particular case, then this composite function performs the same operation as either model alone, depending on the case of the input word. We naturally cannot guarantee the transformations work exactly like this, but will later present a regularization technique that specifically encourages the models to focus only on the case removal and show empirically that such function composition of multiple models works well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of Lemmatization in Embedding Spaces", |
|
"sec_num": "3" |
|
}, |
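
{

"text": "As a minimal illustration of the composition, case-specific models can simply be chained; the sketch below assumes each trained model is a callable mapping a d-dimensional embedding to a d-dimensional embedding (the model names in the usage comment are hypothetical).\n\nfrom functools import reduce\n\ndef compose(*models):\n    # (M_p o M_g)(e_w): feed the output of one case-specific model to the next.\n    return lambda e_w: reduce(lambda e, m: m(e), models, e_w)\n\n# Usage with hypothetical trained models:\n#   lemmatize = compose(M_genitive, M_partitive)\n#   e_lemma = lemmatize(e_w)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation of Lemmatization in Embedding Spaces",

"sec_num": "3"

},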
|
{ |
|
"text": "The other alternative is learning a single global model M (e w ) that can lemmatize all word forms. We demonstrate also this approach, but our main focus is on the separate models for each case.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of Lemmatization in Embedding Spaces", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We use feedforward neural networks as models M c (e w ), restricting the architecture choice for small networks to retain computational efficiency. Both input and output dimensionality needs to match the dimensionality of the embedding, in our case d = 300. We investigate empirically four alternative architectures:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Architectures", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "1. Linear W 1 e w + b 1 , where W 1 \u2208 R d\u00d7d 2. Simple W 2 R(W 1 e w +b 1 )+b 2 , where W 1 \u2208 R 500\u00d7d , W 2 \u2208 R d\u00d7500 3. Compression W 2 R(W 1 e w +b 1 )+b 2 , where W 1 \u2208 R 100\u00d7d , W 2 \u2208 R d\u00d7100 4. Complex W 3 (R(W 2 R(W 1 e w + b 1 ) + b 2 ) + b 3 , where W 1 \u2208 R 500\u00d7d , W 2 \u2208 R 500\u00d7500 , W 3 \u2208 R d\u00d7500", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Architectures", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In all variants, R(\u2022) denotes the rectified linear unit and b i is a bias term of proper size. The linear model is motivated by the property of some embeddings encoding various properties as linear relationships (e.g. king \u2212man+woman \u2248 queen) and fast computation. However, there are no guarantees a linear transformation is sufficient for lemmatization and hence we consider also the three simple nonlinear architectures with at most two hidden layers. Other architectures could certainly be used and a more careful choice of a specific architecture could further improve the lemmatization accuracy, but we will later show that already these lightweight models work well in practice.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Architectures", |
|
"sec_num": "4.1" |
|
}, |
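
{

"text": "A sketch of the four architectures, assuming PyTorch (the text does not mandate a specific framework; this is one possible realization):\n\nimport torch.nn as nn\n\nd = 300  # dimensionality of the fastText embeddings used here\n\ndef make_model(kind, d=d):\n    # All variants map a d-dimensional embedding back to d dimensions.\n    if kind == 'linear':       # W_1 e_w + b_1\n        return nn.Linear(d, d)\n    if kind == 'simple':       # W_2 R(W_1 e_w + b_1) + b_2, hidden size 500\n        return nn.Sequential(nn.Linear(d, 500), nn.ReLU(), nn.Linear(500, d))\n    if kind == 'compression':  # same shape with hidden size 100\n        return nn.Sequential(nn.Linear(d, 100), nn.ReLU(), nn.Linear(100, d))\n    if kind == 'complex':      # two hidden layers of 500 units\n        return nn.Sequential(nn.Linear(d, 500), nn.ReLU(), nn.Linear(500, 500), nn.ReLU(), nn.Linear(500, d))\n    raise ValueError(kind)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Neural Architectures",

"sec_num": "4.1"

},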
|
{ |
|
"text": "To learn models such that M c (e w ) \u2248 e w we need to optimize for a loss function that penalizes for difference between M c (e w ) and e w for known pairs of w and w . As explained in Section 3, we will eventually measure the quality by nearest-neighbor retrieval in the embedding space. Directly optimizing for that is difficult, and hence we optimize for a natural proxy instead, minimizing the squared Euclidean distance", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective and Training", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "D(M c (e w ), e w ) = M c (e w ) \u2212 e w 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective and Training", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Note that often the norm of the embeddings is considered irrelevant and consequently e.g. Word2Vec (Mikolov et al., 2013) used cosine similarity to measure distances. We want to retain the norms that for some embeddings encode information about e.g. word frequency (Schakel and Wilson, 2015) and hence chose the Euclidean distance. For training the model we need a collection of N pairs of embeddings for words w and their lemmas w . Assuming an embedding library that provides embeddings for large vocabulary (or even arbitrary word forms, building on subword-level embeddings (Bojanowski et al., 2016)) we simply need some way of constructing these pairs. The two practical alternatives for this are", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 121, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective and Training", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u2022 Dictionary of lemmas w and a morphological generator to form w c for cases c", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective and Training", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u2022 Collection of words w and a traditional lemmatizer for obtaining their lemmas w", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective and Training", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "For case-specific models we only use pairs corresponding to the case, whereas for the global model we can pool all pairs, potentially having multiple cases for the same lemma in the training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective and Training", |
|
"sec_num": "4.2" |
|
}, |
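
{

"text": "A hedged sketch of constructing the (e_w, e_{w'}) training pairs with the official fasttext Python bindings, assuming a list of (inflected form, lemma) string pairs such as the Wiktionary data; the model file name is the standard pretrained Finnish vectors and is given only as an example.\n\nimport fasttext\nimport numpy as np\n\nft = fasttext.load_model('cc.fi.300.bin')  # pretrained Finnish fastText model\n\ndef embedding_pairs(word_pairs):\n    # word_pairs: iterable of (inflected_form, lemma) strings. fastText builds\n    # vectors from subword n-grams, so arbitrary word forms get an embedding.\n    e_w = np.stack([ft.get_word_vector(w) for w, _ in word_pairs])\n    e_lemma = np.stack([ft.get_word_vector(l) for _, l in word_pairs])\n    return e_w, e_lemma",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Objective and Training",

"sec_num": "4.2"

},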
|
{ |
|
"text": "Any model trained as above learns to map w to w , but we cannot tell what it does for words that are already lemmas or that belong to some other case if training a case-specific model. One could in principle add pairs of (w , w ) into the training set to address the former, but to prevent transforming words of other classes we would need similar pairs for all possible cases. This would be extremely inefficient.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Idempotency Regularization", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "To avoid transforming the embeddings of other word forms, we propose an alternative of novel regularization strategy encouraging idempotency, meaning that the same transformation applied multiple times will not change the output beyond the initial result. We do this by measuring the Euclidean distance D(M c (e w ), M c (M c (e w )) between the output of the model M c (e w ) (the supposed lemmatized embedding) and the result of passing the input through the model twice, M c (M c (e w )). By encouraging this distance to be small we encourage the model to only remove the information about the particular case, without otherwise changing the embedding. Conceptually this is related to regularization techniques like Barone et al. (2017) designed to prevent catastrophic forgetting (Kirkpatrick et al., 2017) ; both prevent losing the already learned structure while allowing the model to adapt to a new task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 784, |
|
"end": 810, |
|
"text": "(Kirkpatrick et al., 2017)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Idempotency Regularization", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In practice we minimize the objective", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Idempotency Regularization", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L(e w , e w ) = \u03b1 \u00d7 D(M c (e w ), e w )+ (1 \u2212 \u03b1)\u00d7D(M c (e w ), M c (M c (e w ))),", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Idempotency Regularization", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "where \u03b1 \u2208 [0, 1] controls the amount of regularization. With \u03b1 = 1 we only optimize the loss and by decreasing the parameter we start regularizing the solution using idempotency. Note that the extreme of \u03b1 = 0 is not meaningful, since the loss term disappears.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Idempotency Regularization", |
|
"sec_num": "4.3" |
|
}, |
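
{

"text": "Equation (1) is straightforward to implement; a minimal sketch assuming PyTorch tensors, with e_w the inflected-form embeddings, e_lemma the lemma embeddings, and M_c one of the models above (0.4 is the value of \u03b1 found to work well later in the text):\n\nimport torch\n\ndef lemmatization_loss(M_c, e_w, e_lemma, alpha=0.4):\n    # alpha * ||M_c(e_w) - e_lemma||^2 + (1 - alpha) * ||M_c(e_w) - M_c(M_c(e_w))||^2\n    out = M_c(e_w)\n    fit = ((out - e_lemma) ** 2).sum(dim=-1)    # lemmatization term\n    idem = ((out - M_c(out)) ** 2).sum(dim=-1)  # idempotency regularizer\n    return (alpha * fit + (1.0 - alpha) * idem).mean()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Idempotency Regularization",

"sec_num": "4.3"

},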
|
{ |
|
"text": "We validate the approach and the modeling choices (architecture and regularization), using morphologically rich Finnish as an example language. We first evaluate the performance in a taskagnostic manner, before demonstrating case examples in the following two sections.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Validation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Data We validate the approach on Finnish language, using pretrained embeddings provided by the fastText library (Bojanowski et al., 2016) . The embeddings were trained on Common Crawl and Wikipedia corpora and have dimensionality of d = 300.", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 137, |
|
"text": "(Bojanowski et al., 2016)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Validation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The lemmatization models are trained on the data provided by Durrett and DeNero (2013) which contains words extracted from the open dictionary Wiktionary. It directly provides pairs of inflected and base forms for words, so we do not need to construct them. For Finnish, the dataset contains 1,136,492 word pairs of adjectives and nouns both in singular and plural, resulting in roughly 42,000 word pairs per word case. Each row in the dataset is a pair of form (w, w ) which are then transformed to pairs of word embeddings (e w , e w ) using the fastText library.", |
|
"cite_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 86, |
|
"text": "Durrett and DeNero (2013)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Validation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Training We use AdamW optimizer (Kingma and Ba, 2014; Loshchilov and Hutter, 2018) with a learning rate of 0.0002 and a batch size of 32 for training the models in all experiments, but all reasonable stochastic optimization algorithms would work. We separately validated in preliminary tests that running the optimization until convergence of the training objective does not result in overfitting, and hence for the rest of the experiments we used 50 epochs for training to make sure the models are fully converged. In practice, 20-30 epochs were always enough. All experiments shown here are efficient, so that training individual models on consumer-grade 8-core CPU was done in the order of minutes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Validation", |
|
"sec_num": "5" |
|
}, |
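
{

"text": "A hedged sketch of the training loop with the hyperparameters reported above, assuming PyTorch and the lemmatization_loss sketch given after Equation (1):\n\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\n\ndef train_case_model(M_c, e_w, e_lemma, alpha=0.4, epochs=50, lr=2e-4, batch_size=32):\n    # e_w, e_lemma: arrays of paired inflected-form and lemma embeddings.\n    data = DataLoader(TensorDataset(torch.as_tensor(e_w), torch.as_tensor(e_lemma)), batch_size=batch_size, shuffle=True)\n    opt = torch.optim.AdamW(M_c.parameters(), lr=lr)\n    for _ in range(epochs):\n        for x, y in data:\n            opt.zero_grad()\n            loss = lemmatization_loss(M_c, x, y, alpha)  # Eq. (1)\n            loss.backward()\n            opt.step()\n    return M_c",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model Validation",

"sec_num": "5"

},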
|
{ |
|
"text": "Model architectures To compare different architectures, we trained individual models M c (e w ) on all 15 word cases of Finnish with \u03b1 = 1.0 (i.e. no regularization), not separating plural and singular word cases so that always 10,000 word pairs were used for training and 1,000 for testing. The word pairs for training and test sets were chosen randomly. For the final score, we averaged 10 different runs over randomized splits of the data so that the splits were the same for all models for each run. Table 1 compares the four different architectures in terms of metrics explained in Section 3, presenting the average accuracy over all word cases (the results are consistent over different cases, not shown here). The main result is that except for the compression architecture the accuracies ACC LEM are very similar. This suggests there may not be a specific low-dimensional subspace that is sufficient for lemmatization, but that it can be modeled with fairly simple architectures nevertheless. In terms of ACC IDEM , all models here coincidentally converge to the same value that is close to perfect despite not regularizing for idempotency.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 504, |
|
"end": 511, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Validation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We also trained a global model M (e w ) for lemmatizing all cases using the simple model architec- ture, using a combined data set of 50,000 examples covering the different cases and 5,000 word pairs for evaluation. Note, however, that the evaluation set was not the same as for the case-specific models that all used only pairs for the specific case. Hence the numbers in Table 1 are not directly comparable, but we can still confirm that also the global model learns to lemmatize well.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 373, |
|
"end": 380, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Validation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Function composition and idempotency regularization When training separate models for each word case c, we need a function composition of multiple models in order to process arbitrary input word forms. To perform this, we need idempotency regularization to prevent individual models from transforming words of wrong cases. Table 2 demonstrates the effect of the regularization parameter \u03b1 for an example sentence, using two models trained for lemmatizing genitives and partitives and their combination as function composition. For very small \u03b1 already the individual models fail due to almost ignoring the main task, whereas for very large \u03b1 (no regularization) the composition breaks. With \u03b1 = 0.4 we can accurately lemmatize both forms.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 323, |
|
"end": 330, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Validation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To demonstrate the method in a typical application, we consider the task of document comparison where the lemmas of content words often provide sufficient information on similarity. We use a dataset provided by the Finnish national broadcasting company Yle 1 containing news articles written in easy-to-read Finnish. We created an artificial dataset by splitting news articles into Table 2 : Idempotency regularization for function composition of separate models for lemmatizing genitives and partitives. Both too large and small \u03b1 introduce mistakes for this example sentence, but with \u03b1 = 0.4 and the alternative of global model the result is near perfect. The words in genitive form in the original sentence are {hyv\u00e4n, n\u00f6yr\u00e4n, mielen}, and the words in partitive form are {uutta, yst\u00e4v\u00e4\u00e4} two halves and try to predict which two parts belong together by ranking the articles via average vector document representations. We take only a subset of the data, using the first 10,000 news articles from the first three months of the year 2018. We compare the proposed approach against a conventional pipeline that first lemmatizes the words using the uralicNLP library (H\u00e4m\u00e4l\u00e4inen, 2019) (and then uses embeddings for the lemmas for the task) and a pipeline that directly uses the embeddings for all word forms. For the proposed approach we perform lemmatization in the embedding space for four different cases and their combinations, using the simple architecture.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 382, |
|
"end": 389, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Application: Document Comparison", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "For all methods, we form a representation for the document by computing the mean of the word embeddings for all words in the document and use cosine similarity between these mean embeddings to compare documents. One could alternatively consider richer document representations (Wieting et al., 2015; Arora et al., 2017; Gupta et al., 2020) or more accurate similarity metrics (Torki, 2018; Lagus et al., 2019) that might improve the overall accuracy, but we chose the most commonly used approach that is easy to understand to focus on demonstrating the effect of the lemmatization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 277, |
|
"end": 299, |
|
"text": "(Wieting et al., 2015;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 300, |
|
"end": 319, |
|
"text": "Arora et al., 2017;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 339, |
|
"text": "Gupta et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 376, |
|
"end": 389, |
|
"text": "(Torki, 2018;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 409, |
|
"text": "Lagus et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Application: Document Comparison", |
|
"sec_num": "6" |
|
}, |
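
{

"text": "A sketch of the document representation and ranking used in this experiment, assuming NumPy arrays of per-word embeddings for each document half; the embedding-space lemmatizer (or none, for the baseline) is applied to the word embeddings before averaging.\n\nimport numpy as np\n\ndef doc_vector(word_embeddings, lemmatize=None):\n    # word_embeddings: (n_words, d) array for one document half.\n    e = lemmatize(word_embeddings) if lemmatize is not None else word_embeddings\n    return e.mean(axis=0)\n\ndef rank_of_true_pair(first_half_vec, second_half_vecs, true_idx):\n    # Rank the candidate second halves by cosine similarity to the first half.\n    a = first_half_vec / np.linalg.norm(first_half_vec)\n    B = second_half_vecs / np.linalg.norm(second_half_vecs, axis=1, keepdims=True)\n    order = np.argsort(-(B @ a))\n    return int(np.where(order == true_idx)[0][0]) + 1  # 1 means the correct half is ranked first",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Application: Document Comparison",

"sec_num": "6"

},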
|
{ |
|
"text": "We measure performance by retrieval accuracy, by computing the rank of the second half of a given document amongst the set of all 10,000 second halves. Figure 3 shows the overall performance of the different model variants as a func-tion of the regularization parameter, measured by rank-1 accuracy. We observe three clear results: (a) all ways of lemmatization clearly improve the task performance compared to no lemmatization at all, (b) lemmatization in the embedding space using case-specific models is considerably better than the alternatives of traditional lemmatizer and the global model lemmatizing in the embedding space, and (c) idempotency regularization is crucial, but the method is extremely robust with respect to the specific choice of \u03b1 -all values between 0.2 and 0.9 result in almost identical performance. Table 3 illustrates the task performance in more detail for models trained using good choices for the regularization parameter \u03b1, measured using retrieval accuracy with different ranks. The results are consistent over the ranks, with case-specific lemmatizers in the embedding space consistently outperforming the other methods.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 160, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 827, |
|
"end": 834, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Application: Document Comparison", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Even though our main goal is to learn lemmatizers, we note that that the approach is more general. Instead of training a lemmatizer M c (e w ) \u2248 e w , we can use the exact same architectures and data for training G c (e w ) \u2248 e w to learn generators that provide the embedding for the inflected form for some particular case c.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Application: Word List Generation", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We demonstrate this via the simple application of word list expansion, which could be used simi-Model Word case \u03b1 \u03b1 \u03b1 R@1 R@2 R@3 R@4 R@5 R@6 R@7 R@8 R@9 R@10 Table 3 : The best combinations of each model version averaged over 10 different subsets of the news data. R@K means that we rank the documents by similarity and measure the accuracy of the relevant document being within the top K documents.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 166, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Application: Word List Generation", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "larly to query expansion for retrieval tasks. Given a list of words w provided in base form and their embeddings e w , we form a list of embeddings for different inflected forms. We trained case-specific models G c (e w ) similar to before with different values for \u03b1, observing a similar trend: the method is robust for the choice, as long as extreme values are avoided. Table 4 illustrates the method for the word list {j\u00e4\u00e4kiekko, Suomi, V en\u00e4j\u00e4} ({ice hockey, F inland, Russia} in English) one could use as keywords for searching information about ice hockey matches between the two countries. We show here the words with the embeddings closest to the ones provided by the generator models to verify it works as intended, but in real use, we would naturally use the transformed embeddings directly for the retrieval task -they are likely to be better representations especially for rare cases for which the actual pre-computed embedding e w is likely to be noisy.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 372, |
|
"end": 379, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Application: Word List Generation", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "For MRLs lemmatization helps in many tasks. We showed that the conventional pipeline using traditional lemmatizers as preprocessing can be replaced by lemmatization in the embedding space. Already simple neural networks can transform the embeddings of inflected words so that the closest word in the embedding space is of the correct lemma. This verifies lemmatization in the embedding space is possible, but in real applications, we naturally would not convert the result back to the lemma. Instead, any downstream task simply processes the lemmatized embeddings directly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We showed that the method outperforms conventional lemmatization preprocessing in the document similarity comparison task, which implies we are not merely learning to replicate the exact lemmatization but instead learn embeddings that Regularization parameter Accuracy Figure 3 : The effect of the regularization parameter (the \u03b1 parameter) on full-length document comparison task using rank-1 accuracy as the scoring method. There is a notable improvement over the baselines (lemmatizer and none) when using our models with idempotency regularization parameter chosen within the range (0.2, 0.9), and the improvement is highly insensitive to the specific value of the parameter. better capture the word content. We hypothesize this is related to how rare words are represented in the embedding space; for rare words, the embeddings for all word forms are unreliable, including the one for the lemma itself. Subword-level embeddings, like fastText used in our experiments, may still be able to learn sensible embeddings for the collection of all inflected forms together, and by lemmatizing in the embedding space we borrow some information from all of the forms. In other words, we argue that the approximate lemmatization performed by the neural network may have the regularizing ability to reduce noise in embeddings of rare words so that the 'approximation' is actually better than the target embedding used during training. Table 4 : Example word list expansion generated for the word list {j\u00e4\u00e4kiekko, Suomi, V en\u00e4j\u00e4} ({ice hockey, F inland, Russia}) using morphological generator models for genitive, inessive, elative, partitive, and illative cases with regularization parameter \u03b1 = 0.4. Note the mistake for the inessive case of \"j\u00e4\u00e4kiekko\", which should be \"j\u00e4\u00e4kiekossa\" and not \"j\u00e4\u00e4kiekkossa\" -the word has the correct \"-ssa\" suffix but the root is incorrect. It is also worth noting that \"j\u00e4\u00e4kiekkossa\" is not a valid word form in Finnish at all, but the fastText library provides embeddings for arbitrary strings using sub-word information. The embeddings for the two forms are likely very close, and hence the mistake would have no effect in retrieval tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 269, |
|
"end": 277, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1429, |
|
"end": 1436, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "In this work we presented the overall concept for lemmatization in the embedding space and experimented on various technical choices, building the basis for future development. Our main findings were that a global model can perform lemmatization well when measured only by accuracy, but for the task of document comparison, we reached considerably better results by function composition of case-specific models. To make this possible we proposed a novel idempotency regularization, and showed that the approach is highly robust for the choice of the regularization parameter, making it essentially parameter-free. Finally, we note that even though we demonstrated the approach for an example MRL language Finnish and only for lemmatization of nouns and adjectives, the method is general and directly applicable for other languages and word classes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "http://urn.fi/urn:nbn:fi:lb-2019121205", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported by the Academy of Finland Flagship programme: Finnish Center for Artificial Intelligence, FCAI.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A simple but tough-to-beat baseline for sentence embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Sanjeev", |
|
"middle": [], |
|
"last": "Arora", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yingyu", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tengyu", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings. In Proceedings of International Confer- ence on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Regularization techniques for fine-tuning in neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Valerio Miceli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Barone", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulrich", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Germann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1707.09920" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antonio Valerio Miceli Barone, Barry Haddow, Ulrich Germann, and Rico Sennrich. 2017. Regularization techniques for fine-tuning in neural machine transla- tion. arXiv preprint arXiv:1707.09920.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1607.04606" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vec- tors with subword information. arXiv preprint arXiv:1607.04606.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Advances in neural information processing systems", |
|
"authors": [ |
|
{ |
|
"first": "Tolga", |
|
"middle": [], |
|
"last": "Bolukbasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "James", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Venkatesh", |
|
"middle": [], |
|
"last": "Zou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Saligrama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kalai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "29", |
|
"issue": "", |
|
"pages": "4349--4357", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Ad- vances in neural information processing systems, 29:4349-4357.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Language models are few-shot learners", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Tom B Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nick", |
|
"middle": [], |
|
"last": "Mann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melanie", |
|
"middle": [], |
|
"last": "Ryder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jared", |
|
"middle": [], |
|
"last": "Subbiah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prafulla", |
|
"middle": [], |
|
"last": "Kaplan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arvind", |
|
"middle": [], |
|
"last": "Dhariwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pranav", |
|
"middle": [], |
|
"last": "Neelakantan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Girish", |
|
"middle": [], |
|
"last": "Shyam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanda", |
|
"middle": [], |
|
"last": "Sastry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Askell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.14165" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Understanding the origins of bias in word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Marc-Etienne", |
|
"middle": [], |
|
"last": "Brunet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colleen", |
|
"middle": [], |
|
"last": "Alkalay-Houlihan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashton", |
|
"middle": [], |
|
"last": "Anderson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Zemel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "803--811", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ash- ton Anderson, and Richard Zemel. 2019. Under- standing the origins of bias in word embeddings. In International Conference on Machine Learning, pages 803-811. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Are all languages equally hard to language-model?", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabrina", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mielke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "536--541", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Sabrina J. Mielke, Jason Eisner, and Brian Roark. 2018. Are all languages equally hard to language-model? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 536-541, New Orleans, Louisiana. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Supervised learning of complete morphological paradigms", |
|
"authors": [ |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Durrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Denero", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1185--1195", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1185-1195, Atlanta, Georgia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Lamb: A good shepherd of morphologically rich languages", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ebert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "M\u00fcller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "742--752", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Ebert, Thomas M\u00fcller, and Hinrich Sch\u00fctze. 2016. Lamb: A good shepherd of morphologically rich languages. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 742-752.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Psif: Document embeddings using partition averaging", |
|
"authors": [ |
|
{ |
|
"first": "Vivek", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankit", |
|
"middle": [], |
|
"last": "Saw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pegah", |
|
"middle": [], |
|
"last": "Nokhiz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Praneeth", |
|
"middle": [], |
|
"last": "Netrapalli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piyush", |
|
"middle": [], |
|
"last": "Rai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Partha", |
|
"middle": [], |
|
"last": "Talukdar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "7863--7870", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vivek Gupta, Ankit Saw, Pegah Nokhiz, Praneeth Ne- trapalli, Piyush Rai, and Partha Talukdar. 2020. P- sif: Document embeddings using partition averag- ing. Proceedings of the AAAI Conference on Artifi- cial Intelligence, 34(05):7863-7870.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "UralicNLP: An NLP library for Uralic languages", |
|
"authors": [ |
|
{ |
|
"first": "Mika", |
|
"middle": [], |
|
"last": "H\u00e4m\u00e4l\u00e4inen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Journal of Open Source Software", |
|
"volume": "4", |
|
"issue": "37", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mika H\u00e4m\u00e4l\u00e4inen. 2019. UralicNLP: An NLP library for Uralic languages. Journal of Open Source Soft- ware, 4(37):1345.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Overcoming catastrophic forgetting in neural networks", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Kirkpatrick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Razvan", |
|
"middle": [], |
|
"last": "Pascanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Neil", |
|
"middle": [], |
|
"last": "Rabinowitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Veness", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Desjardins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrei", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Rusu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kieran", |
|
"middle": [], |
|
"last": "Milan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Quan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tiago", |
|
"middle": [], |
|
"last": "Ramalho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Agnieszka", |
|
"middle": [], |
|
"last": "Grabska-Barwinska", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the national academy of sciences", |
|
"volume": "114", |
|
"issue": "", |
|
"pages": "3521--3526", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Ag- nieszka Grabska-Barwinska, et al. 2017. Over- coming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521-3526.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "LemmaTag: Jointly tagging and lemmatizing for morphologically rich languages with BRNNs", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Kondratyuk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom\u00e1\u0161", |
|
"middle": [], |
|
"last": "Gaven\u010diak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milan", |
|
"middle": [], |
|
"last": "Straka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4921--4928", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Kondratyuk, Tom\u00e1\u0161 Gaven\u010diak, Milan Straka, and Jan Haji\u010d. 2018. LemmaTag: Jointly tagging and lemmatizing for morphologically rich languages with BRNNs. In Proceedings of the 2018 Confer- ence on Empirical Methods in Natural Language Processing, pages 4921-4928, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", |
|
"authors": [ |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "66--71", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "To lemmatize or not to lemmatize: How word normalisation affects ELMo performance in word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Andrey", |
|
"middle": [], |
|
"last": "Kutuzov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizaveta", |
|
"middle": [], |
|
"last": "Kuzmenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the First NLPL Workshop on Deep Learning for Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "22--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrey Kutuzov and Elizaveta Kuzmenko. 2019. To lemmatize or not to lemmatize: How word normal- isation affects ELMo performance in word sense disambiguation. In Proceedings of the First NLPL Workshop on Deep Learning for Natural Language Processing, pages 22-28, Turku, Finland. Link\u00f6ping University Electronic Press.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Low-rank approximations of second-order document representations", |
|
"authors": [ |
|
{ |
|
"first": "Jarkko", |
|
"middle": [], |
|
"last": "Lagus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janne", |
|
"middle": [], |
|
"last": "Sinkkonen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arto", |
|
"middle": [], |
|
"last": "Klami", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jarkko Lagus, Janne Sinkkonen, Arto Klami, et al. 2019. Low-rank approximations of second-order document representations. In Proceedings of the 23rd Conference on Computational Natural Lan- guage Learning (CoNLL). ACL.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Decoupled weight decay regularization", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Loshchilov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Hutter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In International Con- ference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1301.3781" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1532- 1543.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Unsupervised lemmatization as embeddings-based word clustering", |
|
"authors": [ |
|
{ |
|
"first": "Rudolf", |
|
"middle": [], |
|
"last": "Rosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zden\u011bk\u017eabokrtsk\u1ef3", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1908.08528" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rudolf Rosa and Zden\u011bk\u017dabokrtsk\u1ef3. 2019. Unsu- pervised lemmatization as embeddings-based word clustering. arXiv preprint arXiv:1908.08528.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Measuring word significance using distributed representations of words", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Adriaan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Schakel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Benjamin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1508.02297" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adriaan MJ Schakel and Benjamin J Wilson. 2015. Measuring word significance using dis- tributed representations of words. arXiv preprint arXiv:1508.02297.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Japanese and korean voice search", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kaisuke", |
|
"middle": [], |
|
"last": "Nakajima", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5149--5152", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In 2012 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5149-5152. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A document descriptor using covariance of word vectors", |
|
"authors": [ |
|
{ |
|
"first": "Marwan", |
|
"middle": [], |
|
"last": "Torki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "527--532", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marwan Torki. 2018. A document descriptor using covariance of word vectors. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 527-532.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Towards universal paraphrastic sentence embeddings", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Wieting", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Livescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1511.08198" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Towards universal para- phrastic sentence embeddings. arXiv preprint arXiv:1511.08198.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Men also like shopping: Reducing gender bias amplification using corpus-level constraints", |
|
"authors": [ |
|
{ |
|
"first": "Jieyu", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tianlu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Yatskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vicente", |
|
"middle": [], |
|
"last": "Ordonez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2979--2989", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2979-2989, Copenhagen, Denmark. Association for Computational Linguis- tics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "(Left): Traditional task models are trained on the embeddings of either all word forms or the lemmas, obtained by preprocessing with a lemmatizer. (Right):" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Embedding space as a union of inflected subspaces. Each word class creates a subspace and arrows represents the mappings we wish to learn in order to do lemmatization in the embedding space." |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table><tr><td>: Lemmatization accuracy (ACC LEM ) and</td></tr><tr><td>idempotency criterion (ACC IDEM ) for alterna-</td></tr><tr><td>tive network architectures for case-specific mod-</td></tr><tr><td>els, averaged over all 15 word cases. The global</td></tr><tr><td>model can process all cases, but the numerical ac-</td></tr><tr><td>curacy is not directly comparable due to a different</td></tr><tr><td>number of test instances.</td></tr></table>", |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"content": "<table><tr><td/><td>gen</td><td>0.8 0.368 0.455 0.502 0.534 0.560 0.582 0.599 0.613 0.625 0.636</td></tr><tr><td>simple</td><td>gen + part</td><td>0.8 0.390 0.483 0.533 0.569 0.594 0.613 0.632 0.647 0.658 0.669</td></tr><tr><td>simple</td><td>gen + ine + part</td><td>0.5 0</td></tr></table>", |
|
"type_str": "table", |
|
"num": null, |
|
"text": ".395 0.491 0.540 0.573 0.597 0.620 0.636 0.649 0.663 0.674 simple gen + ine + ela + part 0.5 0.390 0.482 0.533 0.567 0.594 0.614 0.631 0.645 0.657 0.668 global -0.9 0.329 0.411 0.458 0.488 0.513 0.532 0.548 0.561 0.573 0.585 lemmatizer --0.311 0.391 0.434 0.464 0.485 0.503 0.518 0.532 0.543 0.552 none --0.286 0.362 0.404 0.431 0.454 0.474 0.490 0.501 0.514 0.526" |
|
} |
|
} |
|
} |
|
} |