{ "paper_id": "N15-1034", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:32:37.519311Z" }, "title": "Sign constraints on feature weights improve a joint model of word segmentation and phonology", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "", "affiliation": { "laboratory": "", "institution": "Macquarie University Sydney", "location": { "country": "Australia" } }, "email": "markjohnson@mq.edu.au" }, { "first": "Joe", "middle": [], "last": "Pater", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts", "location": { "settlement": "Amherst Amherst", "region": "MA", "country": "USA" } }, "email": "pater@linguist.umass.edu" }, { "first": "Robert", "middle": [], "last": "Staubs", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts", "location": { "settlement": "Amherst Amherst", "region": "MA", "country": "USA" } }, "email": "rstaubs@linguist.umass.edu" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "", "affiliation": { "laboratory": "Ecole des Hautes Etudes en Sciences Sociales, ENS", "institution": "CNRS", "location": { "settlement": "Paris", "country": "France" } }, "email": "emmanuel.dupoux@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes a joint model of word segmentation and phonological alternations, which takes unsegmented utterances as input and infers word segmentations and underlying phonological representations. The model is a Maximum Entropy or log-linear model, which can express a probabilistic version of Optimality Theory (OT; Prince and Smolensky (2004)), a standard phonological framework. The features in our model are inspired by OT's Markedness and Faithfulness constraints. Following the OT principle that such features indicate \"violations\", we require their weights to be non-positive. We apply our model to a modified version of the Buckeye corpus (Pitt et al., 2007) in which the only phonological alternations are deletions of word-final /d/ and /t/ segments. The model sets a new state-ofthe-art for this corpus for word segmentation, identification of underlying forms, and identification of /d/ and /t/ deletions. We also show that the OT-inspired sign constraints on feature weights are crucial for accurate identification of deleted /d/s; without them our model posits approximately 10 times more deleted underlying /d/s than appear in the manually annotated data.", "pdf_parse": { "paper_id": "N15-1034", "_pdf_hash": "", "abstract": [ { "text": "This paper describes a joint model of word segmentation and phonological alternations, which takes unsegmented utterances as input and infers word segmentations and underlying phonological representations. The model is a Maximum Entropy or log-linear model, which can express a probabilistic version of Optimality Theory (OT; Prince and Smolensky (2004)), a standard phonological framework. The features in our model are inspired by OT's Markedness and Faithfulness constraints. Following the OT principle that such features indicate \"violations\", we require their weights to be non-positive. We apply our model to a modified version of the Buckeye corpus (Pitt et al., 2007) in which the only phonological alternations are deletions of word-final /d/ and /t/ segments. The model sets a new state-ofthe-art for this corpus for word segmentation, identification of underlying forms, and identification of /d/ and /t/ deletions. 
We also show that the OT-inspired sign constraints on feature weights are crucial for accurate identification of deleted /d/s; without them our model posits approximately 10 times more deleted underlying /d/s than appear in the manually annotated data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This paper unifies two different strands of research on word segmentation and phonological rule induction. The word segmentation task is the task of segmenting utterances represented as sequences of phones into sequences of words. This is an idealisation of the lexicon induction problem, since the resulting words are phonological forms for lexical entries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In its simplest form, the data for a word segmentation task is obtained by looking up the words of an orthographic transcript (of, say, child-directed speech) in a pronouncing dictionary and concatenating the results. However, this formulation significantly oversimplifies the problem because it assumes that each token of a word type is pronounced identically in the form specified by the pronouncing dictionary (usually its citation form). In reality there is usually a significant amount of pronunciation variation from token to token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Buckeye corpus, on which we base our experiments here, contains manually-annotated surface phonetic representations of each word as well as the corresponding underlying form (Pitt et al., 2007) . For example, a token of the word \"lived\" has the underlying form /l.ih.v.d/ and could have the surface form [l.ah.v] (we follow standard phonological convention by writing underlying forms with slashes and surface forms with square brackets, and use the Buckeye transcription format).", "cite_spans": [ { "start": 178, "end": 197, "text": "(Pitt et al., 2007)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There is a large body of work in the phonological literature on inferring phonological rules mapping underlying forms to their surface realisations. While most of this work assumes that the underlying forms are available to the inference procedure, there is work that induces underlying forms as well as the phonological processes that map them to sur-face forms (Eisenstat, 2009; Pater et al., 2012) . We present a model that takes a corpus of unsegmented surface representations of sentences and infers a word segmentation and underlying forms for each hypothesised word. We test this model on data derived from the Buckeye corpus where the only phonological variation consists of word-final /d/ and /t/ deletions, and show that it outperforms a state-ofthe-art model that only handles word-final /t/ deletions.", "cite_spans": [ { "start": 363, "end": 380, "text": "(Eisenstat, 2009;", "ref_id": "BIBREF8" }, { "start": 381, "end": 400, "text": "Pater et al., 2012)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our model is a MaxEnt or log-linear model, which means that it is formally equivalent to a Harmonic Grammar, which is a continuous version of Optimality Theory (OT) (Smolensky and Legendre, 2005) . 
We use features inspired by OT, and show that sign constraints on feature weights result in models that recover underlying /d/s significantly more accurately than models that don't include such constraints. We present results suggesting that these constraints simplify the search problem that the learner faces.", "cite_spans": [ { "start": 165, "end": 195, "text": "(Smolensky and Legendre, 2005)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is structured as follows. The next section describes related work, including previous work that this paper builds on. Section 3 describes our model, while section 4 explains how we prepared the data, presents our experimental results and investigates the effects of design choices on model performance. Section 5 concludes the paper and discusses possible future directions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The word segmentation task is the task of segmenting utterances represented as sequences of phones into sequences of words. Elman (1990) introduced the word segmentation task as a simplified form of lexical acquisition, and Brent and Cartwright (1996) and Brent (1999) introduced the unigram model of word segmentation, which forms the basis of the model used here. Goldwater et al. (2009) described a non-parametric Bayesian model of word segmentation, and highlighted the importance of contextual dependencies. Johnson (2008) and Johnson and Goldwater (2009) showed that word segmentation accuracy improves when phonotactic constraints on word shapes are incorporated into the model. That model has been extended to also exploit stress cues (B\u00f6rschinger and Johnson, 2014) , the \"topics\" present in the non-linguistic context (Johnson et al., 2010) and the special properties of function words (Johnson et al., 2014) . Liang and Klein (2009) proposed a simple unigram model of word segmentation much like the original Brent unigram model, and introduced a \"word length penalty\" to avoid under-segmentation that we also use here. (As Liang et al note, without this the maximum likelihood solution is not to segment utterances at all, but to analyse each utterance as a single word). Berg-Kirkpatrick et al. (2010) extended this model by defining the unigram distribution with a MaxEnt model. The MaxEnt features can capture phonotactic generalisations about possible word shapes, and their model achieves a state-of-the-art word segmentation f-score.", "cite_spans": [ { "start": 124, "end": 136, "text": "Elman (1990)", "ref_id": "BIBREF9" }, { "start": 224, "end": 251, "text": "Brent and Cartwright (1996)", "ref_id": "BIBREF5" }, { "start": 256, "end": 268, "text": "Brent (1999)", "ref_id": "BIBREF6" }, { "start": 489, "end": 503, "text": "Johnson (2008)", "ref_id": "BIBREF21" }, { "start": 690, "end": 721, "text": "(B\u00f6rschinger and Johnson, 2014)", "ref_id": "BIBREF2" }, { "start": 775, "end": 797, "text": "(Johnson et al., 2010)", "ref_id": "BIBREF17" }, { "start": 845, "end": 867, "text": "Liang and Klein (2009)", "ref_id": "BIBREF22" }, { "start": 1208, "end": 1238, "text": "Berg-Kirkpatrick et al. (2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Background and related work", "sec_num": "2" }, { "text": "The phonological learning task is to learn the phonological mapping from underlying forms to surface forms. 
Johnson (1984) and Johnson (1992) describe a search procedure for identifying underlying forms and the phonological rules that map them to surface forms given surface forms organised into inflectional paradigms. Goldwater and Johnson (2003) and Goldwater and Johnson (2004) showed how Harmonic Grammar phonological constraint weights (Smolensky and Legendre, 2005) can be learnt using a Maximum Entropy parameter estimation procedure given data consisting of underlying and surface word form pairs. There is now a significant body of work using Maximum Entropy techniques to learn phonological constraint weights (see esp. Hayes and Wilson (2008) , as well as the review in Coetzee and Pater (2011) ).", "cite_spans": [ { "start": 108, "end": 122, "text": "Johnson (1984)", "ref_id": "BIBREF19" }, { "start": 127, "end": 141, "text": "Johnson (1992)", "ref_id": "BIBREF20" }, { "start": 320, "end": 348, "text": "Goldwater and Johnson (2003)", "ref_id": "BIBREF12" }, { "start": 353, "end": 381, "text": "Goldwater and Johnson (2004)", "ref_id": "BIBREF13" }, { "start": 442, "end": 472, "text": "(Smolensky and Legendre, 2005)", "ref_id": "BIBREF27" }, { "start": 731, "end": 754, "text": "Hayes and Wilson (2008)", "ref_id": "BIBREF15" }, { "start": 782, "end": 806, "text": "Coetzee and Pater (2011)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Background and related work", "sec_num": "2" }, { "text": "Recently there has been work attempting to integrate these two approaches. The word segmentation work generally ignores pronunciation variation by assuming that the input to the learner consists of sequences of citation forms of words, which is highly unrealistic. The phonology learning work has generally assumed that the learner has access to the underlying forms of words, which is also unrealistic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and related work", "sec_num": "2" }, { "text": "In the word segmentation area, Elsner et al. (2012) and Elsner et al. (2013) generalise the Goldwater bigram model by assuming that the bigram model generates underlying forms, which a finite state transducer maps to surface forms. While this is an extremely general model, inference in such a model is very challenging, and they restrict attention to transducers where the underlying to surface mapping consists of simple substitutions, so their model cannot handle the deletion phenomena studied here. B\u00f6rschinger et al. (2013) also generalise the Goldwater bigram model by including an underlying-to-surface mapping, but their mapping only allows word-final underlying /t/ to be deleted, which enables them to use a straight-forward generalisation of Goldwater's Gibbs sampling inference procedure.", "cite_spans": [ { "start": 31, "end": 51, "text": "Elsner et al. (2012)", "ref_id": "BIBREF10" }, { "start": 56, "end": 76, "text": "Elsner et al. (2013)", "ref_id": "BIBREF11" }, { "start": 504, "end": 529, "text": "B\u00f6rschinger et al. (2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Background and related work", "sec_num": "2" }, { "text": "In phonology, Eisenstat (2009) and Pater et al. (2012) showed how to generalise a MaxEnt model so it also learns underlying forms as well as MaxEnt phonological constraint weights given surface forms in paradigm format. 
The vast sociolinguistic literature on /t/-/d/-deletion is surveyed in Coetzee and Pater (2011) , together with prior OT and MaxEnt analyses of the phenomena.", "cite_spans": [ { "start": 32, "end": 51, "text": "Pater et al. (2012)", "ref_id": "BIBREF23" }, { "start": 288, "end": 312, "text": "Coetzee and Pater (2011)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Background and related work", "sec_num": "2" }, { "text": "This section contains a more technical description of the Berg-Kirkpatrick et al. (2010) MaxEnt unigram model of word segmentation, which our model directly builds on. Our model integrates the MaxEnt unigram word segmentation model of Berg-Kirkpatrick et al. with the MaxEnt phonology models developed by Goldwater and Johnson (2003) and Goldwater and Johnson (2004) . Because both kinds of models are MaxEnt models, this integration is fairly easy, and the inference procedure requires optimisation of a fairly straight-forward objective function. We use a customised version of the OWLQN-LBFGS procedure (Andrew and Gao, 2007) that allows us to impose sign constraints on individual feature weights.", "cite_spans": [ { "start": 303, "end": 331, "text": "Goldwater and Johnson (2003)", "ref_id": "BIBREF12" }, { "start": 336, "end": 364, "text": "Goldwater and Johnson (2004)", "ref_id": "BIBREF13" }, { "start": 604, "end": 626, "text": "(Andrew and Gao, 2007)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "As is standard in the word-segmentation literature, the model's input is a sequence of utterances D = (w_1, . . . , w_n), where each utterance", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "w_i = (w_{i,1}, . . . , w_{i,m_i})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "is a sequence of (surface) phones. The Berg-Kirkpatrick et al model is a unigram model, so it defines a probability distribution over possible words s, where s is also a sequence of phones. The probability of an utterance w is the sum of the probability of all word sequences that generate it:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "P(w | \\theta) = \\sum_{s_1 \\ldots s_\\ell : s_1 \\ldots s_\\ell = w} \\prod_{j=1}^{\\ell} P(s_j | \\theta)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "Berg-Kirkpatrick et al's model of word probabilities P(s | \u03b8) is a MaxEnt model with parameters \u03b8, where the features f (s) of surface form s are chosen to encourage the model to generalise appropriately over word shapes. While they don't describe their features in complete detail, they include features for each word s, features for the prefix and suffix of s and features for the CV skeleton of the prefix and suffix of s. In more detail, P(s | \u03b8) is a MaxEnt model as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "P(s | \\theta) = \\frac{1}{Z} \\exp(\\theta \\cdot f(s))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": ", where:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. 
model", "sec_num": "2.1" }, { "text": "Z = s \u2208S exp(\u03b8 \u2022 f (s ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "The set of possible surface word forms S is the set of substrings (i.e., sequences of phones) occuring in the training data D that are shorter than a userspecified length bound. We follow Berg-Kirkpatrick in imposing a length bound on possible words; for the Brent corpus the maximum word length is 10 phones, while for the Buckeye corpus the maximum word length is 15 phones (reflecting the fact that words are longer in this adult-directed corpus).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "While restricting the set of possible word forms S to the substrings appearing in D is reasonable for a simple multinomial model like the one in Liang and Klein (2009) , it's interesting that this produces good results with a MaxEnt model like Berg-Kirkpatrick et al's, since one might expect such a model would have to learn generalisations about impossible word shapes in order to perform well. Because S only contains a small fraction of the possible phone strings, one might worry that the model would not see enough \"impossible words\" to learn to distinguish possible words from impossible ones, but the model's good performance suggests this is not the case. 1", "cite_spans": [ { "start": 145, "end": 167, "text": "Liang and Klein (2009)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "Berg-Kirkpatrick et al follow Liang et al in using maximum likelihood estimation to estimate their model's parameters (Berg-Kirkpatrick et al actually use L 2 -regularised maximum likelihood estimates). As Liang et al note, it's easy to show that the maximum likelihood segmentation leaves each utterance unsegmented, i.e., each utterance is analysed as a single word. To avoid this, Berg-Kirkpatrick et al follow Liang et al by multiplying the word probabilities by a word length penalty term. Thus the likelihood L D they actually maximise is as shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "L D (\u03b8) = n i=1 P(w i | \u03b8) P(w | \u03b8) = s 1 ...s s.t.s 1 ...s =w j=1 P(s j | \u03b8) exp(\u2212|s i | d )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "where d is a constant chosen to optimise segmentation performance. This means that the model is deficient, i.e., s\u2208S P(s | \u03b8) < 1. (Because our model uses a word length penalty in the same way, it too is deficient).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "As Figure 1 shows, performance is very sensitive to the word length penalty parameter d: the best word segmentation on the Brent corpus is obtained when d \u2248 1.6, while the best segmentation on the Buckeye corpus is obtained when d \u2248 1.5. As far as we know there is no principled way to set d in an unsupervised fashion, so this sensitivity to d is perhaps the greatest weakness of this kind of model. Even so, it's interesting that a unigram model without the kind of inter-word dependencies that argues for can do so well. 
Our model extends the Berg-Kirkpatrick et al model by adding a set P of phonological processes mapping underlying forms to surface forms. For example, word-final /t/ deletion is the function mapping underlying forms ending in /t/ to surface forms lacking that final segment.", "cite_spans": [ { "start": 711, "end": 737, "text": "(B\u00f6rschinger et al., 2012)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "Our model is also a unigram model, but it defines a distribution over surface/underlying form pairs (s, u), where s is a surface form and u is an underlying form. Below we allow this distribution to condition on phonological properties of the neighbouring surface forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "The set X of possible (s, u) surface/underlying form pairs is defined as follows. For each surface form s \u2208 S (the set of length-bounded phone substrings of the data D), (s, s) \u2208 X . In addition, if u \u2208 S and some phonological alternation p \u2208 P maps u to a surface form s \u2208 p(u) \u2208 S, then (s, u) \u2208 X . That is, we require that potential underlying forms appear as surface substrings somewhere in the data D (which means this model cannot handle e.g., absolute neutralisation).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "In the experiments below, we let P be phonological processes that delete word-final /d/ and /t/ phonemes. Given the Buckeye data, ([l.ih.v], /l.ih.v/), ([l.ih.v], /l.ih.v.d/) and ([l.ih.v], /l.ih.v.t/) are all members of X (i.e., candidate (s, u) pairs), corresponding to \"live\", \"lived\" and the non-word \"livet\" respectively, where the latter two surface forms are generated by final /d/ and /t/ deletion respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "Word-final /d/ and /t/ deletion depends on various aspects of the phonological context, such as whether the following word begins with a consonant or a vowel. Our model handles this dependency by learning a conditional model over surface/underlying form pairs (s, u) \u2208 X that depends on the phonological context c:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "P(s, u | c, \\theta) = \\frac{1}{Z_c} \\exp(\\theta \\cdot f(s, u, c))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": ", where:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. 
model", "sec_num": "2.1" }, { "text": "Z c = (s,u)\u2208X exp(\u03b8 \u2022 f (s, u, c))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "In our experiments below, the set of possible contexts is C = {C, V, #}, encoding whether the following word begins with a consonant, a vowel or is the end of the utterance respectively. We leave for future research the exploration of other sorts of contextual conditioning. Note that the set X is the same for all contexts c; we show below that restricting attention to just those surface/underlying pairs appearing in the context c degrades the model's performance. In other words, the model benefits from the implicit negative evidence provided by underlying/surface pairs that do not occur in a given context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "We define the probability of a surface form s \u2208 S in a context c \u2208 C by marginalising out the underlying form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "P(s | c, \u03b8) = u:(s,u)\u2208X P(s, u | c, \u03b8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "We optimise a penalised log likelihood Q D (\u03b8), with the word length penalty term d applied to the underlying form u.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "Q(s | c, \u03b8) = u:(s,u)\u2208X P(s, u | c, \u03b8) exp(\u2212|u| d ) Q(w | \u03b8) = s 1 ...s s.t.s 1 ...s =w j=1 Q(s j | c, \u03b8) Q D (\u03b8) = n i=1 log Q(w i | \u03b8) \u2212 \u03bb ||\u03b8|| 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "We are somewhat cavalier about the conditional contexts c here: in our model below the context c for a word is determined by the following word, so one can view our model as a generative model that generates the words in an utterance from right to left.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "Because our model is a MaxEnt model, we have considerable freedom in the choice of features, and as Berg-Kirkpatrick et al. (2010) emphasise, the choice of features directly determines the kinds of generalisations the model can learn. The features f (s, u, c) of a surface form s, underlying form u and context c we use here are inspired by OT. We describe our features using an example where s = [l.ih.v], u = /l.ih.v.t/ and c = C (i.e., the word is followed by a consonant).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Berg-Kirkpatrick et al. model", "sec_num": "2.1" }, { "text": "A feature for each underlying form u. In our example, the feature is . These features enable the model to learn language-specific lexical entries. 
There are 4,803,734 underlying form lexical features (one for each possible substring in the training data).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Underlying form lexical features:", "sec_num": null }, { "text": "The length of the surface string (<#L 3>), the number of vowels (<#V 1>) (this is a rough indication of the number of syllables), the surface suffix (), the surface prefix and suffix CV shape ( and ), and suffix+context CV shape ( and ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surface markedness features:", "sec_num": null }, { "text": "There are 108 surface markedness features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surface markedness features:", "sec_num": null }, { "text": "Faithfulness features: A feature for each divergence between underlying and surface forms (in this case, < * F t>). There are two faithfulness features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surface markedness features:", "sec_num": null }, { "text": "We used L 1 regularisation here, rather than the L 2 regularisation used by Berg-Kirkpatrick et al. (2010) , in the hope that its sparsity-inducing \"feature selection\" capabilities would enable it to \"learn\" lexical entries for the language, as well as precisely which markedness features are required to account for the data. However, we found that the choice of L 1 versus L 2 regression makes little difference, and the model is insensitive to the value of the regulariser constant \u03bb (we set to \u03bb = 1 in the experiments below).", "cite_spans": [ { "start": 76, "end": 106, "text": "Berg-Kirkpatrick et al. (2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Surface markedness features:", "sec_num": null }, { "text": "We developed a specially modified version of the LBFGS-OWLQN optimisation procedure for optimising L 1 -regularised loss functions (Andrew and Gao, 2007) that allows us to constrain certain feature weights \u03b8 k to have a particular sign. This is a natural extension of the LBFGS-OWLQN procedure since it performs orthant-constrained line searches in any case. We describe experiments below where we require the feature weights for the markedness and faithfulness features to be non-positive, and where the underlying lexical form features are required to be non-negative. The requirement that the lexical form features are positive, combined with the sparsity induced by the L 1 regulariser, was intended to force the model to learn an explicit lexicon encoded by the underlying form features with positive weights (although our results below suggest that it did not in fact do this).", "cite_spans": [ { "start": 131, "end": 153, "text": "(Andrew and Gao, 2007)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Surface markedness features:", "sec_num": null }, { "text": "The inspiration for the requirement that markedness and faithfulness features are non-positive comes from OT, which claims that the presence of such features can only reduce the \"harmony\", i.e., the well-formedness, of an (s, u) pair. Versions of Harmonic Grammar that aim to produce OTlike behavior with weighted constraints often bound weights at zero (see e.g. Pater (2009) ). 
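As an illustration of how such sign constraints can be imposed, the following minimal sketch clips the offending weights back to zero after each update; it uses simple projected subgradient steps rather than the customised OWLQN-LBFGS procedure actually used, and the gradient function, regulariser weight and step size are assumptions:

import numpy as np

def project_signs(theta, sign):
    # sign[k] = -1: weight must be non-positive (markedness and faithfulness features)
    # sign[k] = +1: weight must be non-negative (underlying form lexical features)
    # sign[k] =  0: unconstrained
    theta = theta.copy()
    theta[(sign < 0) & (theta > 0)] = 0.0
    theta[(sign > 0) & (theta < 0)] = 0.0
    return theta

def constrained_step(theta, neg_loglik_grad, sign, lam=1.0, step_size=0.1):
    # one projected (sub)gradient step on the L1-regularised objective
    grad = neg_loglik_grad(theta) + lam * np.sign(theta)
    return project_signs(theta - step_size * grad, sign)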
The results below are the first to show that these constraints matter for word segmentation.", "cite_spans": [ { "start": 364, "end": 376, "text": "Pater (2009)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Surface markedness features:", "sec_num": null }, { "text": "This section describes the experiments we performed to evaluate the model just described. We first describe how we prepared the data on which the model is trained and evaluated, and then we describe the performance of that model. Finally we perform an analysis of how the model's performance varies as parameters of the model are changed. We ran this model on data extracted from the Buckeye corpus of conversational speech (Pitt et al., 2007) which was modified so the only alternations it contained are final /d/ and /t/ deletions. The Buckeye corpus gives a surface realisation and an underlying form for each word token, and following B\u00f6rschinger et al. (2013) , we prepared the data as follows. We used the Buckeye underlying forms as our underlying forms. Our surface forms were also identical to the Buckeye underlying forms, except when the underlying form ends in either a /d/ or a /t/. In this case, if the Buckeye surface form does not end in an allophonic variant of that segment, then our surface form consists of the Buckeye underlying form with that final segment deleted. Thus the only phonological variation in our data are deletions of word-final /d/ and /t/ appearing in the Buckeye corpus, otherwise our surface forms are identical to Buckeye underlying forms.", "cite_spans": [ { "start": 424, "end": 443, "text": "(Pitt et al., 2007)", "ref_id": "BIBREF25" }, { "start": 639, "end": 664, "text": "B\u00f6rschinger et al. (2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "For example, consider a token whose Buckeye underlying form is /l.ih.v.d/ \"lived\". If the Buckeye surface form is [l.ah.v] then our surface form would be [l.ih.v] , while if the Buckeye surface form is [l.ah.v.d ] then our surface form would be [l.ih.v.d] .", "cite_spans": [ { "start": 154, "end": 162, "text": "[l.ih.v]", "ref_id": null }, { "start": 202, "end": 211, "text": "[l.ah.v.d", "ref_id": null }, { "start": 245, "end": 255, "text": "[l.ih.v.d]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "We now present some descriptive statistics on our data. The data contains 48,796 sentences and 890,597 segments. The longest sentence has 187 segments. The \"gold\" data has the following properties. There are 236,996 word boundaries, 285,792 word tokens, and 9,353 underlying word types. The longest word has 17 segments. Of the 41,186 /d/s and 73,392 /t/s in the underlying forms, 24,524 /d/s and 40,720 /t/s are word final, and of these 13,457 /d/s and 11,727 /t/s are deleted (i.e., do not appear on the surface).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "Our model considers all possible substrings of length 15 or less as a possible surface form of a word, yielding 4,803,734 possible word types and 5,292,040 possible surface/underlying word type pairs. Taking the 3 contexts derived from the following word into account, there are 4,969,718 possible word+context types. When all possible surface/underlying pairs are considered in all possible contexts there are 15,876,120 possible surface/underlying/context triples. 
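As an illustration, the candidate sets S and X described above can be enumerated as in the following minimal sketch (not the implementation used in the experiments; the representation of utterances as lists of phone symbols is an assumption):

def candidate_pairs(utterances, max_len=15):
    # S: all phone substrings of the data up to the length bound
    S = set()
    for utt in utterances:                 # each utterance is a list of phone symbols
        for i in range(len(utt)):
            for j in range(i + 1, min(i + max_len, len(utt)) + 1):
                S.add(tuple(utt[i:j]))
    # X: identity pairs plus pairs generated by word-final /t/ and /d/ deletion,
    # keeping only underlying forms that themselves occur as substrings of the data
    X = set()
    for s in S:
        X.add((s, s))
        for seg in ('t', 'd'):
            u = s + (seg,)
            if u in S:
                X.add((s, u))
    return S, X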
Table 1 summarises the major experimental results for this model, and compares them to the results of B\u00f6rschinger et al. (2013) . Note that their model only recovers word-final /t/ deletions and was run on data without word-final /d/ deletions, so it is solving a simpler problem than the one studied here. Even so, our model achieves higher overall accuracies.", "cite_spans": [ { "start": 569, "end": 594, "text": "B\u00f6rschinger et al. (2013)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 467, "end": 474, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "We also conducted experiments on several of the design choices in our model. Figure 2 shows the effect of the sign constraints on feature weights discussed above. This plot shows that the constraints on the weights of markedness and faithfulness features seem essential for good word segmentation performance. Interestingly, we found that the weight constraints make very little difference if the data does not contain any /t/ or /d/ deletions (i.e., the case that Berg-Kirkpatrick et al. (2010) studied). Table 1: Results summary for our model compared to that of the B\u00f6rschinger et al. (2013) model (B\u00f6rschinger et al. 2013 / our model): Surface token f-score 0.72 / 0.76 (0.01); Underlying type f-score - / 0.37 (0.02); Deleted /t/ f-score 0.56 / 0.58 (0.03); Deleted /d/ f-score - / 0.62 (0.19). Surface token f-score is the standard token f-score, while underlying type or \"lexicon\" f-score measures the accuracy with which the underlying word types are recovered. Deleted /t/ and /d/ f-scores measure the accuracy with which the model recovers segments that don't appear in the surface. These results are averaged over 40 runs with the word length penalty d = 1.525 applied to underlying forms; standard deviations are given in parentheses. Figure 2: The effect of constraints on feature weights on surface token f-score. \"OT\" indicates that the markedness and faithfulness features are required to be non-positive, while \"Lexical\" indicates that the underlying lexical features are required to be non-negative.", "cite_spans": [ { "start": 407, "end": 430, "text": "B\u00f6rschinger et al. 2013", "ref_id": "BIBREF4" }, { "start": 651, "end": 676, "text": "B\u00f6rschinger et al. (2013)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 77, "end": 85, "text": "Figure 2", "ref_id": null }, { "start": 587, "end": 594, "text": "Table 1", "ref_id": null }, { "start": 1167, "end": 1175, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "Investigating this further, we found that the weight constraints on the markedness and faithfulness features have a dramatic effect on the recovery of underlying segments, particularly underlying /d/s. Figure 3 shows that with these constraints the model recovers approximately the correct number of deleted underlying segments, while without this constraint the model posits far too many underlying /d/s.
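For reference, the deleted-segment f-scores discussed here can be computed as in the following minimal sketch; it assumes that predicted and gold deletion sites are represented as sets of positions, which may differ in detail from the evaluation actually used:

def f_score(predicted, gold):
    # predicted, gold: sets of sites (for example (utterance index, position) pairs)
    # at which an underlying word-final segment is posited but absent from the surface
    true_pos = len(predicted & gold)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)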
Figure 4 shows that these constraints help the model find higher regularised likelihood sets of feature weights with fewer non-zero feature weights.", "cite_spans": [], "ref_spans": [ { "start": 201, "end": 209, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 405, "end": 413, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "We examined how the number of non-zero feature weights (most of which are for underlying type features) relate to the number of underlying types posited by the model. Figure 5 shows that the weight constraints on markedness and faithfulness constraints have great impact on the number of nonzero feature weights and on the number of underlying forms the model posits. In all cases, the model recovers far more underlying forms than it finds nonzero weights.", "cite_spans": [], "ref_spans": [ { "start": 167, "end": 175, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "The lexicon weight constraints have much less impact than the OT weight constraints. As Figure 3 shows, without the OT weight constraints the models posit too many deleted /d/ and essentially no deleted /t/. Figure 4 shows that OT weight constraints enable the model to find higher likelihood solutions, i.e., the OT weight constraints help search. Inspired by a reviewer's comments, we studied typetoken ratios and the number of boundaries our models posit. We found that the models without OT weight constraints posit far too few word boundaries compared to the gold data, so the number of surface tokens is too low, so the words are too long, and the number of underlying types is too high. This is consistent with Figures 4-5.", "cite_spans": [], "ref_spans": [ { "start": 88, "end": 96, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 208, "end": 216, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "We also examined whether it is necessary to consider all surface/underlying pairs X in each context C, or whether it is possible to restrict attention to the much smaller sets X c that occur in each c \u2208 C (this dramatically reduces the amount of memory required and speeds the computation). Figure 6 shows that working with the smaller, context-specific sets dramatically decreases the model's ability to recover deleted segments.", "cite_spans": [], "ref_spans": [ { "start": 291, "end": 299, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Experimental results", "sec_num": "4" }, { "text": "The MaxEnt unigram model of word segmentation developed by Berg-Kirkpatrick et al. (2010) integrates straight-forwardly with the MaxEnt phonology models of Goldwater and Johnson (2003) to produce a MaxEnt model that jointly models word segmentation and the mapping from underlying to surface forms. We tested our model on data derived from the manually-annotated Buckeye corpus of conversational speech (Pitt et al., 2007) in which the only phonological alternations are deletions of word-final /d/ and /t/ segments. We demonstrated that our model improves on the state-of-the-art for word seg-mentation, recovery of underlying forms and recovery of deleted segments for this corpus.", "cite_spans": [ { "start": 59, "end": 89, "text": "Berg-Kirkpatrick et al. 
(2010)", "ref_id": "BIBREF1" }, { "start": 156, "end": 184, "text": "Goldwater and Johnson (2003)", "ref_id": "BIBREF12" }, { "start": 403, "end": 422, "text": "(Pitt et al., 2007)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "Our model is a MaxEnt or log-linear unigram model over the set of possible surface/underlying form pairs. Inspired by the work of Berg-Kirkpatrick et al. (2010) , the set of surface/underlying form pairs our model calculates the partition function over is restricted to those actually appearing in the training data, and doesn't include all logically possible pairs. We found that even with this restriction, the model produces good results.", "cite_spans": [ { "start": 130, "end": 160, "text": "Berg-Kirkpatrick et al. (2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "Because our model is a Maximum Entropy or loglinear model, it is formally an instance of a Harmonic Grammar (Smolensky and Legendre, 2005 ), so we investigated features inspired by OT, which is a discretised version of Harmonic Grammar that has been extensively developed in the linguistics literature. The features our model uses consist of underlying form features (one for each possible underlying form), together with markedness and faithfulness phonological features inspired by OT phonological analyses. According to OT, these markedness and faithfulness features should always have negative weights (i.e., when such a feature \"fires\", it should always make the analysis less probable). We found that constraining feature weights in this way dramatically improves the model's accuracy, apparently helping to find higher likelihood solutions.", "cite_spans": [ { "start": 108, "end": 137, "text": "(Smolensky and Legendre, 2005", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "Looking forwards, a major drawback of the Max-Ent approaches to word segmentation are their sensitivity to the word length penalty parameter, which this model shares with the models of Berg-Kirkpatrick et al. (2010) and (Liang and Klein, 2009) on which it is based. It would be very desirable to have a principled way to set this parameter in an unsupervised manner.", "cite_spans": [ { "start": 185, "end": 215, "text": "Berg-Kirkpatrick et al. (2010)", "ref_id": "BIBREF1" }, { "start": 220, "end": 243, "text": "(Liang and Klein, 2009)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "Because our goal was to explore the MaxEnt approach to joint segmenation and alternation, we deliberately used a minimal feature set here. As the reviewers pointed out, we did not include any morphological features, which could have a major impact on the model. Investigating the impact of richer feature sets, including a combination of phonotactic and morphological features, would be an excellent topic for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "It would be interesting to extend this approach to a wider range of phonological processes in addition to the word-final /t/ and /d/ deletion studied here. 
Because this model enumerates the possible surface/underlying/context triples before beginning to search for potential surface and underlying words, its memory requirements would grow dramatically if the set of possible surface/underlying alternations were increased. (The fact that we only considered word final /d/ and /t/ deletions means that there are only three possible underlying word forms for each surface word forms). Perhaps there is a way of identifying potential underlying forms that avoids enumerating them. For example, it might be possible to sample possible underlying word forms during the learning process rather than enumerating them ahead of time, perhaps by adapting non-parametric Bayesian approaches B\u00f6rschinger et al., 2013) .", "cite_spans": [ { "start": 881, "end": 906, "text": "B\u00f6rschinger et al., 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "The non-parametric Bayesian approach of andJohnson (2008) can be viewed as setting S to the set of all possible phone strings (i.e., a possible word can be any string of phones, whether or not it appears in D). The success of Berg-Kirkpatrick et al's approach suggests that these nonparametric methods might not be necessary here, i.e., the set of substrings actually occuring in D is \"large enough\" to enable the model to learn \"implicit negative evidence\" generalisations about impossible word shapes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported under the Australian Research Council's Discovery Projects funding scheme (project numbers DP110102506 and DP110102593), by the Mairie de Paris, the fondation Pierre Gilles de Gennes, the\u00c9cole des Hautes Etudes en Sciences Sociales, the\u00c9cole Normale Sup\u00e9rieure, the Region Ile de France, by the US National Science Foundation under Grant No. S121000000211 to the third author and Grant BCS-424077 to the University of Massachusetts, and by grants from the European Research Council (ERC-2011-AdG-295810 BOOTPHON) and the Agence Nationale pour la Recherche (ANR-10-LABX-0087 IEC, ANR-10-IDEX-0001-02 PSL*). We'd also like to thank the three anonymous reviewers for helpful comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Scalable training of l1-regularized log-linear models", "authors": [ { "first": "Galen", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 24th International Conference on Machine Learning, ICML '07", "volume": "", "issue": "", "pages": "33--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Galen Andrew and Jianfeng Gao. 2007. Scalable train- ing of l1-regularized log-linear models. In Proceed- ings of the 24th International Conference on Machine Learning, ICML '07, pages 33-40, New York, New York. 
ACM.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Painless unsupervised learning with features", "authors": [ { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Bouchard-C\u00f4t\u00e9", "suffix": "" }, { "first": "John", "middle": [], "last": "Denero", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "582--590", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taylor Berg-Kirkpatrick, Alexandre Bouchard-C\u00f4t\u00e9, John DeNero, and Dan Klein. 2010. Painless unsu- pervised learning with features. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Com- putational Linguistics, pages 582-590. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Exploring the role of stress in Bayesian word segmentation using adaptor grammars", "authors": [ { "first": "Benjamin", "middle": [], "last": "B\u00f6rschinger", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "93--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin B\u00f6rschinger and Mark Johnson. 2014. Explor- ing the role of stress in Bayesian word segmentation using adaptor grammars. Transactions of the Associa- tion for Computational Linguistics, 2(1):93-104.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Studying the effect of input size for Bayesian word segmentation on the Providence corpus", "authors": [ { "first": "Benjamin", "middle": [], "last": "B\u00f6rschinger", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Demuth", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 24th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "325--340", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin B\u00f6rschinger, Katherine Demuth, and Mark Johnson. 2012. Studying the effect of input size for Bayesian word segmentation on the Providence cor- pus. In Proceedings of the 24th International Con- ference on Computational Linguistics (Coling 2012), pages 325-340, Mumbai, India. Coling 2012 Organiz- ing Committee.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A joint model of word segmentation and phonological variation for English word-final /t/-deletion", "authors": [ { "first": "Benjamin", "middle": [], "last": "B\u00f6rschinger", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Demuth", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1508--1516", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin B\u00f6rschinger, Mark Johnson, and Katherine De- muth. 2013. A joint model of word segmentation and phonological variation for English word-final /t/- deletion. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1508-1516, Sofia, Bul- garia. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Distributional regularity and phonotactic constraints are useful for segmentation", "authors": [ { "first": "M", "middle": [], "last": "Brent", "suffix": "" }, { "first": "T", "middle": [], "last": "Cartwright", "suffix": "" } ], "year": 1996, "venue": "Cognition", "volume": "61", "issue": "", "pages": "93--125", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Brent and T. Cartwright. 1996. Distributional reg- ularity and phonotactic constraints are useful for seg- mentation. Cognition, 61:93-125.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "An efficient, probabilistically sound algorithm for segmentation and word discovery. Machine Learning", "authors": [ { "first": "M", "middle": [], "last": "Brent", "suffix": "" } ], "year": 1999, "venue": "", "volume": "34", "issue": "", "pages": "71--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Brent. 1999. An efficient, probabilistically sound algorithm for segmentation and word discovery. Ma- chine Learning, 34:71-105.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The place of variation in phonological theory", "authors": [ { "first": "Andries", "middle": [], "last": "Coetzee", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Pater", "suffix": "" } ], "year": 2011, "venue": "The Handbook of Phonological Theory", "volume": "", "issue": "", "pages": "401--431", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andries Coetzee and Joe Pater. 2011. The place of vari- ation in phonological theory. In John Goldsmith, Ja- son Riggle, and Alan Yu, editors, The Handbook of Phonological Theory, pages 401-431. Blackwell, 2nd edition.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning underlying forms with MaxEnt", "authors": [ { "first": "Sarah", "middle": [ "Eisenstat" ], "last": "", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarah Eisenstat. 2009. Learning underlying forms with MaxEnt. Master's thesis, Brown University.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Finding structure in time", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Elman", "suffix": "" } ], "year": 1990, "venue": "Cognitive Science", "volume": "14", "issue": "", "pages": "197--211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Elman. 1990. Finding structure in time. Cogni- tive Science, 14:197-211.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Bootstrapping a unified model of lexical and phonetic acquisition", "authors": [ { "first": "Micha", "middle": [], "last": "Elsner", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "184--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Micha Elsner, Sharon Goldwater, and Jacob Eisenstein. 2012. Bootstrapping a unified model of lexical and phonetic acquisition. In Proceedings of the 50th An- nual Meeting of the Association for Computational Linguistics, pages 184-193, Jeju Island, Korea. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A joint learning model of word segmentation, lexical acquisition, and phonetic variability", "authors": [ { "first": "Micha", "middle": [], "last": "Elsner", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "Naomi", "middle": [], "last": "Feldman", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Wood", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "42--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Micha Elsner, Sharon Goldwater, Naomi Feldman, and Frank Wood. 2013. A joint learning model of word segmentation, lexical acquisition, and phonetic vari- ability. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 42-54, Seattle, Washington, USA, October. As- sociation for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Learning OT constraint rankings using a Maximum Entropy model", "authors": [ { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Stockholm Workshop on Variation within Optimality Theory", "volume": "", "issue": "", "pages": "111--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharon Goldwater and Mark Johnson. 2003. Learn- ing OT constraint rankings using a Maximum Entropy model. In J. Spenader, A. Eriksson, and Osten Dahl, editors, Proceedings of the Stockholm Workshop on Variation within Optimality Theory, pages 111-120, Stockholm. Stockholm University.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Priors in Bayesian learning of phonological rules", "authors": [ { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Seventh Meeting Meeting of the ACL Special Interest Group on Computational Phonology: SIGPHON", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharon Goldwater and Mark Johnson. 2004. Priors in Bayesian learning of phonological rules. In Pro- ceedings of the Seventh Meeting Meeting of the ACL Special Interest Group on Computational Phonology: SIGPHON 2004.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A Bayesian framework for word segmentation: Exploring the effects of context", "authors": [ { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "Thomas", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2009, "venue": "Cognition", "volume": "112", "issue": "1", "pages": "21--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharon Goldwater, Thomas L. Griffiths, and Mark John- son. 2009. A Bayesian framework for word segmen- tation: Exploring the effects of context. 
Cognition, 112(1):21-54.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A Maximum Entropy model of phonotactics and phonotactic learning", "authors": [ { "first": "Bruce", "middle": [], "last": "Hayes", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Wilson", "suffix": "" } ], "year": 2008, "venue": "Linguistic Inquiry", "volume": "39", "issue": "3", "pages": "379--440", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bruce Hayes and Colin Wilson. 2008. A Maximum Entropy model of phonotactics and phonotactic learning. Linguistic Inquiry, 39(3):379-440.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "317--325", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson and Sharon Goldwater. 2009. Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 317-325, Boulder, Colorado, June. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Synergies in learning words and their referents", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Demuth", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Bevan", "middle": [], "last": "Jones", "suffix": "" } ], "year": 2010, "venue": "Advances in Neural Information Processing Systems 23", "volume": "", "issue": "", "pages": "1018--1026", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson, Katherine Demuth, Michael Frank, and Bevan Jones. 2010. Synergies in learning words and their referents. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1018-1026.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Modelling function words improves unsupervised word segmentation", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Anne", "middle": [], "last": "Christophe", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Demuth", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "282--292", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson, Anne Christophe, Emmanuel Dupoux, and Katherine Demuth. 2014. Modelling function words improves unsupervised word segmentation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 282-292.
Association for Computational Linguistics, June.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A discovery procedure for certain phonological rules", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 1984, "venue": "10th International Conference on Computational Linguistics and 22nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson. 1984. A discovery procedure for certain phonological rules. In 10th International Conference on Computational Linguistics and 22nd Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Identifying a rule's context from data", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 1992, "venue": "The Proceedings of the 11th West Coast Conference on Formal Linguistics", "volume": "", "issue": "", "pages": "289--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson. 1992. Identifying a rule's context from data. In The Proceedings of the 11th West Coast Conference on Formal Linguistics, pages 289-297, Stanford, CA. Stanford Linguistics Association.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Using Adaptor Grammars to identify synergies in the unsupervised acquisition of linguistic structure", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "398--406", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson. 2008. Using Adaptor Grammars to identify synergies in the unsupervised acquisition of linguistic structure. In Proceedings of the 46th Annual Meeting of the Association of Computational Linguistics, pages 398-406, Columbus, Ohio. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Online EM for unsupervised models", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "611--619", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang and Dan Klein. 2009. Online EM for unsupervised models. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 611-619, Boulder, Colorado, June.
Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Learning probabilities over underlying representations", "authors": [ { "first": "Joe", "middle": [], "last": "Pater", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Staubs", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Jesney", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Twelfth Meeting of the ACL-SIGMORPHON: Computational Research in Phonetics, Phonology, and Morphology", "volume": "", "issue": "", "pages": "62--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joe Pater, Robert Staubs, Karen Jesney, and Brian Smith. 2012. Learning probabilities over underlying representations. In Proceedings of the Twelfth Meeting of the ACL-SIGMORPHON: Computational Research in Phonetics, Phonology, and Morphology, pages 62-71.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Weighted constraints in generative linguistics", "authors": [ { "first": "Joe", "middle": [], "last": "Pater", "suffix": "" } ], "year": 2009, "venue": "Cognitive Science", "volume": "33", "issue": "", "pages": "999--1035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joe Pater. 2009. Weighted constraints in generative linguistics. Cognitive Science, 33:999-1035.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Buckeye corpus of conversational speech", "authors": [ { "first": "Mark", "middle": [ "A" ], "last": "Pitt", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Dilley", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Kiesling", "suffix": "" }, { "first": "William", "middle": [], "last": "Raymond", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Hume", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Fosler-Lussier", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark A. Pitt, Laura Dilley, Keith Johnson, Scott Kiesling, William Raymond, Elizabeth Hume, and Eric Fosler-Lussier. 2007. Buckeye corpus of conversational speech.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Optimality Theory: Constraint Interaction in Generative Grammar", "authors": [ { "first": "Alan", "middle": [], "last": "Prince", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Smolensky", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Prince and Paul Smolensky. 2004. Optimality Theory: Constraint Interaction in Generative Grammar. Blackwell.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "The Harmonic Mind: From Neural Computation To Optimality-Theoretic Grammar", "authors": [ { "first": "Paul", "middle": [], "last": "Smolensky", "suffix": "" }, { "first": "G\u00e9raldine", "middle": [], "last": "Legendre", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Smolensky and G\u00e9raldine Legendre. 2005. The Harmonic Mind: From Neural Computation To Optimality-Theoretic Grammar.
The MIT Press.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Sensitivity of surface token f-score to word length penalty factor d for the Brent and Buckeye corpora on data with no /d/ or /t/ deletions. Performance is sensitive to the value of the word length penalty d, and the optimal value of d depends on the corpus.", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "The effect of constraints on feature weights on the number of deleted underlying /d/ and /t/ segments posited by the model (d = 1.525). The red diamond indicates the 13,457 deleted underlying /d/ and 11,727 deleted underlying /t/ in the \"gold\" data.", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "The regularised log-likelihood as a function of the number of non-zero weights for different constraints on feature weights (d = 1.525).", "type_str": "figure", "uris": null, "num": null }, "FIGREF4": { "text": "The number of underlying types proposed by the model as a function of the number of non-zero weights, for different constraints on feature weights (d = 1.525). There are 9,353 underlying types in the \"gold\" data. F-score for deleted /d/ and /t/ recovery as a function of word length penalty d and whether all surface/underlying pairs X are included in all contexts C (d = 1.525).", "type_str": "figure", "uris": null, "num": null } } } }