{ "paper_id": "N15-1036", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:34:16.730597Z" }, "title": "Continuous Space Representations of Linguistic Typology and their Application to Phylogenetic Inference", "authors": [ { "first": "Yugo", "middle": [], "last": "Murawaki", "suffix": "", "affiliation": { "laboratory": "", "institution": "Kyushu University Fukuoka", "location": { "country": "Japan" } }, "email": "murawaki@ait.kyushu-u.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "For phylogenetic inference, linguistic typology is a promising alternative to lexical evidence because it allows us to compare an arbitrary pair of languages. A challenging problem with typology-based phylogenetic inference is that the changes of typological features over time are less intuitive than those of lexical features. In this paper, we work on reconstructing typologically natural ancestors To do this, we leverage dependencies among typological features. We first represent each language by continuous latent components that capture feature dependencies. We then combine them with a typology evaluator that distinguishes typologically natural languages from other possible combinations of features. We perform phylogenetic inference in the continuous space and use the evaluator to ensure the typological naturalness of inferred ancestors. We show that the proposed method reconstructs known language families more accurately than baseline methods. Lastly, assuming the monogenesis hypothesis, we attempt to reconstruct a common ancestor of the world's languages.", "pdf_parse": { "paper_id": "N15-1036", "_pdf_hash": "", "abstract": [ { "text": "For phylogenetic inference, linguistic typology is a promising alternative to lexical evidence because it allows us to compare an arbitrary pair of languages. A challenging problem with typology-based phylogenetic inference is that the changes of typological features over time are less intuitive than those of lexical features. In this paper, we work on reconstructing typologically natural ancestors To do this, we leverage dependencies among typological features. We first represent each language by continuous latent components that capture feature dependencies. We then combine them with a typology evaluator that distinguishes typologically natural languages from other possible combinations of features. We perform phylogenetic inference in the continuous space and use the evaluator to ensure the typological naturalness of inferred ancestors. We show that the proposed method reconstructs known language families more accurately than baseline methods. Lastly, assuming the monogenesis hypothesis, we attempt to reconstruct a common ancestor of the world's languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Linguistic typology is a cross-linguistic study that classifies the world's languages according to structural properties such as complexity of syllable structure and object-verb ordering. The availability of a large typology database (Haspelmath et al., 2005) makes it possible to take computational approaches to this area of study (Daum\u00e9 III and Campbell, 2007; Georgi et al., 2010; Rama and Kolachina, 2012) . In this paper, we consider its application to phylogenetic inference. 
We aim at reconstructing evolutionary trees that illustrate how modern languages have descended from common ancestors.", "cite_spans": [ { "start": 234, "end": 259, "text": "(Haspelmath et al., 2005)", "ref_id": "BIBREF15" }, { "start": 333, "end": 363, "text": "(Daum\u00e9 III and Campbell, 2007;", "ref_id": "BIBREF5" }, { "start": 364, "end": 384, "text": "Georgi et al., 2010;", "ref_id": "BIBREF10" }, { "start": 385, "end": 410, "text": "Rama and Kolachina, 2012)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Typological features have two advantages over other linguistic traits. First, they allow us to compare an arbitrary pair of languages. By contrast, historical linguistics has worked on regular sound changes (see (Bouchard-C\u00f4t\u00e9 et al., 2013) for computational models). Glottochronology and computational phylogenetics make use of the presence and absence of lexical items (Swadesh, 1952; Gray and Atkinson, 2003). All these approaches require that certain sets of cognates, or words with common etymological origins, be shared by the languages in question. For this reason, it is hardly possible to use lexical evidence to search for external relations involving language isolates and tiny language families such as Ainu, Basque, and Japanese. For these languages, typology can be seen as the last hope.", "cite_spans": [ { "start": 212, "end": 240, "text": "(Bouchard-C\u00f4t\u00e9 et al., 2013)", "ref_id": "BIBREF2" }, { "start": 371, "end": 386, "text": "(Swadesh, 1952;", "ref_id": "BIBREF30" }, { "start": 387, "end": 411, "text": "Gray and Atkinson, 2003)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The second advantage is that typological features are potentially capable of tracing evolutionary history on the order of 10,000 years because they change far more slowly than lexical traits. A glottochronological study indicates that even if Japanese is genetically related to Korean, they diverged from a common ancestor no earlier than 6,700 years ago (Hattori, 1999). Even the basic vocabulary vanishes so rapidly that after some 6,000 years, the retention rate becomes comparable to chance similarity. By contrast, the word order of Japanese, for example, is astonishingly stable: it has remained intact since the earliest attested data. Thus we argue that if we manage to develop a statistical model of typological changes with predictive power, we can understand a much deeper past.", "cite_spans": [ { "start": 355, "end": 370, "text": "(Hattori, 1999)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Table 1: An abridged version of Table 1 of (Donegan and Stampe, 2004).", "cite_spans": [ { "start": 43, "end": 69, "text": "(Donegan and Stampe, 2004)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 32, "end": 39, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A challenging problem with typology-based inference is that the changes of typological features over time are less intuitive than those of lexical features. Regular sound changes have been well known since the time of the Neogrammarians. The binary representations of lexical items commonly used in computational phylogenetics correspond to their presence and absence.
The alternations of each feature value can be straightforwardly interpreted as the birth and death (Le Quesne, 1974) of a lexical item. By contrast, it is difficult to understand how a language switches from SOV to SVO.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Practically speaking, since each language is represented by a vector of categorical features, we can easily perform distance-based hierarchical clustering. Still, the extent to which the resultant tree reflects evolutionary history is unclear. Teh et al. (2008) proposed a generative model for hierarchical clustering, which straightforwardly explains evolutionary history. However, features used in their experiments were binarized in a one-versus-rest manner (i.e., expanding a feature with K possible values into K binary features) (Daum\u00e9 III and Campbell, 2007), although the model itself had the ability to handle categorical values. With the independence assumption of binary features, the model was likely to reconstruct ancestors with logically impossible states.", "cite_spans": [ { "start": 244, "end": 261, "text": "Teh et al. (2008)", "ref_id": "BIBREF31" }, { "start": 535, "end": 565, "text": "(Daum\u00e9 III and Campbell, 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Typological studies have shown that dependencies among typological features are not limited to categorical constraints. For example, object-verb ordering is said to imply adjective-noun ordering (Greenberg, 1963). A natural question arises as to what would happen to adjective-noun ordering if object-verb ordering were altered. While dependencies among feature pairs were discussed in previous studies (Greenberg, 1978; Dunn et al., 2011), dependencies among more than two features are yet to be exploited.", "cite_spans": [ { "start": 198, "end": 215, "text": "(Greenberg, 1963)", "ref_id": "BIBREF12" }, { "start": 407, "end": 424, "text": "(Greenberg, 1978;", "ref_id": "BIBREF13" }, { "start": 425, "end": 443, "text": "Dunn et al., 2011)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To gain a better insight into typological changes, we take the Austroasiatic languages as an example. Table 1 compares some typological features of the Munda and Mon-Khmer branches. Although their genetic relationship has been firmly established, they are almost opposite in structure. Their common ancestor is considered to have been Mon-Khmer-like. This indicates that holistic changes have happened in the Munda branch (Donegan and Stampe, 2004). To generalize from this example, we suggest the following hypotheses:", "cite_spans": [ { "start": 417, "end": 443, "text": "(Donegan and Stampe, 2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. The holistic polarization can be explained by latent components that control dependencies among observable features. 2. Typological changes can occur in a way such that typologically unnatural intermediate states are avoided. To incorporate these hypotheses, we propose continuous space representations of linguistic typology. Specifically, we use an autoencoder (see (Bengio, 2009) for a review) to map each language into the latent space. In analogy with principal component analysis (PCA), each element of the encoded vector is referred to as a component.
We combine the autoencoder with a typology evaluator that distinguishes typologically natural languages from other possible combinations of features.", "cite_spans": [ { "start": 371, "end": 384, "text": "(Bengio, 2009", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Armed with the typology evaluator, we perform phylogenetic inference in the continuous space. The evaluator ensures that inferred ancestors are also typologically natural. The inference procedure is guided by known language families so that each component's stability with respect to evolutionary history can be learned. To evaluate the proposed method, we hide some trees to see how well they are reconstructed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Lastly, we build a binary tree on top of known language families. This experiment is based on the controversial assumption that the world's languages descend from one common ancestor. Our goal here is not to address the validity of the monogenesis hypothesis. Rather, we address the questions of what the common ancestor would have looked like if it existed and how modern languages have evolved from it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In linguistic typology, much attention has been given to non-tree-like evolution (Trubetzkoy, 1928). Daum\u00e9 III (2009) incorporated linguistic areas into a phylogenetic model and reported that the extended model outperformed a simple tree model. This result motivates us to use known language families for supervision rather than to perform phylogenetic inference in purely unsupervised settings.", "cite_spans": [ { "start": 81, "end": 99, "text": "(Trubetzkoy, 1928)", "ref_id": "BIBREF32" }, { "start": 102, "end": 118, "text": "Daum\u00e9 III (2009)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Dunn et al. (2011) applied a state-process model to reference phylogenetic trees to test whether a pair of features is independent. The model they adopted can hardly be extended to handle multiple features. They separately applied the model to each language family and claimed that most dependencies were lineage-specific rather than universal tendencies. However, each known language family is so shallow in time depth that few feature changes can be observed in it (Croft et al., 2011). We mitigate data sparsity by letting our model share parameters among language families all over the world.", "cite_spans": [ { "start": 462, "end": 482, "text": "(Croft et al., 2011)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The typology database we used is the World Atlas of Language Structures (WALS) (Haspelmath et al., 2005). As of 2014, it contains 2,679 languages and 192 typological features. It covers less than 15% of the possible language/feature pairs, however.", "cite_spans": [ { "start": 79, "end": 104, "text": "(Haspelmath et al., 2005)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Typology Database and Phylogenetic Trees", "sec_num": "3.1" }, { "text": "WALS provides phylogenetic trees, but they only have two layers above individual languages: family and genus.
Language families include Indo-European, Austronesian and Niger-Congo, and genera within Indo-European include Germanic, Indic and Slavic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typology Database and Phylogenetic Trees", "sec_num": "3.1" }, { "text": "For more detailed trees, we used hierarchical classifications provided by Ethnologue (Lewis et al., 2014). The mapping between WALS and Ethnologue was done using ISO 639-3 language codes. We manually corrected some obsolete language codes used by WALS and dropped languages without language codes. We also excluded languages labeled by Ethnologue as Deaf sign language, Mixed language, Creole or Unclassified. For both WALS and Ethnologue trees, we removed intermediate nodes that had only one child. Language isolates were treated as family trees of their own. We obtained 193 family trees for WALS and 189 for Ethnologue.", "cite_spans": [ { "start": 74, "end": 105, "text": "Ethnologue (Lewis et al., 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Typology Database and Phylogenetic Trees", "sec_num": "3.1" }, { "text": "We made no further modifications to the trees although we were aware that some language families and their subgroups were highly controversial. In future work, the Altaic language family, for example, should be disassembled into Turkic, Mongolic and Tungusic to test if the Altaic hypothesis is valid (Vovin, 2005).", "cite_spans": [ { "start": 305, "end": 318, "text": "(Vovin, 2005)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Typology Database and Phylogenetic Trees", "sec_num": "3.1" }, { "text": "Next, we removed features with low coverage. Some features such as \"Inclusive/Exclusive Forms in Pama-Nyungan\" (39B) and \"Irregular Negatives in Sign Languages\" (139A) were not supposed to cover the world. We selected 98 features that covered at least 10% of languages. 1 We used the original, categorical feature values. The mergers of some fine-grained feature values seem desirable (Daum\u00e9 III and Campbell, 2007; Greenhill et al., 2010; Dunn et al., 2011). Some features like \"Consonant Inventories\" might be better represented as real-valued features. We leave them for future work.", "cite_spans": [ { "start": 270, "end": 271, "text": "1", "ref_id": null }, { "start": 385, "end": 415, "text": "(Daum\u00e9 III and Campbell, 2007;", "ref_id": "BIBREF5" }, { "start": 416, "end": 439, "text": "Greenhill et al., 2010;", "ref_id": "BIBREF14" }, { "start": 440, "end": 458, "text": "Dunn et al., 2011)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Typology Database and Phylogenetic Trees", "sec_num": "3.1" }, { "text": "In the end, we created two sets of data. The first set, PARTIAL, was used to train the typology evaluator. We selected 887 languages that covered at least 30% of features. The second set, FULL, was for phylogenetic inference. We chose language families in each of which at least 30% of features were covered by one or more languages in the family. The numbers of language families (including language isolates) were reduced to 103 for WALS and 110 for Ethnologue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typology Database and Phylogenetic Trees", "sec_num": "3.1" }, { "text": "We imputed missing data using the R package missMDA (Josse et al., 2012), which handles missing values using multiple correspondence analysis (MCA).
Specifically, we used the imputeMCA function to predict missing feature values. The substituted data are used (1) to train the typology evaluator and (2) to initialize phylogenetic inference.", "cite_spans": [ { "start": 53, "end": 73, "text": "(Josse et al., 2012)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Missing Data Imputation", "sec_num": "3.2" }, { "text": "To evaluate the performance of missing data imputation, we hid some known features to see how well they were predicted. A 10-fold cross-validation test using the PARTIAL dataset showed that 64.6% of feature values were predicted correctly. It considerably outperformed (1) the random baseline of 22.4% and (2) the most-frequent-value baseline of 28.1%. Thus our assumption of dependencies among features was confirmed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Missing Data Imputation", "sec_num": "3.2" }, { "text": "We use a combination of an autoencoder, which transforms typological features into continuous latent components, and an energy-based model, which evaluates how typologically natural a given feature vector is.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typology Evaluator", "sec_num": "4" }, { "text": "We begin with the autoencoder. Figure 1 shows various representations of a language. The original feature representation v is a vector of categorical features. v is binarized into $x \\in \\{0, 1\\}^{d_0}$ in a one-versus-rest manner. x is mapped by an encoder to a latent representation $h \\in [0, 1]^{d_1}$, where $d_1$ is the dimension of the latent space:", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 39, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Typology Evaluator", "sec_num": "4" }, { "text": "$h = s(W_e x + b_e)$,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typology Evaluator", "sec_num": "4" }, { "text": "where s is the sigmoid function, and matrix $W_e$ and vector $b_e$ are weight parameters to be estimated. A decoder then maps h back to $x'$ through a similar transformation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typology Evaluator", "sec_num": "4" }, { "text": "$x' = s(W_d h + b_d)$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typology Evaluator", "sec_num": "4" }, { "text": "We use tied weights: $W_d = W_e^T$. Note that $x'$ is a real vector. To recover a categorical vector, we need to first binarize $x'$ according to categorical constraints and then to debinarize the resultant vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typology Evaluator", "sec_num": "4" }, { "text": "The training objective of the autoencoder alone is to minimize the cross-entropy of reconstruction:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typology Evaluator", "sec_num": "4" }, { "text": "$L_{AE}(x, x') = -\\sum_{k=1}^{d_0} \\left[ x_k \\log x'_k + (1 - x_k) \\log(1 - x'_k) \\right]$,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typology Evaluator", "sec_num": "4" }, { "text": "where $x_k$ is the k-th element of x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typology Evaluator", "sec_num": "4" },
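To make the encoder, the tied-weight decoder, and the reconstruction loss concrete, here is a minimal NumPy sketch. The dimensions follow Section 4.1 ($d_0 = 539$, $d_1 = 100$) and the weight names follow the equations above; the random initialization and the dummy input are placeholder assumptions, not the trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d0, d1 = 539, 100                # binarized feature dim and latent dim (Sec. 4.1)
rng = np.random.default_rng(0)
W_e = rng.normal(0.0, 0.01, size=(d1, d0))   # encoder weights; decoder is tied: W_d = W_e.T
b_e = np.zeros(d1)                           # encoder bias
b_d = np.zeros(d0)                           # decoder bias

def encode(x):
    # binary feature vector x in {0,1}^d0 -> latent components h in [0,1]^d1
    return sigmoid(W_e @ x + b_e)

def decode(h):
    # latent vector h -> real-valued reconstruction x' in (0,1)^d0
    return sigmoid(W_e.T @ h + b_d)

def reconstruction_loss(x, x_prime, eps=1e-12):
    # cross-entropy L_AE(x, x'), summed over the d0 binary dimensions
    return -np.sum(x * np.log(x_prime + eps) + (1.0 - x) * np.log(1.0 - x_prime + eps))

x = (rng.random(d0) < 0.1).astype(float)     # dummy binarized language vector
print(reconstruction_loss(x, decode(encode(x))))
```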
{ "text": "Next, we plug an energy-based model into the autoencoder. It gives a probability to x:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typology Evaluator", "sec_num": "4" }, { "text": "$p(x) = \\frac{\\exp(W_s^T g)}{\\sum_{x'} \\exp(W_s^T g')}$, $g = s(W_l h + b_l)$,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typology Evaluator", "sec_num": "4" }, { "text": "where vector $W_s$, matrix $W_l$ and bias term $b_l$ are the weights to be estimated. h is mapped to $g \\in [0, 1]^{d_2}$ before evaluation. This transformation is motivated by our speculation that typologically natural languages may not be linearly separable from unnatural ones in the latent space, since biplots of principal components of PCA often show sinusoidal waves (Novembre and Stephens, 2008). The denominator sums over all possible states of $x'$, including those which violate categorical constraints. By maximizing the average log probability of training data, we can distinguish typologically natural languages from other possible combinations of features. Given a set of N languages with missing data imputed, 2 our training objective is to maximize the following:", "cite_spans": [ { "start": 362, "end": 391, "text": "(Novembre and Stephens, 2008)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Typology Evaluator", "sec_num": "4" }, { "text": "$\\sum_{i=1}^{N} \\left( -L_{AE}(x_i, x'_i) + C \\log p(x_i) \\right)$,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typology Evaluator", "sec_num": "4" }, { "text": "where C is some constant. Weights are optimized by the gradient-based AdaGrad algorithm (Duchi et al., 2011) with mini-batches. A problem with this optimization is that the derivative of the second term contains an expectation that involves a summation over all possible states of $x'$, which is computationally intractable. Inspired by contrastive divergence (Hinton, 2002), we do not compute the expectation exactly but approximate it with a few negative samples collected from Gibbs samplers.", "cite_spans": [ { "start": 88, "end": 107, "text": "(Duchi et al., 2011", "ref_id": "BIBREF8" }, { "start": 361, "end": 375, "text": "(Hinton, 2002)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Typology Evaluator", "sec_num": "4" }, { "text": "To analyze the continuous space representations, we generated mixtures of two languages, which were potential candidates for their common ancestor. The pair of languages A and B was mixed in two ways. First, we replaced elements of A's categorical vector $v_A$ with those of $v_B$, with a specified probability. We repeated this procedure 1,000 times to obtain a mean and a standard deviation. Second, we applied linear interpolation to the two vectors $h_A$ and $h_B$ and mapped the resultant vector to $v'$. In this experiment, $d_0 = 539$, and we set $d_1 = 100$ and $d_2 = 10$. Figure 2 shows the case of the Austroasiatic languages. In the original, categorical representations, the mixtures of two languages form a deep valley (i.e., typologically unnatural intermediate states). By contrast, the continuous space representations allow a language to change into another without harming typological naturalness. This indicates that in the continuous space, we can easily reconstruct typologically natural ancestors.", "cite_spans": [], "ref_spans": [ { "start": 557, "end": 565, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Mixing Languages: An Experiment", "sec_num": "4.1" },
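As an illustration of the second mixing procedure, here is a sketch continuing the previous snippet (it reuses `rng`, `d0`, `encode` and `decode`): it interpolates the latent vectors of two languages and snaps the decoded midpoint back onto one-of-K feature blocks. The two input languages and the block layout `feature_blocks` are dummy assumptions.

```python
def snap_to_categorical(x_prime, feature_blocks):
    # enforce one-of-K constraints: within each feature's block of binary
    # columns, keep only the highest-scoring value
    x = np.zeros_like(x_prime)
    for start, end in feature_blocks:
        x[start + np.argmax(x_prime[start:end])] = 1.0
    return x

x_a = (rng.random(d0) < 0.1).astype(float)        # placeholder language A
x_b = (rng.random(d0) < 0.1).astype(float)        # placeholder language B
feature_blocks = [(i, i + 7) for i in range(0, d0, 7)]  # dummy one-of-K layout

h_a, h_b = encode(x_a), encode(x_b)
for t in np.linspace(0.0, 1.0, 11):
    h_mix = (1.0 - t) * h_a + t * h_b             # linear interpolation in latent space
    v_mix = snap_to_categorical(decode(h_mix), feature_blocks)
```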
{ "text": "The major feature changes include \"postpositional\" to \"prepositional\" (0.46-0.47), \"strongly suffixing\" to \"little affixation\" (0.53-0.54) and \"SOV\" to \"SVO\" (0.60-0.61).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mixing Languages: An Experiment", "sec_num": "4.1" }, { "text": "We use continuous space representations and the typology evaluator for phylogenetic inference. Our strategy is to find a tree in which (1) nodes are typologically natural and (2) edges are short, following the principle of Occam's razor. The first point is realized by applying the typology evaluator. To implement the second point, we define a probability distribution over a parent-to-child move in the continuous space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Model", "sec_num": "5.1" }, { "text": "We assume that latent components are independent. For the k-th component, the node's value $h_k$ is drawn from a Normal distribution with mean $h_k^P$ (its parent's value) and precision $\\lambda_k$ (inverse variance). The further the node moves, the smaller the probability it receives. Precision controls each component's stability with respect to evolutionary history.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Model", "sec_num": "5.1" }, { "text": "We set a gamma prior over $\\lambda_k$, with hyperparameters $\\alpha$ and $\\beta$. 3 Taking advantage of the conjugacy property, we marginalize out $\\lambda_k$. Suppose that we have drawn n samples and let $m_i$ be the difference between the i-th node and its parent, $h_k - h_k^P$. Then the posterior hyperparameters are $\\alpha_n = \\alpha + n/2$ and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Model", "sec_num": "5.1" }, { "text": "$\\beta_n = \\beta + \\frac{1}{2} \\sum_{i=1}^{n} m_i^2$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Model", "sec_num": "5.1" }, { "text": "The posterior predictive distribution is Student's t-distribution (Murphy, 2007):", "cite_spans": [ { "start": 66, "end": 80, "text": "(Murphy, 2007)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Tree Model", "sec_num": "5.1" }, { "text": "$p_k(h_k | h_k^P, M_{hist}, \\alpha, \\beta) = t_{2\\alpha_n}(h_k | h_k^P, \\sigma^2 = \\beta_n / \\alpha_n)$,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Model", "sec_num": "5.1" }, { "text": "where $M_{hist}$ is a collection of $\\alpha$, $\\beta$ and a history of previously observed differences. The probability of a parent-to-child move is the product of the probabilities of its component moves:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Model", "sec_num": "5.1" }, { "text": "$p_{MOVE}(h | h^P, M_{hist}) = \\prod_{k=1}^{d_1} p_k(h_k | h_k^P, M_{hist})$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Model", "sec_num": "5.1" }, { "text": "The root node is drawn from a uniform distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Model", "sec_num": "5.1" },
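A small sketch of the per-component move probability under the conjugate updates above; `scipy.stats.t` supplies the Student's t density, and the hyperparameters follow footnote 3 ($\alpha = \beta = 0.1$). The difference histories are assumed to be plain lists of previously observed $m_i$ values.

```python
import numpy as np
from scipy.stats import t as student_t

ALPHA, BETA = 0.1, 0.1   # gamma prior hyperparameters (footnote 3)

def component_log_prob(h_k, h_parent_k, diff_history):
    # posterior hyperparameters after n observed differences m_i
    m = np.asarray(diff_history, dtype=float)
    alpha_n = ALPHA + len(m) / 2.0
    beta_n = BETA + 0.5 * np.sum(m ** 2)
    # Student's t predictive: 2*alpha_n degrees of freedom, centered at the
    # parent's value, with sigma^2 = beta_n / alpha_n
    return student_t.logpdf(h_k, df=2.0 * alpha_n,
                            loc=h_parent_k, scale=np.sqrt(beta_n / alpha_n))

def move_log_prob(h, h_parent, histories):
    # log p_MOVE factorizes over the d1 latent components
    return sum(component_log_prob(h[k], h_parent[k], histories[k])
               for k in range(len(h)))
```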
{ "text": "To sum up, the probability of a phylogenetic tree $\\tau$ is given by $p_{EVAL}(\\tau) \\times p_{CONT}(\\tau)$, where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Model", "sec_num": "5.1" }, { "text": "$p_{EVAL}(\\tau) = \\mathrm{Uniform}(\\tau) \\prod_{x \\in \\mathrm{nodes}(\\tau)} p(x)$, $p_{CONT}(\\tau) = \\mathrm{Uniform}(\\mathrm{root}) \\times \\prod_{(h, h^P) \\in \\mathrm{edges}(\\tau)} p_{MOVE}(h | h^P, M_{hist})$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Model", "sec_num": "5.1" }, { "text": "$\\mathrm{nodes}(\\tau)$ is the set of nodes in $\\tau$, and $\\mathrm{edges}(\\tau)$ is the set of edges in $\\tau$. We abuse notation as $M_{hist}$ is updated each time a node is observed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Model", "sec_num": "5.1" }, { "text": "Given observed data, we aim at reconstructing the best phylogenetic tree. The data observed are (1) leaves (with some missing feature values) and (2) some tree topologies. We need to infer (1) the missing feature values of leaves, (2) the latent components of internal nodes including the root and (3) the remaining portion of tree topologies. Since leaves are tied to observed categorical vectors, our inference procedures also work on them. We map categorical vectors into the latent space every time we attempt to change a feature value. By contrast, we adopt latent vectors as the primary representations of internal nodes. Take the Indo-European language family for example. Its tree topology is given, but the states of its internal nodes such as Indo-European, Germanic and Indic need to be inferred. Dutch has some missing feature values. Although they have been imputed with multiple correspondence analysis, its close relatives such as Danish and German might be helpful for better estimation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "5.2" }, { "text": "We need to infer portions of tree topologies even though a set of trees (language families) is given. To evaluate the performance of phylogenetic inference, we hide some trees to see how well they are reconstructed. To reconstruct a common ancestor of the world's languages, we build a binary tree on top of the set of trees. Note that while we only infer binary trees, a node may have more than two children in the fixed portions of tree topologies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "5.2" }, { "text": "We use Gibbs sampling for inference. We define four operators, CAT, COMP, SWAP and MOVE. The first three operators correspond to missing feature values, latent components and tree topologies, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "5.2" }, { "text": "CAT - For the target categorical feature of a leaf node, we sample from K possible values. Let $x'$ be a binary feature representation with the target feature value altered, let $h^P$ be the state of the node's parent, and let $h' = s(W_e x' + b_e)$. The probability of choosing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "5.2" }, { "text": "$x'$ is proportional to $p(x') \\, p_{MOVE}(h' | h^P, M_{hist})$,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "5.2" }, { "text": "where h is removed from the history. The second term is omitted if the target node has no parent. 4", "cite_spans": [ { "start": 98, "end": 99, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "5.2" },
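A sketch of the CAT operator, continuing the earlier snippets (`encode` and `move_log_prob`): each of the K candidate values for the target feature is scored by the evaluator term times the move probability from the parent, and one is sampled in proportion. `unnorm_p`, standing in for the evaluator's unnormalized p(x'), and the feature's column block are assumed inputs.

```python
def gibbs_cat(x, block, h_parent, histories, unnorm_p, rng):
    # block = (start, end) delimits the K binary columns of the target feature
    start, end = block
    candidates, log_scores = [], []
    for k in range(start, end):
        x_new = x.copy()
        x_new[start:end] = 0.0
        x_new[k] = 1.0                        # set the feature to this value
        h_new = encode(x_new)
        score = np.log(unnorm_p(x_new))       # evaluator term p(x')
        if h_parent is not None:              # move term; omitted at the root
            score += move_log_prob(h_new, h_parent, histories)
        candidates.append(x_new)
        log_scores.append(score)
    log_scores = np.asarray(log_scores)
    probs = np.exp(log_scores - np.logaddexp.reduce(log_scores))
    return candidates[rng.choice(len(candidates), p=probs)]
```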
{ "text": "COMP - For the target k-th component of an internal node, we choose its new value using the Metropolis algorithm. It stochastically proposes a new state and accepts it with some probability. If the proposal is rejected, the current state is reused as the next state. The proposal distribution $Q(h'_k | h_k)$ is a Gaussian distribution centered at $h_k$. The acceptance probability is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "5.2" }, { "text": "$a(h_k, h'_k) = \\min(1, P(h'_k) / P(h_k))$, where $P(h'_k)$ is defined as $P(h'_k) = p(x') \\, p_{MOVE}(h' | h^P, M_{hist}) \\prod_{h^C \\in \\mathrm{children}(h')} p_{MOVE}(h^C | h', M_{hist})$,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "5.2" }, { "text": "where $\\mathrm{children}(h')$ is the set of the target node's children. SWAP - For the target internal node (which cannot be the root), we use the Metropolis-Hastings algorithm to locally rearrange its neighborhood in a way similar to Li et al. (2000). We first propose a new state as illustrated in Figure 3. The target node has a parent P, a sibling S and two children C1 and C2. From among S, C1 and C2, we choose two nodes. If C1 and C2 are chosen, the topology remains the same; otherwise S is swapped for one of the node's children. It is shown that one topology can be transformed into any other topology in a finite number of steps (Li et al., 2000).", "cite_spans": [ { "start": 225, "end": 241, "text": "Li et al. (2000)", "ref_id": "BIBREF23" }, { "start": 632, "end": 649, "text": "(Li et al., 2000)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 291, "end": 299, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Inference", "sec_num": "5.2" }, { "text": "To improve mobility, we also move the target node toward C1, C2 or S, depending on the proposed topology. Here the selected node is denoted by $*$. We first draw $r'$ from a log-normal distribution whose underlying Gaussian distribution has mean $-1$ and variance 1. The target's proposed state is $h' = (1 - r')h + r' h^*$. $r'$ can be greater than 1, and in that case, the proposed state $h'$ is more distant from $h^*$ than the current state h. This ensures that the transition is reversible because $r = 1/r'$. The acceptance probability can be calculated in a similar manner to that described for COMP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "5.2" }, { "text": "MOVE - Propose to move the target internal node, without swapping its neighbors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "5.2" }, { "text": "For initialization, missing feature values are imputed by missMDA. The initial tree is constructed by distance-based agglomerative clustering. The state of an internal node is set to the average of those of its children. We first conducted a quantitative evaluation of phylogenetic inference, using known family trees. We ran 5-fold cross-validations. For each of WALS and Ethnologue, we subdivided the set of language families into 5 subsets with roughly the same number of leaves. Because of some huge language families, the number of language families per subset was uneven. We disassembled the family trees in the target subset and let the model reconstruct a binary tree for each language family.
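The text does not spell out how the 5 subsets were balanced; as a sketch under that caveat, a greedy largest-first binning like the following produces subsets with roughly the same number of leaves, which is all the protocol requires. The `families` mapping (family name to leaf count) is an assumed input.

```python
def subdivide(families, n_subsets=5):
    # families: dict family_name -> number of leaf languages
    subsets = [[] for _ in range(n_subsets)]
    sizes = [0] * n_subsets
    # place each family, largest first, into the currently lightest subset
    for name, leaves in sorted(families.items(), key=lambda kv: -kv[1]):
        i = sizes.index(min(sizes))
        subsets[i].append(name)
        sizes[i] += leaves
    return subsets
```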
Unlike ordinary held-out evaluation, this experiment used all data for inference at once.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "5.2" }, { "text": "We used the parameter settings described in Section 4.1. For phylogenetic inference, we ran 9,000 burn-in iterations, after which we collected 100 samples at an interval of 10 iterations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Settings", "sec_num": "6.1.2" }, { "text": "For comparison, we performed average-link agglomerative clustering (ALC). It has two variants, ALC-CAT and ALC-CONT. ALC-CAT worked on categorical features and used the ratio of disagreement as a distance metric. ALC-CONT performed clustering in the continuous space, using cosine distance. In other words, we can examine the effects of the typology evaluator and precision parameters. For these models, missing feature values were imputed by missMDA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Settings", "sec_num": "6.1.2" }, { "text": "We present purity (Heller and Ghahramani, 2005), subtree (Teh et al., 2008) and outlier fraction scores (Krishnamurthy et al., 2012). All scores are between 0 and 1, and higher scores are better. We calculated these scores for each language family and report macro- and micro-averages.", "cite_spans": [ { "start": 18, "end": 47, "text": "(Heller and Ghahramani, 2005)", "ref_id": "BIBREF17" }, { "start": 58, "end": 76, "text": "(Teh et al., 2008)", "ref_id": "BIBREF31" }, { "start": 105, "end": 133, "text": "(Krishnamurthy et al., 2012)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Measures", "sec_num": "6.1.3" }, { "text": "Table 2: Results of the reconstruction of known family trees. Macro-averages are followed by micro-averages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Measures", "sec_num": "6.1.3" }, { "text": "Only non-trivial family trees (trees with more than two children) were considered. Purity and subtree scores compare inferred trees with gold-standard class labels. In WALS, genera were treated as class labels because they were the only intermediate layer between families and leaves. By contrast, Ethnologue provided more complex trees, and we were unable to assign one class label to each language. For this reason, only outlier fraction scores are reported for Ethnologue. Table 2 shows the scores for reconstructed family trees. The proposed method outperformed the baselines in 5 out of 8 metrics. The three methods performed almost equally for Ethnologue. We suspect that typological features reflect long-term trends in comparison to Ethnologue's fine-grained classification. For WALS, the proposed method was beaten by average-link agglomerative clustering only in the macro-average of subtree scores. One possible explanation is the randomness of the proposed method. Apparently, random sampling distributed errors more evenly than deterministic clustering. It was penalized more often by subtree scores because they required that all leaves of an internal node belong to the same class.", "cite_spans": [], "ref_spans": [ { "start": 508, "end": 515, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Measures", "sec_num": "6.1.3" },
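For reference, a minimal sketch of the purity score in its standard flat-clustering form, assuming the inferred tree has been cut into clusters and that the gold labels are WALS genera; the subtree and outlier fraction scores follow the cited papers and are omitted here.

```python
from collections import Counter

def purity(cluster_of, label_of):
    # cluster_of: leaf -> inferred cluster id; label_of: leaf -> gold genus label
    clusters = {}
    for leaf, c in cluster_of.items():
        clusters.setdefault(c, []).append(label_of[leaf])
    # each cluster contributes the count of its majority label
    majority = sum(Counter(labels).most_common(1)[0][1]
                   for labels in clusters.values())
    return majority / len(cluster_of)

# toy example: two Germanic languages clustered together, Indic and Slavic mixed
print(purity({"dan": 0, "deu": 0, "hin": 1, "rus": 1},
             {"dan": "Germanic", "deu": "Germanic",
              "hin": "Indic", "rus": "Slavic"}))   # -> 0.75
```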
{ "text": "We reconstructed a single tree that covers the world. To do this, we build a binary tree on top of known language families, a product of historical linguistics. It is generally said that historical linguistics cannot go far beyond 6,000-7,000 years (Nichols, 2011). Here we attempt to break through this wall.", "cite_spans": [ { "start": 249, "end": 263, "text": "(Nichols, 2011", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Reconstruction of a Common Ancestor of the World's Languages", "sec_num": "6.2" }, { "text": "It is no surprise that this experiment is full of problems and difficulties. No quantitative evaluation is possible. The underlying assumptions are questionable. No one knows for sure if there was such a thing as one common ancestor of all modern languages. Moreover, the language capacity of humans, in addition to languages themselves, is likely to have evolved over time (Nichols, 2011). This casts doubt on the applicability of the typology evaluator, which is trained on modern languages, to languages of the far distant past. Nevertheless, it is fascinating to make inferences about the world's ancestral languages. We used Ethnologue as the known tree topologies. For Gibbs sampling, we ran 3,000 burn-in iterations, after which we collected 100 samples at an interval of 10 iterations. Figure 4 shows a reconstructed tree. To summarize multiple sample trees, we constructed a maximum clade credibility tree. For each clade (a set of all leaves that share a common ancestor), we calculated the fraction of times it appears in the collected samples, which we call a support in this paper. A tree was scored by the product of the supports of all clades within it, and we created a tree that maximized the score. Each edge label shows the support of the corresponding clade. As indicated by the generally low supports, the sample trees were very unstable. Some geographically distant groups of languages were clustered near the bottom. We partially attribute this to the underspecificity of linguistic typology: even if a pair of languages shares the same feature vector, they are not necessarily the same language. This problem might be eased by incorporating geospatial information into phylogenetic inference (Bouckaert et al., 2012). Table 3 shows some features of the root. The reconstructed ancestor is moderate in phonological typology, uses suffixing in morphology and prefers the SOV word order. The inferred word order agrees with speculations given by previous studies (Maurits and Griffiths, 2014). Figure 5 shows the histogram of variance parameters. Some latent components had smaller variances and thus were more stable with respect to evolutionary history. Figure 6 displays languages using the components with the two smallest variances. Unlike PCA plots, the data concentrate at the edges.", "cite_spans": [ { "start": 368, "end": 383, "text": "(Nichols, 2011)", "ref_id": "BIBREF27" }, { "start": 1769, "end": 1793, "text": "(Bouckaert et al., 2012)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 780, "end": 788, "text": "Figure 4", "ref_id": "FIGREF4" }, { "start": 1078, "end": 1085, "text": "Table 3", "ref_id": null }, { "start": 1796, "end": 1803, "text": "Table 3", "ref_id": null }, { "start": 2069, "end": 2077, "text": "Figure 5", "ref_id": null }, { "start": 2231, "end": 2239, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Reconstruction of a Common Ancestor of the World's Languages", "sec_num": "6.2" }, { "text": "Table 3: Some features of the world's ancestor with sample frequencies.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Reconstruction of a Common Ancestor of the World's Languages", "sec_num": "6.2" }, { "text": "Table 4: Modern languages ranked by their similarity to Japanese.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Reconstruction of a Common Ancestor of the World's Languages", "sec_num": "6.2" }, { "text": "We used a geometric mean of $p_{MOVE}$ over multiple samples to calculate how similar a modern language is to another. The case of Japanese is shown in Table 4.
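A sketch of this similarity score, reusing `move_log_prob` from the earlier snippet: averaging log move probabilities across the collected samples and exponentiating gives the geometric mean. The containers `latent` (language code to list of sampled latent vectors) and `histories` (per-sample difference histories) are assumed to come from the inference run.

```python
def similarity(lang_a, lang_b, latent, histories):
    # geometric mean over samples of p_MOVE(h_b | h_a) = exp(mean of log-probs)
    logs = [move_log_prob(h_b, h_a, hist)
            for h_a, h_b, hist in zip(latent[lang_a], latent[lang_b], histories)]
    return float(np.exp(np.mean(logs)))

# rank all other languages by their similarity to Japanese
ranked = sorted(set(latent) - {"jpn"},
                key=lambda lang: similarity("jpn", lang, latent, histories),
                reverse=True)
```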
This ranked list is considerably different from one based on the disagreement rates of categorical vectors (Spearman's \u03c1 = 0.76). When features' stability with respect to evolutionary history is considered, Japanese is less close to Korean and Ainu than to some Tibeto-Burman languages south of the Himalayas. As the importance of these minor languages of Northeast India is recognized, the Sino-Tibetan tree might be drastically revised in the future (Blench and Post, 2013). The least similar languages include the Malayo-Polynesian and Nilo-Saharan languages.", "cite_spans": [ { "start": 665, "end": 688, "text": "(Blench and Post, 2013)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 93, "end": 100, "text": "Table 4", "ref_id": null }, { "start": 211, "end": 218, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Reconstruction of a Common Ancestor of the World's Languages", "sec_num": "6.2" }, { "text": "In this paper, we proposed continuous space representations of linguistic typology and used them for phylogenetic inference. Feature dependencies are a major focus of linguistic typology, and typology data have occasionally been used for computational phylogenetics. To our knowledge, however, we are the first to integrate the two lines of research. In addition, the continuous space representations underlying interdependent discrete features are applicable to other data, including phonological inventories (Moran et al., 2014). We believe that typology provides important clues for long-term language change. The currently available database only contains modern languages, but we expect that data on some ancestral languages could greatly facilitate computational approaches to diachronic linguistics.", "cite_spans": [ { "start": 509, "end": 529, "text": "(Moran et al., 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Additional cleanup is needed. For example, the high-coverage feature \"The Position of Negative Morphemes in SOV Languages\" (144L) is not defined for non-SOV languages. A natural solution is to add another feature value (Undefined).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We tried a joint inference of weight optimization and missing data imputation but dropped it because of its instability. A cross-validation test revealed that the joint inference caused a big accuracy drop in missing data imputation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In the experiments, we set \u03b1 = \u03b2 = 0.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It is easy to extend the operator to handle internal nodes supplied with some categorical features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was partly supported by JSPS KAKENHI Grant Number 26730122.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning deep architectures for AI", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2009, "venue": "Foundations and Trends in Machine Learning", "volume": "2", "issue": "", "pages": "1--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio. 2009. Learning deep architectures for AI.
Foundations and Trends in Machine Learning, 2(1):1-127.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Rethinking Sino-Tibetan phylogeny from the perspective of North East Indian languages", "authors": [ { "first": "Roger", "middle": [], "last": "Blench", "suffix": "" }, { "first": "Mark", "middle": [ "W" ], "last": "Post", "suffix": "" } ], "year": 2013, "venue": "Trans-Himalayan Linguistics", "volume": "", "issue": "", "pages": "71--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roger Blench and Mark W. Post. 2013. Rethinking Sino-Tibetan phylogeny from the perspective of North East Indian languages. In Nathan Hill and Tom Owen-Smith, editors, Trans-Himalayan Linguistics, pages 71-104. De Gruyter.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automated reconstruction of ancient languages using probabilistic models of sound change", "authors": [ { "first": "Alexandre", "middle": [], "last": "Bouchard-C\u00f4t\u00e9", "suffix": "" }, { "first": "David", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Thomas", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2013, "venue": "", "volume": "110", "issue": "", "pages": "4224--4229", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandre Bouchard-C\u00f4t\u00e9, David Hall, Thomas L. Griffiths, and Dan Klein. 2013. Automated reconstruction of ancient languages using probabilistic models of sound change. PNAS, 110(11):4224-4229.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Mapping the origins and expansion of the Indo-European language family", "authors": [ { "first": "Remco", "middle": [], "last": "Bouckaert", "suffix": "" }, { "first": "Philippe", "middle": [], "last": "Lemey", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Dunn", "suffix": "" }, { "first": "Simon", "middle": [ "J" ], "last": "Greenhill", "suffix": "" }, { "first": "Alexander", "middle": [ "V" ], "last": "Alekseyenko", "suffix": "" }, { "first": "Alexei", "middle": [ "J" ], "last": "Drummond", "suffix": "" }, { "first": "Russell", "middle": [ "D" ], "last": "Gray", "suffix": "" }, { "first": "Marc", "middle": [ "A" ], "last": "Suchard", "suffix": "" }, { "first": "Quentin", "middle": [ "D" ], "last": "Atkinson", "suffix": "" } ], "year": 2012, "venue": "Science", "volume": "337", "issue": "6097", "pages": "957--960", "other_ids": {}, "num": null, "urls": [], "raw_text": "Remco Bouckaert, Philippe Lemey, Michael Dunn, Simon J. Greenhill, Alexander V. Alekseyenko, Alexei J. Drummond, Russell D. Gray, Marc A. Suchard, and Quentin D. Atkinson. 2012. Mapping the origins and expansion of the Indo-European language family. Science, 337(6097):957-960.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Greenbergian universals, diachrony, and statistical analyses", "authors": [ { "first": "William", "middle": [], "last": "Croft", "suffix": "" }, { "first": "Tanmoy", "middle": [], "last": "Bhattacharya", "suffix": "" }, { "first": "Dave", "middle": [], "last": "Kleinschmidt", "suffix": "" }, { "first": "D", "middle": [ "Eric" ], "last": "Smith", "suffix": "" }, { "first": "T", "middle": [ "Florian" ], "last": "Jaeger", "suffix": "" } ], "year": 2011, "venue": "Linguistic Typology", "volume": "15", "issue": "2", "pages": "433--453", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Croft, Tanmoy Bhattacharya, Dave Kleinschmidt, D. Eric Smith, and T. Florian Jaeger. 2011.
Greenbergian universals, diachrony, and statistical analyses. Linguistic Typology, 15(2):433-453.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Bayesian model for discovering typological implications", "authors": [ { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Lyle", "middle": [], "last": "Campbell", "suffix": "" } ], "year": 2007, "venue": "ACL", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daum\u00e9 III and Lyle Campbell. 2007. A Bayesian model for discovering typological implications. In ACL, pages 65-72.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Non-parametric Bayesian areal linguistics", "authors": [ { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2009, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "593--601", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daum\u00e9 III. 2009. Non-parametric Bayesian areal linguistics. In HLT-NAACL, pages 593-601.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Rhythm and the synthetic drift of Munda", "authors": [ { "first": "Patricia", "middle": [], "last": "Donegan", "suffix": "" }, { "first": "David", "middle": [], "last": "Stampe", "suffix": "" } ], "year": 2004, "venue": "The Yearbook of South Asian Languages and Linguistics", "volume": "", "issue": "", "pages": "3--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patricia Donegan and David Stampe. 2004. Rhythm and the synthetic drift of Munda. In Rajendra Singh, editor, The Yearbook of South Asian Languages and Linguistics, pages 3-36. Mouton de Gruyter.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2011, "venue": "The Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121-2159.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Evolved structure of language shows lineage-specific trends in word-order universals", "authors": [ { "first": "Michael", "middle": [], "last": "Dunn", "suffix": "" }, { "first": "Simon", "middle": [ "J" ], "last": "Greenhill", "suffix": "" }, { "first": "Stephen", "middle": [ "C" ], "last": "Levinson", "suffix": "" }, { "first": "Russell", "middle": [ "D" ], "last": "Gray", "suffix": "" } ], "year": 2011, "venue": "Nature", "volume": "473", "issue": "7345", "pages": "79--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Dunn, Simon J. Greenhill, Stephen C. Levinson, and Russell D. Gray. 2011. Evolved structure of language shows lineage-specific trends in word-order universals.
Nature, 473(7345):79-82.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Comparing language similarity across genetic and typologically-based groupings", "authors": [ { "first": "Ryan", "middle": [], "last": "Georgi", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" }, { "first": "William", "middle": [], "last": "Lewis", "suffix": "" } ], "year": 2010, "venue": "COLING", "volume": "", "issue": "", "pages": "385--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Georgi, Fei Xia, and William Lewis. 2010. Comparing language similarity across genetic and typologically-based groupings. In COLING, pages 385-393.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Language-tree divergence times support the Anatolian theory of Indo-European origin", "authors": [ { "first": "Russell", "middle": [ "D" ], "last": "Gray", "suffix": "" }, { "first": "Quentin", "middle": [ "D" ], "last": "Atkinson", "suffix": "" } ], "year": 2003, "venue": "Nature", "volume": "426", "issue": "6965", "pages": "435--439", "other_ids": {}, "num": null, "urls": [], "raw_text": "Russell D. Gray and Quentin D. Atkinson. 2003. Language-tree divergence times support the Anatolian theory of Indo-European origin. Nature, 426(6965):435-439.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Universals of language", "authors": [ { "first": "Joseph", "middle": [ "H" ], "last": "Greenberg", "suffix": "" } ], "year": 1963, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph H. Greenberg, editor. 1963. Universals of language. MIT Press.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Diachrony, synchrony and language universals", "authors": [ { "first": "Joseph", "middle": [ "H" ], "last": "Greenberg", "suffix": "" } ], "year": 1978, "venue": "", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph H. Greenberg. 1978. Diachrony, synchrony and language universals. In Joseph H. Greenberg, Charles A. Ferguson, and Edith A. Moravesik, editors, Universals of human language, volume 1. Stanford University Press.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The shape and tempo of language evolution", "authors": [ { "first": "Simon", "middle": [ "J" ], "last": "Greenhill", "suffix": "" }, { "first": "Quentin", "middle": [ "D" ], "last": "Atkinson", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Meade", "suffix": "" }, { "first": "Russel", "middle": [ "D" ], "last": "Gray", "suffix": "" } ], "year": 2010, "venue": "Proc. of the Royal Society B", "volume": "277", "issue": "", "pages": "2443--2450", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon J. Greenhill, Quentin D. Atkinson, Andrew Meade, and Russel D. Gray. 2010. The shape and tempo of language evolution. Proc.
of the Royal Society B, 277(1693):2443-2450.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The World Atlas of Language Structures", "authors": [ { "first": "Martin", "middle": [], "last": "Haspelmath", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Dryer", "suffix": "" }, { "first": "David", "middle": [], "last": "Gil", "suffix": "" }, { "first": "Bernard", "middle": [], "last": "Comrie", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Haspelmath, Matthew Dryer, David Gil, and Bernard Comrie, editors. 2005. The World Atlas of Language Structures. Oxford University Press.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Nihongo no keito (The Genealogy of Japanese)", "authors": [ { "first": "Shiro", "middle": [], "last": "Hattori", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shiro Hattori. 1999. Nihongo no keito (The Genealogy of Japanese). Iwanami Shoten.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Bayesian hierarchical clustering", "authors": [ { "first": "Katherine", "middle": [ "A" ], "last": "Heller", "suffix": "" }, { "first": "Zoubin", "middle": [], "last": "Ghahramani", "suffix": "" } ], "year": 2005, "venue": "ICML", "volume": "", "issue": "", "pages": "297--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katherine A. Heller and Zoubin Ghahramani. 2005. Bayesian hierarchical clustering. In ICML, pages 297-304.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Training products of experts by minimizing contrastive divergence", "authors": [ { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2002, "venue": "Neural Computation", "volume": "14", "issue": "8", "pages": "1771--1800", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey E. Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Handling missing values with regularized iterative multiple correspondence analysis", "authors": [ { "first": "Julie", "middle": [], "last": "Josse", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Chavent", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Liquet", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Husson", "suffix": "" } ], "year": 2012, "venue": "Journal of Classification", "volume": "29", "issue": "1", "pages": "91--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julie Josse, Marie Chavent, Beno\u00eet Liquet, and Fran\u00e7ois Husson. 2012. Handling missing values with regularized iterative multiple correspondence analysis. Journal of Classification, 29(1):91-116.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Efficient active algorithms for hierarchical clustering", "authors": [ { "first": "Akshay", "middle": [], "last": "Krishnamurthy", "suffix": "" }, { "first": "Sivaraman", "middle": [], "last": "Balakrishnan", "suffix": "" }, { "first": "Min", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Aarti", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2012, "venue": "ICML", "volume": "", "issue": "", "pages": "887--894", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akshay Krishnamurthy, Sivaraman Balakrishnan, Min Xu, and Aarti Singh. 2012. Efficient active algorithms for hierarchical clustering.
In ICML, pages 887-894.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The uniquely evolved character concept and its cladistic application", "authors": [ { "first": "Walter", "middle": [ "J" ], "last": "Le Quesne", "suffix": "" } ], "year": 1974, "venue": "Systematic Biology", "volume": "23", "issue": "4", "pages": "513--517", "other_ids": {}, "num": null, "urls": [], "raw_text": "Walter J. Le Quesne. 1974. The uniquely evolved character concept and its cladistic application. Systematic Biology, 23(4):513-517.", "links": null },
"BIBREF22": { "ref_id": "b22", "title": "Ethnologue: Languages of the World, 17th Edition", "authors": [ { "first": "M", "middle": [ "Paul" ], "last": "Lewis", "suffix": "" }, { "first": "Gary", "middle": [ "F" ], "last": "Simons", "suffix": "" }, { "first": "Charles", "middle": [ "D" ], "last": "Fennig", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Paul Lewis, Gary F. Simons, and Charles D. Fennig, editors. 2014. Ethnologue: Languages of the World, 17th Edition. SIL International. Online version: http://www.ethnologue.com.", "links": null },
"BIBREF23": { "ref_id": "b23", "title": "Phylogenetic tree construction using Markov chain Monte Carlo", "authors": [ { "first": "Shuying", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dennis", "middle": [ "K" ], "last": "Pearl", "suffix": "" }, { "first": "Hani", "middle": [], "last": "Doss", "suffix": "" } ], "year": 2000, "venue": "Journal of the American Statistical Association", "volume": "95", "issue": "450", "pages": "493--508", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuying Li, Dennis K. Pearl, and Hani Doss. 2000. Phylogenetic tree construction using Markov chain Monte Carlo. Journal of the American Statistical Association, 95(450):493-508.", "links": null },
"BIBREF24": { "ref_id": "b24", "title": "Tracing the roots of syntax with Bayesian phylogenetics", "authors": [ { "first": "Luke", "middle": [], "last": "Maurits", "suffix": "" }, { "first": "Thomas", "middle": [ "L" ], "last": "Griffiths", "suffix": "" } ], "year": 2014, "venue": "PNAS", "volume": "111", "issue": "37", "pages": "13576--13581", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke Maurits and Thomas L. Griffiths. 2014. Tracing the roots of syntax with Bayesian phylogenetics. PNAS, 111(37):13576-13581.", "links": null },
"BIBREF26": { "ref_id": "b26", "title": "Conjugate Bayesian analysis of the Gaussian distribution", "authors": [ { "first": "Kevin", "middle": [ "P" ], "last": "Murphy", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin P. Murphy. 2007. Conjugate Bayesian analysis of the Gaussian distribution. Technical report, University of British Columbia.", "links": null },
"BIBREF27": { "ref_id": "b27", "title": "Monogenesis or polygenesis: A single ancestral language for all humanity?", "authors": [ { "first": "Johanna", "middle": [], "last": "Nichols", "suffix": "" } ], "year": 2011, "venue": "The Oxford Handbook of Language Evolution", "volume": "", "issue": "", "pages": "558--572", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johanna Nichols. 2011. Monogenesis or polygenesis: A single ancestral language for all humanity? In Maggie Tallerman and Kathleen R. Gibson, editors, The Oxford Handbook of Language Evolution, pages 558-572. Oxford University Press.", "links": null },
"BIBREF28": { "ref_id": "b28", "title": "Interpreting principal component analyses of spatial population genetic variation", "authors": [ { "first": "John", "middle": [], "last": "Novembre", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Stephens", "suffix": "" } ], "year": 2008, "venue": "Nature Genetics", "volume": "40", "issue": "5", "pages": "646--649", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Novembre and Matthew Stephens. 2008. Interpreting principal component analyses of spatial population genetic variation. Nature Genetics, 40(5):646-649.", "links": null },
"BIBREF29": { "ref_id": "b29", "title": "How good are typological distances for determining genealogical relationships among languages?", "authors": [ { "first": "Taraka", "middle": [], "last": "Rama", "suffix": "" }, { "first": "Prasanth", "middle": [], "last": "Kolachina", "suffix": "" } ], "year": 2012, "venue": "COLING Posters", "volume": "", "issue": "", "pages": "975--984", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taraka Rama and Prasanth Kolachina. 2012. How good are typological distances for determining genealogical relationships among languages? In COLING Posters, pages 975-984.", "links": null },
"BIBREF30": { "ref_id": "b30", "title": "Lexicostatistic dating of prehistoric ethnic contacts", "authors": [ { "first": "Morris", "middle": [], "last": "Swadesh", "suffix": "" } ], "year": 1952, "venue": "Proc. of American Philosophical Society", "volume": "96", "issue": "", "pages": "452--463", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morris Swadesh. 1952. Lexicostatistic dating of prehistoric ethnic contacts. Proc. of American Philosophical Society, 96:452-463.", "links": null },
"BIBREF31": { "ref_id": "b31", "title": "Bayesian agglomerative clustering with coalescents", "authors": [ { "first": "Yee Whye", "middle": [], "last": "Teh", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "III" }, { "first": "Daniel", "middle": [], "last": "Roy", "suffix": "" } ], "year": 2008, "venue": "NIPS", "volume": "", "issue": "", "pages": "1473--1480", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yee Whye Teh, Hal Daum\u00e9 III, and Daniel Roy. 2008. Bayesian agglomerative clustering with coalescents. In NIPS, pages 1473-1480.", "links": null },
"BIBREF32": { "ref_id": "b32", "title": "Proposition 16", "authors": [ { "first": "Nikolai", "middle": [ "Sergeevich" ], "last": "Trubetzkoy", "suffix": "" } ], "year": 1928, "venue": "Acts of the First International Congress of Linguists", "volume": "", "issue": "", "pages": "17--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikolai Sergeevich Trubetzkoy. 1928. Proposition 16. In Acts of the First International Congress of Linguists, pages 17-18.", "links": null },
"BIBREF33": { "ref_id": "b33", "title": "The end of the Altaic controversy", "authors": [ { "first": "Alexander", "middle": [], "last": "Vovin", "suffix": "" } ], "year": 2005, "venue": "Central Asiatic Journal", "volume": "49", "issue": "1", "pages": "71--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Vovin. 2005. The end of the Altaic controversy.
Central Asiatic Journal, 49(1):71-132.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Representations of a language." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Mixtures of Mundari (a Munda language) and Khmer (a Mon-Khmer language). Transitions from Mundari (leftmost) to Khmer (rightmost). The vertical axis denotes typological naturalness log p(x) + C." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "SWAP operator. The gray circle is the target node. Its parent P, sibling S, and two children C1 and C2 are shown. (a) The current state. (b-e) The proposed states. (b-c) The topology remains the same but the target is moved toward C1 and C2, respectively. (d) C1 is swapped for S. (e) C2 is swapped for S." }, "FIGREF4": { "uris": null, "num": null, "type_str": "figure", "text": "Maximum clade credibility tree of the world. (a) The whole tree. Three-letter labels are ISO 639-3 codes. Nodes below language families are omitted. (b-c) Portions of the tree are enlarged." }, "FIGREF5": { "uris": null, "num": null, "type_str": "figure", "text": "Histogram of posterior variances \u03c3^2 = \u03b2_n/\u03b1_n of the 4,000th sample. Scatter plot of languages using the components with the two smallest variances." }, "TABREF0": { "content": "
 | Munda | Mon-Khmer
grammar | synthetic | analytic
word order | head-last, OV, postpositional | head-first, VO, prepositional
affixation | pre/infixing, suffixing | pre/infixing or isolating
fusion | agglutinative | fusional
consonants | stable/assimilative | shifting/dissimilative
vowels | harmonizing/stable | reducing/diphthongizing
", "num": null, "html": null, "type_str": "table", "text": "Rows compare grammar, word order, affixation, fusion, consonants, and vowels across the Munda (column 2) and Mon-Khmer (column 3) branches." }, "TABREF1": { "content": "", "num": null, "html": null, "type_str": "table", "text": "Typological comparison of the Munda and Mon-Khmer branches of the Austroasiatic languages." } } } }