{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:24:23.846570Z" }, "title": "Switching Contexts: Transportability Measures for NLP", "authors": [ { "first": "Guy", "middle": [], "last": "Marshall", "suffix": "", "affiliation": { "laboratory": "", "institution": "Idiap Research Institute", "location": { "country": "Switzerland \u2021" } }, "email": "guy.marshall@postgrad.manchester.ac.uk" }, { "first": "Mokanarangan", "middle": [], "last": "Thayaparan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Idiap Research Institute", "location": { "country": "Switzerland \u2021" } }, "email": "mokanarangan.thayaparan@manchester.ac.uk" }, { "first": "Philip", "middle": [], "last": "Osborne", "suffix": "", "affiliation": { "laboratory": "", "institution": "Idiap Research Institute", "location": { "country": "Switzerland \u2021" } }, "email": "philip.osborne@postgrad.manchester.ac.uk" }, { "first": "Andr\u00e9", "middle": [], "last": "Freitas", "suffix": "", "affiliation": { "laboratory": "", "institution": "Idiap Research Institute", "location": { "country": "Switzerland \u2021" } }, "email": "andre.freitas@manchester.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper explores the topic of transportability, as a sub-area of generalisability. By proposing the utilisation of metrics based on well-established statistics, we are able to estimate the change in performance of NLP models in new contexts. Defining a new measure for transportability may allow for better estimation of NLP system performance in new domains, and is crucial when assessing the performance of NLP systems in new tasks and domains. Through several instances of increasing complexity, we demonstrate how lightweight domain similarity measures can be used as estimators for the transportability in NLP applications. The proposed transportability measures are evaluated in the context of Named Entity Recognition and Natural Language Inference tasks.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper explores the topic of transportability, as a sub-area of generalisability. By proposing the utilisation of metrics based on well-established statistics, we are able to estimate the change in performance of NLP models in new contexts. Defining a new measure for transportability may allow for better estimation of NLP system performance in new domains, and is crucial when assessing the performance of NLP systems in new tasks and domains. Through several instances of increasing complexity, we demonstrate how lightweight domain similarity measures can be used as estimators for the transportability in NLP applications. The proposed transportability measures are evaluated in the context of Named Entity Recognition and Natural Language Inference tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The empirical evaluation of the quality of NLP models under a specific task is a fundamental part of the scientific method of the NLP community. However, commonly, many proposed models are found to perform well in the specific context in which they are evaluated and state-of-the-art claims are usually found not transportable to similar but different settings. 
The current evaluation metrics may only indicate which algorithm or setup performs best: they are unable to estimate performance in a new context, to demonstrate internal validity, or to verify causality. To offset this, statistical significance testing is sometimes applied in conjunction with performance measures (e.g. F1-score, BLEU) to attempt to establish validity. However, statistical significance testing has been shown to be lacking. Dror et al. (2018) reviewed NLP papers from ACL17 and TACL17 and found that only a third of these papers use significance testing. Further, many papers did not specify the type of test used, and some even employed an inappropriate statistical test.", "cite_spans": [ { "start": 806, "end": 824, "text": "Dror et al. (2018)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Performance is measured in NLP tasks primarily through F1 score or task-specific metrics such as BLEU. The limited scope of these metrics as performance evaluation techniques has been shown to be problematic. S\u00f8gaard et al. (2014) highlight the data selection bias in NLP system performance. Gorman and Bedrick (2019) show the issues of using standard splits, as opposed to random splits. We support their statement that \"practitioners who wish to firmly establish that a new system is truly state-of-the-art augment their evaluations with Bonferroni-corrected random split hypothesis testing\". In an NLI task, using the SNLI and MultiNLI datasets with a set of different models, it has been shown that permutations of training data lead to substantial changes in performance (Schluter and Varab, 2018).", "cite_spans": [ { "start": 198, "end": 219, "text": "S\u00f8gaard et al. (2014)", "ref_id": "BIBREF29" }, { "start": 282, "end": 307, "text": "Gorman and Bedrick (2019)", "ref_id": "BIBREF13" }, { "start": 759, "end": 785, "text": "(Schluter and Varab, 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Further, the lack of transportability for NLP tasks has been raised by specialists in applied domains. For example, healthcare experts have expressed their frustration with the limitations of algorithms built in research settings for practical applications (Demner-Fushman and Elhadad, 2016) and the reduction of performance \"outside of their development frame\" (Maddox and Matheny, 2015). More generally, \"machine learning researchers have noted current systems lack the ability to recognize or react to new circumstances they have not been specifically programmed or trained for\" (Pearl, 2019).", "cite_spans": [ { "start": 255, "end": 289, "text": "(Demner-Fushman and Elhadad, 2016)", "ref_id": "BIBREF8" }, { "start": 580, "end": 593, "text": "(Pearl, 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The advantages of \"more transportable\" approaches, such as BERT, in terms of their performance in multiple different domains, are currently not expressed (other than through the prevalence of such architectures across a range of state-of-the-art tasks and domains). To support analysis of these properties, and the insight that could be gained by examining them, we suggest metrics and a method for measuring the transportability of models to new domains. 
This has immediate relevance for domain experts wishing to implement existing solutions on novel datasets, as well as for NLP researchers wishing to assemble new datasets, design new models, or evaluate approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To support this, we propose feature gradient, and show it to have promise as a way to gain lexical or semantic insight into factors influencing the performance of different architectures in new domains. This differs from data complexity, being a comparative measure between two datasets. We aim to start a conversation about evaluation of systems in a broader setting, and to encourage the creation and utilisation of new datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper focuses on the design and evaluation of a lightweight transportability measure in the context of the empirical evaluation of NLP models. A further aim is to provide a category of measures which can be used to estimate the stability of the performance of a system across different domains. An initial transportability measure is built by formalising properties of performance stability and variation under a statistical framework. The proposed model is evaluated in the context of Named Entity Recognition (NER) and Natural Language Inference (NLI) tasks across different domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contribution is to present a measure that evaluates the transportability and robustness of an NLP model, to evaluate domain similarity measures to understand and anticipate the transportability of an NLP model, and to compare state of the art models across different datasets for NER and NLI.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To quote Campbell and Stanley (2015), \"External validity asks the question of generalizability: To what populations, settings, treatment variables, and measurement variables can this effect be generalized?\". For Pearl and Bareinboim (2014), transportability is how generalisable an experimentally identified causal effect is to a new population where only observational studies can be conducted. \"However, there is an important difference, not often distinguished, between what might be called the potential (or generic) transferability of a study and its actual (or specific) transferability to another policy or practice decision context at another time and place.\" (Walker et al., 2010). Bareinboim and Pearl (2013) explore transfer of causal information, culminating in an algorithm for identifying transportable relations. Transportability in this sense does not permit retraining in the new population, and guides our choices in this paper. Other definitions of transfer learning allow for training of the model in the new context (Pan and Yang, 2010), or highlight the distinction between evidential knowledge and causal assumptions (Singleton et al., 2014).", "cite_spans": [ { "start": 212, "end": 239, "text": "Pearl and Bareinboim (2014)", "ref_id": "BIBREF22" }, { "start": 691, "end": 718, "text": "Bareinboim and Pearl (2013)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Relevant background and related work 2.1 Terminology", "sec_num": "2" }, { "text": "Rezaeinia et al. 
(2019) consider improving transportability by demonstrating that word embeddings' accuracy degrades over different datasets, and propose an algorithmic method for improved word embeddings by using word2vec, adding GloVe vectors when missing, and filling any further missing values with random entries. In a medical tagging task, Ferr\u00e1ndez et al. (2012) used different train/test datasets, and compared precision and recall between self-trained and transported-trained models, finding that some tag-categories performed better than others. They postulate that degradation differences were due to the differing prevalence of entities in the transported training data. Another term from this domain is \"portability\", in the sense that a model could be successfully used with consideration of implementation issues such as different data formats and target NLP vocabularies (Carroll et al., 2012). Blitzer et al. (2007) created a multi-domain dataset for sentiment analysis, and propose a measure of domain similarity for sentiment analysis based on the distance between the probability distributions in terms of characteristic functions of linear classifiers. In image processing, domain transfer is an active area of research. Pan et al. (2010) propose transfer component analysis as a method to learn subspaces which have similar data properties and data distributions in different domains. They state that domain adaptation is \"a special setting of transfer learning which aims at transferring shared knowledge across different but related tasks or domains\". In computer vision, Peng et al. (2019) combine multiple datasets into a larger dataset, DomainNet, and consider multi-source domain adaptation, formalising for binary classification. They demonstrate that multi-source training improves model accuracy, and publish baselines for state of the art methods.", "cite_spans": [ { "start": 860, "end": 882, "text": "(Carroll et al., 2012)", "ref_id": "BIBREF7" }, { "start": 885, "end": 906, "text": "Blitzer et al. (2007)", "ref_id": "BIBREF3" }, { "start": 1567, "end": 1585, "text": "Peng et al. (2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Transportability: Models evaluated across different datasets", "sec_num": "2.2" }, { "text": "The language used in the literature is not consistent. Bareinboim and Pearl (2013) highlight that generalisability goes under different \"rubrics\" such as external validity, meta-analysis, overgeneralisation, quasi-experiments and heterogeneity. Boulenger et al. (2005) disambiguate terms in the context of healthcare economics (such as generalisability, external validity, and transferability), and created a self-reporting checklist to attempt to quantify transferability. They define generalisability as \"the degree to which the results of a study hold true in other settings\", and state that \"the data, methods and results of a given study are transferable if (a) potential users can assess their applicability to their setting and (b) they are applicable to that setting\". They advocate a user-centric view of transferability, considering specific usability aspects such as explicit currency conversion rates. Antonanzas et al. (2009) create a transferability index at general, specific and global levels. Their \"general index\" comprises \"critical factors\", which utilise Boulenger et al.'s factors, adding subjective dimensions.", "cite_spans": [ { "start": 51, "end": 78, "text": "Bareinboim and Pearl (2013)", "ref_id": "BIBREF2" }, { "start": 242, "end": 265, "text": "Boulenger et al. 
(2005)", "ref_id": "BIBREF4" }, { "start": 900, "end": 924, "text": "Antonanzas et al. (2009)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Generalisability", "sec_num": "2.3" }, { "text": "3 Transportability in NLP", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalisability", "sec_num": "2.3" }, { "text": "To support a rigorous discussion, notational conventions are introduced. Extending the choices of Pearl and Bareinboim (2011), we denote a domain D with population \u03a0, governed by a feature probability distribution P, from which the data in that domain are drawn. We denote the source with a 0 subscript. Definition 1. Generalisability: A system \u03a8 has performance p for solving task T 0 in domain D 0 . Generalisability is how the system \u03a8 performs for solving task T i in domain D j , relative to the original task, without retraining.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definitions", "sec_num": "3.1" }, { "text": "Special cases, such as transportability or transference, fix at least one of i, j to 0 in the definition above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definitions", "sec_num": "3.1" }, { "text": "Definition 2. Transportability: A system \u03a8 has performance p for solving task T 0 in domain D 0 . Transportability is the performance of system \u03a8 for solving task T 0 in a new domain D i , relative to the original task, without retraining.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definitions", "sec_num": "3.1" }, { "text": "Across multiple D i , we have relative performance \u03c4 p (D 0 , D i ), from which we can establish statistical measures for transportability performance and variation. Transfer learning is a specific way of achieving transportability (between populations or domains) or generalisability (including between tasks). Singleton et al. (2014) state that \"transport encompasses transfer learning in attempting to use statistical evidence from a source on a target, but differs by incorporating causal assumptions derived from a combination of empirical source data and outside domain knowledge\". Note that this is different to generalisation in the Machine Learning sense, which is akin to internal validity (Marsland, 2011). Figure 1 shows the definitions associated with transportability discussed in this paper. Table 1 summarises the terminology in terms of how the target differs from the source (\u03a8 0 , T 0 , D 0 (\u03a0 0 )). Chance, bias and confounding are the three broad categories of \"threat to validity\". Broadly, chance and bias can be assessed by cross-validation, as it applies a model to the same task in the same domain on a different data population. Confounding, an error in the interpretation of what is being measured, is more difficult to assess. 
Transportability is concerned with the transfer of learned information, with particular advances in the transport of causal knowledge.", "cite_spans": [ { "start": 701, "end": 717, "text": "(Marsland, 2011)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 720, "end": 728, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 809, "end": 816, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Definitions", "sec_num": "3.1" }, { "text": "Term | \u03a8 | T | D | \u03a0
Cross-validation | 0 | 0 | 0 | i
New modeling | i | 0 | 0 | 0
Transportability | 0 | 0 | i | i
Transferability | 0 | 0,i | i | i
Generalisability | 0 | 0,i | 0,i | 0,i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definitions", "sec_num": "3.1" }, { "text": "Generalisability is the catch-all term for how externally valid a result or model is. Any combination of task, domain and data can be used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definitions", "sec_num": "3.1" }, { "text": "We define transportability performance \u03c4 p as the gradient of the change in the performance metric's score from one domain to another. This measure does not take into account the underlying probability distributions, only the change in the resulting performance measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transportability performance", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\tau_p(D_0, D_i) = \\frac{p(\\Psi, T, D_i)}{p(\\Psi, T, D_0)}", "eq_num": "(1)" } ], "section": "Transportability performance", "sec_num": "3.2" }, { "text": "The measure uses a ratio in order to allow comparison between different systems. To generalise this measure across different settings, we can take an average to give Equation 2. Note that this is the average percentage change in performance, not an aggregated performance measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transportability performance", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\tau_p(D_0) = \\frac{1}{n} \\sum_{i=1}^{n} \\frac{p(\\Psi, T, D_i)}{p(\\Psi, T, D_0)}", "eq_num": "(2)" } ], "section": "Transportability performance", "sec_num": "3.2" }, { "text": "An analogous definition holds for different tasks over the same domain, \u03c4 p (T ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transportability performance", "sec_num": "3.2" }, { "text": "Performance variation reflects how stable performance is across different contexts and can include, for example, to what extent the sampling method from the source data affects the performance metric of the algorithm. Part of this is data representativeness, the extent to which the source data representation also represents the target data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance variation", "sec_num": "3.3" }, { "text": "More formally, performance variation \u03c4 s (\u03a8, T, D) is the change in performance of (\u03a8, T, D) across different contexts. This is useful in order to gain specific insight into external validity and generalisability. Indeed, we can assess the change in performance between source context D 0 and target context D i . 
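As a concrete illustration of Equations 1 and 2, the following minimal Python sketch computes \u03c4 p for each target domain and its average; this is our own sketch rather than the released implementation, and the performance numbers are invented for illustration:

def transportability_performance(p_source, p_target):
    # Equation 1: ratio of target-domain to source-domain performance.
    return p_target / p_source

def average_transportability(p_source, p_targets):
    # Equation 2: average of the per-domain ratios tau_p(D_0, D_i).
    ratios = [transportability_performance(p_source, p) for p in p_targets]
    return sum(ratios) / len(ratios)

# Invented F1 scores: a system trained on D_0, evaluated on D_1..D_3.
p_source = 0.90
p_targets = [0.60, 0.47, 0.72]
print(average_transportability(p_source, p_targets))  # ~0.663; each ratio < 1 signals degradation
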
The source context has a privileged position, in that it is the space in which the \"learning\" takes place, and the proposed metric for performance variation across multiple different domains is based on \u03c4 p to reflect this. Through repeated measurement in different contexts, we can go further.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance variation", "sec_num": "3.3" }, { "text": "Definition 3. Performance Variation: For a model trained on domain D 0 and applied on n new domains D i , we define the performance variation as the coefficient of variation of performance across this set of domains so that:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance variation", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\tau_{var}(D_0) = \\left(1 + \\frac{1}{4n}\\right) \\frac{\\sqrt{\\sum_{i=1}^{n} \\frac{(\\tau_p(D_0, D_i) - \\tau_p(D_0))^2}{n-1}}}{\\tau_p(D_0)}", "eq_num": "(3)" } ], "section": "Performance variation", "sec_num": "3.3" }, { "text": "The 1 + 1/(4n) term corrects for bias. In order to be meaningful, the target contexts must have a good coverage of different domains. Enumerating these would be a task of ontological proportions, but they can be pragmatically approximated by using the available Gold Standard datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance variation", "sec_num": "3.3" }, { "text": "We can also assess the ability to generalise not just over different domains, but also over different tasks, provided they can be meaningfully assessed by the same performance measure. We can consider n different domain-task combinations, and with \u03c4 p = \u2211 i,j \u03c4 p (\u03a8, T i , D j )/n taken over these combinations, this gives a more general form for Equation 3, with n large:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance variation", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\tau_{var} = \\frac{\\sqrt{\\sum_{i,j} \\frac{(\\tau_p(\\Psi, T_i, D_j) - \\tau_p)^2}{n-1}}}{\\tau_p}", "eq_num": "(4)" } ], "section": "Performance variation", "sec_num": "3.3" }, { "text": "In the case where different tasks cannot be assessed by the same measure, we are still able to compare different systems by looking at how the respective measures change.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance variation", "sec_num": "3.3" }, { "text": "For a purely random system, the transportability should be related to how similar the distributions of \"answers\" in the test datasets are. A random system should be highly transportable by our measures. Similarly, we can consider trivial systems, such as identity and constant functions, which are necessarily entirely transportable. That is, for a system that is an identity function \u03a8 = I, \u03c4 p = f (P ), and \u03c4 var (I, T, D i ) = \u03c4 var (I, T, D j ) = 0, \u2200i, j. Note that we would not expect the same performance of these functions on different tasks.
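To make Definition 3 concrete, a minimal sketch of Equation 3 follows, assuming the per-domain ratios \u03c4 p (D 0 , D i ) from the sketch above (the square root in the coefficient of variation is implied by the definition, and the example values are invented):

import statistics

def performance_variation(tau_p_values):
    # Equation 3: bias-corrected coefficient of variation of tau_p(D_0, D_i).
    n = len(tau_p_values)
    mean = statistics.mean(tau_p_values)   # tau_p(D_0), the Equation 2 average
    std = statistics.stdev(tau_p_values)   # sample standard deviation (n - 1 denominator)
    return (1 + 1 / (4 * n)) * std / mean

print(performance_variation([0.667, 0.522, 0.800]))  # ~0.23 for the invented ratios above
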
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance variation properties", "sec_num": "3.3.1" }, { "text": "A stable system will have \u03c4 var (\u03a8, T, D 0 ) \u2248 \u03c4 var (\u03a8, T, D i ) \u2200i, reflecting that it is resilient to the domain on which it is trained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance variation properties", "sec_num": "3.3.1" }, { "text": "Through repeated measurement, we can quantify how the F 1 -score changes with respect to different measures A (e.g. dataset complexity), \u2202F 1 /\u2202A, with other properties held constant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factors influencing performance variation", "sec_num": "3.3.2" }, { "text": "NLP system performance is dependent on A. This list may include gold standard feature distribution (in terms of representativeness of the semantic or linguistic phenomena), and task difficulty or sensitivity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factors influencing performance variation", "sec_num": "3.3.2" }, { "text": "Users of NLP systems would benefit from being able to estimate the performance of an existing NLP system on a new domain, without performing the full implementation. Important for the performance of an NLP system, especially for few- or zero-shot learning, is having a common set of features (or phenomena) across domains. We proceed to propose three measures of increasing complexity, in order to attempt to understand how \"similar\" two domains are.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factors influencing performance variation", "sec_num": "3.3.2" }, { "text": "Lexical feature difference: A measure grounded on lexical features (i.e. bag of words). The intuition behind this measure is to treat the set of lexical features as a representation. Linguistic space is observed as materialised tokens, which in turn are in some higher-dimensional semantic space, which enables interpretation. The measure considers the overlap of these linguistic spaces, and indeed the extent to which the linguistic space is covered by the data. Due to the simplicity of this measure, the correlation between it and actual transportability performance is likely to be weaker than for other measures, but it is simpler to calculate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Factors influencing performance variation", "sec_num": "3.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\text{Lexical feature difference} = 1 - \\frac{|D_i \\cap D_0|}{|D_i|}, \\quad i > 0", "eq_num": "(5)" } ], "section": "Lexical Feature Difference", "sec_num": null }, { "text": "Where |D i | is the number of features in the target domain D i , and |D i \u2229 D 0 | is the number of features overlapping. This measure is then the proportion of unseen features in the new dataset. If all features of D i are found in D 0 , then the feature difference is 0. If no features of D i are found in D 0 , then the feature difference is 1.
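A minimal sketch of Equation 5 over bag-of-words features follows; the token lists are toy examples, and the choice of tokenisation and feature space is left open by the measure:

def lexical_feature_difference(source_tokens, target_tokens):
    # Equation 5: proportion of target-domain features unseen in the source domain.
    target_features = set(target_tokens)
    overlap = target_features & set(source_tokens)
    return 1 - len(overlap) / len(target_features)

source = ["the", "acquisition", "of", "stocks"]   # features observed in D_0
target = ["the", "prognosis", "of", "patients"]   # features observed in D_i
assert lexical_feature_difference(source, target) == 0.5  # two of four target features are unseen
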
The feature overlap is task specific, and therefore appropriate to consider for transportability, but not generalisability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Feature Difference", "sec_num": null }, { "text": "In the simplest case, the transported performance of a bag of words model should be precisely the lexical feature difference combined with the distributions of the source and target domains. The feature set can range from binary lexical features to latent vector spaces. For different models, which target different aspects of semantic phenomena, different semantic and syntactic features will matter more. For this reason, considering a set of measures for domain complexity is warranted. In the context of this work, two measures are used over more complex feature spaces. Cosine distance: Specifically, we use Doc2Vec (Le and Mikolov, 2014) to embed the documents from each domain in a 300-dimensional feature-vector space, normalise, and calculate cosine distance to compare source and target domains.", "cite_spans": [ { "start": 617, "end": 639, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical Feature Difference", "sec_num": null }, { "text": "Considering each domain as a distribution of features, we can use relative entropy to understand the difference between the source and target domains. Similar to cosine distance, we convert the corpus to a vector using Doc2Vec and normalise. We treat these values as discrete probability distributions to calculate the KL divergence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kullback-Leibler divergence:", "sec_num": null }, { "text": "The usefulness of any of these domain similarity measures depends on the semantic phenomena and supporting corpora underlying the system; for example, if the system requires a large training dataset, it may be more appropriate to use a measure which considers the underlying probability distributions in each feature. In this case, we can restrict to the case of the same task in order to keep the essential features reasonably consistent across domains. This makes it a measure of transportability (rather than generalisability).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kullback-Leibler divergence:", "sec_num": null }, { "text": "There are additional dimensions of transportability potentially worthy of further investigation and quantification: (i) domain similarity (e.g. missing features), (ii) data efficiency (redundant/repeated features), (iii) data preparation (initial setup and formatting) and (iv) data manipulation required (data pipeline).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kullback-Leibler divergence:", "sec_num": null }, { "text": "The experiments aim to evaluate the consistency of the proposed transportability measures in the context of two standard tasks: named entity recognition and natural language inference. For reproducibility purposes, the code and supporting data are available online 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "4.1" }, { "text": "We calculated the F1 score of multiple models on multiple datasets (Figure 2). Note that in general the applicability of the proposed transportability measures is not limited to the use of F1 score, but this is simpler as the same measure applies to both tasks. All models and datasets are standard. 
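The embedding-based similarity measures above can be computed along the following lines; this is a rough sketch using gensim rather than the released code, source_corpus and target_corpus are placeholder lists of token lists, the training hyperparameters are illustrative, and the softmax normalisation in the KL step is our assumption, since the handling of negative embedding components is not specified:

import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def domain_vector(model, tokens):
    # Embed a domain in the 300-dimensional Doc2Vec space and L2-normalise.
    v = model.infer_vector(tokens)
    return v / np.linalg.norm(v)

def cosine_distance(u, v):
    return 1.0 - float(np.dot(u, v))  # u and v are already unit length

def kl_divergence(u, v):
    # Treat the embeddings as discrete distributions; softmax is one plausible normalisation.
    p = np.exp(u) / np.exp(u).sum()
    q = np.exp(v) / np.exp(v).sum()
    return float(np.sum(p * np.log(p / q)))

docs = [TaggedDocument(tokens, [i]) for i, tokens in enumerate(source_corpus + target_corpus)]
model = Doc2Vec(docs, vector_size=300, min_count=2, epochs=40)
u = domain_vector(model, [t for doc in source_corpus for t in doc])
v = domain_vector(model, [t for doc in target_corpus for t in doc])
print(cosine_distance(u, v), kl_divergence(u, v))
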
For NER, the datasets were chosen as they share a consistent tag set: Location, Person and Organisation. Stanford NER (Finkel et al., 2005) is a CRF classifier; SpaCy v2 is a CNN; ELMo (Peters et al., 2018) is a vector embedding model which outperforms GloVe and word2vec. Each of the three models used is trained on the CoNLL-2003 dataset (Sang and De Meulder, 2003). We evaluated these models on CoNLL-2003, Wikipedia NER (Ghaddar and Langlais, 2017) (Wiki) and the WNUT dataset (Baldwin et al., 2015) for NER in Twitter microposts.", "cite_spans": [ { "start": 401, "end": 421, "text": "(Finkel et al., 2005", "ref_id": "BIBREF11" }, { "start": 469, "end": 490, "text": "(Peters et al., 2018)", "ref_id": "BIBREF24" }, { "start": 606, "end": 634, "text": "CoNLL-2003 dataset (Sang and", "ref_id": null }, { "start": 635, "end": 652, "text": "De Meulder, 2003)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 49, "end": 58, "text": "(Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Setup", "sec_num": "4.1" }, { "text": "For NLI, we chose to use standard datasets. SNLI (Bowman et al., 2015) is well established with a limited range of NLI statements, MultiNLI (Williams et al., 2018) is multigenre with a more diverse range of texts, and SciTail (Khot et al., 2018) is based on scientific exam questions. We applied BERT (Devlin et al., 2018), a state of the art embedding model, to these datasets. Table 2 shows results for the NER task, trained on CoNLL. Unsurprisingly, all models performed better when the target was in the CoNLL domain. The reduced performance on Wiki was more extreme than expected, particularly for ELMo, which was expected to be resilient to domain change (i.e. transportable). Table 6 and Table 4 illustrate the transportability and domain similarity scores for the different NER models respectively. NLI: Table 3 shows results for the NLI task, using BERT. We find that, despite the vast training data, BERT's performance is substantially higher when it has been trained on data from that domain. BERT trained on SciTail performs poorly when transported to SNLI or MultiNLI. Table 7 and Table 5 illustrate the transportability and domain similarity scores for different NLI corpora.", "cite_spans": [ { "start": 140, "end": 163, "text": "(Williams et al., 2018)", "ref_id": "BIBREF31" }, { "start": 226, "end": 245, "text": "(Khot et al., 2018)", "ref_id": "BIBREF14" }, { "start": 301, "end": 322, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 380, "end": 387, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 684, "end": 703, "text": "Table 6 and Table 4", "ref_id": "TABREF6" }, { "start": 809, "end": 816, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 1079, "end": 1098, "text": "Table 7 and Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Setup", "sec_num": "4.1" }, { "text": "Every model had \u03c4 p < 1, meaning they performed worse on the new domain. This is as expected, though this would not be true in general.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "4.3" }, { "text": "NER: Examining the F1 scores (88.11 vs. 88.78) of SpaCy and Stanford NER, they appear almost comparable. However, the latter transports much more effectively, with a \u03c4 p score difference (0.671 vs. 0.524 when transporting to Wiki) (refer Table 6).", "cite_spans": [ { "start": 29, "end": 46, "text": "(88.11 vs. 
88.78)", "ref_id": null } ], "ref_spans": [ { "start": 230, "end": 237, "text": "Table 6 ", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Analysis", "sec_num": "4.3" }, { "text": "ELMo is one of the state of the art approaches for NER, as evidenced by the high F1 scores for the source corpus. However, Stanford NER transports equally well, and when transported outperforms ELMo in the Twitter domain. While the absolute F1 score difference between them is 5, the \u03c4 p scores are almost identical, with a difference of 0.003. In terms of transportability, it is notable that an approach that employs a CRF tagger with linguistic features significantly outperforms the CNN-based SpaCy approach and stands in comparison to a computationally expensive model like ELMo.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "4.3" }, { "text": "Stanford NER also has the lowest \u03c4 var . This indicates it is the most robust model of the three. This conclusion was facilitated by the \u03c4 p and \u03c4 var measures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "4.3" }, { "text": "NER for English is often assumed to be a solved task, as supported by the traditional F1 scores. By using \u03c4 p we argue that there is a need for more robust models, with better transportability performance. Figure 3a and Figure 3b illustrate the decrease in F1 scores as cosine distance and KL divergence increase. A simple 3-parameter non-linear regression model on KL divergence and cosine distance is able to predict the F1 score with a mean error of 3.33 and 2.66 respectively. Considering the lexical difference yields similar results (Table 4). This implies that by using these measures we may be able to anticipate the accuracy of a model in a new domain based on domain similarity measures which are straightforward to compute. NLI: BERT was not as resilient to domain transport as we expected. The average \u03c4 p is 0.612 over transported domains, despite these being standard corpora from the domains. We found MultiNLI(Train) to be more transportable than the others, since its performance in new domains is not much worse than on new data from the same domain. This is as expected, since MultiNLI has been built to have good domain coverage. Specifically, MultiNLI has \u03c4 p = 0.744 and \u03c4 var = 8.582, whilst SNLI has \u03c4 p = 0.646 and \u03c4 var = 15.22, and SciTail has \u03c4 p = 0.446 and \u03c4 var = 3.921. SciTail transports poorly, and does so reliably! SNLI transports in between, but variably, being quite \"hit or miss\" with different samples of SciTail. These results suggest a threshold for \u03c4 p of perhaps 0.8 as being \"appropriate\" for transportability performance. A threshold for \u03c4 var is more difficult to establish and would benefit from further investigation. Clearly, these measures depend on the domains chosen. As with NER, we found lexical difference indicative of transported performance, and that for NLI, accuracy scores decrease with increasing lexical difference, cosine distance and KL divergence (Tables 3 and 5, and Figures 4a and 4b). A simple 3-parameter non-linear regression model on KL divergence and cosine distance is able to predict the accuracy score with a mean error of 3.98 and 1.95 respectively. 
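The functional form of the 3-parameter regression is not stated; purely as an illustration of how such a predictor can be fitted, the following sketch assumes an exponential-decay form and uses scipy, with placeholder (distance, score) observations rather than our experimental data:

import numpy as np
from scipy.optimize import curve_fit

def decay(x, a, b, c):
    # One plausible 3-parameter form: performance falls off as domain distance grows.
    return a * np.exp(-b * x) + c

x = np.array([0.05, 0.20, 0.45, 0.80, 1.10])   # e.g. KL divergence to each target domain
y = np.array([88.0, 80.5, 66.0, 48.0, 41.5])   # transported F1 on those domains

params, _ = curve_fit(decay, x, y, p0=(50.0, 1.0, 40.0))
print(decay(0.6, *params))  # estimated F1 for an unseen domain at distance 0.6
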
", "cite_spans": [], "ref_spans": [ { "start": 206, "end": 215, "text": "Figure 3a", "ref_id": null }, { "start": 220, "end": 229, "text": "Figure 3b", "ref_id": null }, { "start": 538, "end": 547, "text": "(Table 4)", "ref_id": "TABREF6" }, { "start": 1947, "end": 1962, "text": "(Tables 3 and 5", "ref_id": "TABREF5" }, { "start": 1969, "end": 1987, "text": "Figures 4a and 4b)", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "4.3" }, { "text": "We have presented a model of transportability for NLP tasks, together with metrics to allow for the quantification in the change in performance. We have shown that the proposed transportability measure allows for direct comparison of NLP systems' performance in new contexts. Further, we demonstrated domain similiarity as a measure to model corpus and domain complexity, and predict NLP system performance in unseen domains. This paper lays the foundations for further work in more complex transportability measures and estimation of NLP system performance in new contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://github.com/ai-systems/ transportability", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Transferability indices for health economic evaluations: methods and applications. Health economics", "authors": [ { "first": "Fernando", "middle": [], "last": "Antonanzas", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Rodr\u00edguez-Ibeas", "suffix": "" }, { "first": "Carmelo", "middle": [], "last": "Ju\u00e1rez", "suffix": "" }, { "first": "Florencia", "middle": [], "last": "Hutter", "suffix": "" }, { "first": "Reyes", "middle": [], "last": "Lorente", "suffix": "" }, { "first": "Mariola", "middle": [], "last": "Pinillos", "suffix": "" } ], "year": 2009, "venue": "", "volume": "18", "issue": "", "pages": "629--643", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fernando Antonanzas, Roberto Rodr\u00edguez-Ibeas, Carmelo Ju\u00e1rez, Florencia Hutter, Reyes Lorente, and Mariola Pinillos. 2009. Transferability indices for health economic evaluations: methods and applications. Health economics, 18(6):629-643.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Shared tasks of the 2015 workshop on noisy user-generated text: Twitter lexical normalization and named entity recognition", "authors": [ { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Han", "suffix": "" }, { "first": "Young-Bum", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Workshop on Noisy User-generated Text", "volume": "", "issue": "", "pages": "126--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Baldwin, Marie-Catherine de Marneffe, Bo Han, Young-Bum Kim, Alan Ritter, and Wei Xu. 2015. Shared tasks of the 2015 workshop on noisy user-generated text: Twitter lexical normalization and named entity recognition. 
In Proceedings of the Workshop on Noisy User-generated Text, pages 126- 135.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A general algorithm for deciding transportability of experimental results", "authors": [ { "first": "Elias", "middle": [], "last": "Bareinboim", "suffix": "" }, { "first": "Judea", "middle": [], "last": "Pearl", "suffix": "" } ], "year": 2013, "venue": "Journal of causal Inference", "volume": "1", "issue": "1", "pages": "107--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elias Bareinboim and Judea Pearl. 2013. A general al- gorithm for deciding transportability of experimen- tal results. Journal of causal Inference, 1(1):107- 134.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification", "authors": [ { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th annual meeting of the association of computational linguistics", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th annual meeting of the asso- ciation of computational linguistics, pages 440-447.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Can economic evaluations be made more transferable?", "authors": [ { "first": "Stephanie", "middle": [], "last": "Boulenger", "suffix": "" }, { "first": "John", "middle": [], "last": "Nixon", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Drummond", "suffix": "" }, { "first": "Philippe", "middle": [], "last": "Ulmann", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Rice", "suffix": "" }, { "first": "Gerard", "middle": [], "last": "De Pouvourville", "suffix": "" } ], "year": 2005, "venue": "The European Journal of Health Economics", "volume": "6", "issue": "4", "pages": "334--346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephanie Boulenger, John Nixon, Michael Drum- mond, Philippe Ulmann, Stephen Rice, and Gerard de Pouvourville. 2005. Can economic evaluations be made more transferable? The European Journal of Health Economics, 6(4):334-346.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). 
Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Experimental and quasi-experimental designs for research", "authors": [ { "first": "T", "middle": [], "last": "Donald", "suffix": "" }, { "first": "Julian C", "middle": [], "last": "Campbell", "suffix": "" }, { "first": "", "middle": [], "last": "Stanley", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Donald T Campbell and Julian C Stanley. 2015. Exper- imental and quasi-experimental designs for research. Ravenio Books.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Portability of an algorithm to identify rheumatoid arthritis in electronic health records", "authors": [ { "first": "J", "middle": [], "last": "Robert", "suffix": "" }, { "first": "Will", "middle": [ "K" ], "last": "Carroll", "suffix": "" }, { "first": "Anne", "middle": [ "E" ], "last": "Thompson", "suffix": "" }, { "first": "", "middle": [], "last": "Eyler", "suffix": "" }, { "first": "Tianxi", "middle": [], "last": "Arthur M Mandelin", "suffix": "" }, { "first": "Raquel", "middle": [ "M" ], "last": "Cai", "suffix": "" }, { "first": "Jennifer", "middle": [ "A" ], "last": "Zink", "suffix": "" }, { "first": "Chad", "middle": [ "S" ], "last": "Pacheco", "suffix": "" }, { "first": "", "middle": [], "last": "Boomershine", "suffix": "" }, { "first": "A", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Lasko", "suffix": "" }, { "first": "", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2012, "venue": "Journal of the American Medical Informatics Association", "volume": "19", "issue": "e1", "pages": "162--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert J Carroll, Will K Thompson, Anne E Eyler, Arthur M Mandelin, Tianxi Cai, Raquel M Zink, Jen- nifer A Pacheco, Chad S Boomershine, Thomas A Lasko, Hua Xu, et al. 2012. Portability of an algo- rithm to identify rheumatoid arthritis in electronic health records. Journal of the American Medical In- formatics Association, 19(e1):e162-e169.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Aspiring to unintended consequences of natural language processing: a review of recent developments in clinical and consumer-generated text processing", "authors": [ { "first": "D", "middle": [], "last": "Demner-Fushman", "suffix": "" }, { "first": "Noemie", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2016, "venue": "Yearbook of medical informatics", "volume": "25", "issue": "01", "pages": "224--233", "other_ids": {}, "num": null, "urls": [], "raw_text": "D Demner-Fushman and Noemie Elhadad. 2016. As- piring to unintended consequences of natural lan- guage processing: a review of recent developments in clinical and consumer-generated text processing. 
Yearbook of medical informatics, 25(01):224-233.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Generalizability and comparison of automatic clinical text de-identification methods and resources", "authors": [ { "first": "Rotem", "middle": [], "last": "Dror", "suffix": "" }, { "first": "Gili", "middle": [], "last": "Baumer", "suffix": "" }, { "first": "Segev", "middle": [], "last": "Shlomov", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "; \u00d3 Scar", "middle": [], "last": "Ferr\u00e1ndez", "suffix": "" }, { "first": "R", "middle": [], "last": "Brett", "suffix": "" }, { "first": "Shuying", "middle": [], "last": "South", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Shen", "suffix": "" }, { "first": "", "middle": [], "last": "Friedlin", "suffix": "" }, { "first": "H", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "", "middle": [], "last": "Samore", "suffix": "" }, { "first": "M", "middle": [], "last": "St\u00e9phane", "suffix": "" }, { "first": "", "middle": [], "last": "Meystre", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Re- ichart. 2018. The hitchhiker's guide to testing statis- tical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392.\u00d3 scar Ferr\u00e1ndez, Brett R South, Shuying Shen, F Jeff Friedlin, Matthew H Samore, and St\u00e9phane M Meystre. 2012. Generalizability and comparison of automatic clinical text de-identification methods and resources. In AMIA Annual Symposium Proceed- ings, volume 2012, page 199. American Medical In- formatics Association.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Incorporating non-local information into information extraction systems by gibbs sampling", "authors": [ { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "363--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local informa- tion into information extraction systems by gibbs sampling. 
In Proceedings of the 43rd annual meet- ing on association for computational linguistics, pages 363-370. Association for Computational Lin- guistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Winer: A wikipedia annotated corpus for named entity recognition", "authors": [ { "first": "Abbas", "middle": [], "last": "Ghaddar", "suffix": "" }, { "first": "Phillippe", "middle": [], "last": "Langlais", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "413--422", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abbas Ghaddar and Phillippe Langlais. 2017. Winer: A wikipedia annotated corpus for named entity recognition. In Proceedings of the Eighth Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 413-422.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "We need to talk about standard splits", "authors": [ { "first": "Kyle", "middle": [], "last": "Gorman", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bedrick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th annual meeting of the association for computational linguistics", "volume": "", "issue": "", "pages": "2786--2791", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyle Gorman and Steven Bedrick. 2019. We need to talk about standard splits. In Proceedings of the 57th annual meeting of the association for computational linguistics, pages 2786-2791.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Scitail: A textual entailment dataset from science question answering", "authors": [ { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2018, "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. Scitail: A textual entailment dataset from science question answering. In Thirty-Second AAAI Confer- ence on Artificial Intelligence.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Distributed representations of sentences and documents", "authors": [ { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2014, "venue": "International conference on machine learning", "volume": "", "issue": "", "pages": "1188--1196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed repre- sentations of sentences and documents. In Interna- tional conference on machine learning, pages 1188- 1196.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Natural language processing and the promise of big data: Small step forward, but many miles to go", "authors": [ { "first": "M", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Michael A", "middle": [], "last": "Maddox", "suffix": "" }, { "first": "", "middle": [], "last": "Matheny", "suffix": "" } ], "year": 2015, "venue": "Circulation. Cardiovascular quality and outcomes", "volume": "8", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas M Maddox and Michael A Matheny. 2015. 
Natural language processing and the promise of big data: Small step forward, but many miles to go. Circulation: Cardiovascular Quality and Outcomes, 8(5):463.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Machine learning: an algorithmic perspective", "authors": [ { "first": "Stephen", "middle": [], "last": "Marsland", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Marsland. 2011. Machine learning: an algorithmic perspective. Chapman and Hall/CRC.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Domain adaptation via transfer component analysis", "authors": [ { "first": "Sinno Jialin", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Ivor", "middle": [ "W" ], "last": "Tsang", "suffix": "" }, { "first": "James", "middle": [ "T" ], "last": "Kwok", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2010, "venue": "IEEE Transactions on Neural Networks", "volume": "22", "issue": "2", "pages": "199--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang. 2010. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199-210.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A survey on transfer learning", "authors": [ { "first": "Sinno Jialin", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2010, "venue": "IEEE Transactions on Knowledge and Data Engineering", "volume": "22", "issue": "10", "pages": "1345--1359", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The seven tools of causal inference, with reflections on machine learning", "authors": [ { "first": "Judea", "middle": [], "last": "Pearl", "suffix": "" } ], "year": 2019, "venue": "Communications of the ACM", "volume": "62", "issue": "3", "pages": "54--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judea Pearl. 2019. The seven tools of causal inference, with reflections on machine learning. Communications of the ACM, 62(3):54-60.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Transportability of causal and statistical relations: A formal approach", "authors": [ { "first": "Judea", "middle": [], "last": "Pearl", "suffix": "" }, { "first": "Elias", "middle": [], "last": "Bareinboim", "suffix": "" } ], "year": 2011, "venue": "Twenty-Fifth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judea Pearl and Elias Bareinboim. 2011. Transportability of causal and statistical relations: A formal approach. In Twenty-Fifth AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "External validity: From do-calculus to transportability across populations", "authors": [ { "first": "Judea", "middle": [], "last": "Pearl", "suffix": "" }, { "first": "Elias", "middle": [], "last": "Bareinboim", "suffix": "" } ], "year": 2014, "venue": "Statistical Science", "volume": "29", "issue": "4", "pages": "579--595", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judea Pearl and Elias Bareinboim. 2014. External validity: From do-calculus to transportability across populations. Statistical Science, 29(4):579-595.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Moment matching for multi-source domain adaptation", "authors": [ { "first": "Xingchao", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Qinxun", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Xide", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Zijun", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Saenko", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE International Conference on Computer Vision", "volume": "", "issue": "", "pages": "1406--1415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. 2019. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1406-1415.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Sentiment analysis based on improved pre-trained word embeddings", "authors": [ { "first": "Seyed Mahdi", "middle": [], "last": "Rezaeinia", "suffix": "" }, { "first": "Rouhollah", "middle": [], "last": "Rahmani", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Ghodsi", "suffix": "" }, { "first": "Hadi", "middle": [], "last": "Veisi", "suffix": "" } ], "year": 2019, "venue": "Expert Systems with Applications", "volume": "117", "issue": "", "pages": "139--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seyed Mahdi Rezaeinia, Rouhollah Rahmani, Ali Ghodsi, and Hadi Veisi. 2019. Sentiment analysis based on improved pre-trained word embeddings. Expert Systems with Applications, 117:139-147.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition", "authors": [ { "first": "Erik", "middle": [ "F" ], "last": "Sang", "suffix": "" }, { "first": "Fien", "middle": [], "last": "De Meulder", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik F Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. arXiv preprint cs/0306050.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "When data permutations are pathological: the case of neural natural language inference", "authors": [ { "first": "Natalie", "middle": [], "last": "Schluter", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Varab", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4935--4939", "other_ids": {}, "num": null, "urls": [], "raw_text": "Natalie Schluter and Daniel Varab. 2018. When data permutations are pathological: the case of neural natural language inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4935-4939.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Transfer and transport: incorporating causal methods for improving predictive models", "authors": [ { "first": "Kyle", "middle": [ "W" ], "last": "Singleton", "suffix": "" }, { "first": "Alex", "middle": [ "A", "T" ], "last": "Bui", "suffix": "" }, { "first": "William", "middle": [], "last": "Hsu", "suffix": "" } ], "year": 2014, "venue": "Journal of the American Medical Informatics Association", "volume": "21", "issue": "e2", "pages": "e374--e375", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyle W Singleton, Alex AT Bui, and William Hsu. 2014. Transfer and transport: incorporating causal methods for improving predictive models. Journal of the American Medical Informatics Association, 21(e2):e374-e375.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "What's in a p-value in NLP?", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Anders", "middle": [], "last": "Johannsen", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "H\u00e9ctor Mart\u00ednez", "middle": [], "last": "Alonso", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard, Anders Johannsen, Barbara Plank, Dirk Hovy, and H\u00e9ctor Mart\u00ednez Alonso. 2014. What's in a p-value in NLP? In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 1-10.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Generalisability, transferability, complexity and relevance", "authors": [ { "first": "Damian", "middle": [ "G" ], "last": "Walker", "suffix": "" }, { "first": "Yot", "middle": [], "last": "Teerawattananon", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Gerry", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2010, "venue": "Evidence-Based Decisions and Economics", "volume": "", "issue": "", "pages": "56--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Damian G Walker, Yot Teerawattananon, Rob Anderson, and Gerry Richardson. 2010. Generalisability, transferability, complexity and relevance. In Evidence-Based Decisions and Economics, pages 56-66. Wiley and Sons.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "Schematic representation of the definitions" }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Overview of the experiments undertaken, indicating the models being applied to each dataset" }, "TABREF1": { "html": null, "content": "", "num": null, "type_str": "table", "text": "Terminology through variation from a source. Table body is subscripts." }, "TABREF3": { "html": null, "content": "
NER F1 scores for different models trained on the CoNLL dataset and transported across multiple different corpora
", "num": null, "type_str": "table", "text": "" }, "TABREF5": { "html": null, "content": "
[Figure 3: NER F1 score plotted against different measures of corpus similarity. (a) NER F1 scores vs. Doc2Vec cosine distance from the training (CoNLL) corpus; (b) NER F1 scores vs. KL divergence from the training (CoNLL) corpus. Series: Stanford NER, SpaCy NER, ELMo NER; y-axes: F1 Score; x-axes: Cosine Distance (\u00d710\u22122) and KL Divergence.]
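Figure 3 suggests a roughly monotone decay of F1 as the target corpus drifts away from the training corpus, which is what makes a lightweight performance estimate possible. The sketch below is a minimal illustration of that idea, not the paper's method: it fits a linear trend of F1 against corpus distance and extrapolates to an unseen domain, with placeholder numbers standing in for the (distance, F1) pairs a practitioner would measure on their own corpora.

# Illustrative only: fit a linear trend of F1 against corpus distance and
# extrapolate to an unseen domain. The data points are placeholders, not
# values taken from the paper.
import numpy as np

distances = np.array([0.001, 0.003, 0.007, 0.134])  # e.g. Doc2Vec cosine distances
f1_scores = np.array([91.0, 89.5, 72.0, 45.0])      # matching F1 scores (hypothetical)

slope, intercept = np.polyfit(distances, f1_scores, deg=1)

new_domain_distance = 0.05  # measured distance of an unseen target corpus
estimated_f1 = slope * new_domain_distance + intercept
print(f"Estimated F1 in the new domain: {estimated_f1:.1f}")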
Dataset        Lexical   Cosine   KL Divergence
CoNLL Train    0.000     0.000    0.000
CoNLL Dev      0.121     0.001    0.345
CoNLL Test     0.197     0.003    0.463
Wiki           0.290     0.007    0.701
WNUT Train     0.421     0.134    2.129
WNUT Dev       0.511     0.167    1.473
WNUT Test      0.481     0.130    1.137
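All three distances in the table above can be computed cheaply from raw text. The following sketch is one plausible implementation, not the authors' exact code: the whitespace tokeniser, the Jaccard-style form of the lexical measure, the Doc2Vec hyperparameters, the smoothing constant, and the direction of the KL divergence are all assumptions.

# Hypothetical sketch of the three corpus-similarity measures; exact settings
# used in the paper (tokenisation, smoothing, Doc2Vec hyperparameters) are assumptions.
from collections import Counter

import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from scipy.spatial.distance import cosine
from scipy.stats import entropy


def tokenize(corpus):
    """Naive whitespace tokenisation over a list of sentences."""
    return [tok.lower() for sent in corpus for tok in sent.split()]


def lexical_distance(source, target):
    """Assumed form: 1 minus the Jaccard overlap of the two vocabularies."""
    vs, vt = set(tokenize(source)), set(tokenize(target))
    return 1.0 - len(vs & vt) / len(vs | vt)


def doc2vec_cosine_distance(source, target):
    """Train a small Doc2Vec model on both corpora, then compare inferred corpus vectors."""
    docs = [TaggedDocument(tokenize([s]), [i]) for i, s in enumerate(source + target)]
    model = Doc2Vec(docs, vector_size=100, min_count=2, epochs=20)
    return cosine(model.infer_vector(tokenize(source)),
                  model.infer_vector(tokenize(target)))


def kl_divergence(source, target, alpha=1e-6):
    """KL divergence between smoothed unigram distributions; direction is an assumption."""
    src_counts, tgt_counts = Counter(tokenize(source)), Counter(tokenize(target))
    vocab = sorted(set(src_counts) | set(tgt_counts))
    p = np.array([tgt_counts[w] + alpha for w in vocab], dtype=float)
    q = np.array([src_counts[w] + alpha for w in vocab], dtype=float)
    return entropy(p / p.sum(), q / q.sum())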
", "num": null, "type_str": "table", "text": "NLI accuracy scores for BERT model trained on one dataset transported to a different dataset" }, "TABREF6": { "html": null, "content": "", "num": null, "type_str": "table", "text": "Domain similarity scores between the training corpus (CoNLL-2003) across other NER datasets" }, "TABREF7": { "html": null, "content": "
Dataset            Measurement     SNLI                     MultiNLI         SciTail
                                   Train   Dev     Test     Train   Dev      Train   Dev     Test
SNLI (Train)       Lexical         0.000   0.003   0.003    0.086   0.088    0.136   0.115   0.119
                   Cosine          0.000   0.002   0.002    0.008   0.007    0.233   0.242   0.242
                   KL Divergence   0.000   3.277   4.283    6.489   8.982    16.02   17.50   18.20
MultiNLI (Train)   Lexical         0.008   0.008   0.008    0.000   0.008    0.063   0.063   0.047
                   Cosine          0.008   0.018   0.016    0.000   0.002    0.298   0.307   0.306
                   KL Divergence   11.07   7.613   6.333    0.000   3.342    33.10   35.27   34.69
SciTail (Train)    Lexical         0.282   0.282   0.282    0.277   0.278    0.000   0.028   0.025
                   Cosine          0.233   0.230   0.231    0.262   0.298    0.000   0.001   0.002
                   KL Divergence   11.17   7.040   7.492    5.220   6.682    0.000   1.097   1.424
", "num": null, "type_str": "table", "text": "\u03c4 p and \u03c4 var as complementary to traditional measures. We are not breaking new ground in terms of evaluation methodology, but the experiments demonstrate that traditional F1 and accuracy measures do not capture a complete picture. Transportability measure are not only simple enough to calculate and convey but also evaluates a model .003 0.003 0.086 0.088 0.136 0.115 0.119 Cosine 0.000 0.002 0.002 0.008 0.007 0.233 0.242 0.242 KL Divergence 0.000 3.277 4.283 6.489 8.982 16.02 17.50 18.20" }, "TABREF8": { "html": null, "content": "
[Figure 4: NLI accuracy score plotted against different measures of corpus similarity. (a) NLI accuracy vs. Doc2Vec cosine distance from the source corpus; (b) NLI accuracy vs. KL divergence from the training corpus. Series: SNLI, MultiNLI, SciTail; y-axes: Accuracy; x-axes: Cosine Distance (\u00d710\u22122) and KL Divergence.]
                    Stanford   SpaCy    ELMo
\u03c4 p (wiki)          0.671      0.524    0.794
\u03c4 p (wnut)          0.514      0.287    0.477
\u03c4 p (wnut & wiki)   0.553      0.346    0.556
\u03c4 var               15.051     35.171   32.666
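The measures in the table above are straightforward to compute once per-corpus scores are available. The paper's formal definitions of \u03c4 p and \u03c4 var appear earlier in the document; the sketch below assumes one consistent reading, with \u03c4 p as mean target-domain performance relative to source-domain performance and \u03c4 var as the variance of a model's scores across corpora. All scores in the example are hypothetical.

# Assumed reading of the transportability measures; these formulas are
# illustrative, not the paper's definitions.
import numpy as np

def tau_p(source_score, target_scores):
    """Mean target-domain performance relative to source-domain performance (assumption)."""
    return float(np.mean(target_scores) / source_score)

def tau_var(scores_across_corpora):
    """Variance of a model's performance across all evaluated corpora (assumption)."""
    return float(np.var(scores_across_corpora))

# Hypothetical F1 scores for one NER model on its source test set and two targets:
source_f1 = 90.0
target_f1 = {"wiki": 71.0, "wnut": 44.0}

print(tau_p(source_f1, [target_f1["wiki"]]))             # tau_p (wiki)
print(tau_p(source_f1, list(target_f1.values())))        # tau_p (wnut & wiki)
print(tau_var([source_f1] + list(target_f1.values())))   # tau_var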
", "num": null, "type_str": "table", "text": "Domain similarity scores between the source training corpus and target corpora" }, "TABREF9": { "html": null, "content": "", "num": null, "type_str": "table", "text": "Transportability measures for NER models with regards to generalisability and robustness.Low cost ways of anticipating performance for a new task or domain. Most of the state of the art models are computationally expensive. With the transportability and domain similarity measures we are able to predict performance in a new domain with reasonable accuracy. These similarity measures are relatively simpler to run." }, "TABREF11": { "html": null, "content": "
", "num": null, "type_str": "table", "text": "Transportability measures for NLI corpora" } } } }