Columns:
- id: string (length 7–12)
- sentence1: string (length 6–1.27k)
- sentence2: string (length 6–926)
- label: string (4 classes)
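The rows that follow fit this schema: an example id, a pair of sentences drawn from papers, and a label (every row visible in this excerpt is labeled neutral; the remaining three label classes are not shown here). Below is a minimal sketch, assuming the Hugging Face `datasets` library, of how rows with this schema can be represented and inspected; it builds an in-memory Dataset from two rows copied from this preview, since the dataset's actual Hub identifier is not stated in this excerpt.

```python
# Minimal sketch, assuming the Hugging Face `datasets` library is available.
# The two rows are copied from the preview below; the dataset's real name and
# its other three label classes are not shown here, so this only illustrates
# the column layout, not the full dataset.
from datasets import Dataset

rows = {
    "id": ["train_101300", "train_101301"],
    "sentence1": [
        "(2015) for the basic encoder-decoder architecture.",
        "We conducted 2 experiments: (a) train on ar-(pl,sk,sl) pairs and "
        "test on ar-cs pair, and (b) train on en-(bn,kn,ta) pairs and test "
        "on en-hi pair.",
    ],
    "sentence2": [
        "we describe our experimental setup.",
        "larger target side data does not explain the improvement in "
        "transliteration accuracy due to multilingual transliteration.",
    ],
    "label": ["neutral", "neutral"],
}

ds = Dataset.from_dict(rows)
print(ds.features)                  # column names and value types
print(ds[0]["id"], ds[0]["label"])  # first example's id and label
```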
train_101300
(2015) for the basic encoder-decoder architecture.
we describe our experimental setup.
neutral
train_101301
We conducted 2 experiments: (a) train on ar-(pl,sk,sl) pairs and test on ar-cs pair, and (b) train on en-(bn,kn,ta) pairs and test on en-hi pair.
larger target side data does not explain the improvement in transliteration accuracy due to multilingual transliteration.
neutral
train_101302
We find that the proposed dataset meets the desiderata we set out in Section 3.1.
7 In this section, we show that NarrativeQA presents a challenging problem for current approaches to reading comprehension by evaluating several baselines based on information retrieval (IR) techniques and neural models.
neutral
train_101303
Cognates are words in two languages that share both a similar meaning and a similar form.
one of the frequent English synonym alternatives {approval, acceptance} would better fit this context.
neutral
train_101304
Bethard and Parker (2016) proposed an alternate scheme, Semantically Compositional Annotation of Time Expressions (SCATE), in which times are annotated as compositional time entities (Figure 1), and suggested that this should be more amenable to machine learning.
for example, next week refers to the week following the DCT, and in such a case the value of the property INTERVAL-TYPE for the operator NEXT would be DOCTIME.
neutral
train_101305
and SVM ling + GloVe are statistically significant, with p = .029 for Pearson's r and p < .01 for both Spearman's ρ and Kendall's τ.
the variational inference algorithm begins by initializing the parameters G,f , C, a and b at random.
neutral
train_101306
To test this hypothesis, we simulate an active learning scenario, in which an agent iteratively learns a model for each fold.
we found that mean GloVe embeddings produced substantially better performance in all tests.
neutral
train_101307
Note that the attention mechanism is not shown here for clarity.
we also experimented earlier with a discrete version of LFT, where we treated responses in the [0.8, 1.0] range as polite, [0.2, 0.8] as neutral, and [0.0, 0.2] as rude.
neutral
train_101308
2013, e.g., positive ones such as gratitude, deference, greeting, positive lexicon, indirection, indicative modal, and negative ones such as negative lexicon, direct question, direct start, 2nd person start.
it is worth noting that in the last example, while LFT and Polite-RL seem to provide a relevant compliment, they are actually complimenting the wrong person.
neutral
train_101309
0.457 without any novel features 0.474 Abu-Jbara et al.
authors often use multiple sentences to indicate a citation's purpose (Abu-Jbara and Radev, 2012; Ritchie et al., 2008; He et al., 2011; Kataria et al., 2011).
neutral
train_101310
BiRNN-CRF is adopted as the fundamental segmentation framework that is complemented by an attention-based sequence-to-sequence transducer for non-segmental multiword tokens.
in addition to multiword tokens, the UD scheme also allows multitoken words, that is, words consisting of multiple tokens, such as numerical expressions like 20 000.
neutral
train_101311
For these languages, we build dictionaries from the training data to look up the multiword tokens.
(2014) present a data-driven word segmentation system for Arabic based on a sequence labelling framework.
neutral
train_101312
In terms of generation techniques that capture semantics, the sentence variational autoencoder (SVAE) (Bowman et al., 2016) is closest to our work in that it attempts to impose semantic structure on a latent vector space.
this was some of the best <norp> food i've had in the <gpe>.
neutral
train_101313
This task is substantially simpler, since the goal is to identify a single word (such as "good:better::bad:?")
a sample from q is simply a perturbed version of f : obtained by adding von-Mises Fisher (vMF) noise, and we perturb the magnitude of f by adding uniform noise.
neutral
train_101314
We thank all editors and reviewers for their helpful feedback and suggestions.
for the categorical variables we compare the mean values per category with the numerical dependent variable.
neutral
train_101315
Traditionally, languages have been grouped into the four 1 Chinese, Japanese, and Thai are sourced from Wikipedia and processed with the Polyglot tokeniser since we found their preprocessing in the PW is not adequate for language modeling.
it is not straightforward how to advance these ideas to the output side of the model, as this second set of word-specific parameters is directly responsible for the next-word prediction: it has to encode a much wider range of information, such as topical and semantic knowledge about words, which cannot be easily obtained from its characters alone (Jozefowicz et al., 2016).
neutral
train_101316
Empirically we find that it leads to similar results in testing our theory (Section 4) and better results in downstream WSI applications (Section 6).
, A m there will be directions corresponding to clothing, sports matches, etc., that will have high inner products with tie1, tie2, etc., respectively.
neutral
train_101317
It has the advantage that it is easy to measure and is widely used as a criteria for model fit, but the limitation that it is not directly matched to most tasks that language models are directly used for.
results are reported for contextconditioned perplexity and generative model text classification accuracy, using contexts that capture a range of phenomena and dimensionalities.
neutral
train_101318
Consider a sequence X for which we want to calculate its probability.
this corpus is limited by being relatively small, only containing approximately 45,000 sentences, which we found to be insufficient to effectively train lattice language models.
neutral
train_101319
We are exploring techniques to store fixed embeddings dynamically, so that the non-compositional phrases can be selected as part of the end-to-end training.
compositional Representation: the non-compositional embeddings above only account for a subset of all n-grams, so we additionally construct compositional embeddings for each chunk by running a BiLSTM encoder over the individual embeddings of each unit-level token within it (Dyer et al., 2016).
neutral
train_101320
Then, define the literal semantics of a single-word message x to be x def = {s : s ∧ v x = 0}, where ∧ denotes element-wise and and 0 denotes the zero matrix.
in comparison to these works, we seek to predict human behavior instead of modeling artificial agents that communicate with each other.
neutral
train_101321
It is notable that their method has been incorporated into Thomson Reuters Eikon TM , their commercial datato-text NLG software product for macro-economic indicators and mergers-and-acquisitions deals .
this paper focuses on the problem of choosing appropriate verbs to express the direction and magnitude of a percentage change (e.g., in stock prices).
neutral
train_101322
The major problem with the Neural Network baseline is that, similar to the probabilistic model without smoothing, its verb choices would concentrate on the most frequent ones and thus have very poor diversity.
random sampling would lead to a much wider variety of verbs.
neutral
train_101323
Although existing corpora have promoted research into coreference resolution, they suffer from gender bias.
parallelism+URL tests the page-context setting; all others test the snippet-context setting.
neutral
train_101324
We choose Wikipedia as our base dataset given its wide use in natural language understanding tools, but are mindful of its well-known gender biases.
we saw systematic differences between genders in analysis; this is consistent with many studies that have called out differences in how men and women are discussed publicly.
neutral
train_101325
The PMB, too, is constructed using an existing semantic parser, but a part of it is completely manually checked and corrected (i.e., gold standard).
for both models the f-score clearly still improves when using more training instances, which shows that there is at least the potential for additional data to improve the score.
neutral
train_101326
Submission batch: 7/2018; Revision batch: 9/2018; Published 12/2018.
for (a) Standard naming $1 REF @1 $1 male "n.02" @1 $1 Name @1 "tom" $2 REF @2 $2 EQU @2 "now" $2 time "n.08" @2 $0 NOT $3 $3 REF @3 $3 Time @3 @2 $3 Experiencer @3 @1 $3 afraid "a.01" @3 $3 Stimulus @3 @4 $3 REF @4 $3 entity "n.01" @4 Figure 1.
neutral
train_101327
In this model, we directly use the sense selected for each token by one of the WSD systems above, and use the embeddings of the respective sense as generated by NMT after training.
it appeared that learning sense embeddings for NMT is better than using embeddings learned separately by other methods, although such embeddings may be useful for initialization.
neutral
train_101328
† indicates that a neural model's performance was found to be significantly different (p < 0.05) from the MGL.
4.1 Follow-ups to Rumelhart and McClelland (1986) Over the Years: Following R&M, a cottage industry devoted to cognitively plausible connectionist models of inflection learning sprouted in the linguistics and cognitive science literature.
neutral
train_101329
A total of 168 of the 4,039 verb types were marked as irregular.
nLP as a discipline has a distinct practical bent and more often concerns itself with the large-scale engineering applications of language technologies.
neutral
train_101330
In brief, the main sources of improvement are twofold: Synthetic languages.
t D is the directionalities typology that was studied by Liu (2010) and used as a training target by Wang and Eisner (2017).
neutral
train_101331
We will choose our universal parameter values by minimizing an estimate of their expected loss, where L train is a collection of training languages (ideally drawn IID from the distribution D of possible human languages) for which some syntactic information is available.
some other work on generalizing from source to target languages assumes the availability of source-target parallel data, or bitext.
neutral
train_101332
It is important to relax the delexicalized assumption.
their method only found global typological information: it did not establish which 70% of the direct objects fell to the right of their verbs, let alone identify which nouns were in fact direct objects of which verbs.
neutral
train_101333
Experiments on sentence modeling with zero-context (sentiment analysis), singlecontext (textual entailment) and multiplecontext (claim verification) demonstrate the effectiveness of ATTCONV in sentence representation learning with the incorporation of context.
we now describe light and advanced versions of ATTCONV.
neutral
train_101334
We show improvements by adding knowledge from our learned entailments without changing the graphs or tuning them to this task in any way.
to our method, these works rely on supervised data and take a local learning approach.
neutral
train_101335
(2013) develop a small corpus of Korean particle errors and build a classifier to perform error detection.
system combination, model ensembles, and adding a spell checker boost these numbers by 4 to 6 points (Chollampatt and Ng, 2018;.
neutral
train_101336
This corpus was used in two editions of the QALB shared task Rozovskaya et al., 2015).
the classification system for Russian obtains a much lower score of 21.0.
neutral
train_101337
On a different but related note, our preliminary experiments on English and Hebrew show that the Arc ZEager variant always outperforms Arc Eager.
let J_p, J_g be the form-based (rather than index-based) arcs of the predicted and gold representations of x.
neutral
train_101338
In this work we empirically confirm this hypothesis for Modern Hebrew, an MRL with complex morphology and severe wordlevel ambiguity, in a novel transition-based framework.
for example, in the phrase w+b+silwp ewbdwt (''and in distortion of facts''), the conjunction marker w is labeled comp in gold while the parser correctly picks cc.
neutral
train_101339
This survey attempted to review and summarize as much of the current research as possible, while organizing it along several prominent themes.
evaluation of attacks is fairly tricky.
neutral
train_101340
We argue that an important consequence of the simplicity of unlexicalized systems is that their derivations are easier to learn.
we converted the Negra corpus to labeled dependency trees with the DEPSY tool 3 in order to annotate each token with a dependency label.
neutral
train_101341
The BASE templates form a minimal set that extracts Configuration: 7 indexes from a configuration, relying only on constituent boundaries.
unlexicalized parsing models renounce modeling bilexical statistics, based on the assumptions that they are too sparse to be estimated reliably.
neutral
train_101342
Contrary to previously discussed phenomena, there is usually no lexical trigger (wh-word, speech verb) that makes these discontinuities easy to spot.
they only experiment with a linear classifier, and assume access to gold part-of-speech (POS) tags for most of their experiments.
neutral
train_101343
• Extensive experiments on NIST Chinese-English, WMT14 English-German and WMT18 Russian-English translation tasks demonstrate that our SB-NMT model obtains significant improvements over the strong Transformer model by 3.92, 1.49, and 1.04 BLEU points, respectively.
our proposed model alleviates the under-translation problems by exploiting the combination of left-to-right and right-to-left decoding directions, reducing 30.6% of under-translation errors.
neutral
train_101344
For an embedding x in the source language, its transformation to the target language space is given by W_ts x.
the detailed experimental settings for this BLI task can be found in Conneau et al.
neutral
train_101345
We learn the matrices B, U_s, and U_t corresponding to the transformations φ(·), ψ_s(·), and ψ_t(·), respectively.
this requires an n-way dictionary to represent n languages.
neutral
train_101346
Then, we turn on coverage training, as advised by See et al.
we let the RNN generate the special vectors at time step t (i) by linearly embedding the RNN input x_t ∈ R^{N_x} to an embedded input ε_t ∈ R^{N_h}, and (ii) by obtaining a target memory τ_t as a linear combination of the current input x_t (projected in the hidden space) and the previous history h_{t−1} (after a linear transformation).
neutral
train_101347
The novelty of our model is that it exploits the simple and fundamental multiplicative closure of rotations to generate rotational associative memory for RNNs.
the RNN is fed a sequence of letter-digit pairs followed by the separation indicator ''??''
neutral
train_101348
Several studies have focused on learning joint input-label representations grounded to word semantics for unseen label prediction for images Socher et al., 2013;Norouzi et al., 2014;Zhang et al., 2016;Fu et al., 2018), called zero-shot classification.
(2015) proposed hierarchical recurrent neural networks and showed that they were superior to CNN-based models.
neutral
train_101349
One potential solution is the use of implicit definitions (Rogers, 1997).
conversely, x's corresponding output position is labeled H if its predecessor is an H or if x is the last position and is an H. In this way, the ''shifting'' behavior is captured by determining the output label of each position via the predecessor of its corresponding input position.
neutral
train_101350
Let |w| indicate the length of w ∈ Σ * .
since members of the same tier are never associated to each other, A(t 1 , t 2 ) will always evaluate to False.
neutral
train_101351
2 Using this notation, the examples from (1) are as below in Example (2).
we then examine a variety of commonly attested tone patterns that are both ISL and not ISL (i.e., local and non-local in terms of strings) to see if they are A-ISL (i.e., local over ARs).
neutral
train_101352
Thus (λ, H∅) ∈ tails_f(H∅^n).
for any k, and for any n ≥ k, the strings H∅^n and ∅^n have the same (k−1)-suffix: ∅^{k−1}.
neutral
train_101353
Then let τ be defined as follows, where True is any unary predicate that is always true for models in S. In this transduction, ϕ_{P_b}(x) is only true for input positions labeled b that do not follow another b; conversely, ϕ_{P_c}(x) is only true for input positions labeled b that do follow another b.
this allows us to use the usual conventions from strings: For an AR w, w^n represents the AR consisting of n repetitions of w; w^0 is the empty AR whose tiers are both λ and thus has no association lines.
neutral
train_101354
Memory Lookup: The memory lookup looks up scratch memory with a given probe, say x (of arbitrary dimension), and retrieves the memory entry having closest key embedding to x.
this, to our knowledge, is the first NPI system to be trained with only the gold answer as (very distant) supervision for inducing such complex programs.
neutral
train_101355
Avg./Max. # of questions per dialogue: 1.6 / 10
concurrently, there is a growing interest in conversational reading comprehension such as CoQA (Reddy et al., 2018).
neutral
train_101356
Results show that GBDT++ is superior to the fine-tuned language model on the questions under the category matching (68.1% vs. 57.0%) and the latter model is more capable of answering implicit questions (e.g., under the category summary, logic, and commonsense) which require aggregation of information from multiple sentences, the understanding of the entire dialogue, or the utilization of world knowledge.
we further define four subcategories as follows.
neutral
train_101357
Besides surface matching, a significant portion of questions require multiple-sentence reasoning and external knowledge (Richardson et al., 2013;Mostafazadeh et al., 2016;Khashabi et al., 2018;Ostermann et al., 2018).
we map three most common speaker abbreviations (i.e., ''M''; ''W'' and ''F'') that appear in dialogues to their eight most common corresponding mentions (i.e., ''man,'' ''boy,'' ''he,'' and ''his''; ''woman,'' ''girl,'' ''she,'' and ''her'') in questions.
neutral
train_101358
Our neural sequence-to-sequence models utilize an encoder-decoder setup (Cho et al., 2014;Sutskever et al., 2014) with an attention mechanism (Bahdanau et al., 2015).
our ramp loss objectives can be formulated as follows: where y − is a fear output that is to be discouraged and y + is a hope output that is to be encouraged.
neutral
train_101359
Table 2 shows the average C v scores over the produced topics given N = 5 and N = 10.
our model captures topic and discourse representations embedded in conversations.
neutral
train_101360
To further understand why our model learns meaningful representations for topics and discourse, we present a case study based on the example conversation shown in Figure 1.
this shows that with sufficiently large training data, using the pre-trained word embeddings or not does not make any difference in the topic coherence results.
neutral
train_101361
We further analyze the errors in our outputs.
future work should involve a better alternative to evaluate the latent discourse without Figure 5: Visualization of the topic-discourse assignment of a Twitter conversation from TWT16.
neutral
train_101362
All these non-neural models train language models on the whole Gigaword corpus.
when we set the number to 18, GCN+RC+LA achieves a BLEU score of 19.4, which is significantly worse than the BLEU score obtained by DCGCN2 (23.3).
neutral
train_101363
(2018) show that vanilla residual connections proposed by He et al.
the BLEU margins between DCGCN models and their best GCN models are 2.0, 2.7, 2.7, and 3.4, respectively.
neutral
train_101364
Out-of-domain data are often much easier to obtain, and we can therefore conclude that the proposed approach is preferable in many practically relevant situations.
this causes equation 14 to be replaced; this extension moves the model closer to the basic two-stage model, and the inclusion of the context vectors and the block drop-out operation on the hidden decoder states ensures that the second stage decoder does not rely too strongly on the first stage outputs.
neutral
train_101365
Including an auxiliary ASR task is straightforward with the two-stage model by simply computing the cross-entropy loss with respect to the softmax output of the first stage, and dropping the second stage.
as a remedy, we propose the use of end-to-end trainable models with two attention mechanisms, the first establishing source speech to source text alignments, the second modeling source to target text alignment.
neutral
train_101366
We did not find it beneficial to apply a scaling factor when adding this loss to the main cross-entropy loss in our experiments.
while a cascade would use the source-text outputs of the first stage as inputs into the second stage, in this model the second stage directly computes attentional context vectors over the decoder states of the first stage.
neutral
train_101367
Whereas the regular English verb paradigm has 4-5 slots in our annotation, the Archi verb will have thousands (Kibrik, 1998).
concretely, we show that the more unique forms an inflectional paradigm has, the more predictable the forms must be from one another-for example, forms in a predictable paradigm might all be related by a simple change of suffix.
neutral
train_101368
We reiterate that no other output has positive probability under their model, for example, swapping -a for -es or ablaut of a stem vowel.
in the implementation, we actually decrement the weight of every edge σ → σ (including when σ = empty) by the weight of empty → σ.
neutral
train_101369
The number of distinct inflected forms of each word indicates the number of morphosyntactic distinctions that the language makes on the surface.
the only true assumption we make of morphology is mild: We assume it is Turing-computable.
neutral
train_101370
Long-range dependencies often pose problems for SRL models; in fact, special networks like GCN and PathLSTM (Roth and Lapata, 2016) have been proposed to explicitly percolate information from each word in the sentence to its syntactic neighbors.
the semantic role predictor consists of: • a word representation component that encapsulates predicate-specific dependency information; • a J-layer BiLSTM encoder that takes as input the representation of each word in a sentence; • a classifier that takes as input the BiLSTM representations of predicates and their arguments and assigns semantic roles to the latter.
neutral
train_101371
We omit all these results from Figure 4: Edit distance per slot (which we call average edit distance, or AED) for each of the 5 corpora.
consider generating a sentence from a syntactic grammar as follows: Hail the king [, Arthur Pendragon ,] [, who wields [ " Excalibur " ] ,] .
neutral
train_101372
The above example 18 shows how our model handles commas in conjunctions of 2 or more phrases.
the possible ū values are characterized not by a lattice but by a cyclic WFSA (as |u_i| is unbounded whenever |x_i| > 0).
neutral
train_101373
In our future work, we are interested in relaxing these assumptions and evaluating our proposed method on actual real-world deployments with real users.
in our proposed method, we also have a classifier C that takes as input the dialog state vector s and makes a decision on whether to use the model to respond to the user or to transfer the dialog to a human agent.
neutral
train_101374
The results in Section 5.2 have shown that the transformations-based generalization strategy used by TransWeight works well across different languages and phrase types.
this shows that treating similar words similarly-and not each word as a semantic island-has a twofold benefit: (i) it leads to good generalization capabilities when the training data are scarce and (ii) gives the model the possibility to accommodate a large number of training examples without increasing the number of parameters.
neutral
train_101375
In our experiments, 100 transformations yielded optimal results for all phrase sets.
in this work, we have trained TransWeight for specific phrase types.
neutral
train_101376
A restricted dictionary makes the evaluation easier because many similar words are excluded.
understanding what the transformations encode requires taking a step back and contemplating again the architecture of the model.
neutral
train_101377
We add sentential contexts from Wikipedia, keeping up to 10 sentences per example.
one future direction is a richer training objective that simultaneously models multiple co-occurrences of the constituent words across different texts, as is commonly done in noun compound interpretation (e.g., Ó Séaghdha and Copestake, 2009; Shwartz and Waterson, 2018; Shwartz and Dagan, 2018).
neutral
train_101378
We set the number of hidden units proportional to the input length (see Section 3.1 for configuration details).
areas of study that were historically heavily dependent on feature engineering, including sentiment analysis or image processing (Manjunath and Ma, 1996; Abbasi et al., 2008), have made their way towards alternatives that do not involve manually developing features, and instead favor deep learning (Wang et al., 2016).
neutral
train_101379
For instance, syntax paraphrase networks (Iyyer et al., 2018) applied to Quizbowl only yield valid paraphrases 3% of the time (Appendix A).
our examples go beyond surface level ''breaks'' such as character noise (Belinkov and Bisk, 2018) or syntax changes (Iyyer et al., 2018).
neutral
train_101380
Pronominal one: The word one is a very common NFH anchor (61% of the occurrences in our corpus), and can be used either as a number (viii) or as a pronoun (xiii).
these rules look for common cases that are often not captured by the parser.
neutral
train_101381
The sense anaphora phenomena also cover numerals, and significantly overlap with many of our NFH cases.
(2005) perform both tasks, but restrict themselves to one anaphora cases and their nounphrase antecedents.
neutral
train_101382
Each row represents an example from the data.
we examined monologues (TED talks; Cettolo et al., 2012), Wikipedia (WikiText-2 and WikiText-103; Merity et al., 2016), journalistic text (PTB; Marcus et al., 1993), and product reviews (Amazon reviews) in which we found that more than 35.5%, 33.2%, 32.9%, 22.2%, and 37.5% of the numbers, respectively, are NFHs.
neutral
train_101383
For consistency, we consider the pronominal usages to be NFH, with the implicit head PEOPLE.
16 the coreference-centric architecture had to be adapted to the particularities of the NFH task.
neutral
train_101384
Elo provides a baseline; rather than opaque scores associated with individual users at different times, we seek to explain the probability of winning through linguistic features of past debate content.
this implies that our model generalizes relatively well to most debaters regardless of their experience.
neutral
train_101385
When using only the last debate's features and ignoring all previous debates, our performance is initially very good in the years 2013 and 2014 (when our training sets are smaller and histories shorter).
online debate communities offer an opportunity to investigate these questions.
neutral
train_101386
(2016) analyzed the combination of available reference sets and metrics to identify the best evaluation configuration.
the original Wiki annotations have much lower precision and recall (40.9 and 33.2, respectively).
neutral
train_101387
We refer to the model that either uses dependency or contextual graphs instead of random graphs as GCN-SeA+Structure.
we also performed ablations by removing specific parts of the encoder.
neutral
train_101388
(2017) developed a variant of RNN cell that computes a refined representation of the query over multiple iterations before querying the memory.
we see that Seq2seq+Attn model is not able to suggest a restaurant with a high rating whereas HRED gets the restaurant right but suggests an incorrect price range.
neutral
train_101389
The RNN and GCN hidden dimensions were also chosen to be 300.
as opposed to our work, all these works ignore the underlying structure of the entityentity graph of the KB and the syntactic structure of the utterances.
neutral
train_101390
using morphology, which helps discover the correct syntactic dependencies.
modifies the noun root ''masa'' (table) even though the final part of speech of ''masalı'' is an adjective.
neutral
train_101391
Pappas and Popescu-Belis (2014) adopt a multiple instance regression model to assign sentiment scores to specific product aspects, using a weighted summation of predictions.
this research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract FA8650-17-C-9118.
neutral
train_101392
Although most topic models are unsupervised, some variants can also accommodate documentlevel supervision (Mcauliffe and Blei, 2008;Lacoste-Julien et al., 2009).
aside from automatic evaluation, we also assessed model performance against human elicited domain labels for sentences and words.
neutral
train_101393
Our implementation is based on fairseq, 3 and we make use of its multi-GPU support to train on 16 NVIDIA V100 GPUs with a total batch size of 128,000 tokens.
these sentence embeddings are used to initialize the decoder LSTM through a linear transformation, and are also concatenated to its input embeddings at every time step.
neutral
train_101394
While some proposals use a separate encoder for each language (Schwenk and Douze, 2017), sharing a single encoder for all languages also gives strong results (Schwenk, 2018).
this approach has two obvious drawbacks when trying to scale to a large number of languages.
neutral
train_101395
In addition, the success of these models stems from a large amount of data they used.
we report the main results in Table 2.
neutral
train_101396
Our approach exhibits small GPU memory footprints, due to reductions of the arithmetic complexity (with fewer intermediate results to keep) and trainable parameter size (with fewer parameters to store).
com/uclanlp/ELMO-C. Contextual representation We review contextual representation models from two aspects: how they are trained and how they are used in downstream tasks.
neutral
train_101397
This is because when training the ELMO-C model reported in 4.2, we actually train a forward ELMO-C on two cards and train a backward ELMO-C on two other cards, which reduces the communication cost by half.
a pre-trained word embedding, which projects words with different meanings into different positions in space, is a natural choice for the projection matrix W and can help preserve much of the variance of A.
neutral
train_101398
Prior studies show that the encoders trained in this way can capture generic contextual information of the input text and improve a variety of downstream tasks significantly.
it is rather a waste to allocate extensive computational resources to the softmax layer.
neutral
train_101399
An alternative adopted by Linzen et al.
's models and ours learn systematic generalizations like subject-verb-object order.
neutral