{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:33:50.417787Z" }, "title": "", "authors": [], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper investigates and reveals the relationship between two closely related machine learning disciplines, namely Active Learning (AL) and Curriculum Learning (CL), from the lens of several novel curricula. This paper also introduces Active Curriculum Learning (ACL) which improves AL by combining AL with CL to benefit from the dynamic nature of the AL informativeness concept as well as the human insights used in the design of the curriculum heuristics. Comparison of the performance of ACL and AL on two public datasets for the Named Entity Recognition (NER) task shows the effectiveness of combining AL and CL using our proposed framework.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper investigates and reveals the relationship between two closely related machine learning disciplines, namely Active Learning (AL) and Curriculum Learning (CL), from the lens of several novel curricula. This paper also introduces Active Curriculum Learning (ACL) which improves AL by combining AL with CL to benefit from the dynamic nature of the AL informativeness concept as well as the human insights used in the design of the curriculum heuristics. Comparison of the performance of ACL and AL on two public datasets for the Named Entity Recognition (NER) task shows the effectiveness of combining AL and CL using our proposed framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Modern deep learning architectures predominantly need large amounts of labeled data to achieve high levels of performance. In the presence of a large unlabeled corpus, data points are usually chosen randomly to be annotated. However, annotation can be a costly task and not all the annotations are equally beneficial. Active Learning (AL) aims to reduce the number of annotations required to train a machine learning model by choosing the most \"informative\" unlabeled data for annotation. The informativeness is determined by querying a model or a set of models trained on the available annotated data (Settles 2012) . Algorithm 1 shows AL more formally.", "cite_spans": [ { "start": 602, "end": 616, "text": "(Settles 2012)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several categories of informativeness score have been developed in the literature. For example, uncertainty metrics select unlabeled data for which the model has the highest uncertainty of label prediction (Settles and Craven 2008) . Examples of uncertainty measures for a classification task are the difference of the probability of prediction for the first and second most likely classes (i.e., the margin of the prediction probability) and the entropy of prediction over all classes (i.e., \u2212 \u2211 log =1 where c is the number of classes). Lower values of margin and higher values of entropy metrics are associated with higher uncertainty and consequently informativeness. Some other examples of informativeness scoring methods for unlabeled data are the amount of prediction disagreement in a committee of models (Melville and Mooney 2004) and the amount of expected change to model weights (Zhang, Lease, and Wallace 2017) or loss value (Long et al. 
2014) .", "cite_spans": [ { "start": 206, "end": 231, "text": "(Settles and Craven 2008)", "ref_id": "BIBREF20" }, { "start": 486, "end": 503, "text": "(i.e., \u2212 \u2211 log =1", "ref_id": null }, { "start": 813, "end": 839, "text": "(Melville and Mooney 2004)", "ref_id": "BIBREF13" }, { "start": 891, "end": 923, "text": "(Zhang, Lease, and Wallace 2017)", "ref_id": "BIBREF25" }, { "start": 938, "end": 956, "text": "(Long et al. 2014)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Curriculum Learning (CL), on the other hand, attempts to mimic how humans learn and uses that knowledge to train better models (Bengio et al. 2009; Soviany et al. 2021) . Complex topics are taught to humans based on a curriculum which takes into account the level of difficulty of the material presented to the learner. CL borrows this idea and engages the human experts to design a metric that is used to sort the annotated training data from \"easy\" to \"hard\" to be presented to the model during training (Bengio et al. 2009) . The goal of CL is to find a better local optimum faster compared to randomly presenting the data to the model by smoothing the loss function in early stages of training. CL algorithm is presented in Algorithm 2. CL has been investigated in computer vision (Gui, Baltrusaitis, and Morency 2017) , Natural Language Processing (NLP) (Rao, Anuranjana, and Mamidi 2020) , and speech recognition (Braun, Neil, and Liu 2016) among others (Soviany et al. 2021) . Specifically within NLP, CL has been used on tasks such as question answering (Sachan and Xing 2016) , natural language understanding (Xu et al. 2020) , as well as learning word representations (Tsvetkov et al. 2016) . Different curriculum designs has been investigated by considering heuristics such as sentence length, word frequency, language model score, and parse tree depth (Tsvetkov et al. 2016; Platanios et al. 2019) .", "cite_spans": [ { "start": 127, "end": 147, "text": "(Bengio et al. 2009;", "ref_id": "BIBREF1" }, { "start": 148, "end": 168, "text": "Soviany et al. 2021)", "ref_id": "BIBREF21" }, { "start": 506, "end": 526, "text": "(Bengio et al. 2009)", "ref_id": "BIBREF1" }, { "start": 785, "end": 822, "text": "(Gui, Baltrusaitis, and Morency 2017)", "ref_id": "BIBREF4" }, { "start": 859, "end": 893, "text": "(Rao, Anuranjana, and Mamidi 2020)", "ref_id": "BIBREF17" }, { "start": 919, "end": 946, "text": "(Braun, Neil, and Liu 2016)", "ref_id": "BIBREF3" }, { "start": 960, "end": 981, "text": "(Soviany et al. 2021)", "ref_id": "BIBREF21" }, { "start": 1062, "end": 1084, "text": "(Sachan and Xing 2016)", "ref_id": "BIBREF18" }, { "start": 1118, "end": 1134, "text": "(Xu et al. 2020)", "ref_id": "BIBREF24" }, { "start": 1178, "end": 1200, "text": "(Tsvetkov et al. 2016)", "ref_id": "BIBREF23" }, { "start": 1364, "end": 1386, "text": "(Tsvetkov et al. 2016;", "ref_id": "BIBREF23" }, { "start": 1387, "end": 1409, "text": "Platanios et al. 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Other related approaches such as self-paced learning (SPL) (Kumar, Packer, and Koller 2010) and self-paced curriculum learning (Jiang et al. 2015) have also been proposed to show the efficacy of a designed curriculum which adapts dynamically to the pace at which the learner progresses. 
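For concreteness, the two uncertainty scores described in the Introduction (margin and entropy) can be sketched as follows. This is a minimal illustration assuming a vector of softmax class probabilities from an arbitrary classifier; the function names are ours, not part of the original work:

```python
import numpy as np

def margin_score(probs):
    # Margin: probability of the most likely class minus that of the
    # second most likely class; a lower margin means higher uncertainty.
    top_two = np.sort(probs)[-2:]
    return top_two[1] - top_two[0]

def entropy_score(probs):
    # Entropy over all c classes: -sum_k p_k * log(p_k);
    # higher entropy means higher uncertainty.
    p = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
    return float(-np.sum(p * np.log(p)))

# Example: a 3-class prediction (hypothetical probabilities).
p = np.array([0.5, 0.3, 0.2])
print(margin_score(p), entropy_score(p))
```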
Other attempts at improving an AL strategy include self-paced active learning (Tang and Huang 2019) in which the authors introduce practical techniques to consider informativeness, representativeness, and easiness of samples while querying for labels. Such methods that only focus on designing a curriculum miss, in general, the opportunity to also leverage the ability of the predictive model which progresses as new labeled data becomes available.", "cite_spans": [ { "start": 59, "end": 91, "text": "(Kumar, Packer, and Koller 2010)", "ref_id": "BIBREF9" }, { "start": 127, "end": 146, "text": "(Jiang et al. 2015)", "ref_id": "BIBREF7" }, { "start": 365, "end": 386, "text": "(Tang and Huang 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Active Curriculum Learning", "sec_num": null }, { "text": "The addition of CL injects human expertise into learning manifested in the design of a curriculum. This is in contrast with previous studies that combined AL with SPL (Tang and Huang 2019; Lin et al. 2018) . SPL is inspired by CL but, similarly to AL, relies on querying the model being trained to select instances for labeling.", "cite_spans": [ { "start": 167, "end": 188, "text": "(Tang and Huang 2019;", "ref_id": "BIBREF22" }, { "start": 189, "end": 205, "text": "Lin et al. 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Active Curriculum Learning", "sec_num": null }, { "text": "Our contributions in this paper are twofold: (i) we shed light on the relationship between AL and CL by investigating if AL enforces (or follows) a curriculum. To this end, we monitor and visualize a variety of novel curricula during the AL simulation loop; (ii) We propose a novel method which we call Active Curriculum Learning (ACL). ACL takes advantage of the benefits of both CL (i.e., designing a curriculum for the model to follow) and AL (i.e., choosing samples based on the enhanced ability of the predictive model) at the same time to improve AL. Our preliminary experiments show that the performance of an AL strategy will be improved by deliberately combining AL and CL concepts. This article presents the foundation of this method accompanied by the preliminary results and in our future work we will explore its effectiveness more extensively by implementing more experiments and performing hyper parameter tuning as well as exploring other NLP tasks beyond NER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active Curriculum Learning", "sec_num": null }, { "text": "Other than the most explored curriculum features such as sentence length and word frequency some other curricula for measuring diversity, simplicity, and prototypicality of the samples are proposed in (Tsvetkov et al. 2016) . Our conjecture is that largescale language models and also linguistic features can be used to design NLP curricula. We design seven novel curricula which assign a score to a sentence indicating its level of difficulty for a specific NLP task. Then, to acquire a curriculum, sentences are sorted by their corresponding scores. Other than our 7 novel curricula, we also experiment with the following commonly used curricula: 1. SENT_LEN: Number of words in a sentence. 2. WORD_FREQ: Average of frequency of the words in a sentence (e.g., frequency of the word A is calculated by \u2211 \u2208 where V is the set of the unique vocabulary of the labeled dataset, and is the number of times the word has appeared in the labeled dataset). 
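As a minimal sketch of these two baseline curricula: the relative-frequency normalisation and the easy-to-hard sort direction below are our assumptions, since the extracted formula only specifies the word counts over the vocabulary V of the labeled dataset.

```python
from collections import Counter

def sent_len(sentence):
    # SENT_LEN: number of words in the (tokenised) sentence.
    return len(sentence)

def word_freq(sentence, counts, total):
    # WORD_FREQ: average frequency of the sentence's words, where the
    # frequency of a word is its count in the labeled data divided by the
    # total number of tokens (our reading of the garbled formula).
    return sum(counts[w] / total for w in sentence) / len(sentence)

# Hypothetical labeled corpus of tokenised sentences.
labeled = [["John", "lives", "in", "Paris"], ["Paris", "is", "a", "large", "city"]]
counts = Counter(w for s in labeled for w in s)
total = sum(counts.values())

# A curriculum orders sentences by their score; here we present sentences
# containing more frequent (presumably easier) words first.
curriculum = sorted(labeled, key=lambda s: word_freq(s, counts, total), reverse=True)
```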
Our seven novel curricula are as follows: 1. PARSE_CHILD: Average of the number of children of words in the sentence parse tree. 2. GPT_SCORE: Sentence score according to the GPT2 language model (Radford et al. 2019 ) calculated as follows: \u2211 log( ( )) where ( ) is the probability of k th word of the sentence according to the GPT2 model. 3. LL_LOSS: Average loss of the words in a sentence from the Longformer language model (Beltagy, Peters, and Cohan 2020) For the following four novel curricula, we use the spaCy library (Honnibal and Montani 2017) to replace a word in a sentence with one of its linguistic features. The curriculum value for a sentence is then calculated exactly in the same way as word frequency but with one of the linguistic features instead of the word itself: 4. POS: Simple universal part-of-speech tag such as PROPN, AUX or VERB. 5. TAG: Detailed part-of-speech tag such as NNP, VBZ, VBG. 6. SHAPE: Shape of the word. For example, shapes of \"Apple\" and \"12a.\" are \"Xxxxx\" and \"ddx.\" respectively. 7. DEP: Syntactic relation connecting the word to its parent in the dependency parse tree of the sentence (e.g., amod, and compound).", "cite_spans": [ { "start": 201, "end": 223, "text": "(Tsvetkov et al. 2016)", "ref_id": "BIBREF23" }, { "start": 1144, "end": 1164, "text": "(Radford et al. 2019", "ref_id": "BIBREF16" }, { "start": 1475, "end": 1502, "text": "(Honnibal and Montani 2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Novel Curricula", "sec_num": "2" }, { "text": "We set out to answer the following question: what is the relationship between AL and CL from the lens of the nine curricula? To answer this question, we simulate two AL strategies as well as random strategy and monitor the curriculum metrics on the most informative samples (from the unlabeled data) chosen for annotation by each sampling strategy and compare them. We use the following two informativeness measures for unlabeled sentences in our AL strategies: (i) min-margin: minimum of margin of the prediction probability for the sentence tokens is considered as the AL score for that sentence. Sentences with lower scores are preferred, (ii) max-entropy: maximum of entropy of the prediction probability for the sentence tokens are considered as the AL score for that sentence and sentences with higher scores are preferred.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relationship between AL and CL and the Experimental Setup", "sec_num": "3" }, { "text": "For the experiments, we use a single layer Bi-LSTM model (Lample et al. 2016) with the hidden state size of 768, enhanced with a 2-layer feedforward network in which the number of hidden and output layers' nodes are equal to the number of classes in the dataset. The input to the LSTM model is the word2vec embedding (Mikolov et al. 2013) of sentence words. We use ADAM optimizer (Kingma and Ba 2017) with the batch size of 64 and the learning rate of 5e-4. We experiment with two publicly available English-language NER datasets: OntoNotes5 1 , and CoNLL 2003 2 and use early stopping on the loss of the provided validation sets. Furthermore, we start with 500 randomly selected sentences as the seed data and 1 Available at https://catalog.ldc.upenn.edu/LDC2013T19", "cite_spans": [ { "start": 57, "end": 77, "text": "(Lample et al. 2016)", "ref_id": "BIBREF10" }, { "start": 317, "end": 338, "text": "(Mikolov et al. 
2013)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "The Relationship between AL and CL and the Experimental Setup", "sec_num": "3" }, { "text": "choose 500 sentences to be labeled in each iteration for a total of 15 iterations. Figure 1 illustrates the experimental results of monitoring GPT score during AL loop. This figure clearly shows that GPT score of sentences chosen by max-entropy tends to have lower values (i.e., more complex sentences) and min-margin tends to choose sentences with higher values (i.e., simpler sentences) compared to a random strategy. Similar figures for other curricula reveal peculiarities of the different AL strategies compared to the random strategy and other AL strategies. Due to space limitations, instead of including such figures for different strategies, we calculate the following metric which we call Mean Normalized Difference (MND) to quantify how an AL selection strategy differs from a random strategy in choosing the most informative unlabeled data based on a curriculum. This metric is defined as follows:", "cite_spans": [], "ref_spans": [ { "start": 83, "end": 91, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Relationship between AL and CL and the Experimental Setup", "sec_num": "3" }, { "text": "= \u2211 \u2211 ( ( ))\u2212 ( ( )) \u00d7 =1 =1 (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relationship between AL and CL and the Experimental Setup", "sec_num": "3" }, { "text": "where is the number of iterations where we add newly labeled sentences to the labeled dataset, calculates the value of the curriculum feature for a sentence, and are the \u210e sentence out of chosen for annotation in the \u210e step of the random and active strategies, respectively, ( ) ", "cite_spans": [ { "start": 275, "end": 278, "text": "( )", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Relationship between AL and CL and the Experimental Setup", "sec_num": "3" }, { "text": ": = \u2212 \u2212 , : = min \u2208[1, ] \u2211 ( ) =1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relationship between AL and CL and the Experimental Setup", "sec_num": "3" }, { "text": ", and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relationship between AL and CL and the Experimental Setup", "sec_num": "3" }, { "text": ": = max \u2208[1, ] \u2211 ( ) =1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Relationship between AL and CL and the Experimental Setup", "sec_num": "3" }, { "text": ". In theory, the MND score can take any value. If the MND score of an AL strategy for a curriculum is close to zero, it means the curriculum values ( ) of the data chosen for 2 Available at https://www.clips.uantwerpen.be/conll2003/ner/ Figure 1 : Comparison of the mean of GPT score of sentences added to training data in each iteration between random, min-margin and max-entropy AL strategies for the CoNLL dataset (average of 3 runs).", "cite_spans": [], "ref_spans": [ { "start": 237, "end": 245, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Relationship between AL and CL and the Experimental Setup", "sec_num": "3" }, { "text": "annotation are close to that of the random strategy. This, however, does not imply that the same unlabeled data is chosen by the two techniques. 
Furthermore, large values of the MND score indicate that AL chooses unlabeled data for annotation that have different curriculum scores compared to the random strategy. Since MND is normalized, we can compare the MND score of any two combinations of AL strategy and curriculum score to compare the degree to which they diverge from random strategy. Experimental Results: Results of the MND scores for different curriculum features on the two experimental datasets are reported in Table 1 . In most of these experiments, we observe that there is a difference between how random strategy and AL choose unlabeled dataset from the lens of MND as if AL is mimicking curriculum learning. We also observe that not all AL strategies consistently have the same MND sign for a curriculum on OntoNotes5 and CoNLL 2003 datasets but a noticeable divergence from the random strategy is evident. Table 1 also shows that the largest difference between active and random strategies in following curricula in our experiments is DEP/Min-Margin combination and the smallest difference between them is POS/Max-Entropy combination, both for OntoNotes5 dataset.", "cite_spans": [], "ref_spans": [ { "start": 625, "end": 632, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 1026, "end": 1033, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "The Relationship between AL and CL and the Experimental Setup", "sec_num": "3" }, { "text": "To improve the performance of the AL strategies, we introduce a simple yet effective method leveraging both advantages of AL and CL which we call Active Curriculum Learning (ACL). The goal of this proposed method is to benefit from the dynamic nature of AL data selection metric while utilizing experts' knowledge in designing a fixed curriculum. To this end, in each step of the ACL loop, we use the following linear combination of the AL and CL scores to choose the most informative unlabeled data:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active Curriculum Learning (ACL)", "sec_num": "4" }, { "text": "( , ): = ( ) max \u2208 | ( )| + ( , ) max \u2208 | ( , )| (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active Curriculum Learning (ACL)", "sec_num": "4" }, { "text": "where is the set of unlabeled sentences in step of the ACL loop, and are the two parameters that control the combination of AL and CL scores, ( , ) is the AL score (i.e., informativeness) of sentence according to the predictive model trained on at step . The overall steps of the ACL algorithm are presented in Algorithm 3. Similar to the AL algorithm, the min-margin based strategy favors sentences with lower for annotation and the opposite is true for the max-entropy based approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active Curriculum Learning (ACL)", "sec_num": "4" }, { "text": "Experimental Results: We use the training setup of section 3 and perform token classification on CoNLL 2003 and OntoNotes5 datasets using the ACL algorithm. To evaluate the performance of ACL, for each AL metric and dataset combination, we run 18 ACL experiments where = 1 , = 0.5 or = \u22120.5 for the 9 curricula, and also one AL experiment where = 1 and = 0. 
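For concreteness, a sketch of the combined score in Eq. (2) as we read it from the garbled formula, with each term scaled by the maximum absolute score over the unlabeled pool; the array and function names are ours:

```python
import numpy as np

def acl_scores(al_scores, cl_scores, alpha, beta):
    # al_scores[k]: AL informativeness of the k-th unlabeled sentence at the
    # current step, queried from the model trained on the labeled data so far.
    # cl_scores[k]: the fixed curriculum score of the same sentence.
    al = np.asarray(al_scores, dtype=float)
    cl = np.asarray(cl_scores, dtype=float)
    # Scale each term by its maximum absolute value over the pool, then mix
    # with weights alpha (AL) and beta (CL); alpha=1, beta=0 recovers plain AL.
    return alpha * al / np.max(np.abs(al)) + beta * cl / np.max(np.abs(cl))
```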
Since the main focus of this article is to demonstrate if the introduction of a curriculum adds value to the performance of the active strategies, we select these hyper parameters in such a way that the effects of the active strategies are still dominant in the proposed model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Active Curriculum Learning (ACL)", "sec_num": "4" }, { "text": "In each step of the ACL loop, we measure the token-level F1 score (for higher granularity) of the provided test set using the trained model in that step. Table 2 reports the average of F1 scores for the top 5 ACL combinations as well as the active learner (\u03b1 = 1, \u03b2 = 0) across all runs (3) and steps (15). In all of our experiments, the top 5 ACL combinations always outperformed AL for that dataset. In particular our curricula based on deep language models (GPT_SCORE and LL_LOSS) are appearing frequently in Table 2 indicating their utility.", "cite_spans": [], "ref_spans": [ { "start": 154, "end": 161, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 512, "end": 519, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Active Curriculum Learning (ACL)", "sec_num": "4" }, { "text": "To the best of our knowledge, this is the first work to investigate and reveal the relationship between two closely related machine learning techniques namely, AL and CL. We observed that AL in fact follows a curriculum as it progresses through its iterations compared to the random strategy. This is also the first work to take advantage of the benefits of both CL (i.e., designing a curriculum for the model to learn) and AL (i.e., choosing samples based on the improved ability of the predictive model) to improve AL in a unified model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "In our future work, we are interested in understanding in detail how CL helps AL, and exploring model-based techniques of combining AL and CL rather than a fixed set of weights for \u03b1 and \u03b2. Another interesting question to investigate is to conduct similar experiments for other NLP tasks or using multiple curricula together with AL can be beneficial in reducing the annotation cost. We are also interested in investigating our novel curricula on their own in an isolated CL setting. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Longformer: The Long-Document Transformer", "authors": [ { "first": "", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Iz", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Peters", "suffix": "" }, { "first": "", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beltagy, Iz, Matthew E. Peters, and Arman Cohan. 2020. \"Longformer: The Long-Document Transformer.\" ArXiv:2004.05150 [Cs], December. 
http://arxiv.org/abs/2004.05150.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Curriculum Learning", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Jerome", "middle": [], "last": "Louradour", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 26th Annual International Conference on Machine Learning", "volume": "", "issue": "", "pages": "41--48", "other_ids": { "DOI": [ "10.1145/1553374.1553380" ] }, "num": null, "urls": [], "raw_text": "Bengio, Yoshua, Jerome Louradour, Ronan Collobert, and Jason Weston. 2009. \"Curriculum Learning.\" In Proceedings of the 26th Annual International Conference on Machine Learning, 41-48. Https://Doi.Org/10.1145/1553374.1553380.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A Curriculum Learning Method for Improved Noise Robustness in Automatic Speech Recognition", "authors": [ { "first": "Stefan", "middle": [], "last": "Braun", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Neil", "suffix": "" }, { "first": "Shih-Chii", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Braun, Stefan, Daniel Neil, and Shih-Chii Liu. 2016. \"A Curriculum Learning Method for Improved Noise Robustness in Automatic Speech Recognition.\" ArXiv:1606.06864 [Cs], September. http://arxiv.org/abs/1606.06864.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Curriculum Learning for Facial Expression Recognition", "authors": [ { "first": "Liangke", "middle": [], "last": "Gui", "suffix": "" }, { "first": "Tadas", "middle": [], "last": "Baltrusaitis", "suffix": "" }, { "first": "Louis-Philippe", "middle": [], "last": "Morency", "suffix": "" } ], "year": 2017, "venue": "2017 12th IEEE International Conference on Automatic Face & Gesture Recognition", "volume": "", "issue": "", "pages": "505--516", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gui, Liangke, Tadas Baltrusaitis, and Louis-Philippe Morency. 2017. \"Curriculum Learning for Facial Expression Recognition.\" In 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), 505-11.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "SpaCy 2: Natural Language Understanding with Bloom Embeddings", "authors": [ { "first": "Matthew", "middle": [], "last": "Honnibal", "suffix": "" }, { "first": "Ines", "middle": [], "last": "Montani", "suffix": "" } ], "year": 2017, "venue": "Convolutional Neural Networks and Incremental Parsing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Honnibal, Matthew, and Ines Montani. 2017. \"SpaCy 2: Natural Language Understanding with Bloom Embeddings, Convolutional Neural Networks and Incremental Parsing.\"", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Self-Paced Curriculum Learning", "authors": [ { "first": "Lu", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Deyu", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Qian", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Shiguang", "middle": [], "last": "Shan", "suffix": "" }, { "first": "Alexander", "middle": [ "G" ], "last": "Hauptmann", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2694-2700. 
AAAI'15", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiang, Lu, Deyu Meng, Qian Zhao, Shiguang Shan, and Alexander G. Hauptmann. 2015. \"Self-Paced Curriculum Learning.\" In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2694-2700. AAAI'15. Austin, Texas: AAAI Press.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Adam: A Method for Stochastic Optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kingma, Diederik P., and Jimmy Ba. 2017. \"Adam: A Method for Stochastic Optimization.\" ArXiv:1412.6980 [Cs], January. http://arxiv.org/abs/1412.6980.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Self-Paced Learning for Latent Variable Models", "authors": [ { "first": "M", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Packer", "suffix": "" }, { "first": "Daphne", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2010, "venue": "Advances in Neural Information Processing Systems", "volume": "23", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kumar, M., Benjamin Packer, and Daphne Koller. 2010. \"Self-Paced Learning for Latent Variable Models.\" In Advances in Neural Information Processing Systems. Vol. 23. Curran Associates, Inc. https://proceedings.neurips.cc/paper/2010/file/e5 7c6b956a6521b28495f2886ca0977a-Paper.pdf.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Neural Architectures for Named Entity Recognition", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "260--70", "other_ids": { "DOI": [ "10.18653/v1/N16-1030" ] }, "num": null, "urls": [], "raw_text": "Lample, Guillaume, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. \"Neural Architectures for Named Entity Recognition.\" In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 260-70. San Diego, California: Association for Computational Linguistics. 
https://doi.org/10.18653/v1/N16- 1030.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Active Self-Paced Learning for Cost-Effective and Progressive Face Identification", "authors": [ { "first": "Liang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Keze", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Deyu", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Wangmeng", "middle": [], "last": "Zuo", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "40", "issue": "1", "pages": "7--19", "other_ids": { "DOI": [ "10.1109/TPAMI.2017.2652459" ] }, "num": null, "urls": [], "raw_text": "Lin, Liang, Keze Wang, Deyu Meng, Wangmeng Zuo, and Lei Zhang. 2018. \"Active Self-Paced Learning for Cost-Effective and Progressive Face Identification.\" IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (1): 7-19. https://doi.org/10.1109/TPAMI.2017.2652459.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Active Learning for Ranking through Expected Loss Optimization", "authors": [ { "first": "Bo", "middle": [], "last": "Long", "suffix": "" }, { "first": "Jiang", "middle": [], "last": "Bian", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Chapelle", "suffix": "" }, { "first": "Ya", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yoshiyuki", "middle": [], "last": "Inagaki", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2014, "venue": "IEEE Transactions on Knowledge and Data Engineering", "volume": "27", "issue": "5", "pages": "1180--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Long, Bo, Jiang Bian, Olivier Chapelle, Ya Zhang, Yoshiyuki Inagaki, and Yi Chang. 2014. \"Active Learning for Ranking through Expected Loss Optimization.\" IEEE Transactions on Knowledge and Data Engineering 27 (5): 1180-91.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Diverse Ensembles for Active Learning", "authors": [ { "first": "Prem", "middle": [], "last": "Melville", "suffix": "" }, { "first": "Raymond", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 2004, "venue": "Twenty-First International Conference on Machine Learning -ICML '04, 74", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/1015330.1015385" ] }, "num": null, "urls": [], "raw_text": "Melville, Prem, and Raymond J. Mooney. 2004. \"Diverse Ensembles for Active Learning.\" In Twenty-First International Conference on Machine Learning -ICML '04, 74. Banff, Alberta, Canada: ACM Press. https://doi.org/10.1145/1015330.1015385.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Efficient Estimation of Word Representations in Vector Space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. 
\"Efficient Estimation of Word Representations in Vector Space.\" ArXiv Preprint ArXiv:1301.3781.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Competence-Based Curriculum Learning for Neural Machine Translation", "authors": [ { "first": "Emmanouil", "middle": [], "last": "Platanios", "suffix": "" }, { "first": "Otilia", "middle": [], "last": "Antonios", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Stretcu", "suffix": "" }, { "first": "Barnabas", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Poczos", "suffix": "" }, { "first": "", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1162--117", "other_ids": { "DOI": [ "10.18653/v1/N19-1119" ] }, "num": null, "urls": [], "raw_text": "Platanios, Emmanouil Antonios, Otilia Stretcu, Graham Neubig, Barnabas Poczos, and Tom Mitchell. 2019. \"Competence-Based Curriculum Learning for Neural Machine Translation.\" In Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 1162-117. Minneapolis, Minnesota: Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1119.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Language Models Are Unsupervised Multitask Learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radford, Alec, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. \"Language Models Are Unsupervised Multitask Learners.\" Ilya (blog). 2019.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A Sentiwordnet Strategy for Curriculum Learning in Sentiment Analysis", "authors": [ { "first": "Vijjini", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Kaveri", "middle": [], "last": "Anvesh", "suffix": "" }, { "first": "Radhika", "middle": [], "last": "Anuranjana", "suffix": "" }, { "first": "", "middle": [], "last": "Mamidi", "suffix": "" } ], "year": 2020, "venue": "Natural Language Processing and Information Systems", "volume": "12089", "issue": "", "pages": "170--78", "other_ids": { "DOI": [ "10.1007/978-3-030-51310-8_16" ] }, "num": null, "urls": [], "raw_text": "Rao, Vijjini Anvesh, Kaveri Anuranjana, and Radhika Mamidi. 2020. \"A Sentiwordnet Strategy for Curriculum Learning in Sentiment Analysis.\" In Natural Language Processing and Information Systems, edited by Elisabeth M\u00e9tais, Farid Meziane, Helmut Horacek, and Philipp Cimiano, 12089:170-78. Lecture Notes in Computer Science. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030- 51310-8_16.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Easy Questions First? 
A Case Study on Curriculum Learning for Question Answering", "authors": [ { "first": "Mrinmaya", "middle": [], "last": "Sachan", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Xing", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "453--63", "other_ids": { "DOI": [ "10.18653/v1/P16-1043" ] }, "num": null, "urls": [], "raw_text": "Sachan, Mrinmaya, and Eric Xing. 2016. \"Easy Questions First? A Case Study on Curriculum Learning for Question Answering.\" In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 453-63. Association for Computational Linguistics. https://doi.org/10.18653/v1/P16-1043.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Active Learning", "authors": [ { "first": "Burr", "middle": [], "last": "Settles", "suffix": "" } ], "year": 2012, "venue": "Synthesis Lectures on Artificial Intelligence and Machine Learning", "volume": "6", "issue": "1", "pages": "1--114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Settles, Burr. 2012. \"Active Learning.\" Synthesis Lectures on Artificial Intelligence and Machine Learning 6 (1): 1-114.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "An Analysis of Active Learning Strategies for Sequence Labeling Tasks", "authors": [ { "first": "Burr", "middle": [], "last": "Settles", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Craven", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing -EMNLP '08, 1070", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.3115/1613715.1613855" ] }, "num": null, "urls": [], "raw_text": "Settles, Burr, and Mark Craven. 2008. \"An Analysis of Active Learning Strategies for Sequence Labeling Tasks.\" In Proceedings of the Conference on Empirical Methods in Natural Language Processing -EMNLP '08, 1070. Honolulu, Hawaii: Association for Computational Linguistics. https://doi.org/10.3115/1613715.1613855.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Curriculum Learning: A Survey", "authors": [ { "first": "Petru", "middle": [], "last": "Soviany", "suffix": "" }, { "first": "Radu", "middle": [ "Tudor" ], "last": "Ionescu", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Rota", "suffix": "" }, { "first": "Nicu", "middle": [], "last": "Sebe", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Soviany, Petru, Radu Tudor Ionescu, Paolo Rota, and Nicu Sebe. 2021. \"Curriculum Learning: A Survey.\" ArXiv:2101.10382 [Cs], January. http://arxiv.org/abs/2101.10382.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Self-Paced Active Learning: Query the Right Thing at the Right Time", "authors": [ { "first": "Ying-Peng", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "5117--5141", "other_ids": { "DOI": [ "10.1609/aaai.v33i01.33015117" ] }, "num": null, "urls": [], "raw_text": "Tang, Ying-Peng, and Sheng-Jun Huang. 2019. \"Self- Paced Active Learning: Query the Right Thing at the Right Time.\" In Proceedings of the AAAI Conference on Artificial Intelligence, 5117-24. 
https://doi.org/10.1609/aaai.v33i01.33015117.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Learning the Curriculum with Bayesian Optimization for Task-Specific Word Representation Learning", "authors": [ { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Macwhinney", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "130--169", "other_ids": { "DOI": [ "10.18653/v1/P16-1013" ] }, "num": null, "urls": [], "raw_text": "Tsvetkov, Yulia, Manaal Faruqui, Wang Ling, Brian MacWhinney, and Chris Dyer. 2016. \"Learning the Curriculum with Bayesian Optimization for Task-Specific Word Representation Learning.\" In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 130-39. Association for Computational Linguistics. https://doi.org/10.18653/v1/P16-1013.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Curriculum Learning for Natural Language Understanding", "authors": [ { "first": "Benfeng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Licheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhendong", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Quan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hongtao", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Yongdong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6095--6104", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.542" ] }, "num": null, "urls": [], "raw_text": "Xu, Benfeng, Licheng Zhang, Zhendong Mao, Quan Wang, Hongtao Xie, and Yongdong Zhang. 2020. \"Curriculum Learning for Natural Language Understanding.\" In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 6095-6104. Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.542.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Active Discriminative Text Representation Learning", "authors": [ { "first": "Ye", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Lease", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, Ye, Matthew Lease, and Byron C. Wallace. 2017. \"Active Discriminative Text Representation Learning.\" In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 3386-92. AAAI'17. AAAI Press.", "links": null } }, "ref_entries": { "TABREF2": { "num": null, "text": "", "content": "
: Mean Normalized Difference of min-margin and max-entropy for the two datasets CoNLL 2003 and OntoNotes5 (average of 15 steps and 3 runs).