{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:06:16.244369Z"
},
"title": "Offensive Language Detection in Arabic using ULMFiT",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Abdellatif",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rutgers University -Computer Science Piscataway",
"location": {
"region": "NJ",
"country": "USA"
}
},
"email": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgammal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rutgers University -Computer Science Piscataway",
"location": {
"region": "NJ",
"country": "USA"
}
},
"email": "elgammal@cs.rutgers.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we approach the shared task OffenseEval 2020 by Mubarak et al. (2020) using ULMFiT Howard and Ruder (2018) pre-trained on Arabic Wikipedia Khooli (2019) which we use as a starting point and use the target data-set to fine-tune it. The data set of the task is highly imbalanced. We train forward and backward models and ensemble the results. We report confusion matrix, accuracy, precision, recall and F1 of the development set and report summarized results of the test set. Transfer learning method using ULMFiT shows potential for Arabic text classification.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we approach the shared task OffenseEval 2020 by Mubarak et al. (2020) using ULMFiT Howard and Ruder (2018) pre-trained on Arabic Wikipedia Khooli (2019) which we use as a starting point and use the target data-set to fine-tune it. The data set of the task is highly imbalanced. We train forward and backward models and ensemble the results. We report confusion matrix, accuracy, precision, recall and F1 of the development set and report summarized results of the test set. Transfer learning method using ULMFiT shows potential for Arabic text classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Imbalanced data set is a data set that has at least one (minority) class with significantly smaller population than others (majority). If the minority class is a label of interest (to study and predict), imbalanced data represents a challenge since during the training there is relatively no sufficient representation of the minority class(es) to stand out in the trained model. Examples of applications include: finance (e.g. fraud transaction detection), security (e.g. intrusion detection), networking (e.g. anomaly traffic detection), systems (e.g. irregular resource usage detection), medical (e.g. disease [e.g. cancer] detection), nature (e.g. volcano eruption, earthquake, tsunami predictions) and text processing (e.g. opinion mining and spotting hate speech). Opinion mining and spotting hate speech in the context of social networking using deep learning attracted researchers' attention recently. For example, Park & Fung combined results from CNN (convolutional neural network) and LR (logistic regression) in Park and Fung (2017) . They applied their method on the data set by Waseem and Hovy (2016) . The same data-set was subject for experimenting a combination of both convolutional and recurrent units by Zhang et al. in Zhang et al. (2018) . State of the art text classification has been recently pushed forward by the advancements of the Transfer Learning (e.g. Devlin et al. (2018) , Howard and Ruder (2018) and Radford et al. (2018) ) From the work by Mahendran and Vedaldi (2016) , inspecting neural network of more than one layer that was trained on a certain data-set of images (say a cats vs dogs binary classification task), the earlier layers tend to capture high level features (e.g. edges, contours .. etc) while the later layers tend to capture low level features (e.g. dogs faces, cats faces .. etc). Even though both types of features are extracted from the same data-set, the high level one is more general so it can be made use of in training the same network for a different task (since almost any kind of image classification will benefit from capturing edges and contours [and similarly general image features] in the weights of the model as concluded by Sharif Razavian et al. (2014)). Observing that, Howard & Ruder (Howard and Ruder (2018) ) applied gradual unfreezing associated with dis-criminative fine-tuning and slanted triangular learning rates (as concluded by Smith (2017) ) and successfully apply it on text classification. Our goal is to investigate applying ULMFiT on the imbalanced Arabic data-sets OffenseEval 2020. Khooli pretrained ULMFiT on Arabic Wikipedia in Khooli (2019) . We use their model as a starting point and use the Arabic data-set of interest to fine tune it. The rest of the paper is organized as follows: we illustrate the data-sets properties in section 2.. In section 4. we describe the model, training parameters and experiments. We show results in section 5. and finally conclude the work in section 6..",
"cite_spans": [
{
"start": 1023,
"end": 1043,
"text": "Park and Fung (2017)",
"ref_id": "BIBREF9"
},
{
"start": 1091,
"end": 1113,
"text": "Waseem and Hovy (2016)",
"ref_id": "BIBREF14"
},
{
"start": 1223,
"end": 1258,
"text": "Zhang et al. in Zhang et al. (2018)",
"ref_id": "BIBREF15"
},
{
"start": 1382,
"end": 1402,
"text": "Devlin et al. (2018)",
"ref_id": "BIBREF2"
},
{
"start": 1405,
"end": 1428,
"text": "Howard and Ruder (2018)",
"ref_id": "BIBREF4"
},
{
"start": 1433,
"end": 1454,
"text": "Radford et al. (2018)",
"ref_id": "BIBREF10"
},
{
"start": 1474,
"end": 1502,
"text": "Mahendran and Vedaldi (2016)",
"ref_id": "BIBREF6"
},
{
"start": 2256,
"end": 2280,
"text": "(Howard and Ruder (2018)",
"ref_id": "BIBREF4"
},
{
"start": 2409,
"end": 2421,
"text": "Smith (2017)",
"ref_id": "BIBREF13"
},
{
"start": 2618,
"end": 2631,
"text": "Khooli (2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
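For context, the discriminative fine-tuning mentioned above assigns each layer group its own learning rate, decreasing from the top layer downwards; Howard and Ruder (2018) suggest a constant factor of 2.6 per layer. A minimal sketch of the rule (the 2.6 factor is theirs; the per-layer SGD form is the standard update):

```latex
\eta^{\,l-1} = \frac{\eta^{\,l}}{2.6}, \qquad
\theta^{\,l}_{t} = \theta^{\,l}_{t-1} - \eta^{\,l}\,\nabla_{\theta^{\,l}} J(\theta)
```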
{
"text": "For this work, we use data provided by the organizers of OSACT4. The target of the shared task is to achieve as high macro F 1 score as possible. 10k Arabic tweets were collected. They are splitted to train (7k), development (1k) and test (2k) subsets. The train and development are released along with labels while the test set is released without them. The task has two sub tasks, sub task A is classifying the tweet as 'offensive' vs 'not offensive' while sub task B is about classifying the tweet as 'hateful' vs 'not hateful'. So each tweet is labeled twice. The labeled data sets in both cases are imbalanced with sub task B more so than A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets",
"sec_num": "2."
},
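A minimal, hedged sketch of inspecting the class imbalance, assuming the shared-task files are tab-separated with one tweet and its two labels per line (the file and column names below are hypothetical, not the official ones):

```python
import pandas as pd

# Hypothetical file names and column layout, for illustration only.
cols = ['tweet', 'offensive', 'hate']
train = pd.read_csv('osact4_train.tsv', sep='\t', names=cols)
dev = pd.read_csv('osact4_dev.tsv', sep='\t', names=cols)

# Class counts reveal the imbalance for each sub task.
print(train['offensive'].value_counts())  # sub task A
print(train['hate'].value_counts())       # sub task B
```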
{
"text": "A tweet is considered offensive if it has any level of profanity. Table 1 shows instances count of different classes of sub task A. As the table shows the distribution of both training and development data sets show imbalance between the two existing classes.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 73,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sub task A",
"sec_num": "2.1."
},
{
"text": "A tweet is considered hateful if it has an attack against one or more person based on their nationality, ethnicity, gender, political affiliation, sport affiliation or religious belief. Table 2 shows instances count of different classes of sub task B. As the table shows the distribution of both training and development data sets show imbalance between the two ex- Table 1 : Classes distribution of sub task A isting classes that is more significant than in case of sub task A.",
"cite_spans": [],
"ref_spans": [
{
"start": 366,
"end": 373,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sub task B",
"sec_num": "2.2."
},
{
"text": "3. Approach",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub task B",
"sec_num": "2.2."
},
{
"text": "We do simple tokenization based on white-spaces and keep words that appeared more frequently than a certain threshold (replaced by 'xxunk'). Since pre-processing is not specific to Arabic, we kept all the non-Arabic words as long as they exist above the threshold (e.g. mentions). Among the special tokens: 'xxpad' is a padding token, 'xxeos' is an end of scentence token, 'xxup' is used to indicate the next word is capitalized (for English parts), 'xxrep' and '",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "3.1."
},
{
"text": "xxwrep' are used to indicate repetition. After segmentation/tokenization, we convert the set of tokens to unique ids. Figure 1 shows part of the resulting vocabulary.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 126,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "3.1."
},
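A minimal sketch of the whitespace tokenization, frequency threshold and numericalization described above (the threshold value and helper names are illustrative; fastai's tokenizer adds the remaining special tokens such as 'xxpad' and 'xxeos' automatically):

```python
from collections import Counter

MIN_FREQ = 3  # illustrative frequency threshold
SPECIALS = ['xxunk', 'xxpad', 'xxeos', 'xxrep', 'xxwrep']

def tokenize(tweet):
    # Simple whitespace tokenization; non-Arabic tokens (e.g. @mentions) are kept.
    return tweet.split()

def build_vocab(tweets):
    counts = Counter(tok for t in tweets for tok in tokenize(t))
    vocab = SPECIALS + sorted(t for t, c in counts.items() if c >= MIN_FREQ)
    return {tok: i for i, tok in enumerate(vocab)}

def numericalize(tweet, stoi):
    # Tokens below the threshold are mapped to the 'xxunk' id.
    return [stoi.get(tok, stoi['xxunk']) for tok in tokenize(tweet)]
```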
{
"text": "Language modeling is a problem that deals with learning the joint probability function of sequences of words in this language. Such that given a sequence of a certain number of words, it can assigns a probability for it (as defined in Bengio et al. (2003) ). Inductive transfer learning is to make use of the knowledge learned by training a model (model A) on a source problem to be used towards building another model (model B) that handles a target (different) problem (as defined in Ruder et al. (2019) ). In the case of ULMFiT, the source problem is unlabeled (language modeling) and the target problem is (text classification).",
"cite_spans": [
{
"start": 235,
"end": 255,
"text": "Bengio et al. (2003)",
"ref_id": "BIBREF0"
},
{
"start": 486,
"end": 505,
"text": "Ruder et al. (2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.2."
},
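For reference, the language modeling objective above, following Bengio et al. (2003), factorizes the joint probability of a word sequence into per-token conditionals:

```latex
P(w_1, \dots, w_T) = \prod_{t=1}^{T} P(w_t \mid w_1, \dots, w_{t-1})
```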
{
"text": "ULMFiT transfer learning method (by Howard and Ruder (2018)) can be summarized as three steps applied on two neural networks. The first neural network is a Language Model (LM) the second one is a text classifier. The three steps are 1-pre-training the LM on a general corpus (we used the model by Khooli (2019) for this step), 2-training fine-tuning the LM on the target data-set and then saving a part off the LM (the encoder) and 3-Loading the saved part of the LM (result of step 2) and attaching it to the classifier then train fine-tuning the classifier with the target data-set. Following Howard and Ruder (2018) For both the language model and classifier networks, we used LSTM AWD (by Merity et al. (2017) ) which uses a 3 layers LSTM.",
"cite_spans": [
{
"start": 595,
"end": 618,
"text": "Howard and Ruder (2018)",
"ref_id": "BIBREF4"
},
{
"start": 693,
"end": 713,
"text": "Merity et al. (2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.2."
},
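A minimal sketch of steps 2 and 3 using a fastai v2-style API (the data frame, file names and hyper-parameters are illustrative and not necessarily the authors' exact setup; the pre-trained Arabic Wikipedia weights from step 1 would be loaded into the language model first):

```python
from fastai.text.all import *

# Step 2: fine-tune the language model on the target tweets and save its encoder.
# `df` is assumed to hold the tweet text (and labels, used by the classifier below).
dls_lm = TextDataLoaders.from_df(df, text_col='tweet', is_lm=True, valid_pct=0.1)
learn_lm = language_model_learner(dls_lm, AWD_LSTM)  # Arabic weights loaded here in practice
learn_lm.fit_one_cycle(2, 1e-2)
learn_lm.save_encoder('ft_encoder')

# Step 3: attach the saved encoder to a classifier and fine-tune on the labels.
dls_clas = TextDataLoaders.from_df(df, text_col='tweet', label_col='offensive',
                                   text_vocab=dls_lm.vocab)
learn_clas = text_classifier_learner(dls_clas, AWD_LSTM,
                                     metrics=[accuracy, F1Score(average='macro')])
learn_clas.load_encoder('ft_encoder')
learn_clas.fit_one_cycle(1, 2e-2)
```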
{
"text": "Following the original work by Howard and Ruder (2018), we fine-tune two separate (forward and backward) models, classify twice and average results for each sub-task. That was shown to be always better on all the six of the English data-sets experimented on by Howard and Ruder (2018) . We report three different sets of results for each sub task as well to study whether the same conclusion can be made on Arabic imbalanced data-set in question.",
"cite_spans": [
{
"start": 261,
"end": 284,
"text": "Howard and Ruder (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.1."
},
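The ensembling itself is an average of the two classifiers' predicted class probabilities; a sketch assuming `learn_fwd` and `learn_bwd` are the fine-tuned forward and backward fastai learners and `test_df` holds the test tweets:

```python
# Build a test dataloader for each direction and average the predicted probabilities.
dl_fwd = learn_fwd.dls.test_dl(test_df['tweet'])
dl_bwd = learn_bwd.dls.test_dl(test_df['tweet'])

probs_fwd, _ = learn_fwd.get_preds(dl=dl_fwd)
probs_bwd, _ = learn_bwd.get_preds(dl=dl_bwd)

probs = (probs_fwd + probs_bwd) / 2  # ensemble by averaging
preds = probs.argmax(dim=1)          # final class per tweet
```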
{
"text": "Since the source task of the transfer learning (language modeling) needs unlabeled data, we use all the available unlabeled Arabic text (both train and validation) to fine-tune and save (forward and backward) language models and use their encoders for two separate classifiers (two [forward and backward] for each sub task).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.1."
},
{
"text": "We use fastai library 1 and adjust the hyper-parameters based on the observed performance of training on the development set. The forward language model was trained for 2 epochs while the backward one was trained for 3. After applying a 3-steps of gradual unfreezing, both the forward and the backward classifiers of sub task A were unfrozen and fine-tuned for 3 epochs. Similar steps were followed for sub task B, except we ended up with 30 epochs for fine tuning the forward classifier and only 3 to fine tune the backward one. We use an Nvidia Titan X with 12 GB of memory that allowed us to use a batch size of 64. Table 3 : Validation results (%) of sub-task A",
"cite_spans": [],
"ref_spans": [
{
"start": 619,
"end": 626,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Settings and training",
"sec_num": "4.2."
},
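A sketch of the gradual-unfreezing schedule described above, continuing from the classifier learner in the Section 3.2 sketch (the learning rates are illustrative; the 2.6 factor follows the discriminative fine-tuning rule):

```python
# Unfreeze the classifier one layer group at a time, then fine-tune fully.
learn_clas.freeze()                   # only the classifier head is trainable
learn_clas.fit_one_cycle(1, 2e-2)

learn_clas.freeze_to(-2)              # also unfreeze the last encoder layer group
learn_clas.fit_one_cycle(1, slice(1e-2 / (2.6 ** 4), 1e-2))

learn_clas.freeze_to(-3)
learn_clas.fit_one_cycle(1, slice(5e-3 / (2.6 ** 4), 5e-3))

learn_clas.unfreeze()                 # all layers trainable
learn_clas.fit_one_cycle(3, slice(1e-3 / (2.6 ** 4), 1e-3))  # 3 epochs, as for sub task A
```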
{
"text": "We report accuracy, weighted and macro F 1 as evaluation metrics for the validation set while we report accuracy and only the macro F 1 for the test set. F 1 is the harmonic mean of Precision (the ratio between the true positives and all the positive) and Recall (the ratio between the true positives and all the true). The macro version adds the metrics values of separate classes with equaly weights while the weighted version weights them by the ratio of class population. Recall that Both the language model and the classifier networks use AWD LSTM (Merity et al. (2017) ). Table 3 presents validation results of sub task A while table 4 present task B. Since we have access to validation labels, we show the results of the forward, backward and averaged models. Since weighted measures favor majority classes (they aggregate using a weighted average), they are not very descriptive of the performance in case of imbalanced datasets where the minority class is important (like in our case). This can be seen from the tables. In terms of validation results, training two models instead of one and averaging results boosts the results in terms of macro F 1 in both sub tasks. The confusion matrix of the validation set is illustrated in figure 2. Table 5 shows the test results of both sub tasks. Inspecting this table, the imbalance of the data-sets under question renders accuracy metric not descriptive of the performance. The very low population minor classes (offensive and hateful tweets in tasks A and B respectively) receive little attention from the trained classifier (relative to the majority class) since they are not as well represented in the training either. This is reflected in the low recall which drags F 1 down.",
"cite_spans": [
{
"start": 553,
"end": 574,
"text": "(Merity et al. (2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 578,
"end": 585,
"text": "Table 3",
"ref_id": null
},
{
"start": 1249,
"end": 1256,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5."
},
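A small worked example of why weighted F1 hides poor minority-class performance while macro F1 exposes it (toy labels; scikit-learn is used only for illustration):

```python
from sklearn.metrics import f1_score

# Toy imbalanced labels: 1 = offensive (minority), 0 = not offensive (majority).
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 90 + [1] * 3 + [0] * 7   # the classifier misses most minority tweets

print(f1_score(y_true, y_pred, average='weighted'))  # high: dominated by the majority class
print(f1_score(y_true, y_pred, average='macro'))     # noticeably lower: equal weight per class
```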
{
"text": "We applied ULMFiT pre-trained on Arabic Wikipedia to approach the problem of classifying imbalanced Arabic data sets. Experiments on imbalanced data-sets of Of-fenseEval 2020 show that using two models (forward and Arabic-specific tokenization (e.g. based on Arabic morphological rules) may help building a better representation of Arabic text and hence improve performance, we leave this for future work. Another avenue for future work would be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "6."
},
{
"text": "https://github.com/fastai/fastai",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": " Ghazikhani et al. (2012) and SMOTE Chawla et al. (2002) ).",
"cite_spans": [
{
"start": 1,
"end": 25,
"text": "Ghazikhani et al. (2012)",
"ref_id": "BIBREF3"
},
{
"start": 30,
"end": 56,
"text": "SMOTE Chawla et al. (2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Jauvin",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of machine learning research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137-1155, 2003.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Smote: synthetic minority over-sampling technique",
"authors": [
{
"first": "N",
"middle": [
"V"
],
"last": "Chawla",
"suffix": ""
},
{
"first": "K",
"middle": [
"W"
],
"last": "Bowyer",
"suffix": ""
},
{
"first": "L",
"middle": [
"O"
],
"last": "Hall",
"suffix": ""
},
{
"first": "W",
"middle": [
"P"
],
"last": "Kegelmeyer",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of artificial intelligence research",
"volume": "16",
"issue": "",
"pages": "321--357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer. Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research, 16: 321-357, 2002.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for lan- guage understanding. arXiv preprint arXiv:1810.04805, 2018.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Class imbalance handling using wrapper-based random oversampling",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ghazikhani",
"suffix": ""
},
{
"first": "H",
"middle": [
"S"
],
"last": "Yazdi",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Monsefi",
"suffix": ""
}
],
"year": 2012,
"venue": "20th Iranian Conference on Electrical Engineering (ICEE2012)",
"volume": "",
"issue": "",
"pages": "611--616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Ghazikhani, H. S. Yazdi, and R. Monsefi. Class im- balance handling using wrapper-based random oversam- pling. In 20th Iranian Conference on Electrical Engi- neering (ICEE2012), pages 611-616. IEEE, 2012.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Universal language model fine-tuning for text classification",
"authors": [
{
"first": "J",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.06146"
]
},
"num": null,
"urls": [],
"raw_text": "J. Howard and S. Ruder. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146, 2018.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Applied data science",
"authors": [
{
"first": "A",
"middle": [],
"last": "Khooli",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Khooli. Applied data science. https://github. com/abedkhooli/ds2, 2019.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Visualizing deep convolutional neural networks using natural pre-images",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mahendran",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Vedaldi",
"suffix": ""
}
],
"year": 2016,
"venue": "International Journal of Computer Vision",
"volume": "120",
"issue": "3",
"pages": "233--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Mahendran and A. Vedaldi. Visualizing deep convolu- tional neural networks using natural pre-images. Inter- national Journal of Computer Vision, 120(3):233-255, 2016.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Regularizing and optimizing lstm language models",
"authors": [
{
"first": "S",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "N",
"middle": [
"S"
],
"last": "Keskar",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.02182"
]
},
"num": null,
"urls": [],
"raw_text": "S. Merity, N. S. Keskar, and R. Socher. Regularizing and optimizing lstm language models. arXiv preprint arXiv:1708.02182, 2017.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Overview of osact4 arabic offensive language detection shared task",
"authors": [
{
"first": "H",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Magdy",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Elsayed",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Al-Khalifa",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Mubarak, K. Darwish, W. Magdy, T. Elsayed, and H. Al- Khalifa. Overview of osact4 arabic offensive language detection shared task. 4, 2020.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "One-step and two-step classification for abusive language detection on twitter",
"authors": [
{
"first": "J",
"middle": [
"H"
],
"last": "Park",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.01206"
]
},
"num": null,
"urls": [],
"raw_text": "J. H. Park and P. Fung. One-step and two-step classification for abusive language detection on twitter. arXiv preprint arXiv:1706.01206, 2017.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "A",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever. Improving language under- standing by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai- assets/researchcovers/languageunsupervised/language understanding paper. pdf, 2018.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Transfer learning in natural language processing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "M",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorials",
"volume": "",
"issue": "",
"pages": "15--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Ruder, M. E. Peters, S. Swayamdipta, and T. Wolf. Transfer learning in natural language processing. In Pro- ceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguis- tics: Tutorials, pages 15-18, 2019.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Cnn features off-the-shelf: an astounding baseline for recognition",
"authors": [
{
"first": "A",
"middle": [],
"last": "Sharif Razavian",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Azizpour",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sullivan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Carlsson",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition workshops",
"volume": "",
"issue": "",
"pages": "806--813",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carls- son. Cnn features off-the-shelf: an astounding baseline for recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 806-813, 2014.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Cyclical learning rates for training neural networks",
"authors": [
{
"first": "L",
"middle": [
"N"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE Winter Conference on Applications of Computer Vision (WACV)",
"volume": "",
"issue": "",
"pages": "464--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. N. Smith. Cyclical learning rates for training neural net- works. In 2017 IEEE Winter Conference on Applica- tions of Computer Vision (WACV), pages 464-472. IEEE, 2017.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL student research workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Waseem and D. Hovy. Hateful symbols or hateful peo- ple? predictive features for hate speech detection on twit- ter. In Proceedings of the NAACL student research work- shop, pages 88-93, 2016.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Detecting hate speech on twitter using a convolution-gru based deep neural network",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tepper",
"suffix": ""
}
],
"year": 2018,
"venue": "European semantic web conference",
"volume": "",
"issue": "",
"pages": "745--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Zhang, D. Robinson, and J. Tepper. Detecting hate speech on twitter using a convolution-gru based deep neural network. In European semantic web conference, pages 745-760. Springer, 2018.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Figure 1: Part of vocabulary words"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "backward) helps the final result in terms of macro F ! ."
},
"TABREF1": {
"html": null,
"content": "<table><tr><td>Model</td><td>Accuracy</td><td colspan=\"6\">Weighted precision recall F 1 precision recall F 1 Macro</td></tr><tr><td>Forward</td><td>86</td><td>85</td><td>86</td><td>85</td><td>77</td><td>71</td><td>74</td></tr><tr><td>Backward</td><td>87</td><td>87</td><td>87</td><td>87</td><td>78</td><td>78</td><td>78</td></tr><tr><td>Averaged</td><td>89</td><td>88</td><td>89</td><td>89</td><td>82</td><td>78</td><td>80</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Classes distribution of sub task B"
}
}
}
}