{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:06:21.558772Z" }, "title": "AraBERT: Transformer-based Model for Arabic Language Understanding", "authors": [ { "first": "Wissam", "middle": [], "last": "Antoun", "suffix": "", "affiliation": { "laboratory": "", "institution": "American University of Beirut", "location": {} }, "email": "" }, { "first": "Fady", "middle": [], "last": "Baly", "suffix": "", "affiliation": { "laboratory": "", "institution": "American University of Beirut", "location": {} }, "email": "" }, { "first": "Hazem", "middle": [], "last": "Hajj", "suffix": "", "affiliation": { "laboratory": "", "institution": "American University of Beirut", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The Arabic language is a morphologically rich language with relatively few resources and a less explored syntax compared to English. Given these limitations, Arabic Natural Language Processing (NLP) tasks like Sentiment Analysis (SA), Named Entity Recognition (NER), and Question Answering (QA), have proven to be very challenging to tackle. Recently, with the surge of transformers based models, language-specific BERT based models have proven to be very efficient at language understanding, provided they are pre-trained on a very large corpus. Such models were able to set new standards and achieve state-of-the-art results for most NLP tasks. In this paper, we pre-trained BERT specifically for the Arabic language in the pursuit of achieving the same success that BERT did for the English language. The performance of AraBERT is compared to multilingual BERT from Google and other state-of-the-art approaches. The results showed that the newly developed AraBERT achieved state-of-the-art performance on most tested Arabic NLP tasks. The pretrained araBERT models are publicly available on github.com/aub-mind/araBERT hoping to encourage research and applications for Arabic NLP.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The Arabic language is a morphologically rich language with relatively few resources and a less explored syntax compared to English. Given these limitations, Arabic Natural Language Processing (NLP) tasks like Sentiment Analysis (SA), Named Entity Recognition (NER), and Question Answering (QA), have proven to be very challenging to tackle. Recently, with the surge of transformers based models, language-specific BERT based models have proven to be very efficient at language understanding, provided they are pre-trained on a very large corpus. Such models were able to set new standards and achieve state-of-the-art results for most NLP tasks. In this paper, we pre-trained BERT specifically for the Arabic language in the pursuit of achieving the same success that BERT did for the English language. The performance of AraBERT is compared to multilingual BERT from Google and other state-of-the-art approaches. The results showed that the newly developed AraBERT achieved state-of-the-art performance on most tested Arabic NLP tasks. 
The pretrained araBERT models are publicly available on github.com/aub-mind/araBERT hoping to encourage research and applications for Arabic NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Pretrained contextualized text representation models have enabled massive advances in Natural Language Understanding (NLU) tasks, and achieved state-of-the-art performances in multiple NLP tasks (Howard and Ruder, 2018; Devlin et al., 2018) . Early pretrained text representation models aimed at representing words by capturing their distributed syntactic and semantic properties using techniques like Word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) . However, these models did not incorporate the context in which a word appears into its embedding. This issue was addressed by generating contextualized representations using models like ELMO (Peters et al., 2018) . Recently, there has been a focus on applying transfer learning by fine-tuning large pretrained language models for downstream NLP/NLU tasks with a relatively small number of examples, resulting in notable performance improvements for these tasks. This approach takes advantage of language models that had been pre-trained in an unsupervised (sometimes called self-supervised) manner. However, this advantage comes with drawbacks, particularly the huge corpora needed for pre-training, in addition to the high computational cost of training (the latest models required 500+ TPUs or GPUs running for weeks (Conneau et al., 2019; Raffel et al., 2019; Adiwardana et al., 2020) ). These drawbacks restricted the availability of such models mainly to English and a handful of other languages. To remedy this gap, multilingual models have been trained to learn representations for 100+ languages simultaneously, but they still fall behind single-language models due to little data representation and a small language-specific vocabulary. While languages with similar structure and vocabulary can benefit from the shared representations (Conneau et al., 2019) , this is not the case for other languages, like Arabic, which differ in morphological and syntactic structure and share very little with the abundant Latin-based languages. In this paper, we describe the process of pretraining the BERT transformer model (Devlin et al., 2018) for the Arabic language, which we name ARABERT. We evaluate ARABERT on three Arabic NLU downstream tasks that are different in nature: (i) Sentiment Analysis (SA), (ii) Named Entity Recognition (NER), and (iii) Question Answering (QA). The experimental results show that ARABERT achieves state-of-the-art performance on most datasets, compared to several baselines including previous multilingual and single-language approaches. The datasets that we considered for the downstream tasks contained both Modern Standard Arabic (MSA) and Dialectal Arabic (DA). 
Our contributions can be summarized as follows:", "cite_spans": [ { "start": 195, "end": 219, "text": "(Howard and Ruder, 2018;", "ref_id": "BIBREF25" }, { "start": 220, "end": 240, "text": "Devlin et al., 2018)", "ref_id": "BIBREF19" }, { "start": 411, "end": 433, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF34" }, { "start": 444, "end": 469, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF38" }, { "start": 663, "end": 684, "text": "(Peters et al., 2018)", "ref_id": "BIBREF39" }, { "start": 1310, "end": 1332, "text": "(Conneau et al., 2019;", "ref_id": null }, { "start": 1333, "end": 1353, "text": "Raffel et al., 2019;", "ref_id": "BIBREF41" }, { "start": 1354, "end": 1378, "text": "Adiwardana et al., 2020)", "ref_id": null }, { "start": 1827, "end": 1849, "text": "(Conneau et al., 2019)", "ref_id": null }, { "start": 2127, "end": 2148, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 A methodology to pretrain the BERT model on a large-scale Arabic corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 Application of ARABERT to three NLU downstream tasks: Sentiment Analysis, Named Entity Recognition and Question Answering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 Publicly releasing ARABERT on popular NLP libraries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The rest of the paper is structured as follows. Section 2. provides a concise literature review of previous work on language representation for English and Arabic. Section 3. describes the methodology that was used to develop ARABERT. Section 4. describes the downstream tasks and benchmark datasets that are used for evaluation. Section 5. presents the experimental setup and discusses the results. Finally, section 6. concludes and points to possible directions for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The first meaningful representations for words started with the word2vec model developed by (Mikolov et al., 2013 ).", "cite_spans": [ { "start": 92, "end": 113, "text": "(Mikolov et al., 2013", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Evolution of Word Embeddings", "sec_num": "2.1." }, { "text": "Since then, research started moving towards variations of word2vec like of GloVe (Pennington et al., 2014) and fast-Text (Mikolov et al., 2017) . While major advances were achieved with these early models, they still lacked contextualized information, which was tackled by ELMO (Peters et al., 2018) . The performance over different tasks improved noticeably, leading to larger structures that had superior word and sentence representations. 
Ever since, more language understanding models have been developed such as ULMFit (Howard and Ruder, 2018) , BERT (Devlin et al., 2018) , RoBERTa (Liu et al., 2019) , XLNet (Yang et al., 2019) , ALBERT (Lan et al., 2019) , and T5 (Raffel et al., 2019) , which offered improved performance by exploring different pretraining methods, modified model architectures and larger training corpora.", "cite_spans": [ { "start": 81, "end": 106, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF38" }, { "start": 121, "end": 143, "text": "(Mikolov et al., 2017)", "ref_id": "BIBREF35" }, { "start": 278, "end": 299, "text": "(Peters et al., 2018)", "ref_id": "BIBREF39" }, { "start": 524, "end": 548, "text": "(Howard and Ruder, 2018)", "ref_id": "BIBREF25" }, { "start": 556, "end": 577, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF19" }, { "start": 588, "end": 606, "text": "(Liu et al., 2019)", "ref_id": null }, { "start": 615, "end": 634, "text": "(Yang et al., 2019)", "ref_id": "BIBREF47" }, { "start": 644, "end": 662, "text": "(Lan et al., 2019)", "ref_id": "BIBREF31" }, { "start": 672, "end": 693, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Evolution of Word Embeddings", "sec_num": "2.1." }, { "text": "Following the success of the English word2vec (Mikolov et al., 2013) , the same feat was sought by NLP researchers to create language specific embeddings. Arabic word2vec was first attempted by (Soliman et al., 2017) , and then followed by a Fasttext model (Bojanowski et al., 2017) trained on Wikipedia data and showing better performance than word2vec. To tackle dialectal variations in Arabic (Erdmann et al., 2018) presented techniques for training multidialectal word embeddings on relatively small and noisy corpora, while (Abu Farha and Magdy, 2019; Abdul-Mageed et al., 2018) provided Arabic word embeddings trained on \u223c250M tweets.", "cite_spans": [ { "start": 46, "end": 68, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF34" }, { "start": 194, "end": 216, "text": "(Soliman et al., 2017)", "ref_id": "BIBREF45" }, { "start": 257, "end": 282, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF13" }, { "start": 396, "end": 418, "text": "(Erdmann et al., 2018)", "ref_id": "BIBREF24" }, { "start": 534, "end": 556, "text": "Farha and Magdy, 2019;", "ref_id": "BIBREF2" }, { "start": 557, "end": 583, "text": "Abdul-Mageed et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Non-contextual Representations for Arabic", "sec_num": "2.2." }, { "text": "For non-English languages, Google released a multilingual BERT (Devlin et al., 2018) supporting 100+ languages with solid performance for most languages. However, pre-training monolingual BERT for non-English languages proved to provide better performance than the multilingual BERT such as Italian BERT Alberto (Polignano et al., 2019) and other publicly available BERTs (Martin et al., 2019; de Vries et al., 2019) . 
Arabic-specific contextualized representation models, such as hULMonA (ElJundi et al., 2019) , used the ULMfit structure, which had a lower performance than BERT on English NLP tasks.", "cite_spans": [ { "start": 63, "end": 84, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF19" }, { "start": 312, "end": 336, "text": "(Polignano et al., 2019)", "ref_id": "BIBREF40" }, { "start": 372, "end": 393, "text": "(Martin et al., 2019;", "ref_id": "BIBREF33" }, { "start": 394, "end": 416, "text": "de Vries et al., 2019)", "ref_id": "BIBREF18" }, { "start": 490, "end": 512, "text": "(ElJundi et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Contextualized Representations for Arabic", "sec_num": "2.3." }, { "text": "In this paper, we develop an Arabic language representation model to improve the state-of-the-art in several Arabic NLU tasks. We create ARABERT based on the BERT model, a stacked Bidirectional Transformer Encoder (Devlin et al., 2018) . This model is widely considered the basis for most state-of-the-art results in different NLP tasks in several languages. We use the BERT-base configuration that has 12 encoder blocks, 768 hidden dimensions, 12 attention heads, 512 maximum sequence length, and a total of \u223c110M parameters 1 (further details about the transformer architecture can be found in (Vaswani et al., 2017) ). We also introduced additional preprocessing prior to the model's pre-training, in order to better fit the Arabic language. Below, we describe the pre-training setup, the pre-training dataset for ARABERT, the proposed Arabic-specific preprocessing, and the fine-tuning process.", "cite_spans": [ { "start": 214, "end": 235, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF19" }, { "start": 737, "end": 738, "text": "1", "ref_id": null }, { "start": 806, "end": 828, "text": "(Vaswani et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "ARABERT: Methodology", "sec_num": "3." }, { "text": "Following the original BERT pre-training objective, we employ the Masked Language Modeling (MLM) task with whole-word masking, where 15% of the N input tokens are selected for replacement. Those tokens are replaced 80% of the time with the [MASK] token, 10% with a random token, and 10% with the original token. Whole-word masking improves the pre-training task by forcing the model to predict the whole word instead of getting hints from parts of the word. We also employ the Next Sentence Prediction (NSP) task that helps the model understand the relationship between two sentences, which can be useful for many language understanding tasks such as Question Answering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-training Setup", "sec_num": "3.1." }, { "text": "The original BERT was trained on 3.3B words extracted from English Wikipedia and the Book Corpus (Zhu et al., 2015) . Since the Arabic Wikipedia dumps are small compared to the English ones, we manually scraped Arabic news websites for articles. 
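(To make the whole-word masking objective of Section 3.1. concrete, the following is a minimal, illustrative sketch rather than the actual pre-training code; it assumes WordPiece-style '##' continuation markers purely for illustration, and all function and variable names are ours.)

```python
import random

def whole_word_mask(tokens, vocab, mask_prob=0.15, seed=0):
    # Group sub-word token indices into whole words: a token starting with
    # '##' continues the previous word (an illustrative convention only).
    rng = random.Random(seed)
    words = []
    for i, tok in enumerate(tokens):
        if tok.startswith('##') and words:
            words[-1].append(i)
        else:
            words.append([i])

    num_to_mask = max(1, round(mask_prob * len(words)))
    masked, labels = list(tokens), [None] * len(tokens)
    for word in rng.sample(words, min(num_to_mask, len(words))):
        for i in word:                        # every sub-token of the chosen word is a target
            labels[i] = tokens[i]             # the model must recover the original token
            r = rng.random()
            if r < 0.8:
                masked[i] = '[MASK]'          # 80% of the time: replace with [MASK]
            elif r < 0.9:
                masked[i] = rng.choice(vocab) # 10%: replace with a random token
            # remaining 10%: keep the original token unchanged
    return masked, labels
```

Masking all sub-tokens of a word together prevents the model from recovering a masked piece from the visible remainder of the same word, which is the motivation for whole-word masking given above.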
In addition, we used two publicly available large Arabic corpora: (1) the 1.5 billion words Arabic Corpus (El-Khair, 2016), which is a contemporary corpus that includes more than 5 million articles extracted from ten major news sources covering 8 countries, and (2) OSIAN: the Open Source International Arabic News Corpus (Zeroual et al., 2019) , which consists of 3.5 million articles (\u223c1B tokens) from 31 news sources in 24 Arab countries.", "cite_spans": [ { "start": 97, "end": 115, "text": "(Zhu et al., 2015)", "ref_id": "BIBREF49" }, { "start": 568, "end": 589, "text": "(Zeroual et al., 2019", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-training Dataset", "sec_num": "3.2." }, { "text": "The final size of the pre-training dataset, after removing duplicate sentences, is 70 million sentences, corresponding to \u223c24GB of text. This dataset covers news from different media in different Arab regions, and therefore can be representative of a wide range of topics discussed in the Arab world. It is worth mentioning that we preserved words that include Latin characters, since it is common to mention named entities and scientific or technical terms in their original language, to avoid information loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-training Dataset", "sec_num": "3.2." }, { "text": "The Arabic language is known for its lexical sparsity, which is due to the complex concatenative system of Arabic (Al-Sallab et al., 2017) . Words can have different forms and share the same meaning. For instance, while the definite article \" -Al\", which is equivalent to \"the\" in English, is always prefixed to other words, it is not an intrinsic part of that word. Hence, when using a BERT-compatible tokenization, tokens will appear twice, once with \"Al-\" and once without it. For instance, both \" -kitAb\" and \" -AlkitAb\" need to be included in the vocabulary, leading to a significant amount of unnecessary redundancy.", "cite_spans": [ { "start": 114, "end": 134, "text": "Sallab et al., 2017)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Sub-Word Units Segmentation", "sec_num": "3.3." }, { "text": "To avoid this issue, we first segment the words using Farasa (Abdelali et al., 2016) into stems, prefixes and suffixes. For instance, \"", "cite_spans": [ { "start": 61, "end": 84, "text": "(Abdelali et al., 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Sub-Word Units Segmentation", "sec_num": "3.3." }, { "text": "-Alloga\" becomes \"-Al+ log +a\". Then, we trained a SentencePiece model (an unsupervised text tokenizer and detokenizer (Kudo, 2018) ), in unigram mode, on the segmented pre-training dataset to produce a subword vocabulary of \u223c60K tokens. To evaluate the impact of the proposed tokenization, we also trained SentencePiece on non-segmented text to create a second version of ARABERT (AraBERTv0.1) that does not require any segmentation. The final vocabulary size was 64K tokens, which included nearly 4K unused tokens to allow further pre-training, if needed.", "cite_spans": [ { "start": 112, "end": 124, "text": "(Kudo, 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Sub-Word Units Segmentation", "sec_num": "3.3." }, { "text": "Sequence Classification To fine-tune AraBERT for sequence classification, we take the final hidden state of the first token, which corresponds to the word embedding of the special \"[CLS]\" token prepended to the start of each sentence. 
We then add a simple feed-forward layer with a standard Softmax to get the probability distribution over the predicted output classes. During fine-tuning, the classifier and the pre-trained model weights are trained jointly to maximize the log-probability of the correct class.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning", "sec_num": "3.4." }, { "text": "Named Entity Recognition For the NER task, each token in the sentence is labeled with the IOB2 format (Ratnaparkhi, 1998), where the \"B\" tag corresponds to the first word of the entity, the \"I\" tag corresponds to the rest of the words of the same entity, and the \"O\" tag indicates that the tagged word is not a desired named entity. Hence, we treat the task as a multi-class classification process, which allows us to use text classification methods to label the tokens. Furthermore, after using the AraBERT tokenizer, we only input the first sub-token of each word to the model. Question Answering In QA, given a question and a passage containing the answer, the model needs to select the span of text that contains the answer. This is done by predicting a \"start\" token and an \"end\" token, on the condition that the \"end\" token appears after the \"start\" token. During training, the final embedding of every token in the passage is fed into two classifiers, each with a single set of weights, which are applied to every token. The dot product of the output embeddings and the classifier is then fed into a softmax layer to produce a probability distribution over all the tokens. The token with the highest probability of being a \"start\" token is then selected, and the same process is repeated for the \"end\" token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning", "sec_num": "3.4." }, { "text": "We evaluated ARABERT on three Arabic language understanding downstream tasks: Sentiment Analysis, Named Entity Recognition, and Question Answering. As a baseline, we compared ARABERT to the multilingual version of BERT, and to other state-of-the-art results on each task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }, { "text": "We evaluated ARABERT on the following Arabic sentiment datasets that cover different genres, domains and dialects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Analysis", "sec_num": "4.1." }, { "text": "\u2022 HARD: The Hotel Arabic Reviews Dataset (Elnagar et al., 2018) contains 93,700 hotel reviews written in both Modern Standard Arabic (MSA) and dialectal Arabic. Reviews are split into positive and negative reviews, where a negative review has a rating of 1 or 2, a positive review has a rating of 4 or 5, and neutral reviews with a rating of 3 were ignored.", "cite_spans": [ { "start": 41, "end": 63, "text": "(Elnagar et al., 2018)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Sentiment Analysis", "sec_num": "4.1." }, { "text": "\u2022 ASTD: The Arabic Sentiment Twitter Dataset (Nabil et al., 2015) contains 10,000 tweets written in both MSA and Egyptian dialect. We tested on the balanced version of the dataset, referred to as ASTD-B.", "cite_spans": [ { "start": 45, "end": 65, "text": "(Nabil et al., 2015)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Sentiment Analysis", "sec_num": "4.1." 
}, { "text": "\u2022 ArSenTD-Lev: The Arabic Sentiment Twitter Dataset for LEVantine (Baly et al., 2018) contains 4,000 tweets written in Levantine dialect with annotations for sentiment, topic and sentiment target. This is a challenging dataset as the collected tweets are from multiple domains and discuss different topics.", "cite_spans": [ { "start": 66, "end": 85, "text": "(Baly et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Sentiment Analysis", "sec_num": "4.1." }, { "text": "\u2022 LABR: The Large-scale Arabic Book Reviews dataset (Aly and Atiya, 2013) contains 63,000 book reviews written in Arabic. The reviews are rated between 1 and 5. We benchmarked our model on the unbalanced two-class dataset, where reviews with ratings of 1 or 2 are considered negative, while those with ratings of 4 or 5 are considered positive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Analysis", "sec_num": "4.1." }, { "text": "\u2022 AJGT: The Arabic Jordanian General Tweets dataset (Alomari et al., 2017) contains 1,800 tweets written in Jordanian dialect. The tweets were manually annotated as either positive or negative.", "cite_spans": [ { "start": 52, "end": 74, "text": "(Alomari et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Sentiment Analysis", "sec_num": "4.1." }, { "text": "Baselines: Sentiment Analysis is a popular Arabic NLP task. Previous approaches relied on sentiment lexicons such as ArSenL (Badaro et al., 2014) , which is a largescale lexicon of MSA words that is developed using the Arabic WordNet in combination with the English Senti-WordNet. Recurrent and recursive neural networks were explored with different choices of Arabic-specific processing (Al Sallab et al., 2015; Al-Sallab et al., 2017; . Convolutional Neural Networks (CNN) were trained with pre-trained word embeddings (Dahou et al., 2019a) . A hybrid model was proposed by (Abu Farha and Magdy, 2019) , where CNNs were used for feature extraction, and LSTMs were used for sequence and context understanding. Current state-of-the-art results are achieved by the hULMonA model (ElJundi et al., 2019) , which is an Arabic language model that is based on the ULMfit architecture (Howard and Ruder, 2018) . We compare the results of ARABERT to those of hULMonA.", "cite_spans": [ { "start": 124, "end": 145, "text": "(Badaro et al., 2014)", "ref_id": "BIBREF9" }, { "start": 388, "end": 412, "text": "(Al Sallab et al., 2015;", "ref_id": "BIBREF4" }, { "start": 413, "end": 436, "text": "Al-Sallab et al., 2017;", "ref_id": "BIBREF5" }, { "start": 521, "end": 542, "text": "(Dahou et al., 2019a)", "ref_id": "BIBREF15" }, { "start": 581, "end": 603, "text": "Farha and Magdy, 2019)", "ref_id": "BIBREF2" }, { "start": 778, "end": 800, "text": "(ElJundi et al., 2019)", "ref_id": "BIBREF22" }, { "start": 878, "end": 902, "text": "(Howard and Ruder, 2018)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Sentiment Analysis", "sec_num": "4.1." }, { "text": "This task aims to extract and detect named entities in the text. It is framed as a word-level classification (or tagging) task, where the classes correspond to pre-defined categories such as names, locations, organizations, events and time expressions. For evaluation, we use the Arabic NER corpus (ANERcorp) (Benajiba and Rosso, 2007) . 
This dataset contains 16.5K entity mentions distributed among 4 entity categories: person (39%), organization (30.4%), location (20.6%), and miscellaneous (10%).", "cite_spans": [ { "start": 309, "end": 335, "text": "(Benajiba and Rosso, 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Named Entity Recognition", "sec_num": "4.2." }, { "text": "Baselines: Advances in the NER task have focused on English, namely on the CoNLL 2003 (Sang and De Meulder, 2003) dataset. Initially, NER was tackled with Conditional Random Fields (CRF) (Lafferty et al., 2001) . Later on, CRFs were used on top of Bi-LSTM models (Huang et al., 2015; Lample et al., 2016) , presenting significant improvements over standalone CRFs. Bi-LSTM-CRF structures were then used with contextualized embeddings that displayed further improvements (Peters et al., 2018) . Lastly, large pre-trained transformers showed slight improvement, setting the current state-of-the-art performance (Devlin et al., 2018) . As for Arabic, we compare ARABERT's performance with the Bi-LSTM-CRF baseline that set the previous state-of-the-art performance (El Bazi and Laachfoubi, 2019), and with multilingual BERT.", "cite_spans": [ { "start": 92, "end": 119, "text": "(Sang and De Meulder, 2003)", "ref_id": "BIBREF44" }, { "start": 193, "end": 216, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF29" }, { "start": 269, "end": 289, "text": "(Huang et al., 2015;", "ref_id": "BIBREF26" }, { "start": 290, "end": 310, "text": "Lample et al., 2016)", "ref_id": "BIBREF30" }, { "start": 474, "end": 495, "text": "(Peters et al., 2018)", "ref_id": "BIBREF39" }, { "start": 613, "end": 634, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Named Entity Recognition", "sec_num": "4.2." }, { "text": "Open-domain Question Answering (QA) is one of the goals of artificial intelligence; this goal can be achieved by leveraging natural language understanding and knowledge gathering (Kwiatkowski et al., 2019) . English QA research has been fueled by the release of large datasets such as the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) . On the other hand, research in Arabic QA has been hindered by the lack of such massive datasets, and by the fact that Arabic presents its own challenges such as:", "cite_spans": [ { "start": 179, "end": 205, "text": "(Kwiatkowski et al., 2019)", "ref_id": "BIBREF28" }, { "start": 329, "end": 353, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Question Answering", "sec_num": "4.3." }, { "text": "\u2022 Inconsistent name spelling (ex: Syria in Arabic can be written as \" -sOriyA\" and \" -sOriyT\" )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Answering", "sec_num": "4.3." }, { "text": "\u2022 Name de-spacing (ex: The name is written as \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Answering", "sec_num": "4.3." }, { "text": "-AbdulAzIz\" in the question, and \" -Abdul AzIz\" in the answer)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Answering", "sec_num": "4.3." 
}, { "text": "\u2022 Dual form \" \", which can have multiple forms (ex: \" \" -\"qalamAn\" or \" \" -\"qalamyn\" meaning \"two pencils\") \u2022 Grammatical gender variation: all nouns, animate and inanimate objects are classified under two genders either masculine or feminine (ex: \" \" -\"kabIr\" and \" \" -\"kabIrT\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Answering", "sec_num": "4.3." }, { "text": "We evaluate ARABERT on the Arabic Reading Comprehension Dataset (ARCD) (Mozannar et al., 2019) , where the task is to find the span of the answer in a document for a given question. ARCD contains 1395 questions on Wikipedia articles along with 2966 machine translated questions and answers from the SQuAD dubbed (Arabic-SQuAD). We train on the whole Arabic-SQuAD and on 50% of ARCD and test on the remaining 50% of ARCD.", "cite_spans": [ { "start": 71, "end": 94, "text": "(Mozannar et al., 2019)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Question Answering", "sec_num": "4.3." }, { "text": "Baselines Multilingual BERT had previously achieved state of the art results on ARCD.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Answering", "sec_num": "4.3." }, { "text": "Pretraining In our experiments, the original implementation of BERT on TensorFlow was used. The data for pre-training was sharded, transformed into TFRecords, and then stored on Google Cloud Storage. Duplication factor was set to 10, a random seed of 34, and a masking probability of 15%. The model was pre-trained on a TPUv2-8 pod for 1,250,000 steps. To speed up the training time, the first 900K steps were trained on sequences of 128 tokens, and the remaining steps were trained on sequences of 512 tokens. The decision of stopping the pre-training was based on the performance of downstream tasks. We follow the same approach taken by the open-sourced German BERT (DeepsetAI, ). Adam optimizer was used, with a learning rate of 1e-4, batch size of 512 and 128 for sequence length of 128 and 512 respectively. Training took 4 days, for 27 epochs over all the tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1." }, { "text": "Fine-tuning Fine-tuning was done independently using the same configuration for all tasks. We do not run extensive grid search for the best hyper-parameters due to computational and time constraints. We use the splits provided by the dataset's authors when available. and the standard 80% and 20% when not 2 . Table 1 illustrates the experimental results of applying AraBERT to multiple Arabic NLU downstream tasks, compared to state-of-the-art results and the multilingual BERT model (mBERT).", "cite_spans": [], "ref_spans": [ { "start": 310, "end": 317, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1." }, { "text": "Sentiment Analysis For Arabic sentiment analysis, the results in Table 1 show that both versions of AraBERT outperform mBERT and other state-of-the-art approaches on most tested datasets. Even though AraBERT was trained on MSA, the model was able to preform well on dialects that were never seen before.", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 72, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "5.2." 
}, { "text": "Named Entity Recognition Results in Table 1 show that AraBERTv0.1 improved results by 2.53 points in F1 score scoring 84.2 compared with the Bi-LSTM-CRF model, making AraBERT the new state-of-the-art for NER on AN-ERcorp. Testing AraBERT with tokenized suffixes and prefixes showed results similar to that of the Bi-LSTM-CRF model. We believe that the reason this happened is that the start token (B-label) is referenced to the suffixes most of the time. An example of this, \"", "cite_spans": [], "ref_spans": [ { "start": 36, "end": 43, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "5.2." }, { "text": "\" with a label B-ORG becomes \" \", \" \" with labels B-ORG, I-ORG respectively, providing misleading starting cues to the model. Testing multilingual BERT, it proved inefficient as we got results lower than the baseline model. Table 1 show an improvement in F1-score, the exact match scores were significantly lower. Upon further examination of the results, the majority of the erroneous answers differed from the true answer by one or two words with no significant impact on the semantics of the answer. Examples are shown in Tables 2 and 3 . We also report a 2% absolute increase in the sentence match score over mBERT, which is the previous state-of-the-art. Sentence Match (SM) measures the percentage of predictions that are within the same sentence as the ground truth answer. ", "cite_spans": [], "ref_spans": [ { "start": 224, "end": 231, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 524, "end": 538, "text": "Tables 2 and 3", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "5.2." }, { "text": "AraBERT achieved state-of-the-art performance on sentiment analysis, named entity recognition, and the question answering tasks. This adds truth to the assumption that pretrained language models on a single language only surpass the performance of a multilingual model. This jump in performance has many explanations. First, data size is a clear factor for the boost in performance. AraBERT used around 24GB of data in comparison with the 4.3G Wikipedia used for the multilingual BERT. Second, the vocab size used in the multilingual BERT is 2k tokens in comparison with 64k vocab size used for developing AraBERT. Third, with the large data size, the pre-training distribution has more diversity. As for the fourth point, the pre-segmentation applied before BERT tokenization improved performance on SA and QA tasks but reduced it on the NER task. It is also noted that the pre-processing applied to the pre-training data took into consideration the complexities of the Arabic language. Hence, increased the effective vocabulary by excluding unnecessary redundant tokens that come with certain common prefixes, and help the model learn better by reducing the language complexity. We believe these factors helped to reach state-of-the-art results on 3 different tasks and 8 different datasets. Obtained results indicate that the advantage we got in the datasets considered are better understood in a monolingual model than of a general language model trained on Wikipedia crawls such as multilingual BERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3." }, { "text": "AraBERT sets a new state-of-the-art for several downstream tasks for Arabic language. It is also 300MB smaller than multilingual BERT. 
By publicly releasing our AraBERT models, we hope that it will be used to serve as the new baseline for the various Arabic NLP tasks, and hope that this work will act as a footing stone to building and improving future Arabic language understanding models. We are currently working on publishing an AraBERT version that won't depend on external tokenizers. We are also in the process of training models with a better understanding of the various dialects that the Arabic language has across different Arabic countries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "Github repo https://github.com/aub-mind/arabert", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to express special thanks to Dr. Ramy Baly (Massachusetts Institute of Technology) for the useful discussions and suggestions, to Dr. Dirk Goldhahn (Universit\u00e4t Leipzig) for access to the OSIAN dataset, to TFRC for the free access to cloud TPUs, and to As-Safir newspaper, and Yakshof for providing us with their news articles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "7." } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Farasa: A fast and furious segmenter for arabic", "authors": [ { "first": "A", "middle": [], "last": "Abdelali", "suffix": "" }, { "first": "K", "middle": [], "last": "Darwish", "suffix": "" }, { "first": "N", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "H", "middle": [], "last": "Mubarak", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations", "volume": "", "issue": "", "pages": "11--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abdelali, A., Darwish, K., Durrani, N., and Mubarak, H. (2016). Farasa: A fast and furious segmenter for ara- bic. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 11-16.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "You tweet what you speak: A city-level dataset of arabic dialects", "authors": [ { "first": "M", "middle": [], "last": "Abdul-Mageed", "suffix": "" }, { "first": "H", "middle": [], "last": "Alhuzali", "suffix": "" }, { "first": "M", "middle": [], "last": "Elaraby", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abdul-Mageed, M., Alhuzali, H., and Elaraby, M. (2018). You tweet what you speak: A city-level dataset of ara- bic dialects. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Evalua- tion (LREC 2018).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Mazajak: An online Arabic sentiment analyser", "authors": [ { "first": "Abu", "middle": [], "last": "Farha", "suffix": "" }, { "first": "I", "middle": [], "last": "Magdy", "suffix": "" }, { "first": "W", "middle": [], "last": "", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "192--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abu Farha, I. and Magdy, W. (2019). Mazajak: An online Arabic sentiment analyser. 
In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 192-198, Florence, Italy, August. Association for Com- putational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Deep learning models for sentiment analysis in arabic", "authors": [ { "first": "Al", "middle": [], "last": "Sallab", "suffix": "" }, { "first": "A", "middle": [], "last": "Hajj", "suffix": "" }, { "first": "H", "middle": [], "last": "Badaro", "suffix": "" }, { "first": "G", "middle": [], "last": "Baly", "suffix": "" }, { "first": "R", "middle": [], "last": "El-Hajj", "suffix": "" }, { "first": "W", "middle": [], "last": "Shaban", "suffix": "" }, { "first": "K", "middle": [], "last": "", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the second workshop on Arabic natural language processing", "volume": "", "issue": "", "pages": "9--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Al Sallab, A., Hajj, H., Badaro, G., Baly, R., El-Hajj, W., and Shaban, K. (2015). Deep learning models for sen- timent analysis in arabic. In Proceedings of the second workshop on Arabic natural language processing, pages 9-17.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Aroma: A recursive deep learning model for opinion mining in arabic as a low resource language", "authors": [ { "first": "A", "middle": [], "last": "Al-Sallab", "suffix": "" }, { "first": "R", "middle": [], "last": "Baly", "suffix": "" }, { "first": "H", "middle": [], "last": "Hajj", "suffix": "" }, { "first": "K", "middle": [ "B" ], "last": "Shaban", "suffix": "" }, { "first": "W", "middle": [], "last": "El-Hajj", "suffix": "" }, { "first": "G", "middle": [], "last": "Badaro", "suffix": "" } ], "year": 2017, "venue": "ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)", "volume": "16", "issue": "4", "pages": "1--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Al-Sallab, A., Baly, R., Hajj, H., Shaban, K. B., El-Hajj, W., and Badaro, G. (2017). Aroma: A recursive deep learning model for opinion mining in arabic as a low re- source language. ACM Transactions on Asian and Low- Resource Language Information Processing (TALLIP), 16(4):1-20.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Arabic tweets sentimental analysis using machine learning", "authors": [], "year": null, "venue": "International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems", "volume": "", "issue": "", "pages": "602--610", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arabic tweets sentimental analysis using machine learn- ing. In International Conference on Industrial, Engi- neering and Other Applications of Applied Intelligent Systems, pages 602-610. Springer.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "LABR: A large scale Arabic book reviews dataset", "authors": [ { "first": "M", "middle": [], "last": "Aly", "suffix": "" }, { "first": "A", "middle": [], "last": "Atiya", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "494--498", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aly, M. and Atiya, A. (2013). LABR: A large scale Arabic book reviews dataset. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguis- tics (Volume 2: Short Papers), pages 494-498, Sofia, Bulgaria, August. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A large scale arabic sentiment lexicon for arabic opinion mining", "authors": [ { "first": "G", "middle": [], "last": "Badaro", "suffix": "" }, { "first": "R", "middle": [], "last": "Baly", "suffix": "" }, { "first": "H", "middle": [], "last": "Hajj", "suffix": "" }, { "first": "N", "middle": [], "last": "Habash", "suffix": "" }, { "first": "W", "middle": [], "last": "El-Hajj", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the EMNLP 2014 workshop on arabic natural language processing (ANLP)", "volume": "", "issue": "", "pages": "165--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Badaro, G., Baly, R., Hajj, H., Habash, N., and El-Hajj, W. (2014). A large scale arabic sentiment lexicon for arabic opinion mining. In Proceedings of the EMNLP 2014 workshop on arabic natural language processing (ANLP), pages 165-173.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A sentiment treebank and morphologically enriched recursive deep models for effective sentiment analysis in arabic", "authors": [ { "first": "R", "middle": [], "last": "Baly", "suffix": "" }, { "first": "H", "middle": [], "last": "Hajj", "suffix": "" }, { "first": "N", "middle": [], "last": "Habash", "suffix": "" }, { "first": "K", "middle": [ "B" ], "last": "Shaban", "suffix": "" }, { "first": "W", "middle": [], "last": "El-Hajj", "suffix": "" } ], "year": 2017, "venue": "ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)", "volume": "16", "issue": "4", "pages": "1--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baly, R., Hajj, H., Habash, N., Shaban, K. B., and El-Hajj, W. (2017). A sentiment treebank and morphologically enriched recursive deep models for effective sentiment analysis in arabic. ACM Transactions on Asian and Low- Resource Language Information Processing (TALLIP), 16(4):1-21.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Arsentd-lev: A multi-topic corpus for target-based sentiment analysis in arabic levantine tweets", "authors": [ { "first": "R", "middle": [], "last": "Baly", "suffix": "" }, { "first": "A", "middle": [], "last": "Khaddaj", "suffix": "" }, { "first": "H", "middle": [], "last": "Hajj", "suffix": "" }, { "first": "W", "middle": [], "last": "El-Hajj", "suffix": "" }, { "first": "K", "middle": [ "B" ], "last": "Shaban", "suffix": "" } ], "year": 2018, "venue": "OSACT 3: The 3rd Workshop on Open-Source Arabic Corpora and Processing Tools", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baly, R., Khaddaj, A., Hajj, H., El-Hajj, W., and Sha- ban, K. B. (2018). Arsentd-lev: A multi-topic corpus for target-based sentiment analysis in arabic levantine tweets. In OSACT 3: The 3rd Workshop on Open-Source Arabic Corpora and Processing Tools, page 37.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Anersys 2.0: Conquering the ner task for the arabic language by combining the maximum entropy with pos-tag information", "authors": [ { "first": "Y", "middle": [], "last": "Benajiba", "suffix": "" }, { "first": "P", "middle": [], "last": "Rosso", "suffix": "" } ], "year": 2007, "venue": "IICAI", "volume": "", "issue": "", "pages": "1814--1823", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benajiba, Y. and Rosso, P. (2007). Anersys 2.0: Conquer- ing the ner task for the arabic language by combining the maximum entropy with pos-tag information. 
In IICAI, pages 1814-1823.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Enriching word vectors with subword information", "authors": [ { "first": "P", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "E", "middle": [], "last": "Grave", "suffix": "" }, { "first": "A", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching word vectors with subword informa- tion. Transactions of the Association for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Arabic sentiment classification using convolutional neural network and differential evolution algorithm", "authors": [ { "first": "A", "middle": [], "last": "Dahou", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Elaziz", "suffix": "" }, { "first": "J", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "S", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dahou, A., Elaziz, M. A., Zhou, J., and Xiong, S. (2019a). Arabic sentiment classification using convolutional neu- ral network and differential evolution algorithm. Com- putational intelligence and neuroscience, 2019.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Multi-channel embedding convolutional neural network model for arabic sentiment classification", "authors": [], "year": null, "venue": "ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)", "volume": "18", "issue": "", "pages": "1--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Multi-channel embedding convolutional neural network model for arabic sentiment classification. ACM Transac- tions on Asian and Low-Resource Language Information Processing (TALLIP), 18(4):1-23.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Bertje: A dutch bert model", "authors": [ { "first": "W", "middle": [], "last": "De Vries", "suffix": "" }, { "first": "A", "middle": [], "last": "Van Cranenburgh", "suffix": "" }, { "first": "A", "middle": [], "last": "Bisazza", "suffix": "" }, { "first": "T", "middle": [], "last": "Caselli", "suffix": "" }, { "first": "G", "middle": [], "last": "Van Noord", "suffix": "" }, { "first": "M", "middle": [], "last": "Nissim", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.09582" ] }, "num": null, "urls": [], "raw_text": "de Vries, W., van Cranenburgh, A., Bisazza, A., Caselli, T., van Noord, G., and Nissim, M. (2019). Bertje: A dutch bert model. arXiv preprint arXiv:1912.09582. DeepsetAI. ). 
Open sourcing german bert.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "J", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "M.-W", "middle": [], "last": "Chang", "suffix": "" }, { "first": "K", "middle": [], "last": "Lee", "suffix": "" }, { "first": "K", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training of deep bidirectional trans- formers for language understanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Arabic named entity recognition using deep learning approach", "authors": [ { "first": "I", "middle": [], "last": "El Bazi", "suffix": "" }, { "first": "N", "middle": [], "last": "Laachfoubi", "suffix": "" } ], "year": 2019, "venue": "International Journal of Electrical & Computer Engineering", "volume": "", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "El Bazi, I. and Laachfoubi, N. (2019). Arabic named entity recognition using deep learning approach. International Journal of Electrical & Computer Engineering (2088- 8708), 9(3).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "1.5 billion words arabic corpus", "authors": [ { "first": "I", "middle": [ "A" ], "last": "El-Khair", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.04033" ] }, "num": null, "urls": [], "raw_text": "El-Khair, I. A. (2016). 1.5 billion words arabic corpus. arXiv preprint arXiv:1611.04033.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "hulmona: The universal language model in arabic", "authors": [ { "first": "O", "middle": [], "last": "Eljundi", "suffix": "" }, { "first": "W", "middle": [], "last": "Antoun", "suffix": "" }, { "first": "N", "middle": [], "last": "El Droubi", "suffix": "" }, { "first": "H", "middle": [], "last": "Hajj", "suffix": "" }, { "first": "W", "middle": [], "last": "El-Hajj", "suffix": "" }, { "first": "K", "middle": [], "last": "Shaban", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "68--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "ElJundi, O., Antoun, W., El Droubi, N., Hajj, H., El-Hajj, W., and Shaban, K. (2019). hulmona: The universal lan- guage model in arabic. In Proceedings of the Fourth Ara- bic Natural Language Processing Workshop, pages 68- 77.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Hotel arabic-reviews dataset construction for sentiment analysis applications", "authors": [ { "first": "A", "middle": [], "last": "Elnagar", "suffix": "" }, { "first": "Y", "middle": [ "S" ], "last": "Khalifa", "suffix": "" }, { "first": "A", "middle": [], "last": "Einea", "suffix": "" } ], "year": 2018, "venue": "Intelligent Natural Language Processing: Trends and Applications", "volume": "", "issue": "", "pages": "35--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elnagar, A., Khalifa, Y. S., and Einea, A. (2018). Ho- tel arabic-reviews dataset construction for sentiment analysis applications. In Intelligent Natural Language Processing: Trends and Applications, pages 35-52. 
Springer.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Addressing noise in multidialectal word embeddings", "authors": [ { "first": "A", "middle": [], "last": "Erdmann", "suffix": "" }, { "first": "N", "middle": [], "last": "Zalmout", "suffix": "" }, { "first": "N", "middle": [], "last": "Habash", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "558--565", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erdmann, A., Zalmout, N., and Habash, N. (2018). Ad- dressing noise in multidialectal word embeddings. In Proceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Pa- pers), pages 558-565.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Universal language model fine-tuning for text classification", "authors": [ { "first": "J", "middle": [], "last": "Howard", "suffix": "" }, { "first": "S", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1801.06146" ] }, "num": null, "urls": [], "raw_text": "Howard, J. and Ruder, S. (2018). Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Bidirectional lstm-crf models for sequence tagging", "authors": [ { "first": "Z", "middle": [], "last": "Huang", "suffix": "" }, { "first": "W", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "", "suffix": "" }, { "first": "K", "middle": [], "last": "", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.01991" ] }, "num": null, "urls": [], "raw_text": "Huang, Z., Xu, W., and Yu, K. (2015). Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "authors": [ { "first": "T", "middle": [], "last": "Kudo", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kudo, T. (2018). Subword regularization: Improving neu- ral network translation models with multiple subword candidates.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Natural questions: a benchmark for question answering research. 
Transactions of the Association of Computational Linguistics", "authors": [ { "first": "T", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "J", "middle": [], "last": "Palomaki", "suffix": "" }, { "first": "O", "middle": [], "last": "Redfield", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "A", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "C", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "D", "middle": [], "last": "Epstein", "suffix": "" }, { "first": "I", "middle": [], "last": "Polosukhin", "suffix": "" }, { "first": "M", "middle": [], "last": "Kelcey", "suffix": "" }, { "first": "J", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "K", "middle": [], "last": "Lee", "suffix": "" }, { "first": "K", "middle": [ "N" ], "last": "Toutanova", "suffix": "" }, { "first": "L", "middle": [], "last": "Jones", "suffix": "" }, { "first": "M.-W", "middle": [], "last": "Chang", "suffix": "" }, { "first": "A", "middle": [], "last": "Dai", "suffix": "" }, { "first": "J", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Q", "middle": [], "last": "Le", "suffix": "" }, { "first": "S", "middle": [], "last": "Petrov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kwiatkowski, T., Palomaki, J., Redfield, O., Collins, M., Parikh, A., Alberti, C., Epstein, D., Polosukhin, I., Kel- cey, M., Devlin, J., Lee, K., Toutanova, K. N., Jones, L., Chang, M.-W., Dai, A., Uszkoreit, J., Le, Q., and Petrov, S. (2019). Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "J", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "F", "middle": [ "C" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lafferty, J. D., McCallum, A., and Pereira, F. C. (2001). Conditional random fields: Probabilistic models for seg- menting and labeling sequence data. In ICML.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Neural architectures for named entity recognition", "authors": [ { "first": "G", "middle": [], "last": "Lample", "suffix": "" }, { "first": "M", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "S", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "K", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "C", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1603.01360" ] }, "num": null, "urls": [], "raw_text": "Lample, G., Ballesteros, M., Subramanian, S., Kawakami, K., and Dyer, C. (2016). Neural architectures for named entity recognition. 
arXiv preprint arXiv:1603.01360.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Albert: A lite bert for selfsupervised learning of language representations", "authors": [ { "first": "Z", "middle": [], "last": "Lan", "suffix": "" }, { "first": "M", "middle": [], "last": "Chen", "suffix": "" }, { "first": "S", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "K", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "P", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "R", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. (2019). Albert: A lite bert for self- supervised learning of language representations.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Camembert: a tasty french language model", "authors": [ { "first": "L", "middle": [], "last": "Martin", "suffix": "" }, { "first": "B", "middle": [], "last": "Muller", "suffix": "" }, { "first": "P", "middle": [ "J O" ], "last": "Su\u00e1rez", "suffix": "" }, { "first": "Y", "middle": [], "last": "Dupont", "suffix": "" }, { "first": "L", "middle": [], "last": "Romary", "suffix": "" }, { "first": "", "middle": [], "last": "\u00c9ric Villemonte De La Clergerie", "suffix": "" }, { "first": "D", "middle": [], "last": "Seddah", "suffix": "" }, { "first": "B", "middle": [], "last": "Sagot", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin, L., Muller, B., Su\u00e1rez, P. J. O., Dupont, Y., Ro- mary, L.,\u00c9ric Villemonte de la Clergerie, Seddah, D., and Sagot, B. (2019). Camembert: a tasty french lan- guage model.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "G", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111- 3119.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Advances in pre-training distributed word representations", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "E", "middle": [], "last": "Grave", "suffix": "" }, { "first": "P", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "C", "middle": [], "last": "Puhrsch", "suffix": "" }, { "first": "Joulin", "middle": [], "last": "", "suffix": "" }, { "first": "A", "middle": [], "last": "", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1712.09405" ] }, "num": null, "urls": [], "raw_text": "Mikolov, T., Grave, E., Bojanowski, P., Puhrsch, C., and Joulin, A. (2017). Advances in pre-training distributed word representations. 
arXiv preprint arXiv:1712.09405.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Neural arabic question answering", "authors": [ { "first": "H", "middle": [], "last": "Mozannar", "suffix": "" }, { "first": "E", "middle": [], "last": "Maamary", "suffix": "" }, { "first": "K", "middle": [], "last": "El Hajal", "suffix": "" }, { "first": "H", "middle": [], "last": "Hajj", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "108--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mozannar, H., Maamary, E., El Hajal, K., and Hajj, H. (2019). Neural arabic question answering. In Proceed- ings of the Fourth Arabic Natural Language Processing Workshop, pages 108-118.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "ASTD: Arabic sentiment tweets dataset", "authors": [ { "first": "M", "middle": [], "last": "Nabil", "suffix": "" }, { "first": "M", "middle": [], "last": "Aly", "suffix": "" }, { "first": "A", "middle": [], "last": "Atiya", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2515--2519", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nabil, M., Aly, M., and Atiya, A. (2015). ASTD: Ara- bic sentiment tweets dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2515-2519, Lisbon, Portugal, September. Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "J", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "C", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pennington, J., Socher, R., and Manning, C. (2014). Glove: Global vectors for word representation. In Pro- ceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532- 1543.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Deep contextualized word representations", "authors": [ { "first": "M", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "M", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "M", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "M", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "C", "middle": [], "last": "Clark", "suffix": "" }, { "first": "K", "middle": [], "last": "Lee", "suffix": "" }, { "first": "L", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "2227--2237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. (2018). Deep contextu- alized word representations. 
In Proceedings of NAACL- HLT, pages 2227-2237.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "AlBERTo: Italian BERT Language Understanding Model for NLP Challenging Tasks Based on Tweets", "authors": [ { "first": "M", "middle": [], "last": "Polignano", "suffix": "" }, { "first": "P", "middle": [], "last": "Basile", "suffix": "" }, { "first": "M", "middle": [], "last": "De Gemmis", "suffix": "" }, { "first": "G", "middle": [], "last": "Semeraro", "suffix": "" }, { "first": "V", "middle": [], "last": "Basile", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Sixth Italian Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Polignano, M., Basile, P., de Gemmis, M., Semeraro, G., and Basile, V. (2019). AlBERTo: Italian BERT Lan- guage Understanding Model for NLP Challenging Tasks Based on Tweets. In Proceedings of the Sixth Ital- ian Conference on Computational Linguistics (CLiC-it 2019), volume 2481. CEUR.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Exploring the limits of transfer learning with a unified textto", "authors": [ { "first": "C", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "N", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "A", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "K", "middle": [], "last": "Lee", "suffix": "" }, { "first": "S", "middle": [], "last": "Narang", "suffix": "" }, { "first": "M", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "W", "middle": [], "last": "Li", "suffix": "" }, { "first": "P", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. (2019). Ex- ploring the limits of transfer learning with a unified text- to-text transformer.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Squad: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "P", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "J", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "K", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "P", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.05250" ] }, "num": null, "urls": [], "raw_text": "Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. (2016). Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Maximum entropy models for natural language ambiguity resolution", "authors": [ { "first": "A", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratnaparkhi, A. (1998). 
Maximum entropy models for nat- ural language ambiguity resolution.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Introduction to the conll-2003 shared task: Language-independent named entity recognition", "authors": [ { "first": "E", "middle": [ "F" ], "last": "Sang", "suffix": "" }, { "first": "F", "middle": [], "last": "De Meulder", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sang, E. F. and De Meulder, F. (2003). Introduction to the conll-2003 shared task: Language-independent named entity recognition. arXiv preprint cs/0306050.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Aravec: A set of arabic word embedding models for use in arabic nlp", "authors": [ { "first": "A", "middle": [ "B" ], "last": "Soliman", "suffix": "" }, { "first": "K", "middle": [], "last": "Eissa", "suffix": "" }, { "first": "S", "middle": [ "R" ], "last": "El-Beltagy", "suffix": "" } ], "year": 2017, "venue": "Procedia Computer Science", "volume": "117", "issue": "", "pages": "256--265", "other_ids": {}, "num": null, "urls": [], "raw_text": "Soliman, A. B., Eissa, K., and El-Beltagy, S. R. (2017). Aravec: A set of arabic word embedding models for use in arabic nlp. Procedia Computer Science, 117:256-265.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Z", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Z", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Y", "middle": [], "last": "Yang", "suffix": "" }, { "first": "J", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "R", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Q", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., and Le, Q. V. (2019). Xlnet: Generalized autore- gressive pretraining for language understanding.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "OSIAN: Open source international Arabic news corpus -preparation and integration into the CLARINinfrastructure", "authors": [ { "first": "I", "middle": [], "last": "Zeroual", "suffix": "" }, { "first": "D", "middle": [], "last": "Goldhahn", "suffix": "" }, { "first": "T", "middle": [], "last": "Eckart", "suffix": "" }, { "first": "A", "middle": [], "last": "Lakhouaja", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "175--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zeroual, I., Goldhahn, D., Eckart, T., and Lakhouaja, A. (2019). OSIAN: Open source international Arabic news corpus -preparation and integration into the CLARIN- infrastructure. In Proceedings of the Fourth Arabic Nat- ural Language Processing Workshop, pages 175-182, Florence, Italy, August. 
Association for Computational Linguistics.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "authors": [ { "first": "Y", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "R", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "R", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "R", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "R", "middle": [], "last": "Urtasun", "suffix": "" }, { "first": "A", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "S", "middle": [], "last": "Fidler", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhu, Y., Kiros, R., Zemel, R., Salakhutdinov, R., Urtasun, R., Torralba, A., and Fidler, S. (2015). Aligning books and movies: Towards story-like visual explanations by watching movies and reading books.", "links": null } }, "ref_entries": { "TABREF0": { "content": "
Task | metric | prev. SOTA | mBERT | AraBERTv0.1 / v1
SA (HARD) | Acc. | 95.7* | 95.7 | 96.2 / 96.1
SA (ASTD) | Acc. | 86.5* | 80.1 | 92.2 / 92.6
SA (ArsenTD-Lev) | Acc. | 52.4* | 51.0 | 58.9 / 59.4
SA (AJGT) | Acc. | 92.6** | 83.6 | 93.1 / 93.8
SA (LABR) | Acc. | 87.5† | 83.0 | 85.9 / 86.7
NER (ANERcorp) | macro-F1 | 81.7†† | 78.4 | 84.2 / 81.9
QA (ARCD) | Exact Match | mBERT | 34.2 | 30.1 / 30.6
QA (ARCD) | macro-F1 | mBERT | 61.3 | 61.2 / 62.7
QA (ARCD) | Sent. Match | mBERT | 90.0 | 93.0 / 92.0
* (ElJundi et al., 2019)
** (Dahou et al., 2019b)
† (Dahou et al., 2019b)
†† Previous state-of-the-art performance, obtained by a BiLSTM-CRF model
", "html": null, "text": "Performance of AraBERT on Arabic downstream tasks compared to mBERT and previous state of the art systems", "type_str": "table", "num": null }, "TABREF1": { "content": "
Question | Where was the United Nations established?
Ground Truth | In San Francisco -
Predicted Answer | San Francisco -
", "html": null, "text": "Example of an erroneous results from the ARCD test set: the only difference is the preposition \" -In\".", "type_str": "table", "num": null }, "TABREF2": { "content": "
Question | What is the type of government in Austria?
Ground Truth | Austria is a federal republic -
Predicted Answer | A federal republic -
", "html": null, "text": "Another example of an erroneous results from the ARCD test set: the predicted answer does not include \"introductory\" words.", "type_str": "table", "num": null } } } }