{ "paper_id": "2019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:29:26.170332Z" }, "title": "", "authors": [], "year": "", "venue": null, "identifiers": {}, "abstract": "India is one of unique countries in the world that has the legacy of diversity of languages. English influence most of these languages. This causes a large presence of code-mixed text in social media. Enormous presence of this code-mixed text provides an important research area for Natural Language Processing (NLP). This paper proposes a novel Attention based deep learning technique for Sentiment Classification on Code-Mixed Text (ACCMT) of Hindi-English. The proposed architecture uses fusion of character and word features. Non-availability of suitable word embedding to represent these Code-Mixed texts is another important hurdle for this league of NLP tasks. This paper also proposes a novel technique for preparing word embedding of Code-Mixed text. This embedding is prepared with two separately trained word embeddings on romanized Hindi and English respectively. This embedding is further used in the proposed deep learning based architecture for robust classification. The Proposed technique achieves 71.97% accuracy, which exceeds the baseline accuracy.", "pdf_parse": { "paper_id": "2019", "_pdf_hash": "", "abstract": [ { "text": "India is one of unique countries in the world that has the legacy of diversity of languages. English influence most of these languages. This causes a large presence of code-mixed text in social media. Enormous presence of this code-mixed text provides an important research area for Natural Language Processing (NLP). This paper proposes a novel Attention based deep learning technique for Sentiment Classification on Code-Mixed Text (ACCMT) of Hindi-English. The proposed architecture uses fusion of character and word features. Non-availability of suitable word embedding to represent these Code-Mixed texts is another important hurdle for this league of NLP tasks. This paper also proposes a novel technique for preparing word embedding of Code-Mixed text. This embedding is prepared with two separately trained word embeddings on romanized Hindi and English respectively. This embedding is further used in the proposed deep learning based architecture for robust classification. The Proposed technique achieves 71.97% accuracy, which exceeds the baseline accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Languages used in India belong to several language families. Historical presence of British on Indian soil has led to a very high influence of English language on many of these Indian languages. People belonging in a multi-lingual society of India, gives rise of a large amount of text in various social media (Patra, 2018) . Inclusion of English is very common in these texts. Essentially, an utterance in which a user makes use of grammar, lexicon or other linguistic units of more than one language is said to have undergone code-mixing (Chanda, 2016) . Hindi is the widely spoken language of India and used in various media. The number of native Hindi speakers is about 25% of the total Indian population; however, including dialects of Hindi termed as Hindi languages, the total is around 44% of Indians, mostly accounted from the states falling under the Hindi belt 1 . This community contributes a large amount of text on social media. 
The form of the Hindi language used on social media is mixed with English and is available in Roman script. According to the study (Dey, 2014), the most common reason for this kind of code-mixing in a single text is 'Ease of Use'. Code-mixed Hindi and English poses various types of challenges (Barman, 2014), which makes text classification on code-mixed text an exciting problem for the NLP community. Despite wide research on the classification of code-mixed texts, open opportunities remain in two major aspects: first, a technique for preparing word embeddings for code-mixed texts, and second, the utilization of character and word features together to improve accuracy. This research targets these two open points for exploration.", "cite_spans": [ { "start": 310, "end": 323, "text": "(Patra, 2018)", "ref_id": "BIBREF0" }, { "start": 540, "end": 554, "text": "(Chanda, 2016)", "ref_id": "BIBREF1" }, { "start": 1072, "end": 1083, "text": "(Dey, 2014)", "ref_id": "BIBREF2" }, { "start": 1243, "end": 1257, "text": "(Barman, 2014)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Various research works have tried to tackle these challenges. The recent work of Prabhu (2016) utilizes character-level LSTMs to learn sub-word level information of social media text. This information is then used to classify the sentences using an annotated corpus. The work is very interesting and achieves good accuracy. However, the work does not intend to capture information related to word-level semantics. This provides further scope of research to study the impact of a word-embedding based approach on the classification of code-mixed text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Samsung R&D Institute India, Bangalore {siddhartha.m, vinuth, anish.n, mp.shah, nik.kumar} @samsung.com Sharma (2015) used an approach of lexicon lookup for text normalization and sentiment analysis on code-mixed text. Pravalika (2017) used a lexicon lookup approach for domain-specific sentiment analysis. These lexicon lookup based approaches lack the capability to handle misspelled words and the wide variety of these code-mixed texts. Recent work (Lal, 2019) has used BiLSTM based dual encoder networks to represent the character based input along with an additional feature network to achieve good accuracy on code-mixed texts. Recent work (Yenigalla, 2018) has explored the opportunity of using both character and word embedding based features to handle unknown words for text classification on monolingual English-only text corpora. 
However, this approach is not common for code-mixed text, primarily because of the non-availability of word embeddings for code-mixed texts.", "cite_spans": [ { "start": 167, "end": 180, "text": "Sharma (2015)", "ref_id": "BIBREF8" }, { "start": 505, "end": 516, "text": "(Lal, 2019)", "ref_id": "BIBREF6" }, { "start": 690, "end": 707, "text": "(Yenigalla, 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Robust Deep Learning Based Sentiment Classification of Code-Mixed Text Siddhartha Mukherjee, Vinuthkumar Prasan, Anish Nediyanchath, Manan Shah, Nikhil Kumar", "sec_num": null }, { "text": "We have considered the Hi-En Code-Mixed dataset 2 , shared by Prabhu (2016), as the baseline for this research.", "cite_spans": [ { "start": 58, "end": 71, "text": "Prabhu (2016)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "3" }, { "text": "The dataset was collected from the public Facebook pages of famous Indian personalities, i.e. Salman Khan and Narendra Modi. The data is present in Roman script. The dataset contains 3879 comments. Each comment is annotated on a 3-level polarity scale, i.e. Positive, Neutral and Negative. The dataset contains 15% negative, 50% neutral and 35% positive comments. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Description", "sec_num": "3.1" }, { "text": "Transliteration of phonetic languages, like Hindi, into Roman script creates several variations of the same word. For example, \"\u092c\u0939\u0941\u0924\" in Hindi, which means \"more\" in English, can be transliterated as \"bahut\", \"bohoot\" or \"bohut\", etc. The Romanized code-mixed text available on social media imposes the additional challenge of contraction of phrases. For example, 'awsm' is a shortened form of 'awesome'; 'a6a' is contracted from 'accha', etc. Romanized code-mixed text also contains sentences with non-grammatical constructs like 'Bhai jaan bolu naa.. yar' as well as non-standard spellings such as 'youuuu', 'jaaaaan', etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges", "sec_num": "3.2" }, { "text": "The phonetic similarity of various words across the participant languages in code-mixed text increases the challenge by introducing ambiguity in the meaning of a word. For example, \"man\" in English means 'an adult human male', whereas in Hindi it means 'mind'. The large availability of clean corpora has given rise to various kinds of research on monolingual texts like English. On the other hand, the limited availability of clean and standard code-mixed corpora restricts the wide spectrum of experiments that depend on word-embedding based input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges", "sec_num": "3.2" }, { "text": "The dataset is cleaned of any special characters for this research. The final character set consists of 36 characters, including the 26 English letters and 10 digits: abcdefghijklmnopqrstuvwxyz0123456789", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character Set", "sec_num": "3.3" },
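To make the cleaning step of Section 3.3 concrete, the following is a minimal Python sketch. It assumes simple lower-casing plus a regex filter down to the 36-character set (with the space that Section 5 later notes is kept as a valid input); the function and variable names are illustrative only, not the authors' code.

```python
import re

# Keep lower-case letters, digits and space; drop every other character.
# The space is retained here because Section 5 treats it as a valid character input.
NOT_ALLOWED = re.compile(r"[^a-z0-9 ]")

def clean_comment(comment: str) -> str:
    """Lower-case a comment and strip characters outside the 36-character set."""
    return NOT_ALLOWED.sub("", comment.lower())

if __name__ == "__main__":
    print(clean_comment("Sir yeh tho sirf aap hi kar sakte hai... Great sir!!"))
    # -> "sir yeh tho sirf aap hi kar sakte hai great sir"
```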
{ "text": "The proposed method consists of two major parts. The first is preparing a suitable word embedding for code-mixed text, and the second is a robust deep learning architecture for classification of code-mixed text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Method", "sec_num": "4" }, { "text": "There are three main aspects to preparing a word embedding for Hindi-English code-mixed texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Embedding", "sec_num": "4.1" }, { "text": "The first is the preparation of a corpus of Romanized Hindi text. The second is preparing the word embedding by choosing the right word embedding algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Embedding", "sec_num": "4.1" }, { "text": "The third is to ensure that similar words from both participant languages have nearby representations. To address the first aspect, we use Indic transliteration 3 on a large Hindi-English corpus 4 , in which the Hindi text is present in Devanagari 5 script and which also contains English content. In this way, we obtain the Hindi-English code-mixed corpus in Roman script. Figure 1 depicts the process of generating the desired corpus. We hypothesize that the transliterated corpus represents a new language of Romanized Hindi. As discussed earlier, the Romanized representation of code-mixed text poses various challenges, such as the presence of multiple homophonic representations of a single word; therefore, we have chosen the fastText (Bojanowski, 2017) word representation as the method to train the word embedding. This addresses the second aspect of the previously discussed task of preparing the word embedding. Once the corpus is generated, we train the word embedding with fastText 6 . This trained embedding provides the vectorized representation of a Romanized Hindi word. On the other side, an utterance in the code-mixed corpus contains English words as well. For example, the 1st utterance in Table 1 contains two phrases, where the 1st phrase contains Romanized Hindi words and the 2nd phrase contains English words. This is the third and final aspect of the word embedding task. To represent such an utterance using word embeddings, we need a bilingual word embedding which includes both Romanized Hindi and English words. To cater to this requirement, we have used the method proposed by (Smith, 2017) to obtain a bilingual representation of words from two monolingual representations. SVD is used to learn a linear transformation (a matrix), which aligns monolingual vectors from the two languages in a single vector space 7 . In this experiment, we considered two monolingual word embeddings. The first is the trained word embedding of Romanized Hindi. The second is the pre-trained and published 8 English word embedding (Mikolov, 2018), which is trained on the Wikipedia corpus.", "cite_spans": [ { "start": 720, "end": 738, "text": "(Bojanowski, 2017)", "ref_id": "BIBREF10" }, { "start": 1631, "end": 1644, "text": "(Smith, 2017)", "ref_id": "BIBREF13" }, { "start": 2060, "end": 2075, "text": "(Mikolov, 2018)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 364, "end": 372, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1206, "end": 1213, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Word-Embedding", "sec_num": "4.1" },
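The embedding pipeline of Section 4.1 can be sketched roughly as follows. This is a hedged illustration, not the authors' implementation: the corpus file name, the ITRANS transliteration scheme, and the `seed_pairs` bilingual dictionary are assumptions; only the 300-dimensional vectors and the 'minn'/'maxn' values of 2 and 10 come from Section 5, and the SVD alignment follows Smith et al. (2017).

```python
import numpy as np
import fasttext
from indic_transliteration import sanscript

# 1) Romanize the Devanagari side of the corpus (the ITRANS scheme is an assumption).
def romanize(line: str) -> str:
    return sanscript.transliterate(line, sanscript.DEVANAGARI, sanscript.ITRANS).lower()

# 2) Train a fastText skip-gram model on the Romanized-Hindi/English corpus.
#    dim, minn and maxn follow Section 5; the file name is illustrative.
hi_model = fasttext.train_unsupervised(
    "romanized_hi_en_corpus.txt", model="skipgram", dim=300, minn=2, maxn=10)

# 3) Align the Romanized-Hindi space with the pre-trained English space using the
#    SVD (orthogonal Procrustes) solution of Smith et al. (2017).
def learn_alignment(pairs, src_vec, tgt_vec):
    X = np.vstack([src_vec(s) for s, _ in pairs])   # Romanized-Hindi vectors
    Y = np.vstack([tgt_vec(t) for _, t in pairs])   # English vectors
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt                                    # maps Hindi vectors into the English space

# seed_pairs (hypothetical): [("bahut", "very"), ("accha", "good"), ...]
# english_vec(word) would look up the published English fastText vectors.
# W = learn_alignment(seed_pairs, hi_model.get_word_vector, english_vec)
# hindi_in_english_space = hi_model.get_word_vector("bahut") @ W
```

The quality of such an alignment depends heavily on the seed dictionary used for the Procrustes fit; the paper does not specify its source, so any Hindi-English lexicon would serve for a sketch like this.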
{ "text": "We prepare an Attention based deep learning architecture for Classification of Code-Mixed Text (ACCMT), which uses learning from both character and word based representations. The proposed architecture consists of two major parts. The first part learns sub-word level features from the input character sequence. The other part uses the prepared word embedding as input and learns word level features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "The first part is similar to the baseline implementation of Prabhu (2016), which is inspired by the research work of Kim (2016). This part is independent of the word vocabulary, which helps to resolve important issues in code-mixed text like non-standard spelling, phrasal contraction, etc.", "cite_spans": [ { "start": 57, "end": 70, "text": "Prabhu (2016)", "ref_id": "BIBREF4" }, { "start": 111, "end": 121, "text": "Kim (2016)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "6 https://fasttext.cc/docs/en/python-module.html 7 https://github.com/Babylonpartners/fastText_multilingual 8 https://fasttext.cc/docs/en/pretrained-vectors.html Even though this representation lacks word level semantic interpretability, the assumption is that character n-grams serve semantic functions, e.g. 'cat+s=cats'. Formally, a sentence S is made of a sequence of characters [c_1, \u2026, c_l], where l is the sentence length. C \u2208 \u211d^(d\u00d7l) is the representation of the sentence, where d is the dimension of the character embedding. We perform the convolution of C with a filter H \u2208 \u211d^(d\u00d7m) of length m. This operation provides a feature map f \u2208 \u211d^(l\u2212m+1). Convolution is shown with the '*' operator in equation 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f = H * C", "eq_num": "(1)" } ], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "Next, a max-pool operation over p features from f brings the sub-word representation y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h_t = o_t \u2299 tanh(i_t \u2299 c\u0303_t + f_t \u2299 c\u0303_{t\u22121}), c\u0303_t = tanh(W_c[h_{t\u22121}, x_t] + b_c), i_t = \u03c3(W_i[h_{t\u22121}, x_t] + b_i), f_t = \u03c3(W_f[h_{t\u22121}, x_t] + b_f), o_t = \u03c3(W_o[h_{t\u22121}, x_t] + b_o)", "eq_num": "(2)" } ], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "Here x_t represents the input at the current timestamp. The output from the LSTM at time t is h_t. o_t, i_t and f_t are respectively the output, input and forget gates of the LSTM cell. c\u0303_t is the cell state at time t. The second part is designed with the intention of capturing features for the word level semantic representation, to counter the limitation of the previous part of the architecture. For this purpose an LSTM is used as well, because LSTMs have performed very well (Bhasin, 2019; Tang, 2015) in various sentiment analysis and other text processing tasks. Formally, a sentence S is made of a sequence of words [w_1, \u2026, w_n], where n is the word length of S. W \u2208 \u211d^(d\u00d7n) is the representation of the sentence, where d is the dimension of the word embedding. Now w_t, the word at time t, is passed to the memory cell of the LSTM and the output follows equation (2). We have introduced two separate attention layers over the LSTM outputs of the character based side and the word based side respectively. 
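As a concrete illustration of equations (1)-(2), the character based part could be sketched in Keras roughly as below. This is a minimal sketch, not the authors' exact implementation: the filter count, kernel size, pooling width and LSTM size are assumptions; only the 200-character input length and the two stacked LSTM layers come from Section 5.

```python
from keras.models import Model
from keras.layers import Input, Embedding, Conv1D, MaxPooling1D, LSTM

MAX_CHARS = 200   # Section 5: sentences truncated / zero-padded to 200 characters
CHAR_VOCAB = 38   # 36-character set + space + padding index (assumption)

char_input = Input(shape=(MAX_CHARS,), name="char_indices")
x = Embedding(input_dim=CHAR_VOCAB, output_dim=64)(char_input)   # character embedding C
x = Conv1D(filters=64, kernel_size=3, activation="tanh")(x)      # equation (1): f = H * C
x = MaxPooling1D(pool_size=2)(x)                                 # max-pool over p features
x = LSTM(64, return_sequences=True)(x)                           # equation (2), first LSTM
x = LSTM(64, return_sequences=True)(x)                           # second stacked LSTM
char_encoder = Model(char_input, x, name="char_subword_encoder")
char_encoder.summary()
```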
The intention of applying the attention is to infer the dominating features from the character representation as well as the word representation respectively. We have used self attention (Vaswani, 2017) for our implementation 9 ( 9 https://pypi.org/project/keras-self-attention/). Formally, the attention can be depicted as equation 3.", "cite_spans": [ { "start": 424, "end": 438, "text": "(Bhasin, 2019;", "ref_id": "BIBREF7" }, { "start": 439, "end": 450, "text": "Tang, 2015)", "ref_id": "BIBREF12" }, { "start": 563, "end": 574, "text": "[ 1 , \u2026 , ]", "ref_id": null }, { "start": 1144, "end": 1159, "text": "(Vaswani, 2017)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "Attention(Q, K, V) = softmax(QK^T / \u221ad_k) V (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "Here Q, K and V are the same, namely the output of the previous layer. The attended sub-word level representation from the character embedding part and the learnt features from the word embedding part are concatenated as a late fusion to represent the features of the input sentence. The joint feature is passed through another attention layer. This layer is intended to figure out the dominating features among the word based and character based learnt features. Following this layer, we add two consecutive fully connected layers with ReLU non-linearity. The final output of the last dense layer is passed through a Softmax layer to predict the sentiment. Formally, the late fusion of the learnt character features y_c and word features y_w is s = [y_c, y_w], representing the jointly learnt features of sentence S. Then s is input to dense layers with ReLU non-linearity. Output a_1 is passed through the second dense layer to get output a_2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a_1 = ReLU(W_1 \u00d7 s + b_1), a_2 = ReLU(W_2 \u00d7 a_1 + b_2)", "eq_num": "(4)" } ], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "Further, the final layer is formalized as equation 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "y_i = exp(a_2,i) / \u2211_j exp(a_2,j) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "4.2" }, { "text": "This research used Keras on Python for all required implementations. The baseline dataset is divided into 3 splits, i.e. training, validation and testing. Initially the dataset is randomly divided into an 80-20 train-test split. The train split is further randomly divided into a 90-10 train-validation split, unlike the baseline implementation which splits 80-20 as train-validation. The results reported here are over the test split. We have experimented with various possible values of hyper-parameters and the best set of hyper-parameters is shown in Fig 2. As discussed earlier, the first part of the architecture is meant for character based input. Here a single sentence is considered to be a sequence of 200 characters. Characters beyond 200 are ignored for sentences having more than 200 characters. A sentence with fewer than 200 characters is zero padded. It is worth mentioning that we have considered space as a valid character input as well. For the second part of the network we have used word embeddings of different dimensions, for example 100, 200 and 300. 
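Putting Sections 4.2 and 5 together, a hedged Keras sketch of the ACCMT fusion model could look as follows. The layer sizes, convolution settings, global max-pooling after the joint attention, and feeding pre-computed 300-dimensional bilingual word vectors directly into the word branch are assumptions; the 200-character/40-word input lengths, the stacked LSTMs, the three self-attention layers (via the keras-self-attention package the paper cites), the two ReLU dense layers, the softmax output and the adamax + categorical cross-entropy setting of the best reported run come from the text.

```python
from keras.models import Model
from keras.layers import (Input, Embedding, Conv1D, MaxPooling1D, LSTM,
                          Dense, Concatenate, GlobalMaxPooling1D)
from keras_self_attention import SeqSelfAttention

MAX_CHARS, MAX_WORDS, WORD_DIM, UNITS = 200, 40, 300, 64   # UNITS is an assumption

# Character branch: sub-word features (equations 1-2).
char_in = Input(shape=(MAX_CHARS,), name="char_indices")
c = Embedding(input_dim=38, output_dim=64)(char_in)          # vocab/dim are assumptions
c = Conv1D(filters=UNITS, kernel_size=3, activation="tanh")(c)
c = MaxPooling1D(pool_size=2)(c)
c = LSTM(UNITS, return_sequences=True)(c)
c = LSTM(UNITS, return_sequences=True)(c)
c = SeqSelfAttention(attention_activation="sigmoid")(c)      # attention over character side

# Word branch: pre-computed bilingual word vectors fed as a sequence.
word_in = Input(shape=(MAX_WORDS, WORD_DIM), name="word_vectors")
w = LSTM(UNITS, return_sequences=True)(word_in)
w = LSTM(UNITS, return_sequences=True)(w)
w = SeqSelfAttention(attention_activation="sigmoid")(w)      # attention over word side

# Late fusion, a third attention layer over the joint features, then the classifier.
joint = Concatenate(axis=1)([c, w])
joint = SeqSelfAttention(attention_activation="sigmoid")(joint)
joint = GlobalMaxPooling1D()(joint)                          # pooling choice is an assumption
a1 = Dense(128, activation="relu")(joint)                    # equation (4)
a2 = Dense(64, activation="relu")(a1)
output = Dense(3, activation="softmax")(a2)                  # equation (5): Pos / Neu / Neg

model = Model([char_in, word_in], output, name="ACCMT_sketch")
model.compile(optimizer="adamax", loss="categorical_crossentropy", metrics=["accuracy"])
```

The RMSprop and focal-loss variants reported in the results table would only change the optimizer and loss passed to compile.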
However, the best accuracy was achieved with the 300-dimensional word embedding. While training the fastText word embedding, the 'minn' and 'maxn' parameters were set to 2 and 10 respectively. For word based input, a sentence of length 40 words is considered. A sentence with fewer than 40 words is padded with zero embeddings, whereas words beyond 40 are ignored if a sentence has more than 40 words. Also, we empirically found that having two stacked LSTM layers, similar to Prabhu (2016), works best.", "cite_spans": [ { "start": 1500, "end": 1513, "text": "Prabhu (2016)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 535, "end": 541, "text": "Fig 2.", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "We have conducted all experiments in the computing environment mentioned in the above section. In the same environment, the implementation of Prabhu (2016) attained a maximum accuracy of 66.29% across 5 different executions, whereas the best performance of ACCMT is 71.97%, exceeding the baseline performance by 5.68% in the same computing environment. To understand the impact of attention on the classification of code-mixed text, we have also experimented without attention. We removed the three attention layers from ACCMT and created a deep learning architecture which uses only the fusion of character and word features. This architecture showed a maximum of 69.845% accuracy on the same dataset. This implies that attention has improved accuracy by 2.125%. We also compared against Yenigalla (2018), which gave an accuracy of 64.3%. ", "cite_spans": [ { "start": 138, "end": 151, "text": "Prabhu (2016)", "ref_id": "BIBREF4" }, { "start": 783, "end": 799, "text": "Yenigalla (2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "6" }, { "text": "This paper presents an attention based deep learning architecture (ACCMT) which fuses character and word features to develop a robust classifier for code-mixed text. The proposed ACCMT architecture performs well on the Hi-En code-mixed dataset and outperforms the baseline accuracy. A major contribution of this paper is the technique for training word embeddings for code-mixed text. This technique is used for generating the word embedding for the Hindi-English code-mixed corpus required in this research work. The proposed technique is very easy to implement for other code-mixed languages as well and will be helpful for generating word embeddings for low-resource code-mixed languages, mainly Indian languages, e.g. Bengali, Tamil and Malayalam. This also opens up opportunities for research on other code-mixed languages. This work also shows the impact of attention on the classification of code-mixed text. Lal (2019) showed that the introduction of a feature network improved the accuracy significantly. 
The integration of such feature network in ACCMT is considered for future course of improvement for the on-going research.", "cite_spans": [ { "start": 941, "end": 951, "text": "Lal (2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://en.wikipedia.org/wiki/List_of_languages_by_number _of_native_speakers_in_India", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/sanskrit-coders/indic_transliteration 4 https://www.kaggle.com/pk13055/code-mixed-hindienglishdataset 5 https://en.wikipedia.org/wiki/Devanagari", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/mkocabas/focal-loss-keras", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Sentiment Analysis of Code-Mixed Indian Languages: An Overview of SAIL_Code-Mixed Shared Task@ ICON-2017", "authors": [ { "first": "B", "middle": [ "G" ], "last": "Patra", "suffix": "" }, { "first": "D", "middle": [], "last": "Das", "suffix": "" }, { "first": "A", "middle": [], "last": "Das", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.06745" ] }, "num": null, "urls": [], "raw_text": "Patra, B.G., Das, D. and Das, A., 2018. Sentiment Analysis of Code-Mixed Indian Languages: An Overview of SAIL_Code-Mixed Shared Task@ ICON-2017. arXiv preprint arXiv:1803.06745.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Unraveling the English-Bengali codemixing phenomenon", "authors": [ { "first": "Arunavha", "middle": [], "last": "Chanda", "suffix": "" }, { "first": "Dipankar", "middle": [], "last": "Das", "suffix": "" }, { "first": "Chandan", "middle": [], "last": "Mazumdar", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Second Workshop on Computational Approaches to Code Switching", "volume": "", "issue": "", "pages": "80--89", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chanda, Arunavha, Dipankar Das, and Chandan Mazumdar. Unraveling the English-Bengali code- mixing phenomenon. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pp. 80-89. 2016.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Hindi-English Code-Switching Corpus", "authors": [ { "first": "A", "middle": [], "last": "Dey", "suffix": "" }, { "first": "P", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2014, "venue": "LREC", "volume": "", "issue": "", "pages": "2410--2413", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dey, A. and Fung, P., 2014, May. A Hindi-English Code-Switching Corpus. In LREC (pp. 2410-2413)", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Code mixing: A challenge for language identification in the language of social media", "authors": [ { "first": "U", "middle": [], "last": "Barman", "suffix": "" }, { "first": "A", "middle": [], "last": "Das", "suffix": "" }, { "first": "J", "middle": [], "last": "Wagner", "suffix": "" }, { "first": "J", "middle": [], "last": "Foster", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the first workshop on computational approaches to code switching", "volume": "", "issue": "", "pages": "13--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barman, U., Das, A., Wagner, J. and Foster, J., 2014. 
Code mixing: A challenge for language identification in the language of social media. In Proceedings of the first workshop on computational approaches to code switching (pp. 13-23).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Towards sub-word level compositions for sentiment analysis of hindi-english code mixed text", "authors": [ { "first": "A", "middle": [], "last": "Prabhu", "suffix": "" }, { "first": "A", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "M", "middle": [], "last": "Shrivastava", "suffix": "" }, { "first": "V", "middle": [], "last": "Varma", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.00472" ] }, "num": null, "urls": [], "raw_text": "Prabhu, A., Joshi, A., Shrivastava, M. and Varma, V., 2016. Towards sub-word level compositions for sentiment analysis of hindi-english code mixed text. arXiv preprint arXiv:1611.00472.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Addressing unseen word problem in text classification", "authors": [ { "first": "P", "middle": [], "last": "Yenigalla", "suffix": "" }, { "first": "S", "middle": [], "last": "Kar", "suffix": "" }, { "first": "C", "middle": [], "last": "Singh", "suffix": "" }, { "first": "A", "middle": [], "last": "Nagar", "suffix": "" }, { "first": "G", "middle": [], "last": "Mathur", "suffix": "" } ], "year": 2018, "venue": "International Conference on Applications of Natural Language to Information Systems", "volume": "", "issue": "", "pages": "339--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yenigalla, P., Kar, S., Singh, C., Nagar, A., & Mathur, G. (2018, June). Addressing unseen word problem in text classification. In International Conference on Applications of Natural Language to Information Systems (pp. 339-351). Springer, Cham.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "De-Mixing Sentiment from Code-Mixed Text", "authors": [ { "first": "Y", "middle": [ "K" ], "last": "Lal", "suffix": "" }, { "first": "V", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "M", "middle": [], "last": "Dhar", "suffix": "" }, { "first": "M", "middle": [], "last": "Shrivastava", "suffix": "" }, { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics: Student Research Workshop", "volume": "", "issue": "", "pages": "371--377", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lal, Y.K., Kumar, V., Dhar, M., Shrivastava, M. and Koehn, P., 2019, July. De-Mixing Sentiment from Code-Mixed Text. In Proceedings of the 57th Conference of the Association for Computational Linguistics: Student Research Workshop (pp. 371- 377).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Unified Parallel Intent and Slot Prediction with Cross Fusion and Slot Masking", "authors": [ { "first": "A", "middle": [], "last": "Bhasin", "suffix": "" }, { "first": "B", "middle": [], "last": "Natarajan", "suffix": "" }, { "first": "G", "middle": [], "last": "Mathur", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Jeon", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Kim", "suffix": "" } ], "year": 2019, "venue": "International Conference on Applications of Natural Language to Information Systems", "volume": "", "issue": "", "pages": "277--285", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bhasin, A., Natarajan, B., Mathur, G., Jeon, J.H. and Kim, J.S., 2019, June. 
Unified Parallel Intent and Slot Prediction with Cross Fusion and Slot Masking. In International Conference on Applications of Natural Language to Information Systems (pp. 277-285). Springer, Cham.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Text normalization of code mix and sentiment analysis", "authors": [ { "first": "S", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "P", "middle": [ "Y K L" ], "last": "Srinivas", "suffix": "" }, { "first": "R", "middle": [ "C" ], "last": "Balabantaray", "suffix": "" } ], "year": 2015, "venue": "2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI)", "volume": "", "issue": "", "pages": "1468--1473", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharma, S., Srinivas, P. Y. K. L., & Balabantaray, R. C. (2015, August). Text normalization of code mix and sentiment analysis. In 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI) (pp. 1468-1473). IEEE.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Domain-specific sentiment analysis approaches for code-mixed social network data", "authors": [ { "first": "A", "middle": [], "last": "Pravalika", "suffix": "" }, { "first": "V", "middle": [], "last": "Oza", "suffix": "" }, { "first": "N", "middle": [ "P" ], "last": "Meghana", "suffix": "" }, { "first": "S", "middle": [ "S" ], "last": "Kamath", "suffix": "" } ], "year": 2017, "venue": "2017 8th International Conference on Computing, Communication and Networking Technologies (ICCCNT)", "volume": "", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pravalika, A., Oza, V., Meghana, N.P. and Kamath, S.S., 2017, July. Domain-specific sentiment analysis approaches for code-mixed social network data. In 2017 8th International Conference on Computing, Communication and Networking Technologies (ICCCNT) (pp. 1-6). IEEE.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Enriching word vectors with subword information", "authors": [ { "first": "P", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "E", "middle": [], "last": "Grave", "suffix": "" }, { "first": "A", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bojanowski, P., Grave, E., Joulin, A., & Mikolov, T. (2017). Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5, 135-146.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Character-aware neural language models", "authors": [ { "first": "Y", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Y", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "D", "middle": [], "last": "Sontag", "suffix": "" }, { "first": "A", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2016, "venue": "Thirtieth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim, Y., Jernite, Y., Sontag, D., & Rush, A. M. (2016, March). 
Character-aware neural language models In Thirtieth AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Document modeling with gated recurrent neural network for sentiment classification", "authors": [ { "first": "D", "middle": [], "last": "Tang", "suffix": "" }, { "first": "B", "middle": [], "last": "Qin", "suffix": "" }, { "first": "T", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "1422--1432", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tang, D., Qin, B., & Liu, T. (2015, September). Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 conference on empirical methods in natural language processing (pp. 1422-1432).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax", "authors": [ { "first": "S", "middle": [ "L" ], "last": "Smith", "suffix": "" }, { "first": "D", "middle": [ "H" ], "last": "Turban", "suffix": "" }, { "first": "S", "middle": [], "last": "Hamblin", "suffix": "" }, { "first": "N", "middle": [ "Y" ], "last": "Hammerla", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1702.03859" ] }, "num": null, "urls": [], "raw_text": "Smith, S.L., Turban, D.H., Hamblin, S. and Hammerla, N.Y., 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. arXiv preprint arXiv:1702.03859.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Advances in Pre-Training Distributed Word Representations", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "E", "middle": [], "last": "Grave", "suffix": "" }, { "first": "P", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "C", "middle": [], "last": "Puhrsch", "suffix": "" }, { "first": "A", "middle": [], "last": "Joulin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikolov, T., Grave, E., Bojanowski, P., Puhrsch, C. and Joulin, A., 2018, May. Advances in Pre- Training Distributed Word Representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Attention is all you need", "authors": [ { "first": "A", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "N", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "N", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "J", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "L", "middle": [], "last": "Jones", "suffix": "" }, { "first": "A", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "I", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141. and Polosukhin, I., 2017. Attention is all you need. 
In Advances in neural information processing systems (pp. 5998- 6008).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Focal loss for dense object detection", "authors": [ { "first": "T", "middle": [ "Y" ], "last": "Lin", "suffix": "" }, { "first": "P", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "R", "middle": [], "last": "Girshick", "suffix": "" }, { "first": "K", "middle": [], "last": "He", "suffix": "" }, { "first": "P", "middle": [], "last": "Doll\u00e1r", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE international conference on computer vision", "volume": "", "issue": "", "pages": "2980--2988", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, T.Y., Goyal, P., Girshick, R., He, K. and Doll\u00e1r, P., 2017. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision (pp. 2980-2988).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Focal Loss based Residual Convolutional Neural Network for Speech Emotion Recognition", "authors": [ { "first": "S", "middle": [], "last": "Tripathi", "suffix": "" }, { "first": "A", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "A", "middle": [], "last": "Ramesh", "suffix": "" }, { "first": "C", "middle": [], "last": "Singh", "suffix": "" }, { "first": "P", "middle": [], "last": "Yenigalla", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.05682" ] }, "num": null, "urls": [], "raw_text": "Tripathi, S., Kumar, A., Ramesh, A., Singh, C. and Yenigalla, P., 2019. Focal Loss based Residual Convolutional Neural Network for Speech Emotion Recognition. arXiv preprint arXiv:1906.05682.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Corpus Preparation for Hi-En Code-mixed Text in Roman Script.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Attention based deep learning architecture for Classification of Code-Mixed Text (ACCMT)", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "content": "
Example | Approx. meaning in English | Polarity
Sir yeh tho sirf aap hi kar sakte hai. Great sir | Sir, only you can do it. Great sir | Positive
Kuch nahi karoge tum india ke liye | You won't do anything for India | Negative
Humari sabhayata humari pehchaan ... | Our civilization is our identity | Neutral
Table 1: Examples from the Hi-En Code-Mixed dataset.
2 https://github.com/DrImpossible/Sub-word-LSTM
", "html": null, "type_str": "table", "text": "", "num": null }, "TABREF2": { "content": "
Experiments | Accuracy | F1
Yenigalla (2018) | 64.3% | 62.2
ACCMT (adamax + focal loss) | 70.10% | 68.1
ACCMT (RMSprop + categorical cross entropy) | 69.75% | 67.5
ACCMT (adamax + categorical cross entropy) | 71.97% | 70.93
ACCMT (RMSprop + focal loss) | 70.32% | 68.71
", "html": null, "type_str": "table", "text": "showed the accuracy and F1 score of all experiments.", "num": null }, "TABREF3": { "content": "", "html": null, "type_str": "table", "text": "Results of ACCMT on Hi-En Code Mixed dataset with different loss-function and initializers.", "num": null } } } }