Datasets:

Schema (one record per paper; field name, type, and observed value range):

Field                        Type      Observed range
bibtex_url                   string    length 41-53
proceedings                  string    length 38-50
bibtext                      string    length 528-3.02k
abstract                     string    length 17-2.35k
authors                      sequence  length 1-44
title                        string    length 18-190
id                           string    length 7-19
arxiv_id                     string    length 0-10
GitHub                       sequence  length 1-1
paper_page                   string    528 distinct values
n_linked_authors             int64     -1 to 15
upvotes                      int64     -1 to 77
num_comments                 int64     -1 to 10
n_authors                    int64     -1 to 52
Models                       sequence  length 0-100
Datasets                     sequence  length 0-15
Spaces                       sequence  length 0-46
paper_page_exists_pre_conf   int64     0 to 1
type                         string    2 distinct values
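The schema above describes one record per accepted paper. As a quick orientation, the sketch below shows how rows with these fields might be loaded and filtered with the `datasets` library; the dataset path is a placeholder assumption, not a confirmed repository id.

```python
# Minimal sketch of loading and filtering rows with the schema above.
# The dataset path is a placeholder (assumption); substitute the actual
# Hugging Face repo id or local file for this dump.
from datasets import load_dataset

ds = load_dataset("your-org/acl-2023-paper-metadata", split="train")  # hypothetical id

# Keep only rows whose paper page already existed before the conference
# and that link at least one artifact (model, dataset, or Space).
linked = ds.filter(
    lambda r: r["paper_page_exists_pre_conf"] == 1
    and (len(r["Models"]) + len(r["Datasets"]) + len(r["Spaces"])) > 0
)
print(len(ds), "rows total,", len(linked), "rows with pre-conference paper pages and artifacts")
if len(linked):
    print(linked[0]["title"], linked[0]["arxiv_id"])
```

Filtering on `paper_page_exists_pre_conf` together with the artifact lists is one natural way to separate papers that already had Hugging Face paper pages and linked resources before the conference.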
https://aclanthology.org/2023.banglalp-1.43.bib
https://aclanthology.org/2023.banglalp-1.43/
@inproceedings{tarannum-etal-2023-z, title = "{Z}-Index at {BLP}-2023 Task 2: A Comparative Study on Sentiment Analysis", author = "Tarannum, Prerona and Hasan, Md. Arid and Dey, Krishno and Noori, Sheak Rashed Haider", editor = "Alam, Firoj and Kar, Sudipta and Chowdhury, Shammur Absar and Sadeque, Farig and Amin, Ruhul", booktitle = "Proceedings of the First Workshop on Bangla Language Processing (BLP-2023)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.banglalp-1.43", doi = "10.18653/v1/2023.banglalp-1.43", pages = "324--330", abstract = "In this study, we report our participation in Task 2 of the BLP-2023 shared task. The main objective of this task is to determine the sentiment (Positive, Neutral, or Negative) of a given text. We first removed the URLs, hashtags, and other noises and then applied traditional and pretrained language models. We submitted multiple systems in the leaderboard and BanglaBERT with tokenized data provided thebest result and we ranked 5th position in the competition with an F1-micro score of 71.64. Our study also reports that the importance of tokenization is lessening in the realm of pretrained language models. In further experiments, our evaluation shows that BanglaBERT outperforms, and predicting the neutral class is still challenging for all the models.", }
In this study, we report our participation in Task 2 of the BLP-2023 shared task. The main objective of this task is to determine the sentiment (Positive, Neutral, or Negative) of a given text. We first removed the URLs, hashtags, and other noises and then applied traditional and pretrained language models. We submitted multiple systems in the leaderboard and BanglaBERT with tokenized data provided the best result and we ranked 5th position in the competition with an F1-micro score of 71.64. Our study also reports that the importance of tokenization is lessening in the realm of pretrained language models. In further experiments, our evaluation shows that BanglaBERT outperforms, and predicting the neutral class is still challenging for all the models.
[ "Tarannum, Prerona", "Hasan, Md. Arid", "Dey, Krishno", "Noori, Sheak Rashed Haider" ]
Z-Index at BLP-2023 Task 2: A Comparative Study on Sentiment Analysis
banglalp-1.43
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
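The Z-Index record above reports that fine-tuning BanglaBERT gave the team's best Task 2 result. The following is a minimal sketch of that general setup (a pretrained Bangla encoder fine-tuned for three-way sentiment); the `csebuetnlp/banglabert` checkpoint, toy examples, and hyperparameters are assumptions rather than the team's exact configuration.

```python
# Minimal sketch: fine-tuning a BanglaBERT-style encoder for 3-class sentiment.
# Model id, toy data, and hyperparameters are assumptions; the BLP-2023 Task 2
# data and the team's exact setup are not reproduced here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "csebuetnlp/banglabert"  # assumed checkpoint
LABELS = ["Negative", "Neutral", "Positive"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=len(LABELS))

texts = ["...", "..."]   # Bangla social-media posts (placeholder)
labels = [2, 0]          # gold indices into LABELS (placeholder)

enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
enc["labels"] = torch.tensor(labels)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):               # a few passes over the toy batch
    out = model(**enc)           # returns a loss when labels are passed
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    pred = model(**{k: v for k, v in enc.items() if k != "labels"}).logits.argmax(-1)
print([LABELS[i] for i in pred.tolist()])
```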
https://aclanthology.org/2023.banglalp-1.44.bib
https://aclanthology.org/2023.banglalp-1.44/
@inproceedings{das-etal-2023-team-error, title = "Team Error Point at {BLP}-2023 Task 2: A Comparative Exploration of Hybrid Deep Learning and Machine Learning Approach for Advanced Sentiment Analysis Techniques.", author = "Das, Rajesh and Yeiad, Kabid and Ajmain, Moshfiqur and Maowa, Jannatul and Islam, Mirajul and Khushbu, Sharun", editor = "Alam, Firoj and Kar, Sudipta and Chowdhury, Shammur Absar and Sadeque, Farig and Amin, Ruhul", booktitle = "Proceedings of the First Workshop on Bangla Language Processing (BLP-2023)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.banglalp-1.44", doi = "10.18653/v1/2023.banglalp-1.44", pages = "331--335", abstract = "This paper presents a thorough and extensive investigation into the diverse models and techniques utilized for sentiment analysis. What sets this research apart is the deliberate and purposeful incorporation of data augmentation techniques with the goal of improving the efficacy of sentiment analysis in the Bengali language. We systematically explore various approaches, including preprocessing techniques, advancedmodels like Long Short-Term Memory (LSTM) and LSTM-CNN (Convolutional Neural Network) Combine, and traditional machine learning models such as Logistic Regression, Decision Tree, Random Forest, Multi-Naive Bayes, Support Vector Machine, and Stochastic Gradient Descent. Our study highlights the substantial impact of data augmentation on enhancing model accuracy and understanding Bangla sentiment nuances. Additionally, we emphasize the LSTM model{'}s ability to capture long-range correlations in Bangla text. Our system scored 0.4129 and ranked 27th among the participants.", }
This paper presents a thorough and extensive investigation into the diverse models and techniques utilized for sentiment analysis. What sets this research apart is the deliberate and purposeful incorporation of data augmentation techniques with the goal of improving the efficacy of sentiment analysis in the Bengali language. We systematically explore various approaches, including preprocessing techniques, advanced models like Long Short-Term Memory (LSTM) and LSTM-CNN (Convolutional Neural Network) Combine, and traditional machine learning models such as Logistic Regression, Decision Tree, Random Forest, Multi-Naive Bayes, Support Vector Machine, and Stochastic Gradient Descent. Our study highlights the substantial impact of data augmentation on enhancing model accuracy and understanding Bangla sentiment nuances. Additionally, we emphasize the LSTM model{'}s ability to capture long-range correlations in Bangla text. Our system scored 0.4129 and ranked 27th among the participants.
[ "Das, Rajesh", "Yeiad, Kabid", "Ajmain, Moshfiqur", "Maowa, Jannatul", "Islam, Mirajul", "Khushbu, Sharun" ]
Team Error Point at BLP-2023 Task 2: A Comparative Exploration of Hybrid Deep Learning and Machine Learning Approach for Advanced Sentiment Analysis Techniques.
banglalp-1.44
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
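The Team Error Point record above mixes classical models with LSTM and LSTM-CNN hybrids. Below is a hedged sketch of one way such an LSTM-CNN text classifier can be assembled in PyTorch; all layer sizes and the overall wiring are illustrative assumptions, not the team's architecture.

```python
# Illustrative sketch of an LSTM-CNN hybrid text classifier (dimensions are assumptions).
import torch
import torch.nn as nn

class LSTMCNNClassifier(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=128, hidden=128, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        # 1-D convolution over the BiLSTM outputs, then global max pooling.
        self.conv = nn.Conv1d(2 * hidden, 128, kernel_size=3, padding=1)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        x = self.embed(token_ids)                 # (batch, seq_len, emb_dim)
        x, _ = self.lstm(x)                       # (batch, seq_len, 2*hidden)
        x = self.conv(x.transpose(1, 2)).relu()   # (batch, 128, seq_len)
        x = x.max(dim=2).values                   # global max pool over time
        return self.fc(x)                         # (batch, n_classes)

model = LSTMCNNClassifier()
logits = model(torch.randint(1, 30000, (4, 50)))  # 4 dummy sequences of length 50
print(logits.shape)                               # torch.Size([4, 3])
```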
https://aclanthology.org/2023.banglalp-1.45.bib
https://aclanthology.org/2023.banglalp-1.45/
@inproceedings{mukherjee-etal-2023-ufal-uld, title = "{UFAL}-{ULD} at {BLP}-2023 Task 2 Sentiment Classification in {B}angla Text", author = "Mukherjee, Sourabrata and Ojha, Atul Kr. and Du{\v{s}}ek, Ond{\v{r}}ej", editor = "Alam, Firoj and Kar, Sudipta and Chowdhury, Shammur Absar and Sadeque, Farig and Amin, Ruhul", booktitle = "Proceedings of the First Workshop on Bangla Language Processing (BLP-2023)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.banglalp-1.45", doi = "10.18653/v1/2023.banglalp-1.45", pages = "336--339", abstract = "In this paper, we present the UFAL-ULD team{'}s system for the BLP Shared Task 2: Sentiment Analysis of Bangla Social Media Posts. The Task 2 involves classifying text into Positive, Negative, or Neutral sentiments. As a part of this task, we conducted a series of experiments with several pre-trained sequence classification models {--} XLM-RoBERTa, BanglaBERT, Bangla BERT Base and Multilingual BERT. Among these, our best-performing model was based on the XLM-RoBERTa-base architecture, which outperforms baseline models. Our system was ranked 19th among the 30 teams that participated in the task.", }
In this paper, we present the UFAL-ULD team{'}s system for the BLP Shared Task 2: Sentiment Analysis of Bangla Social Media Posts. The Task 2 involves classifying text into Positive, Negative, or Neutral sentiments. As a part of this task, we conducted a series of experiments with several pre-trained sequence classification models {--} XLM-RoBERTa, BanglaBERT, Bangla BERT Base and Multilingual BERT. Among these, our best-performing model was based on the XLM-RoBERTa-base architecture, which outperforms baseline models. Our system was ranked 19th among the 30 teams that participated in the task.
[ "Mukherjee, Sourabrata", "Ojha, Atul Kr.", "Du{\\v{s}}ek, Ond{\\v{r}}ej" ]
UFAL-ULD at BLP-2023 Task 2 Sentiment Classification in Bangla Text
banglalp-1.45
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.banglalp-1.46.bib
https://aclanthology.org/2023.banglalp-1.46/
@inproceedings{tonmoy-2023-embeddings, title = "Embeddings at {BLP}-2023 Task 2: Optimizing Fine-Tuned Transformers with Cost-Sensitive Learning for Multiclass Sentiment Analysis", author = "Tonmoy, S.m Towhidul Islam", editor = "Alam, Firoj and Kar, Sudipta and Chowdhury, Shammur Absar and Sadeque, Farig and Amin, Ruhul", booktitle = "Proceedings of the First Workshop on Bangla Language Processing (BLP-2023)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.banglalp-1.46", doi = "10.18653/v1/2023.banglalp-1.46", pages = "340--346", abstract = "In this study, we address the task of Sentiment Analysis for Bangla Social Media Posts, introduced in first Workshop on Bangla Language Processing (CITATION). Our research encountered two significant challenges in the context of sentiment analysis. The first challenge involved extensive training times and memory constraints when we chose to employ oversampling techniques for addressing class imbalance in an attempt to enhance model performance. Conversely, when opting for undersampling, the training time was optimal, but this approach resulted in poor model performance. These challenges highlight the complex trade-offs involved in selecting sampling methods to address class imbalances in sentiment analysis tasks. We tackle these challenges through cost-sensitive approaches aimed at enhancing model performance. In our initial submission during the evaluation phase, we ranked 9th out of 30 participants with an F1-micro score of 0.7088 . Subsequently, through additional experimentation, we managed to elevate our F1-micro score to 0.7186 by leveraging the BanglaBERT-Large model in combination with the Self-adjusting Dice loss function. Our experiments highlight the effect in performance of the models achieved by modifying the loss function. Our experimental data and source code can be found here.", }
In this study, we address the task of Sentiment Analysis for Bangla Social Media Posts, introduced in the First Workshop on Bangla Language Processing (CITATION). Our research encountered two significant challenges in the context of sentiment analysis. The first challenge involved extensive training times and memory constraints when we chose to employ oversampling techniques for addressing class imbalance in an attempt to enhance model performance. Conversely, when opting for undersampling, the training time was optimal, but this approach resulted in poor model performance. These challenges highlight the complex trade-offs involved in selecting sampling methods to address class imbalances in sentiment analysis tasks. We tackle these challenges through cost-sensitive approaches aimed at enhancing model performance. In our initial submission during the evaluation phase, we ranked 9th out of 30 participants with an F1-micro score of 0.7088. Subsequently, through additional experimentation, we managed to elevate our F1-micro score to 0.7186 by leveraging the BanglaBERT-Large model in combination with the Self-adjusting Dice loss function. Our experiments highlight the effect on model performance achieved by modifying the loss function. Our experimental data and source code can be found here.
[ "Tonmoy, S.m Towhidul Islam" ]
Embeddings at BLP-2023 Task 2: Optimizing Fine-Tuned Transformers with Cost-Sensitive Learning for Multiclass Sentiment Analysis
banglalp-1.46
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
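The record above attributes the author's best score to BanglaBERT-Large combined with a self-adjusting Dice loss as a cost-sensitive alternative to resampling. The sketch below follows the general self-adjusting Dice formulation of Li et al. (2020); the exact variant and the `alpha`/`gamma` settings used in the paper are assumptions here.

```python
# Hedged sketch of a self-adjusting Dice loss for multiclass classification,
# following the general form in Li et al. (2020); alpha and gamma are
# illustrative and may not match the configuration in the paper above.
import torch
import torch.nn.functional as F

def self_adjusting_dice_loss(logits, targets, alpha=1.0, gamma=1.0):
    """logits: (batch, n_classes); targets: (batch,) gold class indices."""
    probs = F.softmax(logits, dim=-1)
    p_gold = probs.gather(1, targets.unsqueeze(1)).squeeze(1)   # prob of gold class
    weight = (1.0 - p_gold) ** alpha                            # down-weights easy examples
    dice = (2.0 * weight * p_gold + gamma) / (weight * p_gold + 1.0 + gamma)
    return (1.0 - dice).mean()

logits = torch.randn(8, 3, requires_grad=True)
targets = torch.randint(0, 3, (8,))
loss = self_adjusting_dice_loss(logits, targets)
loss.backward()
print(float(loss))
```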
https://aclanthology.org/2023.banglalp-1.47.bib
https://aclanthology.org/2023.banglalp-1.47/
@inproceedings{chakma-hasan-2023-lowresource, title = "{L}ow{R}esource at {BLP}-2023 Task 2: Leveraging {B}angla{B}ert for Low Resource Sentiment Analysis of {B}angla Language", author = "Chakma, Aunabil and Hasan, Masum", editor = "Alam, Firoj and Kar, Sudipta and Chowdhury, Shammur Absar and Sadeque, Farig and Amin, Ruhul", booktitle = "Proceedings of the First Workshop on Bangla Language Processing (BLP-2023)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.banglalp-1.47", doi = "10.18653/v1/2023.banglalp-1.47", pages = "347--353", abstract = "This paper describes the system of the LowResource Team for Task 2 of BLP-2023, which involves conducting sentiment analysis on a dataset composed of public posts and comments from diverse social media platforms. Our primary aim was to utilize BanglaBert, a BERT model pre-trained on a large Bangla corpus, using various strategies including fine-tuning, dropping random tokens, and using several external datasets. Our final model is an ensemble of the three best BanglaBert variations. Our system achieved overall 3rd in the Test Set among 30 participating teams with a score of 0.718. Additionally, we discuss the promising systems that didn{'}t perform well namely task-adaptive pertaining and paraphrasing using BanglaT5. Our training codes are publicly available at https://github.com/Aunabil4602/bnlp-workshop-task2-2023", }
This paper describes the system of the LowResource Team for Task 2 of BLP-2023, which involves conducting sentiment analysis on a dataset composed of public posts and comments from diverse social media platforms. Our primary aim was to utilize BanglaBert, a BERT model pre-trained on a large Bangla corpus, using various strategies including fine-tuning, dropping random tokens, and using several external datasets. Our final model is an ensemble of the three best BanglaBert variations. Our system achieved overall 3rd in the Test Set among 30 participating teams with a score of 0.718. Additionally, we discuss the promising systems that didn{'}t perform well, namely task-adaptive pretraining and paraphrasing using BanglaT5. Our training codes are publicly available at https://github.com/Aunabil4602/bnlp-workshop-task2-2023
[ "Chakma, Aunabil", "Hasan, Masum" ]
LowResource at BLP-2023 Task 2: Leveraging BanglaBert for Low Resource Sentiment Analysis of Bangla Language
banglalp-1.47
2311.12735
[ "https://github.com/aunabil4602/bnlp-workshop-task2-2023" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
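The LowResource record above mentions two ingredients that are easy to illustrate: randomly dropping tokens as augmentation and ensembling the best BanglaBERT variants. The sketch below shows a generic version of both; the 10% drop rate and plain logit averaging are assumptions, not the team's exact recipe.

```python
# Hedged sketch: random token dropping as augmentation, plus a simple
# logit-averaging ensemble of several fine-tuned classifiers. The drop rate
# and plain averaging are assumptions, not the team's exact recipe.
import random
import torch

def drop_random_tokens(tokens, p=0.1):
    """Remove each token with probability p (keep at least one token)."""
    kept = [t for t in tokens if random.random() >= p]
    return kept if kept else tokens[:1]

def ensemble_predict(models, encoded_batch):
    """Average the logits of several sequence-classification models
    (e.g. a few fine-tuned BanglaBERT variants) and take the argmax."""
    with torch.no_grad():
        logits = torch.stack([m(**encoded_batch).logits for m in models], dim=0)
    return logits.mean(dim=0).argmax(dim=-1)

# Real inputs would be tokenized Bangla posts; a placeholder is used here.
print(drop_random_tokens("this movie was really great".split(), p=0.2))
```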
https://aclanthology.org/2023.banglalp-1.48.bib
https://aclanthology.org/2023.banglalp-1.48/
@inproceedings{hasan-etal-2023-blp, title = "{BLP}-2023 Task 2: Sentiment Analysis", author = "Hasan, Md. Arid and Alam, Firoj and Anjum, Anika and Das, Shudipta and Anjum, Afiyat", editor = "Alam, Firoj and Kar, Sudipta and Chowdhury, Shammur Absar and Sadeque, Farig and Amin, Ruhul", booktitle = "Proceedings of the First Workshop on Bangla Language Processing (BLP-2023)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.banglalp-1.48", doi = "10.18653/v1/2023.banglalp-1.48", pages = "354--364", abstract = "We present an overview of the BLP Sentiment Shared Task, organized as part of the inaugural BLP 2023 workshop, co-located with EMNLP 2023. The task is defined as the detection of sentiment in a given piece of social media text. This task attracted interest from 71 participants, among whom 29 and 30 teams submitted systems during the development and evaluation phases, respectively. In total, participants submitted 597 runs. However, only 15 teams submitted system description papers. The range of approaches in the submitted systems spans from classical machine learning models, fine-tuning pre-trained models, to leveraging Large Language Model (LLMs) in zero- and few-shot settings. In this paper, we provide a detailed account of the task setup, including dataset development and evaluation setup. Additionally, we provide a succinct overview of the systems submitted by the participants. All datasets and evaluation scripts from the shared task have been made publicly available for the research community, to foster further research in this domain.", }
We present an overview of the BLP Sentiment Shared Task, organized as part of the inaugural BLP 2023 workshop, co-located with EMNLP 2023. The task is defined as the detection of sentiment in a given piece of social media text. This task attracted interest from 71 participants, among whom 29 and 30 teams submitted systems during the development and evaluation phases, respectively. In total, participants submitted 597 runs. However, only 15 teams submitted system description papers. The range of approaches in the submitted systems spans from classical machine learning models, fine-tuning pre-trained models, to leveraging Large Language Models (LLMs) in zero- and few-shot settings. In this paper, we provide a detailed account of the task setup, including dataset development and evaluation setup. Additionally, we provide a succinct overview of the systems submitted by the participants. All datasets and evaluation scripts from the shared task have been made publicly available for the research community, to foster further research in this domain.
[ "Hasan, Md. Arid", "Alam, Firoj", "Anjum, Anika", "Das, Shudipta", "Anjum, Afiyat" ]
BLP-2023 Task 2: Sentiment Analysis
banglalp-1.48
2310.16183
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.bigpicture-1.1.bib
https://aclanthology.org/2023.bigpicture-1.1/
@inproceedings{klinger-2023-event, title = "Where are We in Event-centric Emotion Analysis? Bridging Emotion Role Labeling and Appraisal-based Approaches", author = "Klinger, Roman", editor = "Elazar, Yanai and Ettinger, Allyson and Kassner, Nora and Ruder, Sebastian and A. Smith, Noah", booktitle = "Proceedings of the Big Picture Workshop", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.bigpicture-1.1", doi = "10.18653/v1/2023.bigpicture-1.1", pages = "1--17", abstract = "The term emotion analysis in text subsumes various natural language processing tasks which have in common the goal to enable computers to understand emotions. Most popular is emotion classification in which one or multiple emotions are assigned to a predefined textual unit. While such setting is appropriate for identifying the reader{'}s or author{'}s emotion, emotion role labeling adds the perspective of mentioned entities and extracts text spans that correspond to the emotion cause. The underlying emotion theories agree on one important point; that an emotion is caused by some internal or external event and comprises several subcomponents, including the subjective feeling and a cognitive evaluation. We therefore argue that emotions and events are related in two ways. (1) Emotions are events; and this perspective is the fundament in natural language processing for emotion role labeling. (2) Emotions are caused by events; a perspective that is made explicit with research how to incorporate psychological appraisal theories in NLP models to interpret events. These two research directions, role labeling and (event-focused) emotion classification, have by and large been tackled separately. In this paper, we contextualize both perspectives and discuss open research questions.", }
The term emotion analysis in text subsumes various natural language processing tasks which have in common the goal to enable computers to understand emotions. Most popular is emotion classification in which one or multiple emotions are assigned to a predefined textual unit. While such setting is appropriate for identifying the reader{'}s or author{'}s emotion, emotion role labeling adds the perspective of mentioned entities and extracts text spans that correspond to the emotion cause. The underlying emotion theories agree on one important point; that an emotion is caused by some internal or external event and comprises several subcomponents, including the subjective feeling and a cognitive evaluation. We therefore argue that emotions and events are related in two ways. (1) Emotions are events; and this perspective is the fundament in natural language processing for emotion role labeling. (2) Emotions are caused by events; a perspective that is made explicit with research how to incorporate psychological appraisal theories in NLP models to interpret events. These two research directions, role labeling and (event-focused) emotion classification, have by and large been tackled separately. In this paper, we contextualize both perspectives and discuss open research questions.
[ "Klinger, Roman" ]
Where are We in Event-centric Emotion Analysis? Bridging Emotion Role Labeling and Appraisal-based Approaches
bigpicture-1.1
2309.02092
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.bigpicture-1.2.bib
https://aclanthology.org/2023.bigpicture-1.2/
@inproceedings{hamalainen-etal-2023-working, title = "Working Towards Digital Documentation of {U}ralic Languages With Open-Source Tools and {M}odern {NLP} Methods", author = {H{\"a}m{\"a}l{\"a}inen, Mika and Rueter, Jack and Alnajjar, Khalid and Partanen, Niko}, editor = "Elazar, Yanai and Ettinger, Allyson and Kassner, Nora and Ruder, Sebastian and A. Smith, Noah", booktitle = "Proceedings of the Big Picture Workshop", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.bigpicture-1.2", doi = "10.18653/v1/2023.bigpicture-1.2", pages = "18--27", abstract = "We present our work towards building an infrastructure for documenting endangered languages with the focus on Uralic languages in particular. Our infrastructure consists of tools to write dictionaries so that entries are structured in XML format. These dictionaries are the foundation for rule-based NLP tools such as FSTs. We also work actively towards enhancing these dictionaries and tools by using the latest state-of-the-art neural models by generating training data through rules and lexica", }
We present our work towards building an infrastructure for documenting endangered languages with the focus on Uralic languages in particular. Our infrastructure consists of tools to write dictionaries so that entries are structured in XML format. These dictionaries are the foundation for rule-based NLP tools such as FSTs. We also work actively towards enhancing these dictionaries and tools by using the latest state-of-the-art neural models by generating training data through rules and lexica
[ "H{\\\"a}m{\\\"a}l{\\\"a}inen, Mika", "Rueter, Jack", "Alnajjar, Khalid", "Partanen, Niko" ]
Working Towards Digital Documentation of Uralic Languages With Open-Source Tools and Modern NLP Methods
bigpicture-1.2
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.bigpicture-1.3.bib
https://aclanthology.org/2023.bigpicture-1.3/
@inproceedings{piper-2023-computational, title = "Computational Narrative Understanding: A Big Picture Analysis", author = "Piper, Andrew", editor = "Elazar, Yanai and Ettinger, Allyson and Kassner, Nora and Ruder, Sebastian and A. Smith, Noah", booktitle = "Proceedings of the Big Picture Workshop", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.bigpicture-1.3", doi = "10.18653/v1/2023.bigpicture-1.3", pages = "28--39", abstract = "This paper provides an overview of outstanding major research goals for the field of computational narrative understanding. Storytelling is an essential human practice, one that provides a sense of personal meaning, shared sense of community, and individual enjoyment. A number of research domains have increasingly focused on storytelling as a key mechanism for explaining human behavior. Now is an opportune moment to provide a vision of the contributions that computational narrative understanding can make towards this collective endeavor and the challenges facing the field. In addition to providing an overview of the elements of narrative, this paper outlines three major lines of inquiry: understanding the multi-modality of narrative; the temporal patterning of narrative (narrative {``}shape{''}); and socio-cultural narrative schemas, i.e. collective narratives. The paper concludes with a call for more inter-disciplinary working groups and deeper investment in building cross-cultural and multi-modal narrative datasets.", }
This paper provides an overview of outstanding major research goals for the field of computational narrative understanding. Storytelling is an essential human practice, one that provides a sense of personal meaning, shared sense of community, and individual enjoyment. A number of research domains have increasingly focused on storytelling as a key mechanism for explaining human behavior. Now is an opportune moment to provide a vision of the contributions that computational narrative understanding can make towards this collective endeavor and the challenges facing the field. In addition to providing an overview of the elements of narrative, this paper outlines three major lines of inquiry: understanding the multi-modality of narrative; the temporal patterning of narrative (narrative {``}shape{''}); and socio-cultural narrative schemas, i.e. collective narratives. The paper concludes with a call for more inter-disciplinary working groups and deeper investment in building cross-cultural and multi-modal narrative datasets.
[ "Piper, Andrew" ]
Computational Narrative Understanding: A Big Picture Analysis
bigpicture-1.3
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.bigpicture-1.4.bib
https://aclanthology.org/2023.bigpicture-1.4/
@inproceedings{michael-2023-case, title = "The Case for Scalable, Data-Driven Theory: A Paradigm for Scientific Progress in {NLP}", author = "Michael, Julian", editor = "Elazar, Yanai and Ettinger, Allyson and Kassner, Nora and Ruder, Sebastian and A. Smith, Noah", booktitle = "Proceedings of the Big Picture Workshop", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.bigpicture-1.4", doi = "10.18653/v1/2023.bigpicture-1.4", pages = "40--52", abstract = "I propose a paradigm for scientific progress in NLP centered around developing scalable, data-driven theories of linguistic structure. The idea is to collect data in tightly scoped, carefully defined ways which allow for exhaustive annotation of behavioral phenomena of interest, and then use machine learning to construct explanatory theories of these phenomena which can form building blocks for intelligible AI systems. After laying some conceptual groundwork, I describe several investigations into data-driven theories of shallow semantic structure using Question-Answer driven Semantic Role Labeling (QA-SRL), a schema for annotating verbal predicate-argument relations using highly constrained question-answer pairs. While this only scratches the surface of the complex language behaviors of interest in AI, I outline principles for data collection and theoretical modeling which can inform future scientific progress. This note summarizes and draws heavily on my PhD thesis.", }
I propose a paradigm for scientific progress in NLP centered around developing scalable, data-driven theories of linguistic structure. The idea is to collect data in tightly scoped, carefully defined ways which allow for exhaustive annotation of behavioral phenomena of interest, and then use machine learning to construct explanatory theories of these phenomena which can form building blocks for intelligible AI systems. After laying some conceptual groundwork, I describe several investigations into data-driven theories of shallow semantic structure using Question-Answer driven Semantic Role Labeling (QA-SRL), a schema for annotating verbal predicate-argument relations using highly constrained question-answer pairs. While this only scratches the surface of the complex language behaviors of interest in AI, I outline principles for data collection and theoretical modeling which can inform future scientific progress. This note summarizes and draws heavily on my PhD thesis.
[ "Michael, Julian" ]
The Case for Scalable, Data-Driven Theory: A Paradigm for Scientific Progress in NLP
bigpicture-1.4
2312.00349
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.bigpicture-1.5.bib
https://aclanthology.org/2023.bigpicture-1.5/
@inproceedings{elsafoury-2023-thesis, title = "Thesis Distillation: Investigating The Impact of Bias in {NLP} Models on Hate Speech Detection", author = "Elsafoury, Fatma", editor = "Elazar, Yanai and Ettinger, Allyson and Kassner, Nora and Ruder, Sebastian and A. Smith, Noah", booktitle = "Proceedings of the Big Picture Workshop", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.bigpicture-1.5", doi = "10.18653/v1/2023.bigpicture-1.5", pages = "53--65", abstract = "This paper is a summary of the work done in my PhD thesis. Where I investigate the impact of bias in NLP models on the task of hate speech detection from three perspectives: explainability, offensive stereotyping bias, and fairness. Then, I discuss the main takeaways from my thesis and how they can benefit the broader NLP community. Finally, I discuss important future research directions. The findings of my thesis suggest that the bias in NLP models impacts the task of hate speech detection from all three perspectives. And that unless we start incorporating social sciences in studying bias in NLP models, we will not effectively overcome the current limitations of measuring and mitigating bias in NLP models.", }
This paper is a summary of the work done in my PhD thesis. Where I investigate the impact of bias in NLP models on the task of hate speech detection from three perspectives: explainability, offensive stereotyping bias, and fairness. Then, I discuss the main takeaways from my thesis and how they can benefit the broader NLP community. Finally, I discuss important future research directions. The findings of my thesis suggest that the bias in NLP models impacts the task of hate speech detection from all three perspectives. And that unless we start incorporating social sciences in studying bias in NLP models, we will not effectively overcome the current limitations of measuring and mitigating bias in NLP models.
[ "Elsafoury, Fatma" ]
Thesis Distillation: Investigating The Impact of Bias in NLP Models on Hate Speech Detection
bigpicture-1.5
2308.16549
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.bigpicture-1.6.bib
https://aclanthology.org/2023.bigpicture-1.6/
@inproceedings{dhole-2023-large, title = "Large Language Models as {S}ocio{T}echnical Systems", author = "Dhole, Kaustubh", editor = "Elazar, Yanai and Ettinger, Allyson and Kassner, Nora and Ruder, Sebastian and A. Smith, Noah", booktitle = "Proceedings of the Big Picture Workshop", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.bigpicture-1.6", doi = "10.18653/v1/2023.bigpicture-1.6", pages = "66--79", abstract = "The expectation of Large Language Models (LLMs) to solve various societal problems has ignored the larger socio-technical frame of reference under which they operate. From a socio-technical perspective, LLMs are necessary to look at separately from other ML models as they have radically different implications in society never witnessed before. In this article, we ground Selbst et al.(2019){'}s five abstraction traps {--} The Framing Trap, The Portability Trap, The Formalism Trap, The Ripple Effect Trap and the Solutionism Trap in the context of LLMs discussing the problems associated with the abstraction and fairness of LLMs. Through learnings from previous studies and examples, we discuss each trap that LLMs fall into, and propose ways to address the points of LLM failure by gauging them from a socio-technical lens. We believe the discussions would provide a broader perspective of looking at LLMs through a sociotechnical lens and our recommendations could serve as baselines to effectively demarcate responsibilities among the various technical and social stakeholders and inspire future LLM research.", }
The expectation of Large Language Models (LLMs) to solve various societal problems has ignored the larger socio-technical frame of reference under which they operate. From a socio-technical perspective, LLMs are necessary to look at separately from other ML models as they have radically different implications in society never witnessed before. In this article, we ground Selbst et al.(2019){'}s five abstraction traps {--} The Framing Trap, The Portability Trap, The Formalism Trap, The Ripple Effect Trap and the Solutionism Trap in the context of LLMs discussing the problems associated with the abstraction and fairness of LLMs. Through learnings from previous studies and examples, we discuss each trap that LLMs fall into, and propose ways to address the points of LLM failure by gauging them from a socio-technical lens. We believe the discussions would provide a broader perspective of looking at LLMs through a sociotechnical lens and our recommendations could serve as baselines to effectively demarcate responsibilities among the various technical and social stakeholders and inspire future LLM research.
[ "Dhole, Kaustubh" ]
Large Language Models as SocioTechnical Systems
bigpicture-1.6
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.bigpicture-1.7.bib
https://aclanthology.org/2023.bigpicture-1.7/
@inproceedings{maurya-desarkar-2023-towards, title = "Towards Low-resource Language Generation with Limited Supervision", author = "Maurya, Kaushal and Desarkar, Maunendra", editor = "Elazar, Yanai and Ettinger, Allyson and Kassner, Nora and Ruder, Sebastian and A. Smith, Noah", booktitle = "Proceedings of the Big Picture Workshop", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.bigpicture-1.7", doi = "10.18653/v1/2023.bigpicture-1.7", pages = "80--92", abstract = "We present a research narrative aimed at enabling language technology for multiple natural language generation (NLG) tasks in low-resource languages (LRLs). With approximately 7,000 languages spoken globally, many lack the resources required for model training. NLG applications for LRLs present two additional key challenges: (i) The training is more pronounced, and (ii) Zero-shot modeling is a viable research direction for scalability; however, generating zero-shot well-formed text in target LRLs is challenging. Addressing these concerns, this narrative introduces three promising research explorations that serve as a step toward enabling language technology for many LRLs. These approaches make effective use of transfer learning and limited supervision techniques for modeling. Evaluations were conducted mostly in the zero-shot setting, enabling scalability. This research narrative is an ongoing doctoral thesis.", }
We present a research narrative aimed at enabling language technology for multiple natural language generation (NLG) tasks in low-resource languages (LRLs). With approximately 7,000 languages spoken globally, many lack the resources required for model training. NLG applications for LRLs present two additional key challenges: (i) The training is more pronounced, and (ii) Zero-shot modeling is a viable research direction for scalability; however, generating zero-shot well-formed text in target LRLs is challenging. Addressing these concerns, this narrative introduces three promising research explorations that serve as a step toward enabling language technology for many LRLs. These approaches make effective use of transfer learning and limited supervision techniques for modeling. Evaluations were conducted mostly in the zero-shot setting, enabling scalability. This research narrative is an ongoing doctoral thesis.
[ "Maurya, Kaushal", "Desarkar, Maunendra" ]
Towards Low-resource Language Generation with Limited Supervision
bigpicture-1.7
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.bigpicture-1.8.bib
https://aclanthology.org/2023.bigpicture-1.8/
@inproceedings{henderson-etal-2023-transformers, title = "Transformers as Graph-to-Graph Models", author = "Henderson, James and Mohammadshahi, Alireza and Coman, Andrei and Miculicich, Lesly", editor = "Elazar, Yanai and Ettinger, Allyson and Kassner, Nora and Ruder, Sebastian and A. Smith, Noah", booktitle = "Proceedings of the Big Picture Workshop", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.bigpicture-1.8", doi = "10.18653/v1/2023.bigpicture-1.8", pages = "93--107", abstract = "We argue that Transformers are essentially graph-to-graph models, with sequences just being a special case. Attention weights are functionally equivalent to graph edges. Our Graph-to-Graph Transformer architecture makes this ability explicit, by inputting graph edges into the attention weight computations and predicting graph edges with attention-like functions, thereby integrating explicit graphs into the latent graphs learned by pretrained Transformers. Adding iterative graph refinement provides a joint embedding of input, output, and latent graphs, allowing non-autoregressive graph prediction to optimise the complete graph without any bespoke pipeline or decoding strategy. Empirical results show that this architecture achieves state-of-the-art accuracies for modelling a variety of linguistic structures, integrating very effectively with the latent linguistic representations learned by pretraining.", }
We argue that Transformers are essentially graph-to-graph models, with sequences just being a special case. Attention weights are functionally equivalent to graph edges. Our Graph-to-Graph Transformer architecture makes this ability explicit, by inputting graph edges into the attention weight computations and predicting graph edges with attention-like functions, thereby integrating explicit graphs into the latent graphs learned by pretrained Transformers. Adding iterative graph refinement provides a joint embedding of input, output, and latent graphs, allowing non-autoregressive graph prediction to optimise the complete graph without any bespoke pipeline or decoding strategy. Empirical results show that this architecture achieves state-of-the-art accuracies for modelling a variety of linguistic structures, integrating very effectively with the latent linguistic representations learned by pretraining.
[ "Henderson, James", "Mohammadshahi, Alireza", "Coman, Andrei", "Miculicich, Lesly" ]
Transformers as Graph-to-Graph Models
bigpicture-1.8
2310.17936
[ "https://github.com/idiap/g2g-transformer" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
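The Graph-to-Graph Transformer record above argues that attention weights behave like graph edges and that explicit edges can be fed into the attention computation. The sketch below illustrates that general idea with a single attention head whose scores receive an additive bias from edge-label embeddings; it is a simplified illustration, not the authors' architecture.

```python
# Simplified sketch: one attention head whose scores are biased by an embedding
# of the input graph's edge labels (0 = no edge). This illustrates "graph edges
# into the attention computation" in spirit, not the Graph-to-Graph Transformer.
import math
import torch
import torch.nn as nn

class EdgeBiasedAttention(nn.Module):
    def __init__(self, d_model=64, n_edge_labels=8):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.edge_bias = nn.Embedding(n_edge_labels, 1)  # scalar bias per edge label

    def forward(self, x, edge_labels):
        # x: (batch, n, d_model); edge_labels: (batch, n, n) integer labels
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(x.size(-1))
        scores = scores + self.edge_bias(edge_labels).squeeze(-1)
        return torch.softmax(scores, dim=-1) @ v

attn = EdgeBiasedAttention()
x = torch.randn(2, 5, 64)
edges = torch.randint(0, 8, (2, 5, 5))
print(attn(x, edges).shape)   # torch.Size([2, 5, 64])
```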
https://aclanthology.org/2023.bigpicture-1.9.bib
https://aclanthology.org/2023.bigpicture-1.9/
@inproceedings{bertsch-etal-2023-mbr, title = "It{'}s {MBR} All the Way Down: Modern Generation Techniques Through the Lens of Minimum {B}ayes Risk", author = "Bertsch, Amanda and Xie, Alex and Neubig, Graham and Gormley, Matthew", editor = "Elazar, Yanai and Ettinger, Allyson and Kassner, Nora and Ruder, Sebastian and A. Smith, Noah", booktitle = "Proceedings of the Big Picture Workshop", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.bigpicture-1.9", doi = "10.18653/v1/2023.bigpicture-1.9", pages = "108--122", abstract = "Minimum Bayes Risk (MBR) decoding is a method for choosing the outputs of a machine learning system based not on the output with the highest probability, but the output with the lowest risk (expected error) among multiple candidates. It is a simple but powerful method: for an additional cost at inference time, MBR provides reliable several-point improvements across metrics for a wide variety of tasks without any additional data or training. Despite this, MBR is not frequently applied in NLP works, and knowledge of the method itself is limited. We first provide an introduction to the method and the recent literature. We show that several recent methods that do not reference MBR can be written as special cases of MBR; this reformulation provides additional theoretical justification for the performance of these methods, explaining some results that were previously only empirical. We provide theoretical and empirical results about the effectiveness of various MBR variants and make concrete recommendations for the application of MBR in NLP models, including future directions in this area.", }
Minimum Bayes Risk (MBR) decoding is a method for choosing the outputs of a machine learning system based not on the output with the highest probability, but the output with the lowest risk (expected error) among multiple candidates. It is a simple but powerful method: for an additional cost at inference time, MBR provides reliable several-point improvements across metrics for a wide variety of tasks without any additional data or training. Despite this, MBR is not frequently applied in NLP works, and knowledge of the method itself is limited. We first provide an introduction to the method and the recent literature. We show that several recent methods that do not reference MBR can be written as special cases of MBR; this reformulation provides additional theoretical justification for the performance of these methods, explaining some results that were previously only empirical. We provide theoretical and empirical results about the effectiveness of various MBR variants and make concrete recommendations for the application of MBR in NLP models, including future directions in this area.
[ "Bertsch, Am", "a", "Xie, Alex", "Neubig, Graham", "Gormley, Matthew" ]
It's MBR All the Way Down: Modern Generation Techniques Through the Lens of Minimum Bayes Risk
bigpicture-1.9
2310.01387
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
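The MBR record above frames decoding as choosing the candidate with the lowest expected risk, i.e. the highest expected utility against the other candidates. The sketch below implements sampling-based MBR with a token-overlap F1 as a stand-in utility; in practice metrics such as BLEU or BERTScore are common choices, and the candidate pool would come from sampling a model.

```python
# Hedged sketch of sampling-based MBR decoding: among sampled candidates,
# pick the one with the highest average utility against all other samples.
# The token-overlap F1 utility is an illustrative stand-in for real metrics.
from collections import Counter

def token_f1(hyp, ref):
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    p, rec = overlap / sum(h.values()), overlap / sum(r.values())
    return 2 * p * rec / (p + rec)

def mbr_decode(candidates, utility=token_f1):
    """Return the candidate maximising expected utility over the sample pool."""
    def expected_utility(c):
        others = [o for o in candidates if o is not c]
        return sum(utility(c, o) for o in others) / max(len(others), 1)
    return max(candidates, key=expected_utility)

samples = ["the cat sat on the mat",
           "a cat sat on the mat",
           "the dog barked loudly"]
print(mbr_decode(samples))   # the consensus-like candidate wins
```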
https://aclanthology.org/2023.bigpicture-1.10.bib
https://aclanthology.org/2023.bigpicture-1.10/
@inproceedings{mosbach-2023-analyzing, title = "Analyzing Pre-trained and Fine-tuned Language Models", author = "Mosbach, Marius", editor = "Elazar, Yanai and Ettinger, Allyson and Kassner, Nora and Ruder, Sebastian and A. Smith, Noah", booktitle = "Proceedings of the Big Picture Workshop", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.bigpicture-1.10", doi = "10.18653/v1/2023.bigpicture-1.10", pages = "123--134", abstract = "Since the introduction of transformer-based language models in 2018, the current generation of natural language processing (NLP) models continues to demonstrate impressive capabilities on a variety of academic benchmarks and real-world applications. This progress is based on a simple but general pipeline which consists of pre-training neural language models on large quantities of text, followed by an adaptation step that fine-tunes the pre-trained model to perform a specific NLP task of interest. However, despite the impressive progress on academic benchmarks and the widespread deployment of pre-trained and fine-tuned language models in industry we still lack a fundamental understanding of how and why pre-trained and fine-tuned language models work as well as the individual steps of the pipeline that produce them. We makes several contributions towards improving our understanding of pre-trained and fine-tuned language models, ranging from analyzing the linguistic knowledge of pre-trained language models and how it is affected by fine-tuning, to a rigorous analysis of the fine-tuning process itself and how the choice of adaptation technique affects the generalization of models and thereby provide new insights about previously unexplained phenomena and the capabilities of pre-trained and fine-tuned language models.", }
Since the introduction of transformer-based language models in 2018, the current generation of natural language processing (NLP) models continues to demonstrate impressive capabilities on a variety of academic benchmarks and real-world applications. This progress is based on a simple but general pipeline which consists of pre-training neural language models on large quantities of text, followed by an adaptation step that fine-tunes the pre-trained model to perform a specific NLP task of interest. However, despite the impressive progress on academic benchmarks and the widespread deployment of pre-trained and fine-tuned language models in industry, we still lack a fundamental understanding of how and why pre-trained and fine-tuned language models work as well as the individual steps of the pipeline that produce them. We make several contributions towards improving our understanding of pre-trained and fine-tuned language models, ranging from analyzing the linguistic knowledge of pre-trained language models and how it is affected by fine-tuning, to a rigorous analysis of the fine-tuning process itself and how the choice of adaptation technique affects the generalization of models and thereby provide new insights about previously unexplained phenomena and the capabilities of pre-trained and fine-tuned language models.
[ "Mosbach, Marius" ]
Analyzing Pre-trained and Fine-tuned Language Models
bigpicture-1.10
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.1.bib
https://aclanthology.org/2023.blackboxnlp-1.1/
@inproceedings{colas-etal-2023-knowledge, title = "Knowledge-Grounded Natural Language Recommendation Explanation", author = "Colas, Anthony and Araki, Jun and Zhou, Zhengyu and Wang, Bingqing and Feng, Zhe", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.1", doi = "10.18653/v1/2023.blackboxnlp-1.1", pages = "1--15", abstract = "Explanations accompanying a recommendation can assist users in understanding the decision made by recommendation systems, which in turn increases a user{'}s confidence and trust in the system. Recently, research has focused on generating natural language explanations in a human-readable format. Thus far, the proposed approaches leverage item reviews written by users, which are often subjective, sparse in language, and unable to account for new items that have not been purchased or reviewed before. Instead, we aim to generate fact-grounded recommendation explanations that are objectively described with item features while implicitly considering a user{'}s preferences, based on the user{'}s purchase history. To achieve this, we propose a knowledge graph (KG) approach to natural language explainable recommendation. Our approach draws on user-item features through a novel collaborative filtering-based KG representation to produce fact-grounded, personalized explanations, while jointly learning user-item representations for recommendation scoring. Experimental results show that our approach consistently outperforms previous state-of-the-art models on natural language explainable recommendation metrics.", }
Explanations accompanying a recommendation can assist users in understanding the decision made by recommendation systems, which in turn increases a user{'}s confidence and trust in the system. Recently, research has focused on generating natural language explanations in a human-readable format. Thus far, the proposed approaches leverage item reviews written by users, which are often subjective, sparse in language, and unable to account for new items that have not been purchased or reviewed before. Instead, we aim to generate fact-grounded recommendation explanations that are objectively described with item features while implicitly considering a user{'}s preferences, based on the user{'}s purchase history. To achieve this, we propose a knowledge graph (KG) approach to natural language explainable recommendation. Our approach draws on user-item features through a novel collaborative filtering-based KG representation to produce fact-grounded, personalized explanations, while jointly learning user-item representations for recommendation scoring. Experimental results show that our approach consistently outperforms previous state-of-the-art models on natural language explainable recommendation metrics.
[ "Colas, Anthony", "Araki, Jun", "Zhou, Zhengyu", "Wang, Bingqing", "Feng, Zhe" ]
Knowledge-Grounded Natural Language Recommendation Explanation
blackboxnlp-1.1
2308.15813
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.2.bib
https://aclanthology.org/2023.blackboxnlp-1.2/
@inproceedings{nanda-etal-2023-emergent, title = "Emergent Linear Representations in World Models of Self-Supervised Sequence Models", author = "Nanda, Neel and Lee, Andrew and Wattenberg, Martin", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.2", doi = "10.18653/v1/2023.blackboxnlp-1.2", pages = "16--30", abstract = "How do sequence models represent their decision-making process? Prior work suggests that Othello-playing neural network learned nonlinear models of the board state (Li et al., 2023a). In this work, we provide evidence of a closely related linear representation of the board. In particular, we show that probing for {``}my colour{''} vs. {``}opponent{'}s colour{''} may be a simple yet powerful way to interpret the model{'}s internal state. This precise understanding of the internal representations allows us to control the model{'}s behaviour with simple vector arithmetic. Linear representations enable significant interpretability progress, which we demonstrate with further exploration of how the world model is computed.", }
How do sequence models represent their decision-making process? Prior work suggests that an Othello-playing neural network learned nonlinear models of the board state (Li et al., 2023a). In this work, we provide evidence of a closely related linear representation of the board. In particular, we show that probing for {``}my colour{''} vs. {``}opponent{'}s colour{''} may be a simple yet powerful way to interpret the model{'}s internal state. This precise understanding of the internal representations allows us to control the model{'}s behaviour with simple vector arithmetic. Linear representations enable significant interpretability progress, which we demonstrate with further exploration of how the world model is computed.
[ "N", "a, Neel", "Lee, Andrew", "Wattenberg, Martin" ]
Emergent Linear Representations in World Models of Self-Supervised Sequence Models
blackboxnlp-1.2
2309.00941
[ "https://github.com/ajyl/mech_int_othellogpt" ]
https://huggingface.co/papers/2309.00941
1
1
0
3
[]
[]
[]
1
Poster
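The record above probes OthelloGPT-style models for a linear "my colour vs. opponent's colour" representation. The sketch below shows the generic linear-probing recipe (fit a logistic-regression classifier on frozen hidden states); the random features stand in for real activations and are purely illustrative.

```python
# Hedged sketch of linear probing: train a logistic-regression classifier on
# frozen hidden states to test whether a property is linearly decodable.
# The random features stand in for real activations (e.g. OthelloGPT
# residual-stream states labelled "mine" vs. "opponent's" per board square).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(2000, 512))          # placeholder activations
direction = rng.normal(size=512)
labels = (hidden_states @ direction > 0).astype(int)  # a linearly encoded property

X_tr, X_te, y_tr, y_te = train_test_split(hidden_states, labels, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))     # high accuracy => linear structure
```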
https://aclanthology.org/2023.blackboxnlp-1.3.bib
https://aclanthology.org/2023.blackboxnlp-1.3/
@inproceedings{singh-etal-2023-explaining, title = "Explaining Data Patterns in Natural Language with Language Models", author = "Singh, Chandan and Morris, John X. and Aneja, Jyoti and Rush, Alexander and Gao, Jianfeng", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.3", doi = "10.18653/v1/2023.blackboxnlp-1.3", pages = "31--55", abstract = "Large language models (LLMs) have displayed an impressive ability to harness natural language to perform complex tasks. We explore whether we can leverage this ability to find and explain patterns in data. Specifically, given a pre-trained LLM and data examples, we apply interpretable autoprompting (iPrompt) to generate a natural language string explaining the data. iPrompt iteratively generates explanations with an LLM and reranks them based on their performance when used as a prompt. Experiments on a wide range of datasets, from synthetic mathematics to natural language understanding, show that iPrompt can yield meaningful insights by accurately finding dataset explanations that are human-interpretable. Moreover, iPrompt is reasonably efficient, as it does not require access to model gradients and works with relatively small models (e.g. {\textasciitilde}6 billion parameters rather than {\textgreater}=100 billion). Finally, experiments with scientific datasets show the potential for iPrompt to aid in scientific discovery.", }
Large language models (LLMs) have displayed an impressive ability to harness natural language to perform complex tasks. We explore whether we can leverage this ability to find and explain patterns in data. Specifically, given a pre-trained LLM and data examples, we apply interpretable autoprompting (iPrompt) to generate a natural language string explaining the data. iPrompt iteratively generates explanations with an LLM and reranks them based on their performance when used as a prompt. Experiments on a wide range of datasets, from synthetic mathematics to natural language understanding, show that iPrompt can yield meaningful insights by accurately finding dataset explanations that are human-interpretable. Moreover, iPrompt is reasonably efficient, as it does not require access to model gradients and works with relatively small models (e.g. {\textasciitilde}6 billion parameters rather than {\textgreater}=100 billion). Finally, experiments with scientific datasets show the potential for iPrompt to aid in scientific discovery.
[ "Singh, Ch", "an", "Morris, John X.", "Aneja, Jyoti", "Rush, Alex", "er", "Gao, Jianfeng" ]
Explaining Data Patterns in Natural Language with Language Models
blackboxnlp-1.3
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
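The iPrompt record above describes generating candidate explanations with an LLM and reranking them by how well they work as prompts. The sketch below covers only the reranking half, scoring each candidate by the log-likelihood a small causal LM assigns to the gold answers when the candidate is prepended; the toy task, GPT-2 scorer, and scoring details are assumptions, and the iterative generation loop is omitted.

```python
# Hedged sketch of the reranking half of an iPrompt-style loop: score each
# candidate explanation by the log-likelihood a causal LM assigns to the
# gold answers when the explanation is prepended as a prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

examples = [("2 + 3 =", " 5"), ("7 + 1 =", " 8")]           # toy (input, output) pairs
candidates = ["Add the two numbers.", "Subtract the numbers."]

def answer_logprob(prompt, x, y):
    prefix_ids = tok(prompt + "\n" + x, return_tensors="pt").input_ids
    answer_ids = tok(y, return_tensors="pt").input_ids
    full = torch.cat([prefix_ids, answer_ids], dim=1)
    with torch.no_grad():
        logprobs = lm(full).logits[:, :-1].log_softmax(-1)   # position t predicts token t+1
    # sum log-probabilities of the answer tokens only
    idx = torch.arange(prefix_ids.size(1) - 1, full.size(1) - 1)
    return logprobs[0, idx].gather(1, full[0, prefix_ids.size(1):].unsqueeze(1)).sum().item()

scores = {c: sum(answer_logprob(c, x, y) for x, y in examples) for c in candidates}
print(max(scores, key=scores.get))   # candidate with the highest total answer log-likelihood
```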
https://aclanthology.org/2023.blackboxnlp-1.4.bib
https://aclanthology.org/2023.blackboxnlp-1.4/
@inproceedings{gupta-2023-probing, title = "Probing Quantifier Comprehension in Large Language Models: Another Example of Inverse Scaling", author = "Gupta, Akshat", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.4", doi = "10.18653/v1/2023.blackboxnlp-1.4", pages = "56--64", abstract = "With their increasing size, large language models (LLMs) are becoming increasingly good at language understanding tasks. But even with high performance on specific downstream tasks, LLMs fail at simple linguistic tests for negation or quantifier understanding. Previous work on quantifier understanding in LLMs shows inverse scaling in understanding few-type quantifiers. In this paper, we question the claims of previous work and show that it is a result of inappropriate testing methodology. We also present alternate methods to measure quantifier comprehension in LLMs and show that LLMs are able to better understand the difference between the meaning of few-type and most-type quantifiers as their size increases, although they are not particularly good at it. We also observe inverse scaling for most-type quantifier understanding, which is contrary to human psycho-linguistic experiments and previous work, where the model{'}s understanding of most-type quantifiers gets worse as the model size increases. We do this evaluation on models ranging from 125M-175B parameters, which suggests that LLMs do not do as well as expected with quantifiers. We also discuss the possible reasons for this and the relevance of quantifier understanding in evaluating language understanding in LLMs.", }
With their increasing size, large language models (LLMs) are becoming increasingly good at language understanding tasks. But even with high performance on specific downstream tasks, LLMs fail at simple linguistic tests for negation or quantifier understanding. Previous work on quantifier understanding in LLMs shows inverse scaling in understanding few-type quantifiers. In this paper, we question the claims of previous work and show that it is a result of inappropriate testing methodology. We also present alternate methods to measure quantifier comprehension in LLMs and show that LLMs are able to better understand the difference between the meaning of few-type and most-type quantifiers as their size increases, although they are not particularly good at it. We also observe inverse scaling for most-type quantifier understanding, which is contrary to human psycho-linguistic experiments and previous work, where the model{'}s understanding of most-type quantifiers gets worse as the model size increases. We do this evaluation on models ranging from 125M-175B parameters, which suggests that LLMs do not do as well as expected with quantifiers. We also discuss the possible reasons for this and the relevance of quantifier understanding in evaluating language understanding in LLMs.
[ "Gupta, Akshat" ]
Probing Quantifier Comprehension in Large Language Models: Another Example of Inverse Scaling
blackboxnlp-1.4
2306.07384
[ "" ]
https://huggingface.co/papers/2306.07384
1
0
0
1
[]
[]
[]
1
Poster
https://aclanthology.org/2023.blackboxnlp-1.5.bib
https://aclanthology.org/2023.blackboxnlp-1.5/
@inproceedings{arnold-etal-2023-disentangling, title = "Disentangling the Linguistic Competence of Privacy-Preserving {BERT}", author = "Arnold, Stefan and Kemmerzell, Nils and Schreiner, Annika", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.5", doi = "10.18653/v1/2023.blackboxnlp-1.5", pages = "65--75", abstract = "Differential Privacy (DP) has been tailored to address the unique challenges of text-to-text privatization. However, text-to-text privatization is known for degrading the performance of language models when trained on perturbed text. Employing a series of interpretation techniques on the internal representations extracted from BERT trained on perturbed pre-text, we intend to disentangle at the linguistic level the distortion induced by differential privacy. Experimental results from a representational similarity analysis indicate that the overall similarity of internal representations is substantially reduced. Using probing tasks to unpack this dissimilarity, we find evidence that text-to-text privatization affects the linguistic competence across several formalisms, encoding localized properties of words while falling short at encoding the contextual relationships between spans of words.", }
Differential Privacy (DP) has been tailored to address the unique challenges of text-to-text privatization. However, text-to-text privatization is known for degrading the performance of language models when trained on perturbed text. Employing a series of interpretation techniques on the internal representations extracted from BERT trained on perturbed pre-text, we intend to disentangle at the linguistic level the distortion induced by differential privacy. Experimental results from a representational similarity analysis indicate that the overall similarity of internal representations is substantially reduced. Using probing tasks to unpack this dissimilarity, we find evidence that text-to-text privatization affects the linguistic competence across several formalisms, encoding localized properties of words while falling short at encoding the contextual relationships between spans of words.
[ "Arnold, Stefan", "Kemmerzell, Nils", "Schreiner, Annika" ]
Disentangling the Linguistic Competence of Privacy-Preserving BERT
blackboxnlp-1.5
2310.11363
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.6.bib
https://aclanthology.org/2023.blackboxnlp-1.6/
@inproceedings{chaffin-delaunay-2023-honey-tell, title = "{``}Honey, Tell Me What{'}s Wrong{''}, Global Explanation of Textual Discriminative Models through Cooperative Generation", author = "Chaffin, Antoine and Delaunay, Julien", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.6", doi = "10.18653/v1/2023.blackboxnlp-1.6", pages = "76--88", abstract = "The ubiquity of complex machine learning has raised the importance of model-agnostic explanation algorithms. These methods create artificial instances by slightly perturbing real instances, capturing shifts in model decisions. However, such methods rely on initial data and only provide explanations of the decision for these. To tackle these problems, we propose Therapy, the first global and model-agnostic explanation method adapted to text which requires no input dataset. Therapy generates texts following the distribution learned by a classifier through cooperative generation. Because it does not rely on initial samples, it allows to generate explanations even when data is absent (e.g., for confidentiality reasons). Moreover, conversely to existing methods that combine multiple local explanations into a global one, Therapy offers a global overview of the model behavior on the input space. Our experiments show that although using no input data to generate samples, Therapy provides insightful information about features used by the classifier that is competitive with the ones from methods relying on input samples and outperforms them when input samples are not specific to the studied model.", }
The ubiquity of complex machine learning has raised the importance of model-agnostic explanation algorithms. These methods create artificial instances by slightly perturbing real instances, capturing shifts in model decisions. However, such methods rely on initial data and only provide explanations of the decision for these. To tackle these problems, we propose Therapy, the first global and model-agnostic explanation method adapted to text which requires no input dataset. Therapy generates texts following the distribution learned by a classifier through cooperative generation. Because it does not rely on initial samples, it allows to generate explanations even when data is absent (e.g., for confidentiality reasons). Moreover, conversely to existing methods that combine multiple local explanations into a global one, Therapy offers a global overview of the model behavior on the input space. Our experiments show that although using no input data to generate samples, Therapy provides insightful information about features used by the classifier that is competitive with the ones from methods relying on input samples and outperforms them when input samples are not specific to the studied model.
[ "Chaffin, Antoine", "Delaunay, Julien" ]
“Honey, Tell Me What's Wrong”, Global Explanation of Textual Discriminative Models through Cooperative Generation
blackboxnlp-1.6
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.7.bib
https://aclanthology.org/2023.blackboxnlp-1.7/
@inproceedings{bartsch-etal-2023-self, title = "Self-Consistency of Large Language Models under Ambiguity", author = "Bartsch, Henning and Jorgensen, Ole and Rosati, Domenic and Hoelscher-Obermaier, Jason and Pfau, Jacob", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.7", doi = "10.18653/v1/2023.blackboxnlp-1.7", pages = "89--105", abstract = "Large language models (LLMs) that do not give consistent answers across contexts are problematic when used for tasks with expectations of consistency{--}e.g. question-answering, explanations, etc. Our work presents an evaluation benchmark for self-consistency in cases of under-specification where two or more answers can be correct. We conduct a series of behavioral experiments on the OpenAI model suite using an ambiguous integer sequence completion task. We find that average consistency ranges from 67{\%} to 82{\%}, far higher than would be predicted if a model{'}s consistency was random, and increases as model capability improves. Furthermore, we show that models tend to maintain self-consistency across a series of robustness checks, including prompting speaker changes and sequence length changes. These results suggest that self-consistency arises as an emergent capability without specifically training for it. Despite this, we find that models are uncalibrated when judging their own consistency, with models displaying both over- and under-confidence. We also propose a nonparametric test for determining from token output distribution whether a model assigns non-trivial probability to alternative answers. Using this test, we find that despite increases in self-consistency, models usually place significant weight on alternative, inconsistent answers. This distribution of probability mass provides evidence that even highly self-consistent models internally compute multiple possible responses.", }
Large language models (LLMs) that do not give consistent answers across contexts are problematic when used for tasks with expectations of consistency{--}e.g. question-answering, explanations, etc. Our work presents an evaluation benchmark for self-consistency in cases of under-specification where two or more answers can be correct. We conduct a series of behavioral experiments on the OpenAI model suite using an ambiguous integer sequence completion task. We find that average consistency ranges from 67{\%} to 82{\%}, far higher than would be predicted if a model{'}s consistency was random, and increases as model capability improves. Furthermore, we show that models tend to maintain self-consistency across a series of robustness checks, including prompting speaker changes and sequence length changes. These results suggest that self-consistency arises as an emergent capability without specifically training for it. Despite this, we find that models are uncalibrated when judging their own consistency, with models displaying both over- and under-confidence. We also propose a nonparametric test for determining from token output distribution whether a model assigns non-trivial probability to alternative answers. Using this test, we find that despite increases in self-consistency, models usually place significant weight on alternative, inconsistent answers. This distribution of probability mass provides evidence that even highly self-consistent models internally compute multiple possible responses.
[ "Bartsch, Henning", "Jorgensen, Ole", "Rosati, Domenic", "Hoelscher-Obermaier, Jason", "Pfau, Jacob" ]
Self-Consistency of Large Language Models under Ambiguity
blackboxnlp-1.7
2310.13439
[ "https://github.com/jacobpfau/introspective-self-consistency" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.8.bib
https://aclanthology.org/2023.blackboxnlp-1.8/
@inproceedings{sun-hewitt-2023-character, title = "Character-Level {C}hinese Backpack Language Models", author = "Sun, Hao and Hewitt, John", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.8", doi = "10.18653/v1/2023.blackboxnlp-1.8", pages = "106--119", abstract = "The Backpack is a Transformer alternative shown to improve interpretability in English language modeling by decomposing predictions into a weighted sum of token sense components. However, Backpacks{'} reliance on token-defined meaning raises questions as to their potential for languages other than English, a language for which subword tokenization provides a reasonable approximation for lexical items. In this work, we train, evaluate, interpret, and control Backpack language models in character-tokenized Chinese, in which words are often composed of many characters. We find that our (134M parameter) Chinese Backpack language model performs comparably to a (104M parameter) Transformer, and learns rich character-level meanings that log-additively compose to form word meanings. In SimLex-style lexical semantic evaluations, simple averages of Backpack character senses outperform input embeddings from a Transformer. We find that complex multi-character meanings are often formed by using the same per-character sense weights consistently across context. Exploring interpretability-through control, we show that we can localize a source of gender bias in our Backpacks to specific character senses and intervene to reduce the bias.", }
The Backpack is a Transformer alternative shown to improve interpretability in English language modeling by decomposing predictions into a weighted sum of token sense components. However, Backpacks{'} reliance on token-defined meaning raises questions as to their potential for languages other than English, a language for which subword tokenization provides a reasonable approximation for lexical items. In this work, we train, evaluate, interpret, and control Backpack language models in character-tokenized Chinese, in which words are often composed of many characters. We find that our (134M parameter) Chinese Backpack language model performs comparably to a (104M parameter) Transformer, and learns rich character-level meanings that log-additively compose to form word meanings. In SimLex-style lexical semantic evaluations, simple averages of Backpack character senses outperform input embeddings from a Transformer. We find that complex multi-character meanings are often formed by using the same per-character sense weights consistently across context. Exploring interpretability-through control, we show that we can localize a source of gender bias in our Backpacks to specific character senses and intervene to reduce the bias.
[ "Sun, Hao", "Hewitt, John" ]
Character-Level Chinese Backpack Language Models
blackboxnlp-1.8
2310.12751
[ "https://github.com/swordelucidator/nanobackpacklm" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.9.bib
https://aclanthology.org/2023.blackboxnlp-1.9/
@inproceedings{bhattacharya-bojar-2023-unveiling, title = "Unveiling Multilinguality in Transformer Models: Exploring Language Specificity in Feed-Forward Networks", author = "Bhattacharya, Sunit and Bojar, Ond{\v{r}}ej", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.9", doi = "10.18653/v1/2023.blackboxnlp-1.9", pages = "120--126", abstract = "Recent research suggests that the feed-forward module within Transformers can be viewed as a collection of key-value memories, where the keys learn to capture specific patterns from the input based on the training examples. The values then combine the output from the {`}memories{'} of the keys to generate predictions about the next token. This leads to an incremental process of prediction that gradually converges towards the final token choice near the output layers. This interesting perspective raises questions about how multilingual models might leverage this mechanism. Specifically, for autoregressive models trained on two or more languages, do all neurons (across layers) respond equally to all languages? No! Our hypothesis centers around the notion that during pre-training, certain model parameters learn strong language-specific features, while others learn more language-agnostic (shared across languages) features. To validate this, we conduct experiments utilizing parallel corpora of two languages that the model was initially pre-trained on. Our findings reveal that the layers closest to the network{'}s input or output tend to exhibit more language-specific behaviour compared to the layers in the middle.", }
Recent research suggests that the feed-forward module within Transformers can be viewed as a collection of key-value memories, where the keys learn to capture specific patterns from the input based on the training examples. The values then combine the output from the {`}memories{'} of the keys to generate predictions about the next token. This leads to an incremental process of prediction that gradually converges towards the final token choice near the output layers. This interesting perspective raises questions about how multilingual models might leverage this mechanism. Specifically, for autoregressive models trained on two or more languages, do all neurons (across layers) respond equally to all languages? No! Our hypothesis centers around the notion that during pre-training, certain model parameters learn strong language-specific features, while others learn more language-agnostic (shared across languages) features. To validate this, we conduct experiments utilizing parallel corpora of two languages that the model was initially pre-trained on. Our findings reveal that the layers closest to the network{'}s input or output tend to exhibit more language-specific behaviour compared to the layers in the middle.
[ "Bhattacharya, Sunit", "Bojar, Ond{\\v{r}}ej" ]
Unveiling Multilinguality in Transformer Models: Exploring Language Specificity in Feed-Forward Networks
blackboxnlp-1.9
2310.15552
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.10.bib
https://aclanthology.org/2023.blackboxnlp-1.10/
@inproceedings{mickus-vazquez-2023-bother, title = "Why Bother with Geometry? On the Relevance of Linear Decompositions of Transformer Embeddings", author = "Mickus, Timothee and V{\'a}zquez, Ra{\'u}l", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.10", doi = "10.18653/v1/2023.blackboxnlp-1.10", pages = "127--141", abstract = "A recent body of work has demonstrated that Transformer embeddings can be linearly decomposed into well-defined sums of factors, that can in turn be related to specific network inputs or components. There is however still a dearth of work studying whether these mathematical reformulations are empirically meaningful. In the present work, we study representations from machine-translation decoders using two of such embedding decomposition methods. Our results indicate that, while decomposition-derived indicators effectively correlate with model performance, variation across different runs suggests a more nuanced take on this question. The high variability of our measurements indicate that geometry reflects model-specific characteristics more than it does sentence-specific computations, and that similar training conditions do not guarantee similar vector spaces.", }
A recent body of work has demonstrated that Transformer embeddings can be linearly decomposed into well-defined sums of factors, that can in turn be related to specific network inputs or components. There is however still a dearth of work studying whether these mathematical reformulations are empirically meaningful. In the present work, we study representations from machine-translation decoders using two of such embedding decomposition methods. Our results indicate that, while decomposition-derived indicators effectively correlate with model performance, variation across different runs suggests a more nuanced take on this question. The high variability of our measurements indicate that geometry reflects model-specific characteristics more than it does sentence-specific computations, and that similar training conditions do not guarantee similar vector spaces.
[ "Mickus, Timothee", "V{\\'a}zquez, Ra{\\'u}l" ]
Why Bother with Geometry? On the Relevance of Linear Decompositions of Transformer Embeddings
blackboxnlp-1.10
2310.06977
[ "https://github.com/timotheemickus/seq2seq-splat" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.11.bib
https://aclanthology.org/2023.blackboxnlp-1.11/
@inproceedings{nikolaev-pado-2023-investigating, title = "Investigating Semantic Subspaces of Transformer Sentence Embeddings through Linear Structural Probing", author = "Nikolaev, Dmitry and Pad{\'o}, Sebastian", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.11", doi = "10.18653/v1/2023.blackboxnlp-1.11", pages = "142--154", abstract = "The question of what kinds of linguistic information are encoded in different layers of Transformer-based language models is of considerable interest for the NLP community. Existing work, however, has overwhelmingly focused on word-level representations and encoder-only language models with the masked-token training objective. In this paper, we present experiments with semantic structural probing, a method for studying sentence-level representations via finding a subspace of the embedding space that provides suitable task-specific pairwise distances between data-points. We apply our method to language models from different families (encoder-only, decoder-only, encoder-decoder) and of different sizes in the context of two tasks, semantic textual similarity and natural-language inference. We find that model families differ substantially in their performance and layer dynamics, but that the results are largely model-size invariant.", }
The question of what kinds of linguistic information are encoded in different layers of Transformer-based language models is of considerable interest for the NLP community. Existing work, however, has overwhelmingly focused on word-level representations and encoder-only language models with the masked-token training objective. In this paper, we present experiments with semantic structural probing, a method for studying sentence-level representations via finding a subspace of the embedding space that provides suitable task-specific pairwise distances between data-points. We apply our method to language models from different families (encoder-only, decoder-only, encoder-decoder) and of different sizes in the context of two tasks, semantic textual similarity and natural-language inference. We find that model families differ substantially in their performance and layer dynamics, but that the results are largely model-size invariant.
[ "Nikolaev, Dmitry", "Pad{\\'o}, Sebastian" ]
Investigating Semantic Subspaces of Transformer Sentence Embeddings through Linear Structural Probing
blackboxnlp-1.11
2310.11923
[ "https://github.com/macleginn/semantic-subspaces-code" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.12.bib
https://aclanthology.org/2023.blackboxnlp-1.12/
@inproceedings{tan-2023-causal, title = "Causal Abstraction for Chain-of-Thought Reasoning in Arithmetic Word Problems", author = "Tan, Juanhe (TJ)", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.12", doi = "10.18653/v1/2023.blackboxnlp-1.12", pages = "155--168", abstract = "Recent work suggests that large language models (LLMs) achieve higher accuracy on multi-step reasoning tasks when prompted to generate intermediate reasoning steps, or a chain of thought (CoT), before their final answer. However, it is unclear how exactly CoTs improve LLMs{'} accuracy, and in particular, if LLMs use their CoTs to reason to their final answers. This paper tries to answer this question with respect to arithmetic word problems, by (i) evaluating the correctness of LLMs{'} CoTs, and (ii) using causal abstraction to assess if the intermediate tokens produced as part of a CoT causally impact LLMs{'} final answers, in line with the reasoning described by the CoT. We find that for CoT-prompted LLMs, correct answers to arithmetic problems are highly correlated with correct CoTs, and that when LLMs produce correct CoTs, they realize to a fairly large extent the causal models suggested by their CoTs. Higher degrees of realization also seem associated with better overall accuracy on the arithmetic problems. These findings suggest that some CoT-prompted LLMs may do better on multi-step arithmetic reasoning at least partly because they use their CoTs to reason to their final answers. However, for some LLMs, other internal processes may also be involved.", }
Recent work suggests that large language models (LLMs) achieve higher accuracy on multi-step reasoning tasks when prompted to generate intermediate reasoning steps, or a chain of thought (CoT), before their final answer. However, it is unclear how exactly CoTs improve LLMs{'} accuracy, and in particular, if LLMs use their CoTs to reason to their final answers. This paper tries to answer this question with respect to arithmetic word problems, by (i) evaluating the correctness of LLMs{'} CoTs, and (ii) using causal abstraction to assess if the intermediate tokens produced as part of a CoT causally impact LLMs{'} final answers, in line with the reasoning described by the CoT. We find that for CoT-prompted LLMs, correct answers to arithmetic problems are highly correlated with correct CoTs, and that when LLMs produce correct CoTs, they realize to a fairly large extent the causal models suggested by their CoTs. Higher degrees of realization also seem associated with better overall accuracy on the arithmetic problems. These findings suggest that some CoT-prompted LLMs may do better on multi-step arithmetic reasoning at least partly because they use their CoTs to reason to their final answers. However, for some LLMs, other internal processes may also be involved.
[ "Tan, Juanhe (TJ)" ]
Causal Abstraction for Chain-of-Thought Reasoning in Arithmetic Word Problems
blackboxnlp-1.12
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.13.bib
https://aclanthology.org/2023.blackboxnlp-1.13/
@inproceedings{flechas-manrique-etal-2023-enhancing, title = "Enhancing Interpretability Using Human Similarity Judgements to Prune Word Embeddings", author = "Flechas Manrique, Natalia and Bao, Wanqian and Herbelot, Aurelie and Hasson, Uri", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.13", doi = "10.18653/v1/2023.blackboxnlp-1.13", pages = "169--179", abstract = "Interpretability methods in NLP aim to provide insights into the semantics underlying specific system architectures. Focusing on word embeddings, we present a supervised-learning method that, for a given domain (e.g., sports, professions), identifies a subset of model features that strongly improve prediction of human similarity judgments. We show this method keeps only 20-40{\%} of the original embeddings, for 8 independent semantic domains, and that it retains different feature sets across domains. We then present two approaches for interpreting the semantics of the retained features. The first obtains the scores of the domain words (co-hyponyms) on the first principal component of the retained embeddings, and extracts terms whose co-occurrence with the co-hyponyms tracks these scores{'} profile. This analysis reveals that humans differentiate e.g. sports based on how gender-inclusive and international they are. The second approach uses the retained sets as variables in a probing task that predicts values along 65 semantically annotated dimensions for a dataset of 535 words. The features retained for professions are best at predicting cognitive, emotional and social dimensions, whereas features retained for fruits or vegetables best predict the gustation (taste) dimension. We discuss implications for alignment between AI systems and human knowledge.", }
Interpretability methods in NLP aim to provide insights into the semantics underlying specific system architectures. Focusing on word embeddings, we present a supervised-learning method that, for a given domain (e.g., sports, professions), identifies a subset of model features that strongly improve prediction of human similarity judgments. We show this method keeps only 20-40{\%} of the original embeddings, for 8 independent semantic domains, and that it retains different feature sets across domains. We then present two approaches for interpreting the semantics of the retained features. The first obtains the scores of the domain words (co-hyponyms) on the first principal component of the retained embeddings, and extracts terms whose co-occurrence with the co-hyponyms tracks these scores{'} profile. This analysis reveals that humans differentiate e.g. sports based on how gender-inclusive and international they are. The second approach uses the retained sets as variables in a probing task that predicts values along 65 semantically annotated dimensions for a dataset of 535 words. The features retained for professions are best at predicting cognitive, emotional and social dimensions, whereas features retained for fruits or vegetables best predict the gustation (taste) dimension. We discuss implications for alignment between AI systems and human knowledge.
[ "Flechas Manrique, Natalia", "Bao, Wanqian", "Herbelot, Aurelie", "Hasson, Uri" ]
Enhancing Interpretability Using Human Similarity Judgements to Prune Word Embeddings
blackboxnlp-1.13
2310.10262
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.14.bib
https://aclanthology.org/2023.blackboxnlp-1.14/
@inproceedings{sieker-zarriess-2023-language, title = "When Your Language Model Cannot {E}ven Do Determiners Right: Probing for Anti-Presuppositions and the Maximize Presupposition! Principle", author = "Sieker, Judith and Zarrie{\ss}, Sina", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.14", doi = "10.18653/v1/2023.blackboxnlp-1.14", pages = "180--198", abstract = "The increasing interest in probing the linguistic capabilities of large language models (LLMs) has long reached the area of semantics and pragmatics, including the phenomenon of presuppositions. In this study, we investigate a phenomenon that, however, has not yet been investigated, i.e., the phenomenon of anti-presupposition and the principle that accounts for it, the Maximize Presupposition! principle (MP!). Through an experimental investigation using psycholinguistic data and four open-source BERT model variants, we explore how language models handle different anti-presuppositions and whether they apply the MP! principle in their predictions. Further, we examine whether fine-tuning with Natural Language Inference data impacts adherence to the MP! principle. Our findings reveal that LLMs tend to replicate context-based n-grams rather than follow the MP! principle, with fine-tuning not enhancing their adherence. Notably, our results further indicate a striking difficulty of LLMs to correctly predict determiners, in relatively simple linguistic contexts.", }
The increasing interest in probing the linguistic capabilities of large language models (LLMs) has long reached the area of semantics and pragmatics, including the phenomenon of presuppositions. In this study, we investigate a phenomenon that, however, has not yet been investigated, i.e., the phenomenon of anti-presupposition and the principle that accounts for it, the Maximize Presupposition! principle (MP!). Through an experimental investigation using psycholinguistic data and four open-source BERT model variants, we explore how language models handle different anti-presuppositions and whether they apply the MP! principle in their predictions. Further, we examine whether fine-tuning with Natural Language Inference data impacts adherence to the MP! principle. Our findings reveal that LLMs tend to replicate context-based n-grams rather than follow the MP! principle, with fine-tuning not enhancing their adherence. Notably, our results further indicate a striking difficulty of LLMs to correctly predict determiners, in relatively simple linguistic contexts.
[ "Sieker, Judith", "Zarrie{\\ss}, Sina" ]
When Your Language Model Cannot Even Do Determiners Right: Probing for Anti-Presuppositions and the Maximize Presupposition! Principle
blackboxnlp-1.14
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.15.bib
https://aclanthology.org/2023.blackboxnlp-1.15/
@inproceedings{groschwitz-2023-introducing, title = "Introducing {VULCAN}: A Visualization Tool for Understanding Our Models and Data by Example", author = "Groschwitz, Jonas", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.15", doi = "10.18653/v1/2023.blackboxnlp-1.15", pages = "199--211", abstract = "Examples are a powerful tool that help us understand complex concepts and connections. In computational linguistics research, looking at example system output and example corpus entries can offer a wealth of insights that are not otherwise accessible. This paper describes the open-source software VULCAN, a visualization tool for strings, graphs, trees, alignments, attention and more. VULCAN{'}s unique ability to visualize both linguistic structures and properties of neural models make it particularly relevant for neuro-symbolic models. Neuro-symbolic models, combining neural networks with often linguistically grounded structures, offer a promise of increased interpretability in an age of purely neural black-box end-to-end models. VULCAN aims to facilitate this interpretability in practice. VULCAN is designed to be both easy to use and powerful in its capabilities.", }
Examples are a powerful tool that help us understand complex concepts and connections. In computational linguistics research, looking at example system output and example corpus entries can offer a wealth of insights that are not otherwise accessible. This paper describes the open-source software VULCAN, a visualization tool for strings, graphs, trees, alignments, attention and more. VULCAN{'}s unique ability to visualize both linguistic structures and properties of neural models make it particularly relevant for neuro-symbolic models. Neuro-symbolic models, combining neural networks with often linguistically grounded structures, offer a promise of increased interpretability in an age of purely neural black-box end-to-end models. VULCAN aims to facilitate this interpretability in practice. VULCAN is designed to be both easy to use and powerful in its capabilities.
[ "Groschwitz, Jonas" ]
Introducing VULCAN: A Visualization Tool for Understanding Our Models and Data by Example
blackboxnlp-1.15
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.16.bib
https://aclanthology.org/2023.blackboxnlp-1.16/
@inproceedings{kletz-etal-2023-self, title = "The Self-Contained Negation Test Set", author = "Kletz, David and Amsili, Pascal and Candito, Marie", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.16", doi = "10.18653/v1/2023.blackboxnlp-1.16", pages = "212--221", abstract = "Several methodologies have recently been proposed to evaluate the ability of Pretrained Language Models (PLMs) to interpret negation. In this article, we build on Gubelmann and Handschuh (2022), which studies the modification of PLMs{'} predictions as a function of the polarity of inputs, in English. Crucially, this test uses {``}self-contained{''} inputs ending with a masked position: depending on the polarity of a verb in the input, a particular token is either semantically ruled out or allowed at the masked position. By replicating Gubelmann and Handschuh (2022) experiments, we have uncovered flaws that weaken the conclusions that can be drawn from this test. We thus propose an improved version, the Self-Contained Neg Test, which is more controlled, more systematic, and entirely based on examples forming minimal pairs varying only in the presence or absence of verbal negation in English. When applying our test to the roberta and bert base and large models, we show that only roberta-large shows trends that match the expectations, while bert-base is mostly insensitive to negation. For all the tested models though, in a significant number of test instances the top-1 prediction remains the token that is semantically forbidden by the context, which shows how much room for improvement remains for a proper treatment of the negation phenomenon.", }
Several methodologies have recently been proposed to evaluate the ability of Pretrained Language Models (PLMs) to interpret negation. In this article, we build on Gubelmann and Handschuh (2022), which studies the modification of PLMs{'} predictions as a function of the polarity of inputs, in English. Crucially, this test uses {``}self-contained{''} inputs ending with a masked position: depending on the polarity of a verb in the input, a particular token is either semantically ruled out or allowed at the masked position. By replicating Gubelmann and Handschuh (2022) experiments, we have uncovered flaws that weaken the conclusions that can be drawn from this test. We thus propose an improved version, the Self-Contained Neg Test, which is more controlled, more systematic, and entirely based on examples forming minimal pairs varying only in the presence or absence of verbal negation in English. When applying our test to the roberta and bert base and large models, we show that only roberta-large shows trends that match the expectations, while bert-base is mostly insensitive to negation. For all the tested models though, in a significant number of test instances the top-1 prediction remains the token that is semantically forbidden by the context, which shows how much room for improvement remains for a proper treatment of the negation phenomenon.
[ "Kletz, David", "Amsili, Pascal", "C", "ito, Marie" ]
The Self-Contained Negation Test Set
blackboxnlp-1.16
2408.11469
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.17.bib
https://aclanthology.org/2023.blackboxnlp-1.17/
@inproceedings{cong-etal-2023-investigating, title = "Investigating the Effect of Discourse Connectives on Transformer Surprisal: Language Models Understand Connectives, {E}ven So They Are Surprised", author = "Cong, Yan and Chersoni, Emmanuele and Hsu, Yu-Yin and Blache, Philippe", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.17", doi = "10.18653/v1/2023.blackboxnlp-1.17", pages = "222--232", abstract = "As neural language models (NLMs) based on Transformers are becoming increasingly dominant in natural language processing, several studies have proposed analyzing the semantic and pragmatic abilities of such models. In our study, we aimed at investigating the effect of discourse connectives on NLMs with regard to Transformer Surprisal scores by focusing on the English stimuli of an experimental dataset, in which the expectations about an event in a discourse fragment could be reversed by a concessive or a contrastive connective. By comparing the Surprisal scores of several NLMs, we found that bigger NLMs show patterns similar to humans{'} behavioral data when a concessive connective is used, while connective-related effects tend to disappear with a contrastive one. We have additionally validated our findings with GPT-Neo using an extended dataset, and results mostly show a consistent pattern.", }
As neural language models (NLMs) based on Transformers are becoming increasingly dominant in natural language processing, several studies have proposed analyzing the semantic and pragmatic abilities of such models. In our study, we aimed at investigating the effect of discourse connectives on NLMs with regard to Transformer Surprisal scores by focusing on the English stimuli of an experimental dataset, in which the expectations about an event in a discourse fragment could be reversed by a concessive or a contrastive connective. By comparing the Surprisal scores of several NLMs, we found that bigger NLMs show patterns similar to humans{'} behavioral data when a concessive connective is used, while connective-related effects tend to disappear with a contrastive one. We have additionally validated our findings with GPT-Neo using an extended dataset, and results mostly show a consistent pattern.
[ "Cong, Yan", "Chersoni, Emmanuele", "Hsu, Yu-Yin", "Blache, Philippe" ]
Investigating the Effect of Discourse Connectives on Transformer Surprisal: Language Models Understand Connectives, Even So They Are Surprised
blackboxnlp-1.17
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.18.bib
https://aclanthology.org/2023.blackboxnlp-1.18/
@inproceedings{zhou-srikumar-2023-metaprobe, title = "{METAPROBE}: A Representation- and Task-Agnostic Probe", author = "Zhou, Yichu and Srikumar, Vivek", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.18", doi = "10.18653/v1/2023.blackboxnlp-1.18", pages = "233--249", abstract = "Probing contextualized representations typically involves comparing task-specific model predictions against ground truth linguistic labels. Although this methodology shows \textit{what} information can be recovered by a classifier, it does not reveal \textit{how} a classifier uses the representation to make its decision. To address the latter problem, we ask: Do task-classifiers rely on representation- and task-independent geometric patterns in the embedding space? We explore this question by developing MetaProbe, an approach that uses geometric properties of representations to predict the behavior of task-specific classifiers (i.e., their predictions as opposed to the ground truth). Our experiments reveal the existence of universal geometric patterns across representations that can predict classifier predictions. Consequently, this allows us to posit a geometric explanation for the impressive performance of contextualized representations.", }
Probing contextualized representations typically involves comparing task-specific model predictions against ground truth linguistic labels. Although this methodology shows \textit{what} information can be recovered by a classifier, it does not reveal \textit{how} a classifier uses the representation to make its decision. To address the latter problem, we ask: Do task-classifiers rely on representation- and task-independent geometric patterns in the embedding space? We explore this question by developing MetaProbe, an approach that uses geometric properties of representations to predict the behavior of task-specific classifiers (i.e., their predictions as opposed to the ground truth). Our experiments reveal the existence of universal geometric patterns across representations that can predict classifier predictions. Consequently, this allows us to posit a geometric explanation for the impressive performance of contextualized representations.
[ "Zhou, Yichu", "Srikumar, Vivek" ]
METAPROBE: A Representation- and Task-Agnostic Probe
blackboxnlp-1.18
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.19.bib
https://aclanthology.org/2023.blackboxnlp-1.19/
@inproceedings{johnson-marasovic-2023-much, title = "How Much Consistency Is Your Accuracy Worth?", author = "Johnson, Jacob K. and Marasovi{\'c}, Ana", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.19", doi = "10.18653/v1/2023.blackboxnlp-1.19", pages = "250--260", abstract = "Contrast set consistency is a robustness measurement that evaluates the rate at which a model correctly responds to all instances in a bundle of minimally different examples relying on the same knowledge. To draw additional insights, we propose to complement consistency with relative consistency{---}the probability that an equally accurate model would surpass the consistency of the proposed model, given a distribution over possible consistencies. Models with 100{\%} relative consistency have reached a consistency peak for their accuracy. We reflect on prior work that reports consistency in contrast sets and observe that relative consistency can alter the assessment of a model{'}s consistency compared to another. We anticipate that our proposed measurement and insights will influence future studies aiming to promote consistent behavior in models.", }
Contrast set consistency is a robustness measurement that evaluates the rate at which a model correctly responds to all instances in a bundle of minimally different examples relying on the same knowledge. To draw additional insights, we propose to complement consistency with relative consistency{---}the probability that an equally accurate model would surpass the consistency of the proposed model, given a distribution over possible consistencies. Models with 100{\%} relative consistency have reached a consistency peak for their accuracy. We reflect on prior work that reports consistency in contrast sets and observe that relative consistency can alter the assessment of a model{'}s consistency compared to another. We anticipate that our proposed measurement and insights will influence future studies aiming to promote consistent behavior in models.
[ "Johnson, Jacob K.", "Marasovi{\\'c}, Ana" ]
How Much Consistency Is Your Accuracy Worth?
blackboxnlp-1.19
2310.13781
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.20.bib
https://aclanthology.org/2023.blackboxnlp-1.20/
@inproceedings{baeumel-etal-2023-investigating, title = "Investigating the Encoding of Words in {BERT}{'}s Neurons Using Feature Textualization", author = "Baeumel, Tanja and Vijayakumar, Soniya and van Genabith, Josef and Neumann, Guenter and Ostermann, Simon", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.20", doi = "10.18653/v1/2023.blackboxnlp-1.20", pages = "261--270", abstract = "Pretrained language models (PLMs) form the basis of most state-of-the-art NLP technologies. Nevertheless, they are essentially black boxes: Humans do not have a clear understanding of what knowledge is encoded in different parts of the models, especially in individual neurons. A contrast is in computer vision, where feature visualization provides a decompositional interpretability technique for neurons of vision models. Activation maximization is used to synthesize inherently interpretable visual representations of the information encoded in individual neurons. Our work is inspired by this but presents a cautionary tale on the interpretability of single neurons, based on the first large-scale attempt to adapt activation maximization to NLP, and, more specifically, large PLMs. We propose feature textualization, a technique to produce dense representations of neurons in the PLM word embedding space. We apply feature textualization to the BERT model to investigate whether the knowledge encoded in individual neurons can be interpreted and symbolized. We find that the produced representations can provide insights about the knowledge encoded in individual neurons, but that individual neurons do not represent clear-cut symbolic units of language such as words. Additionally, we use feature textualization to investigate how many neurons are needed to encode words in BERT.", }
Pretrained language models (PLMs) form the basis of most state-of-the-art NLP technologies. Nevertheless, they are essentially black boxes: Humans do not have a clear understanding of what knowledge is encoded in different parts of the models, especially in individual neurons. A contrast is in computer vision, where feature visualization provides a decompositional interpretability technique for neurons of vision models. Activation maximization is used to synthesize inherently interpretable visual representations of the information encoded in individual neurons. Our work is inspired by this but presents a cautionary tale on the interpretability of single neurons, based on the first large-scale attempt to adapt activation maximization to NLP, and, more specifically, large PLMs. We propose feature textualization, a technique to produce dense representations of neurons in the PLM word embedding space. We apply feature textualization to the BERT model to investigate whether the knowledge encoded in individual neurons can be interpreted and symbolized. We find that the produced representations can provide insights about the knowledge encoded in individual neurons, but that individual neurons do not represent clear-cut symbolic units of language such as words. Additionally, we use feature textualization to investigate how many neurons are needed to encode words in BERT.
[ "Baeumel, Tanja", "Vijayakumar, Soniya", "van Genabith, Josef", "Neumann, Guenter", "Ostermann, Simon" ]
Investigating the Encoding of Words in BERT's Neurons Using Feature Textualization
blackboxnlp-1.20
2311.08240
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.21.bib
https://aclanthology.org/2023.blackboxnlp-1.21/
@inproceedings{wang-steinert-threlkeld-2023-evaluating, title = "Evaluating Transformer{'}s Ability to Learn Mildly Context-Sensitive Languages", author = "Wang, Shunjie and Steinert-Threlkeld, Shane", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.21", doi = "10.18653/v1/2023.blackboxnlp-1.21", pages = "271--283", abstract = "Despite the fact that Transformers perform well in NLP tasks, recent studies suggest that self-attention is theoretically limited in learning even some regular and context-free languages. These findings motivated us to think about their implications in modeling natural language, which is hypothesized to be mildly context-sensitive. We test the Transformer{'}s ability to learn mildly context-sensitive languages of varying complexities, and find that they generalize well to unseen in-distribution data, but their ability to extrapolate to longer strings is worse than that of LSTMs. Our analyses show that the learned self-attention patterns and representations modeled dependency relations and demonstrated counting behavior, which may have helped the models solve the languages.", }
Despite the fact that Transformers perform well in NLP tasks, recent studies suggest that self-attention is theoretically limited in learning even some regular and context-free languages. These findings motivated us to think about their implications in modeling natural language, which is hypothesized to be mildly context-sensitive. We test the Transformer{'}s ability to learn mildly context-sensitive languages of varying complexities, and find that they generalize well to unseen in-distribution data, but their ability to extrapolate to longer strings is worse than that of LSTMs. Our analyses show that the learned self-attention patterns and representations modeled dependency relations and demonstrated counting behavior, which may have helped the models solve the languages.
[ "Wang, Shunjie", "Steinert-Threlkeld, Shane" ]
Evaluating Transformer's Ability to Learn Mildly Context-Sensitive Languages
blackboxnlp-1.21
2309.00857
[ "" ]
https://huggingface.co/papers/2309.00857
2
1
0
2
[]
[]
[]
1
Poster
https://aclanthology.org/2023.blackboxnlp-1.22.bib
https://aclanthology.org/2023.blackboxnlp-1.22/
@inproceedings{prakash-lee-2023-layered, title = "Layered Bias: Interpreting Bias in Pretrained Large Language Models", author = "Prakash, Nirmalendu and Lee, Roy Ka-Wei", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.22", doi = "10.18653/v1/2023.blackboxnlp-1.22", pages = "284--295", abstract = "Large language models (LLMs) like GPT and PALM have excelled in numerous natural language processing (NLP) tasks such as text generation, question answering, and translation. However, they are also found to have inherent social biases. To address this, recent studies have proposed debiasing techniques like iterative nullspace projection (INLP) and Counterfactual Data Augmentation (CDA). Additionally, there{'}s growing interest in understanding the intricacies of these models. Some researchers focus on individual neural units, while others examine specific layers. In our study, we benchmark newly released models, assess the impact of debiasing methods, and investigate how biases are linked to different transformer layers using a method called Logit Lens. Specifically, we evaluate three modern LLMs: OPT, LLaMA, and LLaMA2, and their debiased versions. Our experiments are based on two popular bias evaluation datasets, StereoSet and CrowS-Pairs, and we perform a layer-by-layer analysis using the Logit Lens.", }
Large language models (LLMs) like GPT and PALM have excelled in numerous natural language processing (NLP) tasks such as text generation, question answering, and translation. However, they are also found to have inherent social biases. To address this, recent studies have proposed debiasing techniques like iterative nullspace projection (INLP) and Counterfactual Data Augmentation (CDA). Additionally, there{'}s growing interest in understanding the intricacies of these models. Some researchers focus on individual neural units, while others examine specific layers. In our study, we benchmark newly released models, assess the impact of debiasing methods, and investigate how biases are linked to different transformer layers using a method called Logit Lens. Specifically, we evaluate three modern LLMs: OPT, LLaMA, and LLaMA2, and their debiased versions. Our experiments are based on two popular bias evaluation datasets, StereoSet and CrowS-Pairs, and we perform a layer-by-layer analysis using the Logit Lens.
[ "Prakash, Nirmalendu", "Lee, Roy Ka-Wei" ]
Layered Bias: Interpreting Bias in Pretrained Large Language Models
blackboxnlp-1.22
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.23.bib
https://aclanthology.org/2023.blackboxnlp-1.23/
@inproceedings{lorge-pierrehumbert-2023-wacky, title = "Not Wacky vs. Definitely Wacky: A Study of Scalar Adverbs in Pretrained Language Models", author = "Lorge, Isabelle and Pierrehumbert, Janet B.", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.23", doi = "10.18653/v1/2023.blackboxnlp-1.23", pages = "296--316", abstract = "Vector-space models of word meaning all assume that words occurring in similar contexts have similar meanings. Words that are similar in their topical associations but differ in their logical force tend to emerge as semantically close {--} creating well-known challenges for NLP applications that involve logical reasoning. Pretrained language models such as BERT, RoBERTa, GPT-2, and GPT-3 hold the promise of performing better on logical tasks than classic static word embeddings. However, reports are mixed about their success. Here, we advance this discussion through a systematic study of scalar adverbs, an under-explored class of words with strong logical force. Using three different tasks involving both naturalistic social media data and constructed examples, we investigate the extent to which BERT, RoBERTa, GPT-2 and GPT-3 exhibit knowledge of these common words. We ask: 1) Do the models distinguish amongst the three semantic categories of MODALITY, FREQUENCY and DEGREE? 2) Do they have implicit representations of full scales from maximally negative to maximally positive? 3) How do word frequency and contextual factors impact model performance? We find that despite capturing some aspects of logical meaning, the models still have obvious shortfalls.", }
Vector-space models of word meaning all assume that words occurring in similar contexts have similar meanings. Words that are similar in their topical associations but differ in their logical force tend to emerge as semantically close {--} creating well-known challenges for NLP applications that involve logical reasoning. Pretrained language models such as BERT, RoBERTa, GPT-2, and GPT-3 hold the promise of performing better on logical tasks than classic static word embeddings. However, reports are mixed about their success. Here, we advance this discussion through a systematic study of scalar adverbs, an under-explored class of words with strong logical force. Using three different tasks involving both naturalistic social media data and constructed examples, we investigate the extent to which BERT, RoBERTa, GPT-2 and GPT-3 exhibit knowledge of these common words. We ask: 1) Do the models distinguish amongst the three semantic categories of MODALITY, FREQUENCY and DEGREE? 2) Do they have implicit representations of full scales from maximally negative to maximally positive? 3) How do word frequency and contextual factors impact model performance? We find that despite capturing some aspects of logical meaning, the models still have obvious shortfalls.
[ "Lorge, Isabelle", "Pierrehumbert, Janet B." ]
Not Wacky vs. Definitely Wacky: A Study of Scalar Adverbs in Pretrained Language Models
blackboxnlp-1.23
2305.16426
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.24.bib
https://aclanthology.org/2023.blackboxnlp-1.24/
@inproceedings{huang-etal-2023-rigorously, title = "Rigorously Assessing Natural Language Explanations of Neurons", author = "Huang, Jing and Geiger, Atticus and D{'}Oosterlinck, Karel and Wu, Zhengxuan and Potts, Christopher", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.24", doi = "10.18653/v1/2023.blackboxnlp-1.24", pages = "317--331", abstract = "Natural language is an appealing medium for explaining how large language models process and store information, but evaluating the faithfulness of such explanations is challenging. To help address this, we develop two modes of evaluation for natural language explanations that claim individual neurons represent a concept in a text input. In the *observational mode*, we evaluate claims that a neuron $a$ activates on all and only input strings that refer to a concept picked out by the proposed explanation $E$. In the *intervention mode*, we construe $E$ as a claim that neuron $a$ is a causal mediator of the concept denoted by $E$. We apply our framework to the GPT-4-generated explanations of GPT-2 XL neurons of Bills et al. (2023) and show that even the most confident explanations have high error rates and little to no causal efficacy. We close the paper by critically assessing whether natural language is a good choice for explanations and whether neurons are the best level of analysis.", }
Natural language is an appealing medium for explaining how large language models process and store information, but evaluating the faithfulness of such explanations is challenging. To help address this, we develop two modes of evaluation for natural language explanations that claim individual neurons represent a concept in a text input. In the *observational mode*, we evaluate claims that a neuron $a$ activates on all and only input strings that refer to a concept picked out by the proposed explanation $E$. In the *intervention mode*, we construe $E$ as a claim that neuron $a$ is a causal mediator of the concept denoted by $E$. We apply our framework to the GPT-4-generated explanations of GPT-2 XL neurons of Bills et al. (2023) and show that even the most confident explanations have high error rates and little to no causal efficacy. We close the paper by critically assessing whether natural language is a good choice for explanations and whether neurons are the best level of analysis.
[ "Huang, Jing", "Geiger, Atticus", "D{'}Oosterlinck, Karel", "Wu, Zhengxuan", "Potts, Christopher" ]
Rigorously Assessing Natural Language Explanations of Neurons
blackboxnlp-1.24
2309.10312
[ "" ]
https://huggingface.co/papers/2309.10312
3
0
0
5
[]
[]
[]
1
Poster
https://aclanthology.org/2023.blackboxnlp-1.25.bib
https://aclanthology.org/2023.blackboxnlp-1.25/
@inproceedings{decarlo-etal-2023-npis, title = "{NPI}s Aren{'}t Exactly Easy: Variation in Licensing across Large Language Models", author = "DeCarlo, Deanna and Palmer, William and Wilson, Michael and Frank, Bob", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.25", doi = "10.18653/v1/2023.blackboxnlp-1.25", pages = "332--341", abstract = "We examine the licensing of negative polarity items (NPIs) in large language models (LLMs) to enrich the picture of how models acquire NPIs as linguistic phenomena at the syntax-semantics interface. NPIs are a class of words which have a restricted distribution, appearing only in certain licensing contexts, prototypically negation. Unlike much of previous work which assumes NPIs and their licensing environments constitute unified classes, we consider NPI distribution in its full complexity: different NPIs are possible in different licensing environments. By studying this phenomenon across a broad range of models, we are able to explore which features of the model architecture, properties of the training data, and linguistic characteristics of the NPI phenomenon itself drive performance.", }
We examine the licensing of negative polarity items (NPIs) in large language models (LLMs) to enrich the picture of how models acquire NPIs as linguistic phenomena at the syntax-semantics interface. NPIs are a class of words which have a restricted distribution, appearing only in certain licensing contexts, prototypically negation. Unlike much of previous work which assumes NPIs and their licensing environments constitute unified classes, we consider NPI distribution in its full complexity: different NPIs are possible in different licensing environments. By studying this phenomenon across a broad range of models, we are able to explore which features of the model architecture, properties of the training data, and linguistic characteristics of the NPI phenomenon itself drive performance.
[ "DeCarlo, Deanna", "Palmer, William", "Wilson, Michael", "Frank, Bob" ]
NPIs Aren't Exactly Easy: Variation in Licensing across Large Language Models
blackboxnlp-1.25
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.26.bib
https://aclanthology.org/2023.blackboxnlp-1.26/
@inproceedings{sakarvadia-etal-2023-memory, title = "Memory Injections: Correcting Multi-Hop Reasoning Failures During Inference in Transformer-Based Language Models", author = "Sakarvadia, Mansi and Ajith, Aswathy and Khan, Arham and Grzenda, Daniel and Hudson, Nathaniel and Bauer, Andr{\'e} and Chard, Kyle and Foster, Ian", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.26", doi = "10.18653/v1/2023.blackboxnlp-1.26", pages = "342--356", abstract = "Answering multi-hop reasoning questions requires retrieving and synthesizing information from diverse sources. Large Language Models (LLMs) struggle to perform such reasoning consistently. Here we propose an approach to pinpoint and rectify multi-hop reasoning failures through targeted memory injections on LLM attention heads. First, we analyze the per-layer activations of GPT-2 models in response to single and multi-hop prompts. We then propose a mechanism that allows users to inject pertinent prompt-specific information, which we refer to as {``}memories,{''} at critical LLM locations during inference. By thus enabling the LLM to incorporate additional relevant information during inference, we enhance the quality of multi-hop prompt completions. We show empirically that a simple, efficient, and targeted memory injection into a key attention layer can often increase the probability of the desired next token in multi-hop tasks, by up to 424{\%}.", }
Answering multi-hop reasoning questions requires retrieving and synthesizing information from diverse sources. Large Language Models (LLMs) struggle to perform such reasoning consistently. Here we propose an approach to pinpoint and rectify multi-hop reasoning failures through targeted memory injections on LLM attention heads. First, we analyze the per-layer activations of GPT-2 models in response to single and multi-hop prompts. We then propose a mechanism that allows users to inject pertinent prompt-specific information, which we refer to as {``}memories,{''} at critical LLM locations during inference. By thus enabling the LLM to incorporate additional relevant information during inference, we enhance the quality of multi-hop prompt completions. We show empirically that a simple, efficient, and targeted memory injection into a key attention layer can often increase the probability of the desired next token in multi-hop tasks, by up to 424{\%}.
[ "Sakarvadia, Mansi", "Ajith, Aswathy", "Khan, Arham", "Grzenda, Daniel", "Hudson, Nathaniel", "Bauer, Andr{\\'e}", "Chard, Kyle", "Foster, Ian" ]
Memory Injections: Correcting Multi-Hop Reasoning Failures During Inference in Transformer-Based Language Models
blackboxnlp-1.26
2309.05605
[ "https://github.com/msakarvadia/memory_injections" ]
https://huggingface.co/papers/2309.05605
0
1
0
8
[]
[ "msakarvadia/handwritten_multihop_reasoning_data" ]
[]
1
Poster
https://aclanthology.org/2023.blackboxnlp-1.27.bib
https://aclanthology.org/2023.blackboxnlp-1.27/
@inproceedings{chakraborty-etal-2023-systematic, title = "Systematic Generalization by Finetuning? Analyzing Pretrained Language Models Using Constituency Tests", author = "Chakraborty, Aishik and Cheung, Jackie CK and O{'}Donnell, Timothy J.", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.27", doi = "10.18653/v1/2023.blackboxnlp-1.27", pages = "357--366", abstract = "Constituents are groups of words that behave as a syntactic unit. Many linguistic phenomena (e.g., question formation, diathesis alternations) require the manipulation and rearrangement of constituents in a sentence. In this paper, we investigate how different finetuning setups affect the ability of pretrained sequence-to-sequence language models such as BART and T5 to replicate constituency tests {---} transformations that involve manipulating constituents in a sentence. We design multiple evaluation settings by varying the combinations of constituency tests and sentence types that a model is exposed to during finetuning. We show that models can replicate a linguistic transformation on a specific type of sentence that they saw during finetuning, but performance degrades substantially in other settings, showing a lack of systematic generalization. These results suggest that models often learn to manipulate sentences at a surface level unrelated to the constituent-level syntactic structure, for example by copying the first word of a sentence. These results may partially explain the brittleness of pretrained language models in downstream tasks.", }
Constituents are groups of words that behave as a syntactic unit. Many linguistic phenomena (e.g., question formation, diathesis alternations) require the manipulation and rearrangement of constituents in a sentence. In this paper, we investigate how different finetuning setups affect the ability of pretrained sequence-to-sequence language models such as BART and T5 to replicate constituency tests {---} transformations that involve manipulating constituents in a sentence. We design multiple evaluation settings by varying the combinations of constituency tests and sentence types that a model is exposed to during finetuning. We show that models can replicate a linguistic transformation on a specific type of sentence that they saw during finetuning, but performance degrades substantially in other settings, showing a lack of systematic generalization. These results suggest that models often learn to manipulate sentences at a surface level unrelated to the constituent-level syntactic structure, for example by copying the first word of a sentence. These results may partially explain the brittleness of pretrained language models in downstream tasks.
[ "Chakraborty, Aishik", "Cheung, Jackie CK", "O{'}Donnell, Timothy J." ]
Systematic Generalization by Finetuning? Analyzing Pretrained Language Models Using Constituency Tests
blackboxnlp-1.27
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.28.bib
https://aclanthology.org/2023.blackboxnlp-1.28/
@inproceedings{liu-chersoni-2023-quick, title = "On Quick Kisses and How to Make Them Count: A Study on Event Construal in Light Verb Constructions with {BERT}", author = "Liu, Chenxin and Chersoni, Emmanuele", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.28", doi = "10.18653/v1/2023.blackboxnlp-1.28", pages = "367--378", abstract = "Psycholinguistic studies suggested that our mental perception of events depends not only on the lexical items used to describe them, but also on the syntactic structure of the event description. More specifically, it has been argued that light verb constructions affect the perception of duration in event construal, such that the same event in this type of constructions is perceived by humans as taking less time (to give a kiss takes a shorter time than to kiss). In our paper, we present two experiments with BERT using English stimuli from psycholinguistic studies to investigate the effects of the syntactic construction on event duration and event similarity. We show that i) the dimensions of BERT vectors encode a smaller value for duration for both punctive and durative events in count syntax, in line with human results; on the other hand, we also found that ii) BERT semantic similarity fails to capture the conceptual shift that durative events should undergo in count syntax.", }
Psycholinguistic studies suggested that our mental perception of events depends not only on the lexical items used to describe them, but also on the syntactic structure of the event description. More specifically, it has been argued that light verb constructions affect the perception of duration in event construal, such that the same event in this type of constructions is perceived by humans as taking less time (to give a kiss takes a shorter time than to kiss). In our paper, we present two experiments with BERT using English stimuli from psycholinguistic studies to investigate the effects of the syntactic construction on event duration and event similarity. We show that i) the dimensions of BERT vectors encode a smaller value for duration for both punctive and durative events in count syntax, in line with human results; on the other hand, we also found that ii) BERT semantic similarity fails to capture the conceptual shift that durative events should undergo in count syntax.
[ "Liu, Chenxin", "Chersoni, Emmanuele" ]
On Quick Kisses and How to Make Them Count: A Study on Event Construal in Light Verb Constructions with BERT
blackboxnlp-1.28
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.blackboxnlp-1.29.bib
https://aclanthology.org/2023.blackboxnlp-1.29/
@inproceedings{chintam-etal-2023-identifying, title = "Identifying and Adapting Transformer-Components Responsible for Gender Bias in an {E}nglish Language Model", author = "Chintam, Abhijith and Beloch, Rahel and Zuidema, Willem and Hanna, Michael and van der Wal, Oskar", editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein", booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.blackboxnlp-1.29", doi = "10.18653/v1/2023.blackboxnlp-1.29", pages = "379--394", abstract = "Language models (LMs) exhibit and amplify many types of undesirable biases learned from the training data, including gender bias. However, we lack tools for effectively and efficiently changing this behavior without hurting general language modeling performance. In this paper, we study three methods for identifying causal relations between LM components and particular output: causal mediation analysis, automated circuit discovery and our novel, efficient method called DiffMask+ based on differential masking. We apply the methods to GPT-2 small and the problem of gender bias, and use the discovered sets of components to perform parameter-efficient fine-tuning for bias mitigation. Our results show significant overlap in the identified components (despite huge differences in the computational requirements of the methods) as well as success in mitigating gender bias, with less damage to general language modeling compared to full model fine-tuning. However, our work also underscores the difficulty of defining and measuring bias, and the sensitivity of causal discovery procedures to dataset choice. We hope our work can contribute to more attention for dataset development, and lead to more effective mitigation strategies for other types of bias.", }
Language models (LMs) exhibit and amplify many types of undesirable biases learned from the training data, including gender bias. However, we lack tools for effectively and efficiently changing this behavior without hurting general language modeling performance. In this paper, we study three methods for identifying causal relations between LM components and particular output: causal mediation analysis, automated circuit discovery and our novel, efficient method called DiffMask+ based on differential masking. We apply the methods to GPT-2 small and the problem of gender bias, and use the discovered sets of components to perform parameter-efficient fine-tuning for bias mitigation. Our results show significant overlap in the identified components (despite huge differences in the computational requirements of the methods) as well as success in mitigating gender bias, with less damage to general language modeling compared to full model fine-tuning. However, our work also underscores the difficulty of defining and measuring bias, and the sensitivity of causal discovery procedures to dataset choice. We hope our work can contribute to more attention for dataset development, and lead to more effective mitigation strategies for other types of bias.
[ "Chintam, Abhijith", "Beloch, Rahel", "Zuidema, Willem", "Hanna, Michael", "van der Wal, Oskar" ]
Identifying and Adapting Transformer-Components Responsible for Gender Bias in an English Language Model
blackboxnlp-1.29
2310.12611
[ "https://github.com/iabhijith/bias-causal-analysis" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.calcs-1.1.bib
https://aclanthology.org/2023.calcs-1.1/
@inproceedings{sterner-teufel-2023-tongueswitcher, title = "{T}ongue{S}witcher: Fine-Grained Identification of {G}erman-{E}nglish Code-Switching", author = "Sterner, Igor and Teufel, Simone", editor = "Winata, Genta and Kar, Sudipta and Zhukova, Marina and Solorio, Thamar and Diab, Mona and Sitaram, Sunayana and Choudhury, Monojit and Bali, Kalika", booktitle = "Proceedings of the 6th Workshop on Computational Approaches to Linguistic Code-Switching", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.calcs-1.1", pages = "1--13", abstract = "This paper contributes to German-English code-switching research. We provide the largest corpus of naturally occurring German-English code-switching, where English is included in German text, and two methods for code-switching identification. The first method is rule-based, using wordlists and morphological processing. We use this method to compile a corpus of 25.6M tweets employing German-English code-switching. In our second method, we continue pretraining of a neural language model on this corpus and classify tokens based on embeddings from this language model. Our systems establish SoTA on our new corpus and an existing German-English code-switching benchmark. In particular, we systematically study code-switching for language-ambiguous words which can only be resolved in context, and morphologically mixed words consisting of both English and German morphemes. We distribute both corpora and systems to the research community.", }
This paper contributes to German-English code-switching research. We provide the largest corpus of naturally occurring German-English code-switching, where English is included in German text, and two methods for code-switching identification. The first method is rule-based, using wordlists and morphological processing. We use this method to compile a corpus of 25.6M tweets employing German-English code-switching. In our second method, we continue pretraining of a neural language model on this corpus and classify tokens based on embeddings from this language model. Our systems establish SoTA on our new corpus and an existing German-English code-switching benchmark. In particular, we systematically study code-switching for language-ambiguous words which can only be resolved in context, and morphologically mixed words consisting of both English and German morphemes. We distribute both corpora and systems to the research community.
[ "Sterner, Igor", "Teufel, Simone" ]
TongueSwitcher: Fine-Grained Identification of German-English Code-Switching
calcs-1.1
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.calcs-1.2.bib
https://aclanthology.org/2023.calcs-1.2/
@inproceedings{alastruey-etal-2023-towards, title = "Towards Real-World Streaming Speech Translation for Code-Switched Speech", author = "Alastruey, Belen and Sperber, Matthias and Gollan, Christian and Telaar, Dominic and Ng, Tim and Agarwal, Aashish", editor = "Winata, Genta and Kar, Sudipta and Zhukova, Marina and Solorio, Thamar and Diab, Mona and Sitaram, Sunayana and Choudhury, Monojit and Bali, Kalika", booktitle = "Proceedings of the 6th Workshop on Computational Approaches to Linguistic Code-Switching", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.calcs-1.2", pages = "14--22", abstract = "Code-switching (CS), i.e. mixing different languages in a single sentence, is a common phenomenon in communication and can be challenging in many Natural Language Processing (NLP) settings. Previous studies on CS speech have shown promising results for end-to-end speech translation (ST), but have been limited to offline scenarios and to translation to one of the languages present in the source (monolingual transcription). In this paper, we focus on two essential yet unexplored areas for real-world CS speech translation: streaming settings, and translation to a third language (i.e., a language not included in the source). To this end, we extend the Fisher and Miami test and validation datasets to include new targets in Spanish and German. Using this data, we train a model for both offline and streaming ST and we establish baseline results for the two settings mentioned earlier.", }
Code-switching (CS), i.e. mixing different languages in a single sentence, is a common phenomenon in communication and can be challenging in many Natural Language Processing (NLP) settings. Previous studies on CS speech have shown promising results for end-to-end speech translation (ST), but have been limited to offline scenarios and to translation to one of the languages present in the source (monolingual transcription). In this paper, we focus on two essential yet unexplored areas for real-world CS speech translation: streaming settings, and translation to a third language (i.e., a language not included in the source). To this end, we extend the Fisher and Miami test and validation datasets to include new targets in Spanish and German. Using this data, we train a model for both offline and streaming ST and we establish baseline results for the two settings mentioned earlier.
[ "Alastruey, Belen", "Sperber, Matthias", "Gollan, Christian", "Telaar, Dominic", "Ng, Tim", "Agarwal, Aashish" ]
Towards Real-World Streaming Speech Translation for Code-Switched Speech
calcs-1.2
2310.12648
[ "https://github.com/apple/ml-codeswitching-translations" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.calcs-1.3.bib
https://aclanthology.org/2023.calcs-1.3/
@inproceedings{pahari-shimada-2023-language, title = "Language Preference for Expression of Sentiment for {N}epali-{E}nglish Bilingual Speakers on Social Media", author = "Pahari, Niraj and Shimada, Kazutaka", editor = "Winata, Genta and Kar, Sudipta and Zhukova, Marina and Solorio, Thamar and Diab, Mona and Sitaram, Sunayana and Choudhury, Monojit and Bali, Kalika", booktitle = "Proceedings of the 6th Workshop on Computational Approaches to Linguistic Code-Switching", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.calcs-1.3", pages = "23--32", abstract = "Nepali-English code-switching (CS) has been a growing phenomenon in Nepalese society, especially in social media. The code-switching text can be leveraged to understand the socio-linguistic behaviours of the multilingual speakers. Existing studies have attempted to identify the language preference of the multilingual speakers for expressing different emotions using text in different language pairs. In this work, we aim to study the language preference of multilingual Nepali-English CS speakers while expressing sentiment in social media. We create a novel dataset for sentiment analysis using the public Nepali-English code-switched comments in YouTube. After performing the statistical study on the dataset, we find that the proportion of use of Nepali language is higher in negative comments when compared with positive comments, hence concluding the preference for using native language while expressing negative sentiment. Machine learning and transformer-based models are used as the baseline models for the dataset for sentiment classification. The dataset is released publicly.", }
Nepali-English code-switching (CS) has been a growing phenomenon in Nepalese society, especially in social media. The code-switching text can be leveraged to understand the socio-linguistic behaviours of the multilingual speakers. Existing studies have attempted to identify the language preference of the multilingual speakers for expressing different emotions using text in different language pairs. In this work, we aim to study the language preference of multilingual Nepali-English CS speakers while expressing sentiment in social media. We create a novel dataset for sentiment analysis using the public Nepali-English code-switched comments in YouTube. After performing the statistical study on the dataset, we find that the proportion of use of Nepali language is higher in negative comments when compared with positive comments, hence concluding the preference for using native language while expressing negative sentiment. Machine learning and transformer-based models are used as the baseline models for the dataset for sentiment classification. The dataset is released publicly.
[ "Pahari, Niraj", "Shimada, Kazutaka" ]
Language Preference for Expression of Sentiment for Nepali-English Bilingual Speakers on Social Media
calcs-1.3
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.calcs-1.4.bib
https://aclanthology.org/2023.calcs-1.4/
@inproceedings{wang-li-2023-text, title = "Text-Derived Language Identity Incorporation for End-to-End Code-Switching Speech Recognition", author = "Wang, Qinyi and Li, Haizhou", editor = "Winata, Genta and Kar, Sudipta and Zhukova, Marina and Solorio, Thamar and Diab, Mona and Sitaram, Sunayana and Choudhury, Monojit and Bali, Kalika", booktitle = "Proceedings of the 6th Workshop on Computational Approaches to Linguistic Code-Switching", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.calcs-1.4", pages = "33--42", abstract = "Recognizing code-switching (CS) speech often presents challenges for an automatic speech recognition system (ASR) due to limited linguistic context in short monolingual segments, resulting in language confusion. To mitigate this issue, language identity (LID) is often integrated into the speech recognition system to provide additional linguistic context. However, previous works predominately focus on extracting language identity from speech signals. We introduce a novel approach to learn language identity from pure text data via a dedicated language identity-language model. Besides, we explore two strategies: LID state fusion and language posterior biasing, to integrate the text-derived language identities into the end-to-end ASR system. By incorporating hypothesized language identities, our ASR system gains crucial contextual cues, effectively capturing language transitions and patterns within code-switched utterances. We conduct speech recognition experiments on the SEAME corpus and demonstrate the effectiveness of our proposed methods. Our results reveal significantly improved transcriptions in code-switching scenarios, underscoring the potential of text-derived LID in enhancing code-switching speech recognition.", }
Recognizing code-switching (CS) speech often presents challenges for an automatic speech recognition system (ASR) due to limited linguistic context in short monolingual segments, resulting in language confusion. To mitigate this issue, language identity (LID) is often integrated into the speech recognition system to provide additional linguistic context. However, previous works predominately focus on extracting language identity from speech signals. We introduce a novel approach to learn language identity from pure text data via a dedicated language identity-language model. Besides, we explore two strategies: LID state fusion and language posterior biasing, to integrate the text-derived language identities into the end-to-end ASR system. By incorporating hypothesized language identities, our ASR system gains crucial contextual cues, effectively capturing language transitions and patterns within code-switched utterances. We conduct speech recognition experiments on the SEAME corpus and demonstrate the effectiveness of our proposed methods. Our results reveal significantly improved transcriptions in code-switching scenarios, underscoring the potential of text-derived LID in enhancing code-switching speech recognition.
[ "Wang, Qinyi", "Li, Haizhou" ]
Text-Derived Language Identity Incorporation for End-to-End Code-Switching Speech Recognition
calcs-1.4
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.calcs-1.5.bib
https://aclanthology.org/2023.calcs-1.5/
@inproceedings{yong-etal-2023-prompting, title = "Prompting Multilingual Large Language Models to Generate Code-Mixed Texts: The Case of South {E}ast {A}sian Languages", author = "Yong, Zheng Xin and Zhang, Ruochen and Forde, Jessica and Wang, Skyler and Subramonian, Arjun and Lovenia, Holy and Cahyawijaya, Samuel and Winata, Genta and Sutawika, Lintang and Cruz, Jan Christian Blaise and Tan, Yin Lin and Phan, Long and Garcia, Rowena and Solorio, Thamar and Aji, Alham Fikri", editor = "Winata, Genta and Kar, Sudipta and Zhukova, Marina and Solorio, Thamar and Diab, Mona and Sitaram, Sunayana and Choudhury, Monojit and Bali, Kalika", booktitle = "Proceedings of the 6th Workshop on Computational Approaches to Linguistic Code-Switching", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.calcs-1.5", pages = "43--63", abstract = "While code-mixing is a common linguistic practice in many parts of the world, collecting high-quality and low-cost code-mixed data remains a challenge for natural language processing (NLP) research. The recent proliferation of Large Language Models (LLMs) compels one to ask: how capable are these systems in generating code-mixed data? In this paper, we explore prompting multilingual LLMs in a zero-shot manner to generate code-mixed data for seven languages in South East Asia (SEA), namely Indonesian, Malay, Chinese, Tagalog, Vietnamese, Tamil, and Singlish. We find that publicly available multilingual instruction-tuned models such as BLOOMZ and Flan-T5-XXL are incapable of producing texts with phrases or clauses from different languages. ChatGPT exhibits inconsistent capabilities in generating code-mixed texts, wherein its performance varies depending on the prompt template and language pairing. For instance, ChatGPT generates fluent and natural Singlish texts (an English-based creole spoken in Singapore), but for English-Tamil language pair, the system mostly produces grammatically incorrect or semantically meaningless utterances. Furthermore, it may erroneously introduce languages not specified in the prompt. Based on our investigation, existing multilingual LLMs exhibit a wide range of proficiency in code-mixed data generation for SEA languages. As such, we advise against using LLMs in this context without extensive human checks.", }
While code-mixing is a common linguistic practice in many parts of the world, collecting high-quality and low-cost code-mixed data remains a challenge for natural language processing (NLP) research. The recent proliferation of Large Language Models (LLMs) compels one to ask: how capable are these systems in generating code-mixed data? In this paper, we explore prompting multilingual LLMs in a zero-shot manner to generate code-mixed data for seven languages in South East Asia (SEA), namely Indonesian, Malay, Chinese, Tagalog, Vietnamese, Tamil, and Singlish. We find that publicly available multilingual instruction-tuned models such as BLOOMZ and Flan-T5-XXL are incapable of producing texts with phrases or clauses from different languages. ChatGPT exhibits inconsistent capabilities in generating code-mixed texts, wherein its performance varies depending on the prompt template and language pairing. For instance, ChatGPT generates fluent and natural Singlish texts (an English-based creole spoken in Singapore), but for English-Tamil language pair, the system mostly produces grammatically incorrect or semantically meaningless utterances. Furthermore, it may erroneously introduce languages not specified in the prompt. Based on our investigation, existing multilingual LLMs exhibit a wide range of proficiency in code-mixed data generation for SEA languages. As such, we advise against using LLMs in this context without extensive human checks.
[ "Yong, Zheng Xin", "Zhang, Ruochen", "Forde, Jessica", "Wang, Skyler", "Subramonian, Arjun", "Lovenia, Holy", "Cahyawijaya, Samuel", "Winata, Genta", "Sutawika, Lintang", "Cruz, Jan Christian Blaise", "Tan, Yin Lin", "Phan, Long", "Garcia, Rowena", "Solorio, Thamar", "Aji, Alham Fikri" ]
Prompting Multilingual Large Language Models to Generate Code-Mixed Texts: The Case of South East Asian Languages
calcs-1.5
2303.13592
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.calcs-1.6.bib
https://aclanthology.org/2023.calcs-1.6/
@inproceedings{mohammed-etal-2023-conflator, title = "{CONFLATOR}: Incorporating Switching Point based Rotatory Positional Encodings for Code-Mixed Language Modeling", author = "Mohammed, Mohsin and Kandukuri, Sai and Gupta, Neeharika and Patwa, Parth and Chatterjee, Anubhab and Jain, Vinija and Chadha, Aman and Das, Amitava", editor = "Winata, Genta and Kar, Sudipta and Zhukova, Marina and Solorio, Thamar and Diab, Mona and Sitaram, Sunayana and Choudhury, Monojit and Bali, Kalika", booktitle = "Proceedings of the 6th Workshop on Computational Approaches to Linguistic Code-Switching", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.calcs-1.6", pages = "64--73", abstract = "The mixing of two or more languages is called Code-Mixing (CM). CM is a social norm in multilingual societies. Neural Language Models (NLMs) like transformers have been effective on many NLP tasks. However, NLM for CM is an under-explored area. Though transformers are capable and powerful, they cannot always encode positional information since they are non-recurrent. Therefore, to enrich word information and incorporate positional information, positional encoding is defined. We hypothesize that Switching Points (SPs), i.e., junctions in the text where the language switches (L1 -{\textgreater} L2 or L2 -{\textgreater} L1), pose a challenge for CM Language Models (LMs), and hence give special emphasis to SPs in the modeling process. We experiment with several positional encoding mechanisms and show that rotatory positional encodings along with switching point information yield the best results. We introduce CONFLATOR: a neural language modeling approach for code-mixed languages. CONFLATOR tries to learn to emphasize switching points using smarter positional encoding, both at unigram and bigram levels. CONFLATOR outperforms the state-of-the-art on two tasks based on code-mixed Hindi and English (Hinglish): (i) sentiment analysis and (ii) machine translation.", }
The mixing of two or more languages is called Code-Mixing (CM). CM is a social norm in multilingual societies. Neural Language Models (NLMs) like transformers have been effective on many NLP tasks. However, NLM for CM is an under-explored area. Though transformers are capable and powerful, they cannot always encode positional information since they are non-recurrent. Therefore, to enrich word information and incorporate positional information, positional encoding is defined. We hypothesize that Switching Points (SPs), i.e., junctions in the text where the language switches (L1 -{\textgreater} L2 or L2 -{\textgreater} L1), pose a challenge for CM Language Models (LMs), and hence give special emphasis to SPs in the modeling process. We experiment with several positional encoding mechanisms and show that rotatory positional encodings along with switching point information yield the best results. We introduce CONFLATOR: a neural language modeling approach for code-mixed languages. CONFLATOR tries to learn to emphasize switching points using smarter positional encoding, both at unigram and bigram levels. CONFLATOR outperforms the state-of-the-art on two tasks based on code-mixed Hindi and English (Hinglish): (i) sentiment analysis and (ii) machine translation.
[ "Mohammed, Mohsin", "Kandukuri, Sai", "Gupta, Neeharika", "Patwa, Parth", "Chatterjee, Anubhab", "Jain, Vinija", "Chadha, Aman", "Das, Amitava" ]
CONFLATOR: Incorporating Switching Point based Rotatory Positional Encodings for Code-Mixed Language Modeling
calcs-1.6
2309.05270
[ "" ]
https://huggingface.co/papers/2309.05270
1
1
0
8
[]
[]
[]
1
Poster
https://aclanthology.org/2023.calcs-1.7.bib
https://aclanthology.org/2023.calcs-1.7/
@inproceedings{dhawan-etal-2023-unified, title = "Unified Model for Code-Switching Speech Recognition and Language Identification Based on Concatenated Tokenizer", author = "Dhawan, Kunal and Rekesh, Dima and Ginsburg, Boris", editor = "Winata, Genta and Kar, Sudipta and Zhukova, Marina and Solorio, Thamar and Diab, Mona and Sitaram, Sunayana and Choudhury, Monojit and Bali, Kalika", booktitle = "Proceedings of the 6th Workshop on Computational Approaches to Linguistic Code-Switching", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.calcs-1.7", pages = "74--82", abstract = "Code-Switching (CS) multilingual Automatic Speech Recognition (ASR) models can transcribe speech containing two or more alternating languages during a conversation. This paper proposes (1) a new method for creating code-switching ASR datasets from purely monolingual data sources, and (2) a novel Concatenated Tokenizer that enables ASR models to generate language ID for each emitted text token while reusing existing monolingual tokenizers. The efficacy of these approaches for building CS ASR models is demonstrated for two language pairs, English-Hindi and English-Spanish, where we achieve new state-of-the-art results on the Miami Bangor CS evaluation corpus. In addition to competitive ASR performance, the proposed Concatenated Tokenizer models are highly effective for spoken language identification, achieving 98{\%}+ accuracy on the out-of-distribution FLEURS dataset.", }
Code-Switching (CS) multilingual Automatic Speech Recognition (ASR) models can transcribe speech containing two or more alternating languages during a conversation. This paper proposes (1) a new method for creating code-switching ASR datasets from purely monolingual data sources, and (2) a novel Concatenated Tokenizer that enables ASR models to generate language ID for each emitted text token while reusing existing monolingual tokenizers. The efficacy of these approaches for building CS ASR models is demonstrated for two language pairs, English-Hindi and English-Spanish, where we achieve new state-of-the-art results on the Miami Bangor CS evaluation corpus. In addition to competitive ASR performance, the proposed Concatenated Tokenizer models are highly effective for spoken language identification, achieving 98{\%}+ accuracy on the out-of-distribution FLEURS dataset.
[ "Dhawan, Kunal", "Rekesh, KDimating", "Ginsburg, Boris" ]
Unified Model for Code-Switching Speech Recognition and Language Identification Based on Concatenated Tokenizer
calcs-1.7
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.calcs-1.8.bib
https://aclanthology.org/2023.calcs-1.8/
@inproceedings{ogunremi-etal-2023-multilingual, title = "Multilingual self-supervised speech representations improve the speech recognition of low-resource {A}frican languages with codeswitching", author = "Ogunremi, Tolulope and Manning, Christopher and Jurafsky, Dan", editor = "Winata, Genta and Kar, Sudipta and Zhukova, Marina and Solorio, Thamar and Diab, Mona and Sitaram, Sunayana and Choudhury, Monojit and Bali, Kalika", booktitle = "Proceedings of the 6th Workshop on Computational Approaches to Linguistic Code-Switching", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.calcs-1.8", pages = "83--88", abstract = "While many speakers of low-resource languages regularly code-switch between their languages and other regional languages or English, datasets of codeswitched speech are too small to train bespoke acoustic models from scratch or do language model rescoring. Here we propose finetuning self-supervised speech representations such as wav2vec 2.0 XLSR to recognize code-switched data. We find that finetuning self-supervised multilingual representations and augmenting them with n-gram language models trained from transcripts reduces absolute word error rates by up to 20{\%} compared to baselines of hybrid models trained from scratch on code-switched data. Our findings suggest that in circumstances with limited training data finetuning self-supervised representations is a better performing and viable solution.", }
While many speakers of low-resource languages regularly code-switch between their languages and other regional languages or English, datasets of codeswitched speech are too small to train bespoke acoustic models from scratch or do language model rescoring. Here we propose finetuning self-supervised speech representations such as wav2vec 2.0 XLSR to recognize code-switched data. We find that finetuning self-supervised multilingual representations and augmenting them with n-gram language models trained from transcripts reduces absolute word error rates by up to 20{\%} compared to baselines of hybrid models trained from scratch on code-switched data. Our findings suggest that in circumstances with limited training data finetuning self-supervised representations is a better performing and viable solution.
[ "Ogunremi, Tolulope", "Manning, Christopher", "Jurafsky, Dan" ]
Multilingual self-supervised speech representations improve the speech recognition of low-resource African languages with codeswitching
calcs-1.8
2311.15077
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.1.bib
https://aclanthology.org/2023.conll-1.1/
@inproceedings{zhang-etal-2023-language, title = "Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics", author = "Zhang, Yuhan and Gibson, Edward and Davis, Forrest", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.1", doi = "10.18653/v1/2023.conll-1.1", pages = "1--14", abstract = "Language models (LMs) have been argued to overlap substantially with human beings in grammaticality judgment tasks. But when humans systematically make errors in language processing, should we expect LMs to behave like cognitive models of language and mimic human behavior? We answer this question by investigating LMs{'} more subtle judgments associated with {``}language illusions{''} {--} sentences that are vague in meaning, implausible, or ungrammatical but receive unexpectedly high acceptability judgments by humans. We looked at three illusions: the comparative illusion (e.g. {``}More people have been to Russia than I have{''}), the depth-charge illusion (e.g. {``}No head injury is too trivial to be ignored{''}), and the negative polarity item (NPI) illusion (e.g. {``}The hunter who no villager believed to be trustworthy will ever shoot a bear{''}). We found that probabilities represented by LMs were more likely to align with human judgments of being {``}tricked{''} by the NPI illusion which examines a structural dependency, compared to the comparative and the depth-charge illusions which require sophisticated semantic understanding. No single LM or metric yielded results that are entirely consistent with human behavior. Ultimately, we show that LMs are limited both in their construal as cognitive models of human language processing and in their capacity to recognize nuanced but critical information in complicated language materials.", }
Language models (LMs) have been argued to overlap substantially with human beings in grammaticality judgment tasks. But when humans systematically make errors in language processing, should we expect LMs to behave like cognitive models of language and mimic human behavior? We answer this question by investigating LMs{'} more subtle judgments associated with {``}language illusions{''} {--} sentences that are vague in meaning, implausible, or ungrammatical but receive unexpectedly high acceptability judgments by humans. We looked at three illusions: the comparative illusion (e.g. {``}More people have been to Russia than I have{''}), the depth-charge illusion (e.g. {``}No head injury is too trivial to be ignored{''}), and the negative polarity item (NPI) illusion (e.g. {``}The hunter who no villager believed to be trustworthy will ever shoot a bear{''}). We found that probabilities represented by LMs were more likely to align with human judgments of being {``}tricked{''} by the NPI illusion which examines a structural dependency, compared to the comparative and the depth-charge illusions which require sophisticated semantic understanding. No single LM or metric yielded results that are entirely consistent with human behavior. Ultimately, we show that LMs are limited both in their construal as cognitive models of human language processing and in their capacity to recognize nuanced but critical information in complicated language materials.
[ "Zhang, Yuhan", "Gibson, Edward", "Davis, Forrest" ]
Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics
conll-1.1
2311.01386
[ "https://github.com/forrestdavis/languageillusions" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.2.bib
https://aclanthology.org/2023.conll-1.2/
@inproceedings{ma-etal-2023-tomchallenges, title = "{T}o{MC}hallenges: A Principle-Guided Dataset and Diverse Evaluation Tasks for Exploring Theory of Mind", author = "Ma, Xiaomeng and Gao, Lingyu and Xu, Qihui", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.2", doi = "10.18653/v1/2023.conll-1.2", pages = "15--26", abstract = "Theory of Mind (ToM), the capacity to comprehend the mental states of distinct individuals, is essential for numerous practical applications. With the development of large language models (LLMs), there is a heated debate about whether they are able to perform ToM tasks. Previous studies have used different tasks and prompts to test the ToM on LLMs and the results are inconsistent: some studies asserted these models are capable of exhibiting ToM, while others suggest the opposite. In this study, We present ToMChallenges, a dataset for comprehensively evaluating the Theory of Mind based on Sally-Anne and Smarties tests with a diverse set of tasks. In addition, we also propose an auto-grader to streamline the answer evaluation process. We tested three models: davinci, turbo, and gpt-4. Our evaluation results and error analyses show that LLMs have inconsistent behaviors across prompts and tasks. Performing the ToM tasks robustly remains a challenge for the LLMs. In addition, our paper wants to raise awareness in evaluating the ToM in LLMs and we want to invite more discussion on how to design the prompts and tasks for ToM tasks that can better access the LLMs{'} ability.", }
Theory of Mind (ToM), the capacity to comprehend the mental states of distinct individuals, is essential for numerous practical applications. With the development of large language models (LLMs), there is a heated debate about whether they are able to perform ToM tasks. Previous studies have used different tasks and prompts to test the ToM on LLMs and the results are inconsistent: some studies asserted these models are capable of exhibiting ToM, while others suggest the opposite. In this study, We present ToMChallenges, a dataset for comprehensively evaluating the Theory of Mind based on Sally-Anne and Smarties tests with a diverse set of tasks. In addition, we also propose an auto-grader to streamline the answer evaluation process. We tested three models: davinci, turbo, and gpt-4. Our evaluation results and error analyses show that LLMs have inconsistent behaviors across prompts and tasks. Performing the ToM tasks robustly remains a challenge for the LLMs. In addition, our paper wants to raise awareness in evaluating the ToM in LLMs and we want to invite more discussion on how to design the prompts and tasks for ToM tasks that can better access the LLMs{'} ability.
[ "Ma, Xiaomeng", "Gao, Lingyu", "Xu, Qihui" ]
ToMChallenges: A Principle-Guided Dataset and Diverse Evaluation Tasks for Exploring Theory of Mind
conll-1.2
2305.15068
[ "" ]
https://huggingface.co/papers/2305.15068
1
0
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.conll-1.3.bib
https://aclanthology.org/2023.conll-1.3/
@inproceedings{bentz-2023-zipfian, title = "The {Z}ipfian Challenge: Learning the statistical fingerprint of natural languages", author = "Bentz, Christian", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.3", doi = "10.18653/v1/2023.conll-1.3", pages = "27--37", abstract = "Human languages are often claimed to fundamentally differ from other communication systems. But what is it exactly that unites them as a separate category? This article proposes to approach this problem {--} here termed the Zipfian Challenge {--} as a standard classification task. A corpus with textual material from diverse writing systems and languages, as well as other symbolic and non-symbolic systems, is provided. These are subsequently used to train and test binary classification algorithms, assigning labels {``}writing{''} and {``}non-writing{''} to character strings of the test sets. The performance is generally high, reaching 98{\%} accuracy for the best algorithms. Human languages emerge to have a statistical fingerprint: large unit inventories, high entropy, and few repetitions of adjacent units. This fingerprint can be used to tease them apart from other symbolic and non-symbolic systems.", }
Human languages are often claimed to fundamentally differ from other communication systems. But what is it exactly that unites them as a separate category? This article proposes to approach this problem {--} here termed the Zipfian Challenge {--} as a standard classification task. A corpus with textual material from diverse writing systems and languages, as well as other symbolic and non-symbolic systems, is provided. These are subsequently used to train and test binary classification algorithms, assigning labels {``}writing{''} and {``}non-writing{''} to character strings of the test sets. The performance is generally high, reaching 98{\%} accuracy for the best algorithms. Human languages emerge to have a statistical fingerprint: large unit inventories, high entropy, and few repetitions of adjacent units. This fingerprint can be used to tease them apart from other symbolic and non-symbolic systems.
[ "Bentz, Christian" ]
The Zipfian Challenge: Learning the statistical fingerprint of natural languages
conll-1.3
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.4.bib
https://aclanthology.org/2023.conll-1.4/
@inproceedings{zhang-etal-2023-effects, title = "On the Effects of Structural Modeling for Neural Semantic Parsing", author = "Zhang, Xiang and He, Shizhu and Liu, Kang and Zhao, Jun", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.4", doi = "10.18653/v1/2023.conll-1.4", pages = "38--57", abstract = "Semantic parsing aims to map natural language sentences to predefined formal languages, such as logic forms and programming languages, as the semantic annotation. From the theoretic views of linguistic and programming language, structures play an important role in both languages, which had motivated semantic parsers since the task was proposed in the beginning. But in the neural era, semantic parsers treating both natural and formal language as sequences, such as Seq2Seq and LLMs, have got more attentions. On the other side, lots of neural progress have been made for grammar induction, which only focuses on natural languages. Although closely related in the sense of structural modeling, these techniques hadn{'}t been jointly analyzed on the semantic parsing testbeds. To gain the better understanding on structures for semantic parsing, we design a taxonomy of structural modeling methods, and evaluate some representative techniques on semantic parsing, including both compositional and i.i.d. generalizations. In addition to the previous opinion that structures will help in general, we find that (1) structures must be designed for the specific dataset and generalization level, and (2) what really matters is not the structure choice of either source or target side, but the choice combination of both sides. Based on the finding, we further propose a metric that can evaluate the structure choice, which we believe can boost the automation of grammar designs for specific datasets and domains.", }
Semantic parsing aims to map natural language sentences to predefined formal languages, such as logic forms and programming languages, as the semantic annotation. From the theoretic views of linguistic and programming language, structures play an important role in both languages, which had motivated semantic parsers since the task was proposed in the beginning. But in the neural era, semantic parsers treating both natural and formal language as sequences, such as Seq2Seq and LLMs, have got more attentions. On the other side, lots of neural progress have been made for grammar induction, which only focuses on natural languages. Although closely related in the sense of structural modeling, these techniques hadn{'}t been jointly analyzed on the semantic parsing testbeds. To gain the better understanding on structures for semantic parsing, we design a taxonomy of structural modeling methods, and evaluate some representative techniques on semantic parsing, including both compositional and i.i.d. generalizations. In addition to the previous opinion that structures will help in general, we find that (1) structures must be designed for the specific dataset and generalization level, and (2) what really matters is not the structure choice of either source or target side, but the choice combination of both sides. Based on the finding, we further propose a metric that can evaluate the structure choice, which we believe can boost the automation of grammar designs for specific datasets and domains.
[ "Zhang, Xiang", "He, Shizhu", "Liu, Kang", "Zhao, Jun" ]
On the Effects of Structural Modeling for Neural Semantic Parsing
conll-1.4
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.5.bib
https://aclanthology.org/2023.conll-1.5/
@inproceedings{vaidya-etal-2023-humans, title = "Humans and language models diverge when predicting repeating text", author = "Vaidya, Aditya and Turek, Javier and Huth, Alexander", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.5", doi = "10.18653/v1/2023.conll-1.5", pages = "58--69", abstract = "Language models that are trained on the next-word prediction task have been shown to accurately model human behavior in word prediction and reading speed. In contrast with these findings, we present a scenario in which the performance of humans and LMs diverges. We collected a dataset of human next-word predictions for five stimuli that are formed by repeating spans of text. Human and GPT-2 LM predictions are strongly aligned in the first presentation of a text span, but their performance quickly diverges when memory (or in-context learning) begins to play a role. We traced the cause of this divergence to specific attention heads in a middle layer. Adding a power-law recency bias to these attention heads yielded a model that performs much more similarly to humans. We hope that this scenario will spur future work in bringing LMs closer to human behavior.", }
Language models that are trained on the next-word prediction task have been shown to accurately model human behavior in word prediction and reading speed. In contrast with these findings, we present a scenario in which the performance of humans and LMs diverges. We collected a dataset of human next-word predictions for five stimuli that are formed by repeating spans of text. Human and GPT-2 LM predictions are strongly aligned in the first presentation of a text span, but their performance quickly diverges when memory (or in-context learning) begins to play a role. We traced the cause of this divergence to specific attention heads in a middle layer. Adding a power-law recency bias to these attention heads yielded a model that performs much more similarly to humans. We hope that this scenario will spur future work in bringing LMs closer to human behavior.
[ "Vaidya, Aditya", "Turek, Javier", "Huth, Alex", "er" ]
Humans and language models diverge when predicting repeating text
conll-1.5
2310.06408
[ "https://github.com/huthlab/lm-repeating-text" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.6.bib
https://aclanthology.org/2023.conll-1.6/
@inproceedings{knuples-etal-2023-investigating, title = "Investigating the Nature of Disagreements on Mid-Scale Ratings: A Case Study on the Abstractness-Concreteness Continuum", author = "Knuple{\v{s}}, Urban and Frassinelli, Diego and Schulte im Walde, Sabine", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.6", doi = "10.18653/v1/2023.conll-1.6", pages = "70--86", abstract = "Humans tend to strongly agree on ratings on a scale for extreme cases (e.g., a CAT is judged as very concrete), but judgements on mid-scale words exhibit more disagreement. Yet, collected rating norms are heavily exploited across disciplines. Our study focuses on concreteness ratings and (i) implements correlations and supervised classification to identify salient multi-modal characteristics of mid-scale words, and (ii) applies a hard clustering to identify patterns of systematic disagreement across raters. Our results suggest to either fine-tune or filter mid-scale target words before utilising them.", }
Humans tend to strongly agree on ratings on a scale for extreme cases (e.g., a CAT is judged as very concrete), but judgements on mid-scale words exhibit more disagreement. Yet, collected rating norms are heavily exploited across disciplines. Our study focuses on concreteness ratings and (i) implements correlations and supervised classification to identify salient multi-modal characteristics of mid-scale words, and (ii) applies a hard clustering to identify patterns of systematic disagreement across raters. Our results suggest to either fine-tune or filter mid-scale target words before utilising them.
[ "Knuple{\\v{s}}, Urban", "Frassinelli, Diego", "Schulte im Walde, Sabine" ]
Investigating the Nature of Disagreements on Mid-Scale Ratings: A Case Study on the Abstractness-Concreteness Continuum
conll-1.6
2311.04563
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.7.bib
https://aclanthology.org/2023.conll-1.7/
@inproceedings{akbari-etal-2023-archbert, title = "{A}rch{BERT}: Bi-Modal Understanding of Neural Architectures and Natural Languages", author = "Akbari, Mohammad and Ranjbar Alvar, Saeed and Kamranian, Behnam and Banitalebi-Dehkordi, Amin and Zhang, Yong", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.7", doi = "10.18653/v1/2023.conll-1.7", pages = "87--107", abstract = "Building multi-modal language models has been a trend in the recent years, where additional modalities such as image, video, speech, etc. are jointly learned along with natural languages (i.e., textual information). Despite the success of these multi-modal language models with different modalities, there is no existing solution for neural network architectures and natural languages. Providing neural architectural information as a new modality allows us to provide fast architecture-2-text and text-2-architecture retrieval/generation services on the cloud with a single inference. Such solution is valuable in terms of helping beginner and intermediate ML users to come up with better neural architectures or AutoML approaches with a simple text query. In this paper, we propose ArchBERT, a bi-modal model for joint learning and understanding of neural architectures and natural languages, which opens up new avenues for research in this area. We also introduce a pre-training strategy named Masked Architecture Modeling (MAM) for a more generalized joint learning. Moreover, we introduce and publicly release two new bi-modal datasets for training and validating our methods. The ArchBERT{'}s performance is verified through a set of numerical experiments on different downstream tasks such as architecture-oriented reasoning, question answering, and captioning (summarization). Datasets, codes, and demos are available as supplementary materials.", }
Building multi-modal language models has been a trend in the recent years, where additional modalities such as image, video, speech, etc. are jointly learned along with natural languages (i.e., textual information). Despite the success of these multi-modal language models with different modalities, there is no existing solution for neural network architectures and natural languages. Providing neural architectural information as a new modality allows us to provide fast architecture-2-text and text-2-architecture retrieval/generation services on the cloud with a single inference. Such solution is valuable in terms of helping beginner and intermediate ML users to come up with better neural architectures or AutoML approaches with a simple text query. In this paper, we propose ArchBERT, a bi-modal model for joint learning and understanding of neural architectures and natural languages, which opens up new avenues for research in this area. We also introduce a pre-training strategy named Masked Architecture Modeling (MAM) for a more generalized joint learning. Moreover, we introduce and publicly release two new bi-modal datasets for training and validating our methods. The ArchBERT{'}s performance is verified through a set of numerical experiments on different downstream tasks such as architecture-oriented reasoning, question answering, and captioning (summarization). Datasets, codes, and demos are available as supplementary materials.
[ "Akbari, Mohammad", "Ranjbar Alvar, Saeed", "Kamranian, Behnam", "Banitalebi-Dehkordi, Amin", "Zhang, Yong" ]
ArchBERT: Bi-Modal Understanding of Neural Architectures and Natural Languages
conll-1.7
2310.17737
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.8.bib
https://aclanthology.org/2023.conll-1.8/
@inproceedings{de-langis-kang-2023-comparative, title = "A Comparative Study on Textual Saliency of Styles from Eye Tracking, Annotations, and Language Models", author = "de Langis, Karin and Kang, Dongyeop", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.8", doi = "10.18653/v1/2023.conll-1.8", pages = "108--121", abstract = "There is growing interest in incorporating eye-tracking data and other implicit measures of human language processing into natural language processing (NLP) pipelines. The data from human language processing contain unique insight into human linguistic understanding that could be exploited by language models. However, many unanswered questions remain about the nature of this data and how it can best be utilized in downstream NLP tasks. In this paper, we present EyeStyliency, an eye-tracking dataset for human processing of stylistic text (e.g., politeness). We develop an experimental protocol to collect these style-specific eye movements. We further investigate how this saliency data compares to both human annotation methods and model-based interpretability metrics. We find that while eye-tracking data is unique, it also intersects with both human annotations and model-based importance scores, providing a possible bridge between human- and machine-based perspectives. We propose utilizing this type of data to evaluate the cognitive plausibility of models that interpret style. Our eye-tracking data and processing code are publicly available.", }
There is growing interest in incorporating eye-tracking data and other implicit measures of human language processing into natural language processing (NLP) pipelines. The data from human language processing contain unique insight into human linguistic understanding that could be exploited by language models. However, many unanswered questions remain about the nature of this data and how it can best be utilized in downstream NLP tasks. In this paper, we present EyeStyliency, an eye-tracking dataset for human processing of stylistic text (e.g., politeness). We develop an experimental protocol to collect these style-specific eye movements. We further investigate how this saliency data compares to both human annotation methods and model-based interpretability metrics. We find that while eye-tracking data is unique, it also intersects with both human annotations and model-based importance scores, providing a possible bridge between human- and machine-based perspectives. We propose utilizing this type of data to evaluate the cognitive plausibility of models that interpret style. Our eye-tracking data and processing code are publicly available.
[ "de Langis, Karin", "Kang, Dongyeop" ]
A Comparative Study on Textual Saliency of Styles from Eye Tracking, Annotations, and Language Models
conll-1.8
2212.09873
[ "https://github.com/minnesotanlp/eyestyliency" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.9.bib
https://aclanthology.org/2023.conll-1.9/
@inproceedings{asami-sugawara-2023-propres, title = "{PROPRES}: Investigating the Projectivity of Presupposition with Various Triggers and Environments", author = "Asami, Daiki and Sugawara, Saku", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.9", doi = "10.18653/v1/2023.conll-1.9", pages = "122--137", abstract = "What makes a presupposition of an utterance {---}information taken for granted by its speaker{---} different from other pragmatic inferences such as an entailment is projectivity (e.g., the negative sentence the boy did not stop shedding tears presupposes the boy had shed tears before). The projectivity may vary depending on the combination of presupposition triggers and environments. However, prior natural language understanding studies fail to take it into account as they either use no human baseline or include only negation as an entailment-canceling environment to evaluate models{'} performance. The current study attempts to reconcile these issues. We introduce a new dataset, projectivity of presupposition (PROPRES), which includes 12k premise{--}hypothesis pairs crossing six triggers involving some lexical variety with five environments. Our human evaluation reveals that humans exhibit variable projectivity in some cases. However, the model evaluation shows that the best-performed model, DeBERTa, does not fully capture it. Our findings suggest that probing studies on pragmatic inferences should take extra care of the human judgment variability and the combination of linguistic items.", }
What makes a presupposition of an utterance {---}information taken for granted by its speaker{---} different from other pragmatic inferences such as an entailment is projectivity (e.g., the negative sentence the boy did not stop shedding tears presupposes the boy had shed tears before). The projectivity may vary depending on the combination of presupposition triggers and environments. However, prior natural language understanding studies fail to take it into account as they either use no human baseline or include only negation as an entailment-canceling environment to evaluate models{'} performance. The current study attempts to reconcile these issues. We introduce a new dataset, projectivity of presupposition (PROPRES), which includes 12k premise{--}hypothesis pairs crossing six triggers involving some lexical variety with five environments. Our human evaluation reveals that humans exhibit variable projectivity in some cases. However, the model evaluation shows that the best-performed model, DeBERTa, does not fully capture it. Our findings suggest that probing studies on pragmatic inferences should take extra care of the human judgment variability and the combination of linguistic items.
[ "Asami, Daiki", "Sugawara, Saku" ]
PROPRES: Investigating the Projectivity of Presupposition with Various Triggers and Environments
conll-1.9
2312.08755
[ "https://github.com/nii-cl/projectivity-of-presupposition" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.10.bib
https://aclanthology.org/2023.conll-1.10/
@inproceedings{ryu-etal-2023-minimal, title = "A Minimal Approach for Natural Language Action Space in Text-based Games", author = "Ryu, Dongwon and Fang, Meng and Haffari, Gholamreza and Pan, Shirui and Shareghi, Ehsan", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.10", doi = "10.18653/v1/2023.conll-1.10", pages = "138--154", abstract = "Text-based games (TGs) are language-based interactive environments for reinforcement learning. While language models (LMs) and knowledge graphs (KGs) are commonly used for handling large action space in TGs, it is unclear whether these techniques are necessary or overused. In this paper, we revisit the challenge of exploring the action space in TGs and propose $\epsilon$-admissible exploration, a minimal approach of utilizing admissible actions, for training phase. Additionally, we present a text-based actor-critic (TAC) agent that produces textual commands for game, solely from game observations, without requiring any KG or LM. Our method, on average across 10 games from Jericho, outperforms strong baselines and state-of-the-art agents that use LM and KG. Our approach highlights that a much lighter model design, with a fresh perspective on utilizing the information within the environments, suffices for an effective exploration of exponentially large action spaces.", }
Text-based games (TGs) are language-based interactive environments for reinforcement learning. While language models (LMs) and knowledge graphs (KGs) are commonly used for handling large action space in TGs, it is unclear whether these techniques are necessary or overused. In this paper, we revisit the challenge of exploring the action space in TGs and propose $\epsilon$-admissible exploration, a minimal approach of utilizing admissible actions, for training phase. Additionally, we present a text-based actor-critic (TAC) agent that produces textual commands for game, solely from game observations, without requiring any KG or LM. Our method, on average across 10 games from Jericho, outperforms strong baselines and state-of-the-art agents that use LM and KG. Our approach highlights that a much lighter model design, with a fresh perspective on utilizing the information within the environments, suffices for an effective exploration of exponentially large action spaces.
[ "Ryu, Dongwon", "Fang, Meng", "Haffari, Gholamreza", "Pan, Shirui", "Shareghi, Ehsan" ]
A Minimal Approach for Natural Language Action Space in Text-based Games
conll-1.10
2305.04082
[ "https://github.com/ktr0921/tac" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.11.bib
https://aclanthology.org/2023.conll-1.11/
@inproceedings{wijnholds-moortgat-2023-structural, title = "Structural Ambiguity and its Disambiguation in Language Model Based Parsers: the Case of {D}utch Clause Relativization", author = "Wijnholds, Gijs and Moortgat, Michael", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.11", doi = "10.18653/v1/2023.conll-1.11", pages = "155--164", abstract = "This paper addresses structural ambiguity in Dutch relative clauses. By investigating the task of disambiguation by grounding, we study how the presence of a prior sentence can resolve relative clause ambiguities. We apply this method to two parsing architectures in an attempt to demystify the parsing and language model components of two present-day neural parsers. Results show that a neurosymbolic parser, based on proof nets, is more open to data bias correction than an approach based on universal dependencies, although both set-ups suffer from a comparable initial data bias.", }
This paper addresses structural ambiguity in Dutch relative clauses. By investigating the task of disambiguation by grounding, we study how the presence of a prior sentence can resolve relative clause ambiguities. We apply this method to two parsing architectures in an attempt to demystify the parsing and language model components of two present-day neural parsers. Results show that a neurosymbolic parser, based on proof nets, is more open to data bias correction than an approach based on universal dependencies, although both set-ups suffer from a comparable initial data bias.
[ "Wijnholds, Gijs", "Moortgat, Michael" ]
Structural Ambiguity and its Disambiguation in Language Model Based Parsers: the Case of Dutch Clause Relativization
conll-1.11
2305.14917
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.12.bib
https://aclanthology.org/2023.conll-1.12/
@inproceedings{el-mesbahi-etal-2023-utility, title = "On the utility of enhancing {BERT} syntactic bias with Token Reordering Pretraining", author = "El Mesbahi, Yassir and Mahmud, Atif and Ghaddar, Abbas and Rezagholizadeh, Mehdi and Langlais, Phillippe and Parthasarathi, Prasanna", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.12", doi = "10.18653/v1/2023.conll-1.12", pages = "165--182", abstract = "Self-supervised Language Modelling (LM) objectives {---}like BERT masked LM{---} have become the default choice for pretraining language models. TOken Reordering (TOR) pretraining objectives, beyond token prediction, have not been extensively studied yet. In this work, we explore challenges that underlie the development and usefulness of such objectives on downstream language tasks. In particular, we design a novel TOR pretraining objective which predicts whether two tokens are adjacent or not given a partial bag-of-tokens input. In addition, we investigate the usefulness of Graph Isomorphism Network (GIN), when placed on top of the BERT encoder, in order to enhance the overall model ability to leverage topological signal from the encoded representations. We compare language understanding abilities of TOR to the one of MLM on word-order sensitive (e.g. Dependency Parsing) and insensitive (e.g. text classification) tasks in both full training and few-shot settings. Our results indicate that TOR is competitive to MLM on the GLUE language understanding benchmark, and slightly superior on syntax-dependent datasets, especially in the few-shot setting.", }
Self-supervised Language Modelling (LM) objectives {---}like BERT masked LM{---} have become the default choice for pretraining language models. TOken Reordering (TOR) pretraining objectives, beyond token prediction, have not been extensively studied yet. In this work, we explore challenges that underlie the development and usefulness of such objectives on downstream language tasks. In particular, we design a novel TOR pretraining objective which predicts whether two tokens are adjacent or not given a partial bag-of-tokens input. In addition, we investigate the usefulness of Graph Isomorphism Network (GIN), when placed on top of the BERT encoder, in order to enhance the overall model ability to leverage topological signal from the encoded representations. We compare language understanding abilities of TOR to the one of MLM on word-order sensitive (e.g. Dependency Parsing) and insensitive (e.g. text classification) tasks in both full training and few-shot settings. Our results indicate that TOR is competitive to MLM on the GLUE language understanding benchmark, and slightly superior on syntax-dependent datasets, especially in the few-shot setting.
[ "El Mesbahi, Yassir", "Mahmud, Atif", "Ghaddar, Abbas", "Rezagholizadeh, Mehdi", "Langlais, Phillippe", "Parthasarathi, Prasanna" ]
On the utility of enhancing BERT syntactic bias with Token Reordering Pretraining
conll-1.12
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.13.bib
https://aclanthology.org/2023.conll-1.13/
@inproceedings{owan-etal-2023-quirk, title = "Quirk or Palmer: A Comparative Study of Modal Verb Frameworks with Annotated Datasets", author = "Owan, Risako and Gini, Maria and Kang, Dongyeop", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.13", doi = "10.18653/v1/2023.conll-1.13", pages = "183--199", abstract = "Modal verbs, such as can, may, and must, are commonly used in daily communication to convey the speaker{'}s perspective related to the likelihood and/or mode of the proposition. They can differ greatly in meaning depending on how they{'}re used and the context of a sentence (e.g. {``}They must help each other out.{''} vs. {``}They must have helped each other out.{''}). Despite their practical importance in natural language understanding, linguists have yet to agree on a single, prominent framework for the categorization of modal verb senses. This lack of agreement stems from high degrees of flexibility and polysemy from the modal verbs, making it more difficult for researchers to incorporate insights from this family of words into their work. As a tool to help navigate this issue, this work presents MoVerb, a dataset consisting of 27,240 annotations of modal verb senses over 4,540 utterances containing one or more sentences from social conversations. Each utterance is annotated by three annotators using two different theoretical frameworks (i.e., Quirk and Palmer) of modal verb senses. We observe that both frameworks have similar inter-annotator agreements, despite having a different number of sense labels (eight for Quirk and three for Palmer). With RoBERTa-based classifiers fine-tuned on MoVerb, we achieve F1 scores of 82.2 and 78.3 on Quirk and Palmer, respectively, showing that modal verb sense disambiguation is not a trivial task.", }
Modal verbs, such as can, may, and must, are commonly used in daily communication to convey the speaker{'}s perspective related to the likelihood and/or mode of the proposition. They can differ greatly in meaning depending on how they{'}re used and the context of a sentence (e.g. {``}They must help each other out.{''} vs. {``}They must have helped each other out.{''}). Despite their practical importance in natural language understanding, linguists have yet to agree on a single, prominent framework for the categorization of modal verb senses. This lack of agreement stems from high degrees of flexibility and polysemy from the modal verbs, making it more difficult for researchers to incorporate insights from this family of words into their work. As a tool to help navigate this issue, this work presents MoVerb, a dataset consisting of 27,240 annotations of modal verb senses over 4,540 utterances containing one or more sentences from social conversations. Each utterance is annotated by three annotators using two different theoretical frameworks (i.e., Quirk and Palmer) of modal verb senses. We observe that both frameworks have similar inter-annotator agreements, despite having a different number of sense labels (eight for Quirk and three for Palmer). With RoBERTa-based classifiers fine-tuned on MoVerb, we achieve F1 scores of 82.2 and 78.3 on Quirk and Palmer, respectively, showing that modal verb sense disambiguation is not a trivial task.
[ "Owan, Risako", "Gini, Maria", "Kang, Dongyeop" ]
Quirk or Palmer: A Comparative Study of Modal Verb Frameworks with Annotated Datasets
conll-1.13
2212.10152
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.14.bib
https://aclanthology.org/2023.conll-1.14/
@inproceedings{lee-etal-2023-quantifying, title = "Quantifying Information of Tokens for Simple and Flexible Simultaneous Machine Translation", author = "Lee, DongHyun and Park, Minkyung and Lee, Byung-Jun", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.14", doi = "10.18653/v1/2023.conll-1.14", pages = "200--210", abstract = "Simultaneous Translation (ST) involves translating with only partial source inputs instead of the entire source inputs, a process that can potentially result in translation quality degradation. Previous approaches to balancing translation quality and latency have demonstrated that it is more efficient and effective to leverage an offline model with a reasonable policy. However, using an offline model also leads to a distribution shift since it is not trained with partial source inputs, and it can be improved by training an additional module that informs us when to translate. In this paper, we propose an Information Quantifier (IQ) that models source and target information to determine whether the offline model has sufficient information for translation, trained with oracle action sequences generated from the offline model. IQ, by quantifying information, helps in formulating a suitable policy for Simultaneous Translation that better generalizes and also allows us to control the trade-off between quality and latency naturally. Experiments on various language pairs show that our proposed model outperforms baselines.", }
Simultaneous Translation (ST) involves translating with only partial source inputs instead of the entire source inputs, a process that can potentially result in translation quality degradation. Previous approaches to balancing translation quality and latency have demonstrated that it is more efficient and effective to leverage an offline model with a reasonable policy. However, using an offline model also leads to a distribution shift since it is not trained with partial source inputs, and it can be improved by training an additional module that informs us when to translate. In this paper, we propose an Information Quantifier (IQ) that models source and target information to determine whether the offline model has sufficient information for translation, trained with oracle action sequences generated from the offline model. IQ, by quantifying information, helps in formulating a suitable policy for Simultaneous Translation that better generalizes and also allows us to control the trade-off between quality and latency naturally. Experiments on various language pairs show that our proposed model outperforms baselines.
[ "Lee, DongHyun", "Park, Minkyung", "Lee, Byung-Jun" ]
Quantifying Information of Tokens for Simple and Flexible Simultaneous Machine Translation
conll-1.14
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.15.bib
https://aclanthology.org/2023.conll-1.15/
@inproceedings{sravani-mamidi-2023-enhancing, title = "Enhancing Code-mixed Text Generation Using Synthetic Data Filtering in Neural Machine Translation", author = "Sravani, Dama and Mamidi, Radhika", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.15", doi = "10.18653/v1/2023.conll-1.15", pages = "211--220", abstract = "Code-Mixing, the act of mixing two or more languages, is a common communicative phenomenon in multi-lingual societies. The lack of quality in code-mixed data is a bottleneck for NLP systems. On the other hand, Monolingual systems perform well due to ample high-quality data. To bridge the gap, creating coherent translations of monolingual sentences to their code-mixed counterparts can improve accuracy in code-mixed settings for NLP downstream tasks. In this paper, we propose a neural machine translation approach to generate high-quality code-mixed sentences by leveraging human judgements. We train filters based on human judgements to identify natural code-mixed sentences from a larger synthetically generated code-mixed corpus, resulting in a three-way silver parallel corpus between monolingual English, monolingual Indian language and code-mixed English with an Indian language. Using these corpora, we fine-tune multi-lingual encoder-decoder models viz, mT5 and mBART, for the translation task. Our results indicate that our approach of using filtered data for training outperforms the current systems for code-mixed generation in Hindi-English. Apart from Hindi-English, the approach performs well when applied to Telugu, a low-resource language, to generate Telugu-English code-mixed sentences.", }
Code-Mixing, the act of mixing two or more languages, is a common communicative phenomenon in multi-lingual societies. The lack of quality in code-mixed data is a bottleneck for NLP systems. On the other hand, Monolingual systems perform well due to ample high-quality data. To bridge the gap, creating coherent translations of monolingual sentences to their code-mixed counterparts can improve accuracy in code-mixed settings for NLP downstream tasks. In this paper, we propose a neural machine translation approach to generate high-quality code-mixed sentences by leveraging human judgements. We train filters based on human judgements to identify natural code-mixed sentences from a larger synthetically generated code-mixed corpus, resulting in a three-way silver parallel corpus between monolingual English, monolingual Indian language and code-mixed English with an Indian language. Using these corpora, we fine-tune multi-lingual encoder-decoder models viz, mT5 and mBART, for the translation task. Our results indicate that our approach of using filtered data for training outperforms the current systems for code-mixed generation in Hindi-English. Apart from Hindi-English, the approach performs well when applied to Telugu, a low-resource language, to generate Telugu-English code-mixed sentences.
[ "Sravani, Dama", "Mamidi, Radhika" ]
Enhancing Code-mixed Text Generation Using Synthetic Data Filtering in Neural Machine Translation
conll-1.15
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.16.bib
https://aclanthology.org/2023.conll-1.16/
@inproceedings{skopek-etal-2023-towards, title = "Towards Better Evaluation of Instruction-Following: A Case-Study in Summarization", author = "Skopek, Ondrej and Aralikatte, Rahul and Gooding, Sian and Carbune, Victor", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.16", doi = "10.18653/v1/2023.conll-1.16", pages = "221--237", abstract = "Despite recent advances, evaluating how well large language models (LLMs) follow user instructions remains an open problem. While evaluation methods of language models have seen a rise in prompt-based approaches, limited work on the correctness of these methods has been conducted. In this work, we perform a meta-evaluation of a variety of metrics to quantify how accurately they measure the instruction-following abilities of LLMs. Our investigation is performed on grounded query-based summarization by collecting a new short-form, real-world dataset riSum, containing 300 document-instruction pairs with 3 answers each. All 900 answers are rated by 3 human annotators. Using riSum, we analyze the agreement between evaluation methods and human judgment. Finally, we propose new LLM-based reference-free evaluation methods that improve upon established baselines and perform on par with costly reference-based metrics that require high-quality summaries.", }
Despite recent advances, evaluating how well large language models (LLMs) follow user instructions remains an open problem. While evaluation methods of language models have seen a rise in prompt-based approaches, limited work on the correctness of these methods has been conducted. In this work, we perform a meta-evaluation of a variety of metrics to quantify how accurately they measure the instruction-following abilities of LLMs. Our investigation is performed on grounded query-based summarization by collecting a new short-form, real-world dataset riSum, containing 300 document-instruction pairs with 3 answers each. All 900 answers are rated by 3 human annotators. Using riSum, we analyze the agreement between evaluation methods and human judgment. Finally, we propose new LLM-based reference-free evaluation methods that improve upon established baselines and perform on par with costly reference-based metrics that require high-quality summaries.
[ "Skopek, Ondrej", "Aralikatte, Rahul", "Gooding, Sian", "Carbune, Victor" ]
Towards Better Evaluation of Instruction-Following: A Case-Study in Summarization
conll-1.16
2310.08394
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.17.bib
https://aclanthology.org/2023.conll-1.17/
@inproceedings{gessler-schneider-2023-syntactic, title = "Syntactic Inductive Bias in Transformer Language Models: Especially Helpful for Low-Resource Languages?", author = "Gessler, Luke and Schneider, Nathan", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.17", doi = "10.18653/v1/2023.conll-1.17", pages = "238--253", abstract = "A line of work on Transformer-based language models such as BERT has attempted to use syntactic inductive bias to enhance the pretraining process, on the theory that building syntactic structure into the training process should reduce the amount of data needed for training. But such methods are often tested for high-resource languages such as English. In this work, we investigate whether these methods can compensate for data sparseness in low-resource languages, hypothesizing that they ought to be more effective for low-resource languages. We experiment with five low-resource languages: Uyghur, Wolof, Maltese, Coptic, and Ancient Greek. We find that these syntactic inductive bias methods produce uneven results in low-resource settings, and provide surprisingly little benefit in most cases.", }
A line of work on Transformer-based language models such as BERT has attempted to use syntactic inductive bias to enhance the pretraining process, on the theory that building syntactic structure into the training process should reduce the amount of data needed for training. But such methods are often tested for high-resource languages such as English. In this work, we investigate whether these methods can compensate for data sparseness in low-resource languages, hypothesizing that they ought to be more effective for low-resource languages. We experiment with five low-resource languages: Uyghur, Wolof, Maltese, Coptic, and Ancient Greek. We find that these syntactic inductive bias methods produce uneven results in low-resource settings, and provide surprisingly little benefit in most cases.
[ "Gessler, Luke", "Schneider, Nathan" ]
Syntactic Inductive Bias in Transformer Language Models: Especially Helpful for Low-Resource Languages?
conll-1.17
2311.00268
[ "https://github.com/lgessler/lr-sib" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.18.bib
https://aclanthology.org/2023.conll-1.18/
@inproceedings{molnar-etal-2023-attribution, title = "Attribution and Alignment: Effects of Local Context Repetition on Utterance Production and Comprehension in Dialogue", author = "Molnar, Aron and Jumelet, Jaap and Giulianelli, Mario and Sinclair, Arabella", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.18", doi = "10.18653/v1/2023.conll-1.18", pages = "254--273", abstract = "Language models are often used as the backbone of modern dialogue systems. These models are pre-trained on large amounts of written fluent language. Repetition is typically penalised when evaluating language model generations. However, it is a key component of dialogue. Humans use local and partner specific repetitions; these are preferred by human users and lead to more successful communication in dialogue. In this study, we evaluate (a) whether language models produce human-like levels of repetition in dialogue, and (b) what are the processing mechanisms related to lexical re-use they use during comprehension. We believe that such joint analysis of model production and comprehension behaviour can inform the development of cognitively inspired dialogue generation systems.", }
Language models are often used as the backbone of modern dialogue systems. These models are pre-trained on large amounts of written fluent language. Repetition is typically penalised when evaluating language model generations. However, it is a key component of dialogue. Humans use local and partner specific repetitions; these are preferred by human users and lead to more successful communication in dialogue. In this study, we evaluate (a) whether language models produce human-like levels of repetition in dialogue, and (b) what are the processing mechanisms related to lexical re-use they use during comprehension. We believe that such joint analysis of model production and comprehension behaviour can inform the development of cognitively inspired dialogue generation systems.
[ "Molnar, Aron", "Jumelet, Jaap", "Giulianelli, Mario", "Sinclair, Arabella" ]
Attribution and Alignment: Effects of Local Context Repetition on Utterance Production and Comprehension in Dialogue
conll-1.18
2311.13061
[ "" ]
https://huggingface.co/papers/2311.13061
1
1
0
4
[]
[]
[]
1
Poster
https://aclanthology.org/2023.conll-1.19.bib
https://aclanthology.org/2023.conll-1.19/
@inproceedings{sun-etal-2023-validity, title = "The Validity of Evaluation Results: Assessing Concurrence Across Compositionality Benchmarks", author = "Sun, Kaiser and Williams, Adina and Hupkes, Dieuwke", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.19", doi = "10.18653/v1/2023.conll-1.19", pages = "274--293", abstract = "NLP models have progressed drastically in recent years, according to numerous datasets proposed to evaluate performance. Questions remain, however, about how particular dataset design choices may impact the conclusions we draw about model capabilities. In this work, we investigate this question in the domain of compositional generalization. We examine the performance of six modeling approaches across 4 datasets, split according to 8 compositional splitting strategies, ranking models by 18 compositional generalization splits in total. Our results show that: i) the datasets, although all designed to evaluate compositional generalization, rank modeling approaches differently; ii) datasets generated by humans align better with each other than with synthetic datasets, or than the latter among themselves; iii) generally, whether datasets are sampled from the same source is more predictive of the resulting model ranking than whether they maintain the same interpretation of compositionality; and iv) specific lexical items in dataset impacts the measurement consistency. Overall, our results demonstrate that much work remains to be done when it comes to assessing whether popular evaluation datasets measure what they intend to measure, and suggests that elucidating more rigorous standards for establishing the validity of evaluation sets could benefit the field.", }
NLP models have progressed drastically in recent years, according to numerous datasets proposed to evaluate performance. Questions remain, however, about how particular dataset design choices may impact the conclusions we draw about model capabilities. In this work, we investigate this question in the domain of compositional generalization. We examine the performance of six modeling approaches across 4 datasets, split according to 8 compositional splitting strategies, ranking models by 18 compositional generalization splits in total. Our results show that: i) the datasets, although all designed to evaluate compositional generalization, rank modeling approaches differently; ii) datasets generated by humans align better with each other than with synthetic datasets, or than the latter among themselves; iii) generally, whether datasets are sampled from the same source is more predictive of the resulting model ranking than whether they maintain the same interpretation of compositionality; and iv) specific lexical items in dataset impacts the measurement consistency. Overall, our results demonstrate that much work remains to be done when it comes to assessing whether popular evaluation datasets measure what they intend to measure, and suggests that elucidating more rigorous standards for establishing the validity of evaluation sets could benefit the field.
[ "Sun, Kaiser", "Williams, Adina", "Hupkes, Dieuwke" ]
The Validity of Evaluation Results: Assessing Concurrence Across Compositionality Benchmarks
conll-1.19
2310.17514
[ "https://github.com/facebookresearch/compositionalityvalidity" ]
https://huggingface.co/papers/2310.17514
2
0
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.conll-1.20.bib
https://aclanthology.org/2023.conll-1.20/
@inproceedings{weber-etal-2023-mind, title = "Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning", author = "Weber, Lucas and Bruni, Elia and Hupkes, Dieuwke", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.20", doi = "10.18653/v1/2023.conll-1.20", pages = "294--313", abstract = "Finding the best way of adapting pre-trained language models to a task is a big challenge in current NLP. Just like the previous generation of \textit{task-tuned} models (TT), models that are adapted to tasks via in-context-learning (ICL) or instruction tuning (IT) are robust in some setups, but not in others. Here, we present a detailed analysis of which design choices cause instabilities and inconsistencies in LLM predictions. First, we show how spurious correlations between input distributions and labels {--} a known issue in TT models {--} form only a minor problem for prompted models. Then we engage in a systematic, holistic evaluation of different factors that have been found to influence predictions in a prompting setup. We test all possible combinations of a range of factors on both vanilla and instruction-tuned LLMs of different scale, and statistically analyse the results to show which factors are the most influential, the most interactive or the most stable. From our results, we deduce which factors can be used without precautions, should be avoided or handled with care in most settings.", }
Finding the best way of adapting pre-trained language models to a task is a big challenge in current NLP. Just like the previous generation of \textit{task-tuned} models (TT), models that are adapted to tasks via in-context-learning (ICL) or instruction tuning (IT) are robust in some setups, but not in others. Here, we present a detailed analysis of which design choices cause instabilities and inconsistencies in LLM predictions. First, we show how spurious correlations between input distributions and labels {--} a known issue in TT models {--} form only a minor problem for prompted models. Then we engage in a systematic, holistic evaluation of different factors that have been found to influence predictions in a prompting setup. We test all possible combinations of a range of factors on both vanilla and instruction-tuned LLMs of different scale, and statistically analyse the results to show which factors are the most influential, the most interactive or the most stable. From our results, we deduce which factors can be used without precautions, should be avoided or handled with care in most settings.
[ "Weber, Lucas", "Bruni, Elia", "Hupkes, Dieuwke" ]
Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning
conll-1.20
2310.13486
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.21.bib
https://aclanthology.org/2023.conll-1.21/
@inproceedings{pal-etal-2023-med, title = "{M}ed-{HALT}: Medical Domain Hallucination Test for Large Language Models", author = "Pal, Ankit and Umapathi, Logesh Kumar and Sankarasubbu, Malaikannan", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.21", doi = "10.18653/v1/2023.conll-1.21", pages = "314--334", abstract = "This research paper focuses on the challenges posed by hallucinations in large language models (LLMs), particularly in the context of the medical domain. Hallucination, wherein these models generate plausible yet unverified or incorrect information, can have serious consequences in healthcare applications. We propose a new benchmark and dataset, Med-HALT (Medical Domain Hallucination Test), designed specifically to evaluate and reduce hallucinations. Med-HALT provides a diverse multinational dataset derived from medical examinations across various countries and includes multiple innovative testing modalities. Med-HALT includes two categories of tests reasoning and memory-based hallucination tests, designed to assess LLMs{'} problem-solving and information retrieval abilities. Our study evaluated leading LLMs, including Text Davinci, GPT-3.5, LlaMa-2, MPT, and Falcon, revealing significant differences in their performance. The paper provides detailed insights into the dataset, promoting transparency and reproducibility. Through this work, we aim to contribute to the development of safer and more reliable language models in healthcare. Our benchmark can be found at medhalt.github.io", }
This research paper focuses on the challenges posed by hallucinations in large language models (LLMs), particularly in the context of the medical domain. Hallucination, wherein these models generate plausible yet unverified or incorrect information, can have serious consequences in healthcare applications. We propose a new benchmark and dataset, Med-HALT (Medical Domain Hallucination Test), designed specifically to evaluate and reduce hallucinations. Med-HALT provides a diverse multinational dataset derived from medical examinations across various countries and includes multiple innovative testing modalities. Med-HALT includes two categories of tests reasoning and memory-based hallucination tests, designed to assess LLMs{'} problem-solving and information retrieval abilities. Our study evaluated leading LLMs, including Text Davinci, GPT-3.5, LlaMa-2, MPT, and Falcon, revealing significant differences in their performance. The paper provides detailed insights into the dataset, promoting transparency and reproducibility. Through this work, we aim to contribute to the development of safer and more reliable language models in healthcare. Our benchmark can be found at medhalt.github.io
[ "Pal, Ankit", "Umapathi, Logesh Kumar", "Sankarasubbu, Malaikannan" ]
Med-HALT: Medical Domain Hallucination Test for Large Language Models
conll-1.21
2307.15343
[ "" ]
https://huggingface.co/papers/2307.15343
2
2
0
3
[]
[ "openlifescienceai/Med-HALT" ]
[]
1
Poster
https://aclanthology.org/2023.conll-1.22.bib
https://aclanthology.org/2023.conll-1.22/
@inproceedings{madureira-etal-2023-revising, title = "Revising with a Backward Glance: Regressions and Skips during Reading as Cognitive Signals for Revision Policies in Incremental Processing", author = "Madureira, Brielen and {\c{C}}elikkol, Pelin and Schlangen, David", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.22", doi = "10.18653/v1/2023.conll-1.22", pages = "335--351", abstract = "In NLP, incremental processors produce output in instalments, based on incoming prefixes of the linguistic input. Some tokens trigger revisions, causing edits to the output hypothesis, but little is known about why models revise when they revise. A policy that detects the time steps where revisions should happen can improve efficiency. Still, retrieving a suitable signal to train a revision policy is an open problem, since it is not naturally available in datasets. In this work, we investigate the appropriateness of regressions and skips in human reading eye-tracking data as signals to inform revision policies in incremental sequence labelling. Using generalised mixed-effects models, we find that the probability of regressions and skips by humans can potentially serve as useful predictors for revisions in BiLSTMs and Transformer models, with consistent results for various languages.", }
In NLP, incremental processors produce output in instalments, based on incoming prefixes of the linguistic input. Some tokens trigger revisions, causing edits to the output hypothesis, but little is known about why models revise when they revise. A policy that detects the time steps where revisions should happen can improve efficiency. Still, retrieving a suitable signal to train a revision policy is an open problem, since it is not naturally available in datasets. In this work, we investigate the appropriateness of regressions and skips in human reading eye-tracking data as signals to inform revision policies in incremental sequence labelling. Using generalised mixed-effects models, we find that the probability of regressions and skips by humans can potentially serve as useful predictors for revisions in BiLSTMs and Transformer models, with consistent results for various languages.
[ "Madureira, Brielen", "{\\c{C}}elikkol, Pelin", "Schlangen, David" ]
Revising with a Backward Glance: Regressions and Skips during Reading as Cognitive Signals for Revision Policies in Incremental Processing
conll-1.22
2310.18229
[ "https://github.com/briemadu/revreg" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.23.bib
https://aclanthology.org/2023.conll-1.23/
@inproceedings{van-dijk-etal-2023-chiscor, title = "{C}hi{SC}or: A Corpus of Freely-Told Fantasy Stories by {D}utch Children for Computational Linguistics and Cognitive Science", author = "van Dijk, Bram and van Duijn, Max and Verberne, Suzan and Spruit, Marco", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.23", doi = "10.18653/v1/2023.conll-1.23", pages = "352--363", abstract = "In this resource paper we release ChiSCor, a new corpus containing 619 fantasy stories, told freely by 442 Dutch children aged 4-12. ChiSCor was compiled for studying how children render character perspectives, and unravelling language and cognition in development, with computational tools. Unlike existing resources, ChiSCor{'}s stories were produced in natural contexts, in line with recent calls for more ecologically valid datasets. ChiSCor hosts text, audio, and annotations for character complexity and linguistic complexity. Additional metadata (e.g. education of caregivers) is available for one third of the Dutch children. ChiSCor also includes a small set of 62 English stories. This paper details how ChiSCor was compiled and shows its potential for future work with three brief case studies: i) we show that the syntactic complexity of stories is strikingly stable across children{'}s ages; ii) we extend work on Zipfian distributions in free speech and show that ChiSCor obeys Zipf{'}s law closely, reflecting its social context; iii) we show that even though ChiSCor is relatively small, the corpus is rich enough to train informative lemma vectors that allow us to analyse children{'}s language use. We end with a reflection on the value of narrative datasets in computational linguistics.", }
In this resource paper we release ChiSCor, a new corpus containing 619 fantasy stories, told freely by 442 Dutch children aged 4-12. ChiSCor was compiled for studying how children render character perspectives, and unravelling language and cognition in development, with computational tools. Unlike existing resources, ChiSCor's stories were produced in natural contexts, in line with recent calls for more ecologically valid datasets. ChiSCor hosts text, audio, and annotations for character complexity and linguistic complexity. Additional metadata (e.g. education of caregivers) is available for one third of the Dutch children. ChiSCor also includes a small set of 62 English stories. This paper details how ChiSCor was compiled and shows its potential for future work with three brief case studies: i) we show that the syntactic complexity of stories is strikingly stable across children's ages; ii) we extend work on Zipfian distributions in free speech and show that ChiSCor obeys Zipf's law closely, reflecting its social context; iii) we show that even though ChiSCor is relatively small, the corpus is rich enough to train informative lemma vectors that allow us to analyse children's language use. We end with a reflection on the value of narrative datasets in computational linguistics.
[ "van Dijk, Bram", "van Duijn, Max", "Verberne, Suzan", "Spruit, Marco" ]
ChiSCor: A Corpus of Freely-Told Fantasy Stories by Dutch Children for Computational Linguistics and Cognitive Science
conll-1.23
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.24.bib
https://aclanthology.org/2023.conll-1.24/
@inproceedings{donmez-etal-2023-hnc, title = "{HNC}: Leveraging Hard Negative Captions towards Models with Fine-Grained Visual-Linguistic Comprehension Capabilities", author = {D{\"o}nmez, Esra and Tilli, Pascal and Yang, Hsiu-Yu and Vu, Ngoc Thang and Silberer, Carina}, editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.24", doi = "10.18653/v1/2023.conll-1.24", pages = "364--388", abstract = "Image-Text-Matching (ITM) is one of the defacto methods of learning generalized representations from a large corpus in Vision and Language (VL). However, due to the weak association between the web-collected image{--}text pairs, models fail to show fine-grained understanding of the combined semantics of these modalities. To this end, we propose Hard Negative Captions (HNC): an automatically created dataset containing foiled hard negative captions for ITM training towards achieving fine-grained cross-modal comprehension in VL. Additionally, we provide a challenging manually-created test set for benchmarking models on a fine-grained cross-modal mismatch with varying levels of compositional complexity. Our results show the effectiveness of training on HNC by improving the models{'} zero-shot capabilities in detecting mismatches on diagnostic tasks and performing robustly under noisy visual input scenarios. Also, we demonstrate that HNC models yield a comparable or better initialization for fine-tuning. Our code and data are publicly available.", }
Image-Text-Matching (ITM) is one of the de facto methods of learning generalized representations from a large corpus in Vision and Language (VL). However, due to the weak association between the web-collected image-text pairs, models fail to show fine-grained understanding of the combined semantics of these modalities. To this end, we propose Hard Negative Captions (HNC): an automatically created dataset containing foiled hard negative captions for ITM training towards achieving fine-grained cross-modal comprehension in VL. Additionally, we provide a challenging manually-created test set for benchmarking models on a fine-grained cross-modal mismatch with varying levels of compositional complexity. Our results show the effectiveness of training on HNC by improving the models' zero-shot capabilities in detecting mismatches on diagnostic tasks and performing robustly under noisy visual input scenarios. Also, we demonstrate that HNC models yield a comparable or better initialization for fine-tuning. Our code and data are publicly available.
[ "D{\\\"o}nmez, Esra", "Tilli, Pascal", "Yang, Hsiu-Yu", "Vu, Ngoc Thang", "Silberer, Carina" ]
HNC: Leveraging Hard Negative Captions towards Models with Fine-Grained Visual-Linguistic Comprehension Capabilities
conll-1.24
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.25.bib
https://aclanthology.org/2023.conll-1.25/
@inproceedings{van-duijn-etal-2023-theory, title = "Theory of Mind in Large Language Models: Examining Performance of 11 State-of-the-Art models vs. Children Aged 7-10 on Advanced Tests", author = "van Duijn, Max and van Dijk, Bram and Kouwenhoven, Tom and de Valk, Werner and Spruit, Marco and van der Putten, Peter", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.25", doi = "10.18653/v1/2023.conll-1.25", pages = "389--402", abstract = "To what degree should we ascribe cognitive capacities to Large Language Models (LLMs), such as the ability to reason about intentions and beliefs known as Theory of Mind (ToM)? Here we add to this emerging debate by (i) testing 11 base- and instruction-tuned LLMs on capabilities relevant to ToM beyond the dominant false-belief paradigm, including non-literal language usage and recursive intentionality; (ii) using newly rewritten versions of standardized tests to gauge LLMs{'} robustness; (iii) prompting and scoring for open besides closed questions; and (iv) benchmarking LLM performance against that of children aged 7-10 on the same tasks. We find that instruction-tuned LLMs from the GPT family outperform other models, and often also children. Base-LLMs are mostly unable to solve ToM tasks, even with specialized prompting. We suggest that the interlinked evolution and development of language and ToM may help explain what instruction-tuning adds: rewarding cooperative communication that takes into account interlocutor and context. We conclude by arguing for a nuanced perspective on ToM in LLMs.", }
To what degree should we ascribe cognitive capacities to Large Language Models (LLMs), such as the ability to reason about intentions and beliefs known as Theory of Mind (ToM)? Here we add to this emerging debate by (i) testing 11 base- and instruction-tuned LLMs on capabilities relevant to ToM beyond the dominant false-belief paradigm, including non-literal language usage and recursive intentionality; (ii) using newly rewritten versions of standardized tests to gauge LLMs' robustness; (iii) prompting and scoring for open as well as closed questions; and (iv) benchmarking LLM performance against that of children aged 7-10 on the same tasks. We find that instruction-tuned LLMs from the GPT family outperform other models, and often also children. Base-LLMs are mostly unable to solve ToM tasks, even with specialized prompting. We suggest that the interlinked evolution and development of language and ToM may help explain what instruction-tuning adds: rewarding cooperative communication that takes into account interlocutor and context. We conclude by arguing for a nuanced perspective on ToM in LLMs.
[ "van Duijn, Max", "van Dijk, Bram", "Kouwenhoven, Tom", "de Valk, Werner", "Spruit, Marco", "van der Putten, Peter" ]
Theory of Mind in Large Language Models: Examining Performance of 11 State-of-the-Art models vs. Children Aged 7-10 on Advanced Tests
conll-1.25
2310.20320
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.26.bib
https://aclanthology.org/2023.conll-1.26/
@inproceedings{forristal-etal-2023-block, title = "A Block Metropolis-Hastings Sampler for Controllable Energy-based Text Generation", author = "Forristal, Jarad and Mireshghallah, Fatemehsadat and Durrett, Greg and Berg-Kirkpatrick, Taylor", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.26", doi = "10.18653/v1/2023.conll-1.26", pages = "403--413", abstract = "Recent work has shown that energy-based language modeling is an effective framework for controllable text generation because it enables flexible integration of arbitrary discriminators. However, because energy-based LMs are globally normalized, approximate techniques like Metropolis-Hastings (MH) are required for inference. Past work has largely explored simple proposal distributions that modify a single token at a time, like in Gibbs sampling. In this paper, we develop a novel MH sampler that, in contrast, proposes re-writes of the entire sequence in each step via iterative prompting of a large language model. Our new sampler (a) allows for more efficient and accurate sampling from a target distribution and (b) allows generation length to be determined through the sampling procedure rather than fixed in advance, as past work has required. We perform experiments on two controlled generation tasks, showing both downstream performance gains and more accurate target distribution sampling in comparison with single-token proposal techniques.", }
Recent work has shown that energy-based language modeling is an effective framework for controllable text generation because it enables flexible integration of arbitrary discriminators. However, because energy-based LMs are globally normalized, approximate techniques like Metropolis-Hastings (MH) are required for inference. Past work has largely explored simple proposal distributions that modify a single token at a time, like in Gibbs sampling. In this paper, we develop a novel MH sampler that, in contrast, proposes re-writes of the entire sequence in each step via iterative prompting of a large language model. Our new sampler (a) allows for more efficient and accurate sampling from a target distribution and (b) allows generation length to be determined through the sampling procedure rather than fixed in advance, as past work has required. We perform experiments on two controlled generation tasks, showing both downstream performance gains and more accurate target distribution sampling in comparison with single-token proposal techniques.
[ "Forristal, Jarad", "Mireshghallah, Fatemehsadat", "Durrett, Greg", "Berg-Kirkpatrick, Taylor" ]
A Block Metropolis-Hastings Sampler for Controllable Energy-based Text Generation
conll-1.26
2312.04510
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
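To make the sampling recipe described in the Forristal et al. entry above concrete, here is a minimal, self-contained Metropolis-Hastings loop in which every step proposes an entire new sequence (a block proposal) and accepts it against a toy energy function. It is a sketch under assumed ingredients only: the paper scores sequences with an energy-based LM and proposes rewrites by iteratively prompting an LLM, neither of which is reproduced here.

```python
import math
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran"]
SEQ_LEN = 5  # fixed length keeps the uniform block proposal state-independent

def energy(seq):
    """Toy energy standing in for an energy-based LM plus discriminators:
    lower energy (more probable) if 'cat' appears and if 'sat' follows it."""
    e = 0.0
    if "cat" not in seq:
        e += 2.0
    if not any(a == "cat" and b == "sat" for a, b in zip(seq, seq[1:])):
        e += 1.0
    return e

def block_mh(steps=5000, seed=0):
    rng = random.Random(seed)
    current = [rng.choice(VOCAB) for _ in range(SEQ_LEN)]
    for _ in range(steps):
        # Block proposal: rewrite the whole sequence in one step.
        proposal = [rng.choice(VOCAB) for _ in range(SEQ_LEN)]
        # Target is p(x) proportional to exp(-E(x)); with a uniform,
        # state-independent proposal the MH ratio reduces to this:
        accept_prob = min(1.0, math.exp(energy(current) - energy(proposal)))
        if rng.random() < accept_prob:
            current = proposal
    return current

print(" ".join(block_mh()))
```

Because the proposal is uniform and does not depend on the current state, its terms cancel in the Metropolis-Hastings ratio, which is why the acceptance probability simplifies to exp(E(current) - E(proposal)).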
https://aclanthology.org/2023.conll-1.27.bib
https://aclanthology.org/2023.conll-1.27/
@inproceedings{wang-etal-2023-fragile, title = "How Fragile is Relation Extraction under Entity Replacements?", author = "Wang, Yiwei and Hooi, Bryan and Wang, Fei and Cai, Yujun and Liang, Yuxuan and Zhou, Wenxuan and Tang, Jing and Duan, Manjuan and Chen, Muhao", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.27", doi = "10.18653/v1/2023.conll-1.27", pages = "414--423", abstract = "Relation extraction (RE) aims to extract the relations between entity names from the textual context. In principle, textual context determines the ground-truth relation and the RE models should be able to correctly identify the relations reflected by the textual context. However, existing work has found that the RE models memorize the entity name patterns to make RE predictions while ignoring the textual context. This motivates us to raise the question: are RE models robust to the entity replacements? In this work, we operate the random and type-constrained entity replacements over the RE instances in TACRED and evaluate the state-of-the-art RE models under the entity replacements. We observe the 30{\%} - 50{\%} F1 score drops on the state-of-the-art RE models under entity replacements. These results suggest that we need more efforts to develop effective RE models robust to entity replacements. We release the source code at https://github.com/wangywUST/RobustRE.", }
Relation extraction (RE) aims to extract the relations between entity names from the textual context. In principle, textual context determines the ground-truth relation and the RE models should be able to correctly identify the relations reflected by the textual context. However, existing work has found that the RE models memorize the entity name patterns to make RE predictions while ignoring the textual context. This motivates us to raise the question: are RE models robust to entity replacements? In this work, we apply random and type-constrained entity replacements to the RE instances in TACRED and evaluate the state-of-the-art RE models under these entity replacements. We observe F1 score drops of 30% to 50% on the state-of-the-art RE models under entity replacements. These results suggest that we need more effort to develop effective RE models robust to entity replacements. We release the source code at https://github.com/wangywUST/RobustRE.
[ "Wang, Yiwei", "Hooi, Bryan", "Wang, Fei", "Cai, Yujun", "Liang, Yuxuan", "Zhou, Wenxuan", "Tang, Jing", "Duan, Manjuan", "Chen, Muhao" ]
How Fragile is Relation Extraction under Entity Replacements?
conll-1.27
2305.13551
[ "https://github.com/wangywust/robustre" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
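The Wang et al. entry above stress-tests RE models with random and type-constrained entity replacements. The snippet below sketches the type-constrained case under assumed data structures (an instance with marked subject/object mentions and types, plus a small hand-made entity pool); it is not the authors' released RobustRE code.

```python
import random

# Hypothetical pool of replacement entities, keyed by entity type.
ENTITY_POOL = {
    "PERSON": ["Alice Carter", "John Smith", "Maria Lopez"],
    "ORG": ["Acme Corp", "Globex", "Initech"],
}

def type_constrained_replace(instance, rng):
    """Swap subject and object mentions for other entities of the same
    type, leaving the rest of the textual context untouched."""
    text = instance["text"]
    out = dict(instance)
    for role in ("subj", "obj"):
        mention, etype = instance[role], instance[f"{role}_type"]
        candidates = [e for e in ENTITY_POOL[etype] if e != mention]
        replacement = rng.choice(candidates)
        text = text.replace(mention, replacement)
        out[role] = replacement
    out["text"] = text
    return out

example = {
    "text": "John Smith joined Acme Corp in 2019.",
    "subj": "John Smith", "subj_type": "PERSON",
    "obj": "Acme Corp", "obj_type": "ORG",
    "relation": "per:employee_of",
}
print(type_constrained_replace(example, random.Random(0))["text"])
```

A robust RE model should predict the same relation for the original and the replaced instance, since only the entity names, not the context, have changed.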
https://aclanthology.org/2023.conll-1.28.bib
https://aclanthology.org/2023.conll-1.28/
@inproceedings{wada-etal-2023-jaspice, title = "{J}a{SPICE}: Automatic Evaluation Metric Using Predicate-Argument Structures for Image Captioning Models", author = "Wada, Yuiga and Kaneda, Kanta and Sugiura, Komei", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.28", doi = "10.18653/v1/2023.conll-1.28", pages = "424--435", abstract = "Image captioning studies heavily rely on automatic evaluation metrics such as BLEU and METEOR. However, such n-gram-based metrics have been shown to correlate poorly with human evaluation, leading to the proposal of alternative metrics such as SPICE for English; however, no equivalent metrics have been established for other languages. Therefore, in this study, we propose an automatic evaluation metric called JaSPICE, which evaluates Japanese captions based on scene graphs. The proposed method generates a scene graph from dependencies and the predicate-argument structure, and extends the graph using synonyms. We conducted experiments employing 10 image captioning models trained on STAIR Captions and PFN-PIC and constructed the Shichimi dataset, which contains 103,170 human evaluations. The results showed that our metric outperformed the baseline metrics for the correlation coefficient with the human evaluation.", }
Image captioning studies heavily rely on automatic evaluation metrics such as BLEU and METEOR. However, such n-gram-based metrics have been shown to correlate poorly with human evaluation, leading to the proposal of alternative metrics such as SPICE for English; however, no equivalent metrics have been established for other languages. Therefore, in this study, we propose an automatic evaluation metric called JaSPICE, which evaluates Japanese captions based on scene graphs. The proposed method generates a scene graph from dependencies and the predicate-argument structure, and extends the graph using synonyms. We conducted experiments employing 10 image captioning models trained on STAIR Captions and PFN-PIC and constructed the Shichimi dataset, which contains 103,170 human evaluations. The results showed that our metric outperformed the baseline metrics for the correlation coefficient with the human evaluation.
[ "Wada, Yuiga", "Kaneda, Kanta", "Sugiura, Komei" ]
JaSPICE: Automatic Evaluation Metric Using Predicate-Argument Structures for Image Captioning Models
conll-1.28
2311.04192
[ "https://github.com/keio-smilab23/JaSPICE" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.29.bib
https://aclanthology.org/2023.conll-1.29/
@inproceedings{karidi-etal-2023-muler, title = "{M}u{LER}: Detailed and Scalable Reference-based Evaluation", author = "Karidi, Taelin and Choshen, Leshem and Patel, Gal and Abend, Omri", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.29", doi = "10.18653/v1/2023.conll-1.29", pages = "436--455", abstract = "We propose a novel methodology (namely, MuLER) that transforms any reference-based evaluation metric for text generation, such as machine translation (MT) into a fine-grained analysis tool. Given a system and a metric, MuLER quantifies how much the chosen metric penalizes specific error types (e.g., errors in translating names of locations). MuLER thus enables a detailed error analysis which can lead to targeted improvement efforts for specific phenomena. We perform experiments in both synthetic and naturalistic settings to support MuLER{'}s validity and showcase its usability in MT evaluation, and other tasks, such as summarization. Analyzing all submissions to WMT in 2014-2020, we find consistent trends. For example, nouns and verbs are among the most frequent POS tags. However, they are among the hardest to translate. Performance on most POS tags improves with overall system performance, but a few are not thus correlated (their identity changes from language to language). Preliminary experiments with summarization reveal similar trends.", }
We propose a novel methodology (namely, MuLER) that transforms any reference-based evaluation metric for text generation, such as machine translation (MT), into a fine-grained analysis tool. Given a system and a metric, MuLER quantifies how much the chosen metric penalizes specific error types (e.g., errors in translating names of locations). MuLER thus enables a detailed error analysis which can lead to targeted improvement efforts for specific phenomena. We perform experiments in both synthetic and naturalistic settings to support MuLER's validity and showcase its usability in MT evaluation and other tasks, such as summarization. Analyzing all submissions to WMT in 2014-2020, we find consistent trends. For example, nouns and verbs are among the most frequent POS tags. However, they are among the hardest to translate. Performance on most POS tags improves with overall system performance, but a few are not thus correlated (their identity changes from language to language). Preliminary experiments with summarization reveal similar trends.
[ "Karidi, Taelin", "Choshen, Leshem", "Patel, Gal", "Abend, Omri" ]
MuLER: Detailed and Scalable Reference-based Evaluation
conll-1.29
2305.14991
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.30.bib
https://aclanthology.org/2023.conll-1.30/
@inproceedings{he-etal-2023-impact, title = "The Impact of Familiarity on Naming Variation: A Study on Object Naming in {M}andarin {C}hinese", author = "He, Yunke and Liao, Xixian and Liang, Jialing and Boleda, Gemma", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.30", doi = "10.18653/v1/2023.conll-1.30", pages = "456--475", abstract = "Different speakers often produce different names for the same object or entity (e.g., {``}woman{''} vs. {``}tourist{''} for a female tourist). The reasons behind variation in naming are not well understood. We create a Language and Vision dataset for Mandarin Chinese that provides an average of 20 names for 1319 naturalistic images, and investigate how familiarity with a given kind of object relates to the degree of naming variation it triggers across subjects. We propose that familiarity influences naming variation in two competing ways: increasing familiarity can either expand vocabulary, leading to higher variation, or promote convergence on conventional names, thereby reducing variation. We find evidence for both factors being at play. Our study illustrates how computational resources can be used to address research questions in Cognitive Science.", }
Different speakers often produce different names for the same object or entity (e.g., "woman" vs. "tourist" for a female tourist). The reasons behind variation in naming are not well understood. We create a Language and Vision dataset for Mandarin Chinese that provides an average of 20 names for 1319 naturalistic images, and investigate how familiarity with a given kind of object relates to the degree of naming variation it triggers across subjects. We propose that familiarity influences naming variation in two competing ways: increasing familiarity can either expand vocabulary, leading to higher variation, or promote convergence on conventional names, thereby reducing variation. We find evidence for both factors being at play. Our study illustrates how computational resources can be used to address research questions in Cognitive Science.
[ "He, Yunke", "Liao, Xixian", "Liang, Jialing", "Boleda, Gemma" ]
The Impact of Familiarity on Naming Variation: A Study on Object Naming in Mandarin Chinese
conll-1.30
2311.10181
[ "https://github.com/amore-upf/manynames" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
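One natural way to quantify the naming variation studied in the He et al. entry above is the Shannon entropy of the name distribution an image elicits. The sketch below computes that for toy responses; both the measure and the data are illustrative assumptions rather than the exact statistic reported in the paper.

```python
import math
from collections import Counter

def naming_entropy(names):
    """Shannon entropy (in bits) of the distribution of names produced
    for one object: 0 when everyone agrees, higher with more variation."""
    counts = Counter(names)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy responses for two images, about 20 subjects each (mirroring the
# corpus average of 20 names per image).
low_variation = ["dog"] * 18 + ["puppy"] * 2
high_variation = ["woman"] * 8 + ["tourist"] * 6 + ["lady"] * 4 + ["person"] * 2

print(f"low-variation image:  {naming_entropy(low_variation):.2f} bits")
print(f"high-variation image: {naming_entropy(high_variation):.2f} bits")
```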
https://aclanthology.org/2023.conll-1.31.bib
https://aclanthology.org/2023.conll-1.31/
@inproceedings{roll-etal-2023-psst, title = "{PSST}! Prosodic Speech Segmentation with Transformers", author = "Roll, Nathan and Graham, Calbert and Todd, Simon", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.31", doi = "10.18653/v1/2023.conll-1.31", pages = "476--487", abstract = "We develop and probe a model for detecting the boundaries of prosodic chunks in untranscribed conversational English speech. The model is obtained by fine-tuning a Transformer-based speech-to-text (STT) model to integrate the identification of Intonation Unit (IU) boundaries with the STT task. The model shows robust performance, both on held-out data and on out-of-distribution data representing different dialects and transcription protocols. By evaluating the model on degraded speech data, and comparing it with alternatives, we establish that it relies heavily on lexico-syntactic information inferred from audio, and not solely on acoustic information typically understood to cue prosodic structure. We release our model as both a transcription tool and a baseline for further improvements in prosodic segmentation.", }
We develop and probe a model for detecting the boundaries of prosodic chunks in untranscribed conversational English speech. The model is obtained by fine-tuning a Transformer-based speech-to-text (STT) model to integrate the identification of Intonation Unit (IU) boundaries with the STT task. The model shows robust performance, both on held-out data and on out-of-distribution data representing different dialects and transcription protocols. By evaluating the model on degraded speech data, and comparing it with alternatives, we establish that it relies heavily on lexico-syntactic information inferred from audio, and not solely on acoustic information typically understood to cue prosodic structure. We release our model as both a transcription tool and a baseline for further improvements in prosodic segmentation.
[ "Roll, Nathan", "Graham, Calbert", "Todd, Simon" ]
PSST! Prosodic Speech Segmentation with Transformers
conll-1.31
2302.01984
[ "https://github.com/nathan-roll1/psst" ]
https://huggingface.co/papers/2302.01984
1
1
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.conll-1.32.bib
https://aclanthology.org/2023.conll-1.32/
@inproceedings{ghosh-etal-2023-alignment, title = "Alignment via Mutual Information", author = "Ghosh, Shinjini and Kim, Yoon and Fernandez Astudillo, Ramon and Naseem, Tahira and Andreas, Jacob", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.32", doi = "10.18653/v1/2023.conll-1.32", pages = "488--497", abstract = "Many language learning tasks require learners to infer correspondences between data in two modalities. Often, these alignments are many-to-many and context-sensitive. For example, translating into morphologically rich languages requires learning not just how words, but morphemes, should be translated; words and morphemes may have different meanings (or groundings) depending on the context in which they are used. We describe an information-theoretic approach to context-sensitive, many-to-many alignment. Our approach first trains a masked sequence model to place distributions over missing spans in (source, target) sequences. Next, it uses this model to compute pointwise mutual information between source and target spans conditional on context. Finally, it aligns spans with high mutual information. We apply this approach to two learning problems: character-based word translation (using alignments for joint morphological segmentation and lexicon learning) and visually grounded reference resolution (using alignments to jointly localize referents and learn word meanings). In both cases, our proposed approach outperforms both structured and neural baselines, showing that conditional mutual information offers an effective framework for formalizing alignment problems in general domains.", }
Many language learning tasks require learners to infer correspondences between data in two modalities. Often, these alignments are many-to-many and context-sensitive. For example, translating into morphologically rich languages requires learning not just how words, but morphemes, should be translated; words and morphemes may have different meanings (or groundings) depending on the context in which they are used. We describe an information-theoretic approach to context-sensitive, many-to-many alignment. Our approach first trains a masked sequence model to place distributions over missing spans in (source, target) sequences. Next, it uses this model to compute pointwise mutual information between source and target spans conditional on context. Finally, it aligns spans with high mutual information. We apply this approach to two learning problems: character-based word translation (using alignments for joint morphological segmentation and lexicon learning) and visually grounded reference resolution (using alignments to jointly localize referents and learn word meanings). In both cases, our proposed approach outperforms both structured and neural baselines, showing that conditional mutual information offers an effective framework for formalizing alignment problems in general domains.
[ "Ghosh, Shinjini", "Kim, Yoon", "Fern", "ez Astudillo, Ramon", "Naseem, Tahira", "Andreas, Jacob" ]
Alignment via Mutual Information
conll-1.32
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
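As a rough illustration of the mutual-information criterion in the Ghosh et al. entry above, the sketch below estimates pointwise mutual information between source and target spans from co-occurrence counts over toy parallel data and aligns each source span with its highest-PMI target. The paper instead conditions on context via a masked sequence model; this count-based, context-free version is an assumed simplification.

```python
import math
from collections import Counter

# Toy "parallel" data: each pair lists the spans present on each side.
pairs = [
    (["un", "happy"], ["nicht", "gluecklich"]),
    (["happy"], ["gluecklich"]),
    (["un", "kind"], ["nicht", "freundlich"]),
    (["kind"], ["freundlich"]),
]

src_counts, tgt_counts, joint_counts = Counter(), Counter(), Counter()
for src, tgt in pairs:
    src_counts.update(set(src))
    tgt_counts.update(set(tgt))
    joint_counts.update((s, t) for s in set(src) for t in set(tgt))

n = len(pairs)

def pmi(s, t):
    """Pointwise mutual information: log2 p(s, t) / (p(s) p(t))."""
    p_joint = joint_counts[(s, t)] / n
    if p_joint == 0:
        return float("-inf")
    return math.log2(p_joint / ((src_counts[s] / n) * (tgt_counts[t] / n)))

# Align each source span to the target span with the highest PMI.
for s in src_counts:
    best = max(tgt_counts, key=lambda t: pmi(s, t))
    print(f"{s!r} -> {best!r}  (PMI = {pmi(s, best):.2f})")
```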
https://aclanthology.org/2023.conll-1.33.bib
https://aclanthology.org/2023.conll-1.33/
@inproceedings{dehouck-2023-challenging, title = "Challenging the {``}One Single Vector per Token{''} Assumption", author = "Dehouck, Mathieu", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.33", doi = "10.18653/v1/2023.conll-1.33", pages = "498--507", abstract = "In this paper we question the almost universal assumption that in neural networks each token should be represented by a single vector. In fact, it is so natural to use one vector per word that most people do not even consider it as an assumption of their various models. Via a series of experiments on dependency parsing, in which we let each token in a sentence be represented by a sequence of vectors, we show that the {``}one single vector per token{''} assumption might be too strong for recurrent neural networks. Indeed, biaffine parsers seem to work better when their encoder accesses its input{'}s tokens{'} representations in several time steps rather than all at once. This seems to indicate that having only one occasion to look at a token through its vector is too strong a constraint for recurrent neural networks and calls for further studies on the way tokens are fed to neural networks.", }
In this paper we question the almost universal assumption that in neural networks each token should be represented by a single vector. In fact, it is so natural to use one vector per word that most people do not even consider it as an assumption of their various models. Via a series of experiments on dependency parsing, in which we let each token in a sentence be represented by a sequence of vectors, we show that the "one single vector per token" assumption might be too strong for recurrent neural networks. Indeed, biaffine parsers seem to work better when their encoder accesses its input's tokens' representations in several time steps rather than all at once. This seems to indicate that having only one occasion to look at a token through its vector is too strong a constraint for recurrent neural networks and calls for further studies on the way tokens are fed to neural networks.
[ "Dehouck, Mathieu" ]
Challenging the “One Single Vector per Token” Assumption
conll-1.33
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.34.bib
https://aclanthology.org/2023.conll-1.34/
@inproceedings{abudouwaili-etal-2023-strategies, title = "Strategies to Improve Low-Resource Agglutinative Languages Morphological Inflection", author = "Abudouwaili, Gulinigeer and Ablez, Wayit and Abiderexiti, Kahaerjiang and Wumaier, Aishan and Yi, Nian", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.34", doi = "10.18653/v1/2023.conll-1.34", pages = "508--520", abstract = "Morphological inflection is a crucial task in the field of morphology and is typically considered a sequence transduction task. In recent years, it has received substantial attention from researchers and made significant progress. Models have achieved impressive performance levels for both high- and low-resource languages. However, when the distribution of instances in the training dataset changes, or novel lemma or feature labels are predicted, the model{'}s accuracy declines. In agglutinative languages, morphological inflection involves phonological phenomena while generating new words, which can alter the syllable patterns at the boundary between the lemma and the suffixes. This paper proposes four strategies for low-resource agglutinative languages to enhance the model{'}s generalization ability. Firstly, a convolution module extracts syllable-like units from lemmas, allowing the model to learn syllable features. Secondly, the lemma and feature labels are represented separately in the input, and the position encoding of the feature labels is marked so that the model learns the order between suffixes and labels. Thirdly, the model recognizes the common substrings in lemmas through two special characters and copies them into words. Finally, combined with syllable features, we improve the data augmentation method. A series of experiments show that the proposed model in this paper is superior to other baseline models.", }
Morphological inflection is a crucial task in the field of morphology and is typically considered a sequence transduction task. In recent years, it has received substantial attention from researchers and made significant progress. Models have achieved impressive performance levels for both high- and low-resource languages. However, when the distribution of instances in the training dataset changes, or novel lemma or feature labels are predicted, the model's accuracy declines. In agglutinative languages, morphological inflection involves phonological phenomena while generating new words, which can alter the syllable patterns at the boundary between the lemma and the suffixes. This paper proposes four strategies for low-resource agglutinative languages to enhance the model's generalization ability. Firstly, a convolution module extracts syllable-like units from lemmas, allowing the model to learn syllable features. Secondly, the lemma and feature labels are represented separately in the input, and the position encoding of the feature labels is marked so that the model learns the order between suffixes and labels. Thirdly, the model recognizes the common substrings in lemmas through two special characters and copies them into words. Finally, combined with syllable features, we improve the data augmentation method. A series of experiments show that the proposed model in this paper is superior to other baseline models.
[ "Abudouwaili, Gulinigeer", "Ablez, Wayit", "Abiderexiti, Kahaerjiang", "Wumaier, Aishan", "Yi, Nian" ]
Strategies to Improve Low-Resource Agglutinative Languages Morphological Inflection
conll-1.34
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.35.bib
https://aclanthology.org/2023.conll-1.35/
@inproceedings{fields-kennington-2023-exploring, title = "Exploring Transformers as Compact, Data-efficient Language Models", author = "Fields, Clayton and Kennington, Casey", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.35", doi = "10.18653/v1/2023.conll-1.35", pages = "521--531", abstract = "Large scale transformer models, trained with massive datasets have become the standard in natural language processing. The huge size of most transformers make research with these models impossible for those with limited computational resources. Additionally, the enormous pretraining data requirements of transformers exclude pretraining them with many smaller datasets that might provide enlightening results. In this study, we show that transformers can be significantly reduced in size, with as few as 5.7 million parameters, and still retain most of their downstream capability. Further we show that transformer models can retain comparable results when trained on human-scale datasets, as few as 5 million words of pretraining data. Overall, the results of our study suggest transformers function well as compact, data efficient language models and that complex model compression methods, such as model distillation are not necessarily superior to pretraining reduced size transformer models from scratch.", }
Large-scale transformer models, trained with massive datasets, have become the standard in natural language processing. The huge size of most transformers makes research with these models impossible for those with limited computational resources. Additionally, the enormous pretraining data requirements of transformers exclude pretraining them with many smaller datasets that might provide enlightening results. In this study, we show that transformers can be significantly reduced in size, with as few as 5.7 million parameters, and still retain most of their downstream capability. Further, we show that transformer models can retain comparable results when trained on human-scale datasets, with as few as 5 million words of pretraining data. Overall, the results of our study suggest that transformers function well as compact, data-efficient language models and that complex model compression methods, such as model distillation, are not necessarily superior to pretraining reduced-size transformer models from scratch.
[ "Fields, Clayton", "Kennington, Casey" ]
Exploring Transformers as Compact, Data-efficient Language Models
conll-1.35
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
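A quick way to get a feel for the model sizes discussed in the Fields and Kennington entry above is to instantiate a reduced transformer configuration and count its parameters. The PyTorch sketch below does exactly that; every size in it is an assumed placeholder, not the configuration used in the paper.

```python
import torch.nn as nn

# Assumed "tiny" configuration: nothing here is the paper's exact setup.
vocab_size, d_model, n_heads, n_layers, d_ff, max_len = 8192, 128, 4, 4, 512, 128

layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                   dim_feedforward=d_ff, batch_first=True)
model = nn.ModuleDict({
    "tok_emb": nn.Embedding(vocab_size, d_model),
    "pos_emb": nn.Embedding(max_len, d_model),
    "encoder": nn.TransformerEncoder(layer, num_layers=n_layers),
    "lm_head": nn.Linear(d_model, vocab_size),
})

# Total trainable parameters for this reduced configuration.
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params / 1e6:.1f}M")
```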
https://aclanthology.org/2023.conll-1.36.bib
https://aclanthology.org/2023.conll-1.36/
@inproceedings{ishii-miyao-2023-tree, title = "Tree-shape Uncertainty for Analyzing the Inherent Branching Bias of Unsupervised Parsing Models", author = "Ishii, Taiga and Miyao, Yusuke", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.36", doi = "10.18653/v1/2023.conll-1.36", pages = "532--547", abstract = "This paper presents the formalization of tree-shape uncertainty that enables us to analyze the inherent branching bias of unsupervised parsing models using raw texts alone. Previous work analyzed the branching bias of unsupervised parsing models by comparing the outputs of trained parsers with gold syntactic trees. However, such approaches do not consider the fact that texts can be generated by different grammars with different syntactic trees, possibly failing to clearly separate the inherent bias of the model and the bias in train data learned by the model. To this end, we formulate tree-shape uncertainty and derive sufficient conditions that can be used for creating texts that are expected to contain no biased information on branching. In the experiment, we show that training parsers on such unbiased texts can effectively detect the branching bias of existing unsupervised parsing models. Such bias may depend only on the algorithm, or it may depend on seemingly unrelated dataset statistics such as sequence length and vocabulary size.", }
This paper presents the formalization of tree-shape uncertainty that enables us to analyze the inherent branching bias of unsupervised parsing models using raw texts alone. Previous work analyzed the branching bias of unsupervised parsing models by comparing the outputs of trained parsers with gold syntactic trees. However, such approaches do not consider the fact that texts can be generated by different grammars with different syntactic trees, possibly failing to clearly separate the inherent bias of the model and the bias in train data learned by the model. To this end, we formulate tree-shape uncertainty and derive sufficient conditions that can be used for creating texts that are expected to contain no biased information on branching. In the experiment, we show that training parsers on such unbiased texts can effectively detect the branching bias of existing unsupervised parsing models. Such bias may depend only on the algorithm, or it may depend on seemingly unrelated dataset statistics such as sequence length and vocabulary size.
[ "Ishii, Taiga", "Miyao, Yusuke" ]
Tree-shape Uncertainty for Analyzing the Inherent Branching Bias of Unsupervised Parsing Models
conll-1.36
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.37.bib
https://aclanthology.org/2023.conll-1.37/
@inproceedings{pal-etal-2023-future, title = "Future Lens: Anticipating Subsequent Tokens from a Single Hidden State", author = "Pal, Koyena and Sun, Jiuding and Yuan, Andrew and Wallace, Byron and Bau, David", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.37", doi = "10.18653/v1/2023.conll-1.37", pages = "548--560", abstract = "We conjecture that hidden state vectors corresponding to individual input tokens encode information sufficient to accurately predict several tokens ahead. More concretely, in this paper we ask: Given a hidden (internal) representation of a single token at position t in an input, can we reliably anticipate the tokens that will appear at positions {\mbox{$\geq$}} t + 2? To test this, we measure linear approximation and causal intervention methods in GPT-J-6B to evaluate the degree to which individual hidden states in the network contain signal rich enough to predict future hidden states and, ultimately, token outputs. We find that, at some layers, we can approximate a model{'}s output with more than 48{\%} accuracy with respect to its prediction of subsequent tokens through a single hidden state. Finally we present a {``}Future Lens{''} visualization that uses these methods to create a new view of transformer states.", }
We conjecture that hidden state vectors corresponding to individual input tokens encode information sufficient to accurately predict several tokens ahead. More concretely, in this paper we ask: Given a hidden (internal) representation of a single token at position t in an input, can we reliably anticipate the tokens that will appear at positions ≥ t + 2? To test this, we measure linear approximation and causal intervention methods in GPT-J-6B to evaluate the degree to which individual hidden states in the network contain signal rich enough to predict future hidden states and, ultimately, token outputs. We find that, at some layers, we can approximate a model's output with more than 48% accuracy with respect to its prediction of subsequent tokens through a single hidden state. Finally we present a "Future Lens" visualization that uses these methods to create a new view of transformer states.
[ "Pal, Koyena", "Sun, Jiuding", "Yuan, Andrew", "Wallace, Byron", "Bau, David" ]
Future Lens: Anticipating Subsequent Tokens from a Single Hidden State
conll-1.37
2311.04897
[ "" ]
https://huggingface.co/papers/2311.04897
0
1
0
5
[]
[]
[]
1
Poster
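A bare-bones version of the linear-approximation idea in the Pal et al. entry above: fit a linear map from the hidden state at position t to the state two positions ahead, then check how well it generalises. Synthetic Gaussian data stands in for real GPT-J activations, so this is an assumed illustration of the probing recipe, not the Future Lens implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 64, 2000, 500

# Synthetic stand-ins for hidden states h_t and the states two steps ahead.
# A fixed "ground-truth" linear relation plus noise makes the task learnable.
W_true = rng.normal(size=(d, d)) / np.sqrt(d)
H_t = rng.normal(size=(n_train + n_test, d))
H_t2 = H_t @ W_true + 0.1 * rng.normal(size=(n_train + n_test, d))

train, test = slice(0, n_train), slice(n_train, None)

# Least-squares fit of a linear probe mapping h_t -> h_{t+2}.
W_hat, *_ = np.linalg.lstsq(H_t[train], H_t2[train], rcond=None)

pred = H_t[test] @ W_hat
rel_err = np.linalg.norm(pred - H_t2[test]) / np.linalg.norm(H_t2[test])
print(f"relative approximation error on held-out states: {rel_err:.3f}")
```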
https://aclanthology.org/2023.conll-1.38.bib
https://aclanthology.org/2023.conll-1.38/
@inproceedings{zhao-etal-2023-cross, title = "Cross-Document Event Coreference Resolution: Instruct Humans or Instruct {GPT}?", author = "Zhao, Jin and Xue, Nianwen and Min, Bonan", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.38", doi = "10.18653/v1/2023.conll-1.38", pages = "561--574", abstract = "This paper explores utilizing Large Language Models (LLMs) to perform Cross-Document Event Coreference Resolution (CDEC) annotations and evaluates how they fare against human annotators with different levels of training. Specifically, we formulate CDEC as a multi-category classification problem on pairs of events that are represented as decontextualized sentences, and compare the predictions of GPT-4 with the judgment of fully trained annotators and crowdworkers on the same data set. Our study indicates that GPT-4 with zero-shot learning outperformed crowd-workers by a large margin and exhibits a level of performance comparable to trained annotators. Upon closer analysis, GPT-4 also exhibits tendencies of being overly confident, and force annotation decisions even when such decisions are not warranted due to insufficient information. Our results have implications on how to perform complicated annotations such as CDEC in the age of LLMs, and show that the best way to acquire such annotations might be to combine the strengths of LLMs and trained human annotators in the annotation process, and using untrained or undertrained crowdworkers is no longer a viable option to acquire high-quality data to advance the state of the art for such problems.", }
This paper explores utilizing Large Language Models (LLMs) to perform Cross-Document Event Coreference Resolution (CDEC) annotations and evaluates how they fare against human annotators with different levels of training. Specifically, we formulate CDEC as a multi-category classification problem on pairs of events that are represented as decontextualized sentences, and compare the predictions of GPT-4 with the judgment of fully trained annotators and crowdworkers on the same data set. Our study indicates that GPT-4 with zero-shot learning outperformed crowdworkers by a large margin and exhibits a level of performance comparable to trained annotators. Upon closer analysis, GPT-4 also exhibits a tendency to be overly confident, forcing annotation decisions even when such decisions are not warranted due to insufficient information. Our results have implications for how to perform complicated annotations such as CDEC in the age of LLMs, and show that the best way to acquire such annotations might be to combine the strengths of LLMs and trained human annotators in the annotation process, and that using untrained or undertrained crowdworkers is no longer a viable option to acquire high-quality data to advance the state of the art for such problems.
[ "Zhao, Jin", "Xue, Nianwen", "Min, Bonan" ]
Cross-Document Event Coreference Resolution: Instruct Humans or Instruct GPT?
conll-1.38
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-1.39.bib
https://aclanthology.org/2023.conll-1.39/
@inproceedings{ray-choudhury-kalra-2023-implications, title = "Implications of Annotation Artifacts in Edge Probing Test Datasets", author = "Ray Choudhury, Sagnik and Kalra, Jushaan", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.39", doi = "10.18653/v1/2023.conll-1.39", pages = "575--586", abstract = "Edge probing tests are classification tasks that test for grammatical knowledge encoded in token representations coming from contextual encoders such as large language models (LLMs). Many LLM encoders have shown high performance in EP tests, leading to conjectures about their ability to encode linguistic knowledge. However, a large body of research claims that the tests necessarily do not measure the LLM{'}s capacity to encode knowledge, but rather reflect the classifiers{'} ability to learn the problem. Much of this criticism stems from the fact that often the classifiers have very similar accuracy when an LLM vs a random encoder is used. Consequently, several modifications to the tests have been suggested, including information theoretic probes. We show that commonly used edge probing test datasets have various biases including memorization. When these biases are removed, the LLM encoders do show a significant difference from the random ones, even with the simple non-information theoretic probes.", }
Edge probing (EP) tests are classification tasks that test for grammatical knowledge encoded in token representations coming from contextual encoders such as large language models (LLMs). Many LLM encoders have shown high performance in EP tests, leading to conjectures about their ability to encode linguistic knowledge. However, a large body of research claims that the tests do not necessarily measure the LLM's capacity to encode knowledge, but rather reflect the classifiers' ability to learn the problem. Much of this criticism stems from the fact that often the classifiers have very similar accuracy when an LLM vs. a random encoder is used. Consequently, several modifications to the tests have been suggested, including information-theoretic probes. We show that commonly used edge probing test datasets have various biases, including memorization. When these biases are removed, the LLM encoders do show a significant difference from the random ones, even with the simple non-information-theoretic probes.
[ "Ray Choudhury, Sagnik", "Kalra, Jushaan" ]
Implications of Annotation Artifacts in Edge Probing Test Datasets
conll-1.39
2310.13856
[ "https://github.com/josh1108/eptest" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
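The Ray Choudhury and Kalra entry above hinges on comparing probes trained on LLM features with probes trained on random-encoder features. The sketch below reproduces that comparison in miniature with a logistic-regression probe on synthetic features; the feature generators, dimensions, and split are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n = 128, 4000
labels = rng.integers(0, 2, size=n)

# "Informative" features carry a weak label signal, standing in for an
# LLM encoder.  "Random-encoder" features are pure noise, with no signal.
signal = rng.normal(size=(2, d))
informative = rng.normal(size=(n, d)) + 1.5 * signal[labels]
random_enc = rng.normal(size=(n, d))

def probe_accuracy(features):
    """Train a linear probe on half the data, report held-out accuracy."""
    half = n // 2
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features[:half], labels[:half])
    return clf.score(features[half:], labels[half:])

print(f"probe on informative features: {probe_accuracy(informative):.2f}")
print(f"probe on random features:      {probe_accuracy(random_enc):.2f}")
```

If the probe on random features scores near chance while the probe on informative features does well, the gap is attributable to the encoder rather than to the probe or to dataset artifacts such as memorization.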
https://aclanthology.org/2023.conll-1.40.bib
https://aclanthology.org/2023.conll-1.40/
@inproceedings{ghasemi-madani-minervini-2023-refer, title = "{REFER}: An End-to-end Rationale Extraction Framework for Explanation Regularization", author = "Ghasemi Madani, Mohammad Reza and Minervini, Pasquale", editor = "Jiang, Jing and Reitter, David and Deng, Shumin", booktitle = "Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-1.40", doi = "10.18653/v1/2023.conll-1.40", pages = "587--602", abstract = "Human-annotated textual explanations are becoming increasingly important in Explainable Natural Language Processing. Rationale extraction aims to provide faithful (i.e. reflective of the behavior of the model) and plausible (i.e. convincing to humans) explanations by highlighting the inputs that had the largest impact on the prediction without compromising the performance of the task model. In recent works, the focus of training rationale extractors was primarily on optimizing for plausibility using human highlights, while the task model was trained on jointly optimizing for task predictive accuracy and faithfulness. We propose REFER, a framework that employs a differentiable rationale extractor that allows to back-propagate through the rationale extraction process. We analyze the impact of using human highlights during training by jointly training the task model and the rationale extractor. In our experiments, REFER yields significantly better results in terms of faithfulness, plausibility, and downstream task accuracy on both in-distribution and out-of-distribution data. On both e-SNLI and CoS-E, our best setting produces better results in terms of composite normalized relative gain than the previous baselines by 11{\%} and 3{\%}, respectively.", }
Human-annotated textual explanations are becoming increasingly important in Explainable Natural Language Processing. Rationale extraction aims to provide faithful (i.e., reflective of the behavior of the model) and plausible (i.e., convincing to humans) explanations by highlighting the inputs that had the largest impact on the prediction without compromising the performance of the task model. In recent works, the focus of training rationale extractors was primarily on optimizing for plausibility using human highlights, while the task model was trained to jointly optimize for task predictive accuracy and faithfulness. We propose REFER, a framework that employs a differentiable rationale extractor that allows back-propagation through the rationale extraction process. We analyze the impact of using human highlights during training by jointly training the task model and the rationale extractor. In our experiments, REFER yields significantly better results in terms of faithfulness, plausibility, and downstream task accuracy on both in-distribution and out-of-distribution data. On both e-SNLI and CoS-E, our best setting produces better results in terms of composite normalized relative gain than the previous baselines by 11% and 3%, respectively.
[ "Ghasemi Madani, Mohammad Reza", "Minervini, Pasquale" ]
REFER: An End-to-end Rationale Extraction Framework for Explanation Regularization
conll-1.40
2310.14418
[ "" ]
https://huggingface.co/papers/2310.14418
2
1
0
2
[]
[]
[]
1
Poster
https://aclanthology.org/2023.conll-babylm.1.bib
https://aclanthology.org/2023.conll-babylm.1/
@inproceedings{warstadt-etal-2023-findings, title = "Findings of the {B}aby{LM} Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora", author = "Warstadt, Alex and Mueller, Aaron and Choshen, Leshem and Wilcox, Ethan and Zhuang, Chengxu and Ciro, Juan and Mosquera, Rafael and Paranjabe, Bhargavi and Williams, Adina and Linzen, Tal and Cotterell, Ryan", editor = "Warstadt, Alex and Mueller, Aaron and Choshen, Leshem and Wilcox, Ethan and Zhuang, Chengxu and Ciro, Juan and Mosquera, Rafael and Paranjabe, Bhargavi and Williams, Adina and Linzen, Tal and Cotterell, Ryan", booktitle = "Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-babylm.1", doi = "10.18653/v1/2023.conll-babylm.1", pages = "1--34", }
No abstract found
[ "Warstadt, Alex", "Mueller, Aaron", "Choshen, Leshem", "Wilcox, Ethan", "Zhuang, Chengxu", "Ciro, Juan", "Mosquera, Rafael", "Paranjabe, Bhargavi", "Williams, Adina", "Linzen, Tal", "Cotterell, Ryan" ]
Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
conll-babylm.1
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-babylm.2.bib
https://aclanthology.org/2023.conll-babylm.2/
@inproceedings{bunzeck-zarriess-2023-gpt, title = "{GPT}-wee: How Small Can a Small Language Model Really Get?", author = "Bunzeck, Bastian and Zarrie{\ss}, Sina", editor = "Warstadt, Alex and Mueller, Aaron and Choshen, Leshem and Wilcox, Ethan and Zhuang, Chengxu and Ciro, Juan and Mosquera, Rafael and Paranjabe, Bhargavi and Williams, Adina and Linzen, Tal and Cotterell, Ryan", booktitle = "Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-babylm.2", doi = "10.18653/v1/2023.conll-babylm.2", pages = "35--46", }
No abstract found
[ "Bunzeck, Bastian", "Zarrie{\\ss}, Sina" ]
GPT-wee: How Small Can a Small Language Model Really Get?
conll-babylm.2
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-babylm.3.bib
https://aclanthology.org/2023.conll-babylm.3/
@inproceedings{fields-etal-2023-tiny, title = "Tiny Language Models Enriched with Multimodal Knowledge from Multiplex Networks", author = "Fields, Clayton and Natouf, Osama and McMains, Andrew and Henry, Catherine and Kennington, Casey", editor = "Warstadt, Alex and Mueller, Aaron and Choshen, Leshem and Wilcox, Ethan and Zhuang, Chengxu and Ciro, Juan and Mosquera, Rafael and Paranjabe, Bhargavi and Williams, Adina and Linzen, Tal and Cotterell, Ryan", booktitle = "Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-babylm.3", doi = "10.18653/v1/2023.conll-babylm.3", pages = "47--57", }
No abstract found
[ "Fields, Clayton", "Natouf, Osama", "McMains, Andrew", "Henry, Catherine", "Kennington, Casey" ]
Tiny Language Models Enriched with Multimodal Knowledge from Multiplex Networks
conll-babylm.3
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-babylm.4.bib
https://aclanthology.org/2023.conll-babylm.4/
@inproceedings{proskurina-etal-2023-mini, title = "Mini Minds: Exploring Bebeshka and Zlata Baby Models", author = "Proskurina, Irina and Metzler, Guillaume and Velcin, Julien", editor = "Warstadt, Alex and Mueller, Aaron and Choshen, Leshem and Wilcox, Ethan and Zhuang, Chengxu and Ciro, Juan and Mosquera, Rafael and Paranjabe, Bhargavi and Williams, Adina and Linzen, Tal and Cotterell, Ryan", booktitle = "Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-babylm.4", doi = "10.18653/v1/2023.conll-babylm.4", pages = "58--68", }
No abstract found
[ "Proskurina, Irina", "Metzler, Guillaume", "Velcin, Julien" ]
Mini Minds: Exploring Bebeshka and Zlata Baby Models
conll-babylm.4
2311.03216
[ "https://github.com/upunaprosk/small-language-models" ]
https://huggingface.co/papers/2311.03216
1
0
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.conll-babylm.5.bib
https://aclanthology.org/2023.conll-babylm.5/
@inproceedings{chen-portelance-2023-grammar, title = "Grammar induction pretraining for language modeling in low resource contexts", author = "Chen, Xuanda and Portelance, Eva", editor = "Warstadt, Alex and Mueller, Aaron and Choshen, Leshem and Wilcox, Ethan and Zhuang, Chengxu and Ciro, Juan and Mosquera, Rafael and Paranjabe, Bhargavi and Williams, Adina and Linzen, Tal and Cotterell, Ryan", booktitle = "Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-babylm.5", doi = "10.18653/v1/2023.conll-babylm.5", pages = "69--73", }
No abstract found
[ "Chen, Xu", "a", "Portelance, Eva" ]
Grammar induction pretraining for language modeling in low resource contexts
conll-babylm.5
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-babylm.6.bib
https://aclanthology.org/2023.conll-babylm.6/
@inproceedings{jumelet-etal-2023-chapgtp, title = "{C}hap{GTP}, {ILLC}{'}s Attempt at Raising a {B}aby{LM}: Improving Data Efficiency by Automatic Task Formation", author = "Jumelet, Jaap and Hanna, Michael and de Heer Kloots, Marianne and Langedijk, Anna and Pouw, Charlotte and van der Wal, Oskar", editor = "Warstadt, Alex and Mueller, Aaron and Choshen, Leshem and Wilcox, Ethan and Zhuang, Chengxu and Ciro, Juan and Mosquera, Rafael and Paranjabe, Bhargavi and Williams, Adina and Linzen, Tal and Cotterell, Ryan", booktitle = "Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-babylm.6", doi = "10.18653/v1/2023.conll-babylm.6", pages = "74--85", }
No abstract found
[ "Jumelet, Jaap", "Hanna, Michael", "de Heer Kloots, Marianne", "Langedijk, Anna", "Pouw, Charlotte", "van der Wal, Oskar" ]
ChapGTP, ILLC's Attempt at Raising a BabyLM: Improving Data Efficiency by Automatic Task Formation
conll-babylm.6
2310.11282
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.conll-babylm.7.bib
https://aclanthology.org/2023.conll-babylm.7/
@inproceedings{yang-etal-2023-penn, title = "{P}enn {\&} {BGU} {B}aby{BERT}a+ for Strict-Small {B}aby{LM} Challenge", author = "Yang, Yahan and Sulem, Elior and Lee, Insup and Roth, Dan", editor = "Warstadt, Alex and Mueller, Aaron and Choshen, Leshem and Wilcox, Ethan and Zhuang, Chengxu and Ciro, Juan and Mosquera, Rafael and Paranjabe, Bhargavi and Williams, Adina and Linzen, Tal and Cotterell, Ryan", booktitle = "Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.conll-babylm.7", doi = "10.18653/v1/2023.conll-babylm.7", pages = "86--88", }
No abstract found
[ "Yang, Yahan", "Sulem, Elior", "Lee, Insup", "Roth, Dan" ]
Penn & BGU BabyBERTa+ for Strict-Small BabyLM Challenge
conll-babylm.7
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster