Datasets:

Column schema (name, dtype, and min/max of lengths or values, as reported by the dataset viewer):

  bibtex_url                   stringlengths     41 to 52
  proceedings                  stringlengths     38 to 49
  bibtext                      stringlengths     788 to 3.49k
  abstract                     stringlengths     0 to 2.12k
  authors                      sequencelengths   1 to 58
  title                        stringlengths     16 to 181
  id                           stringlengths     7 to 18
  type                         stringclasses     2 values
  arxiv_id                     stringlengths     0 to 10
  GitHub                       sequencelengths   1 to 1
  paper_page                   stringclasses     170 values
  n_linked_authors             int64             -1 to 9
  upvotes                      int64             -1 to 56
  num_comments                 int64             -1 to 9
  n_authors                    int64             -1 to 57
  paper_page_exists_pre_conf   int64             0 to 1
  Models                       sequencelengths   0 to 99
  Datasets                     sequencelengths   0 to 5
  Spaces                       sequencelengths   0 to 57
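The records below follow this schema. A minimal sketch of loading and filtering such a dataset with the Hugging Face `datasets` library; the repository id is a hypothetical placeholder (substitute the real path of this dataset), and the column names are the ones listed above:

```python
from datasets import load_dataset

# Hypothetical repository id; substitute the actual path of this dataset.
ds = load_dataset("your-username/naacl-2024-papers", split="train")

# Keep rows whose paper has a Hugging Face paper page and at least one upvote.
# Column names match the schema above; empty strings mark missing values.
with_pages = ds.filter(lambda row: row["paper_page"] != "" and row["upvotes"] > 0)

for row in with_pages:
    print(row["title"], row["arxiv_id"], row["upvotes"])
```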
https://aclanthology.org/2024.clinicalnlp-1.57.bib
https://aclanthology.org/2024.clinicalnlp-1.57/
@inproceedings{ben-abacha-etal-2024-overview, title = "Overview of the {MEDIQA}-{CORR} 2024 Shared Task on Medical Error Detection and Correction", author = "Ben Abacha, Asma and Yim, Wen-wai and Fu, Yujuan and Sun, Zhaoyi and Xia, Fei and Yetisgen, Meliha", editor = "Naumann, Tristan and Ben Abacha, Asma and Bethard, Steven and Roberts, Kirk and Bitterman, Danielle", booktitle = "Proceedings of the 6th Clinical Natural Language Processing Workshop", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.clinicalnlp-1.57", doi = "10.18653/v1/2024.clinicalnlp-1.57", pages = "596--603", abstract = "Automatic detection and correction of medical errors enables a more rigorous validation of medical documentation as well as clinical notes generated by large language models. Such solutions can ensure the accuracy and medical coherence of clinical texts and enhance patient care and health outcomes. The MEDIQA-CORR 2024 shared task focused on detecting and correcting different types of medical errors in clinical texts. Seventeen teams participated in the shared task and experimented with a broad range of approaches and models. In this paper, we describe the MEDIQA-CORR task, datasets, and the participants{'} results and methods.", }
Automatic detection and correction of medical errors enables a more rigorous validation of medical documentation as well as clinical notes generated by large language models. Such solutions can ensure the accuracy and medical coherence of clinical texts and enhance patient care and health outcomes. The MEDIQA-CORR 2024 shared task focused on detecting and correcting different types of medical errors in clinical texts. Seventeen teams participated in the shared task and experimented with a broad range of approaches and models. In this paper, we describe the MEDIQA-CORR task, datasets, and the participants' results and methods.
[ "Ben Abacha, Asma", "Yim, Wen-wai", "Fu, Yujuan", "Sun, Zhaoyi", "Xia, Fei", "Yetisgen, Meliha" ]
Overview of the MEDIQA-CORR 2024 Shared Task on Medical Error Detection and Correction
clinicalnlp-1.57
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.clinicalnlp-1.58.bib
https://aclanthology.org/2024.clinicalnlp-1.58/
@inproceedings{zhao-rios-2024-utsa, title = "{UTSA}-{NLP} at {C}hemo{T}imelines 2024: Evaluating Instruction-Tuned Language Models for Temporal Relation Extraction", author = "Zhao, Xingmeng and Rios, Anthony", editor = "Naumann, Tristan and Ben Abacha, Asma and Bethard, Steven and Roberts, Kirk and Bitterman, Danielle", booktitle = "Proceedings of the 6th Clinical Natural Language Processing Workshop", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.clinicalnlp-1.58", doi = "10.18653/v1/2024.clinicalnlp-1.58", pages = "604--615", abstract = "This paper presents our approach for the 2024 ChemoTimelines shared task. Specifically, we explored using Large Language Models (LLMs) for temporal relation extraction. We evaluate multiple model variations based on how the training data is used. For instance, we transform the task into a question-answering problem and use QA pairs to extract chemo-related events and their temporal relations. Next, we add all the documents to each question-answer pair as examples in our training dataset. Finally, we explore adding unlabeled data for continued pretraining. Each addition is done iteratively. Our results show that adding the document helps, but unlabeled data does not yield performance improvements, possibly because we used only 1{\%} of the available data. Moreover, we find that instruction-tuned models still substantially underperform more traditional systems (e.g., EntityBERT).", }
This paper presents our approach for the 2024 ChemoTimelines shared task. Specifically, we explored using Large Language Models (LLMs) for temporal relation extraction. We evaluate multiple model variations based on how the training data is used. For instance, we transform the task into a question-answering problem and use QA pairs to extract chemo-related events and their temporal relations. Next, we add all the documents to each question-answer pair as examples in our training dataset. Finally, we explore adding unlabeled data for continued pretraining. Each addition is done iteratively. Our results show that adding the document helps, but unlabeled data does not yield performance improvements, possibly because we used only 1% of the available data. Moreover, we find that instruction-tuned models still substantially underperform more traditional systems (e.g., EntityBERT).
[ "Zhao, Xingmeng", "Rios, Anthony" ]
UTSA-NLP at ChemoTimelines 2024: Evaluating Instruction-Tuned Language Models for Temporal Relation Extraction
clinicalnlp-1.58
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.clinicalnlp-1.59.bib
https://aclanthology.org/2024.clinicalnlp-1.59/
@inproceedings{toma-etal-2024-wanglab, title = "{W}ang{L}ab at {MEDIQA}-{CORR} 2024: Optimized {LLM}-based Programs for Medical Error Detection and Correction", author = "Toma, Augustin and Xie, Ronald and Palayew, Steven and Lawler, Patrick and Wang, Bo", editor = "Naumann, Tristan and Ben Abacha, Asma and Bethard, Steven and Roberts, Kirk and Bitterman, Danielle", booktitle = "Proceedings of the 6th Clinical Natural Language Processing Workshop", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.clinicalnlp-1.59", doi = "10.18653/v1/2024.clinicalnlp-1.59", pages = "616--623", abstract = "Medical errors in clinical text pose significant risks to patient safety. The MEDIQA-CORR 2024 shared task focuses on detecting and correcting these errors across three subtasks: identifying the presence of an error, extracting the erroneous sentence, and generating a corrected sentence. In this paper, we present our approach that achieved top performance in all three subtasks. For the MS dataset, which contains subtle errors, we developed a retrieval-based system leveraging external medical question-answering datasets. For the UW dataset, reflecting more realistic clinical notes, we created a pipeline of modules to detect, localize, and correct errors. Both approaches utilized the DSPy framework for optimizing prompts and few-shot examples in large language model (LLM) based programs. Our results demonstrate the effectiveness of LLM based programs for medical error correction. However, our approach has limitations in addressing the full diversity of potential errors in medical documentation. We discuss the implications of our work and highlight future research directions to advance the robustness and applicability of medical error detection and correction systems.", }
Medical errors in clinical text pose significant risks to patient safety. The MEDIQA-CORR 2024 shared task focuses on detecting and correcting these errors across three subtasks: identifying the presence of an error, extracting the erroneous sentence, and generating a corrected sentence. In this paper, we present our approach that achieved top performance in all three subtasks. For the MS dataset, which contains subtle errors, we developed a retrieval-based system leveraging external medical question-answering datasets. For the UW dataset, reflecting more realistic clinical notes, we created a pipeline of modules to detect, localize, and correct errors. Both approaches utilized the DSPy framework for optimizing prompts and few-shot examples in large language model (LLM) based programs. Our results demonstrate the effectiveness of LLM based programs for medical error correction. However, our approach has limitations in addressing the full diversity of potential errors in medical documentation. We discuss the implications of our work and highlight future research directions to advance the robustness and applicability of medical error detection and correction systems.
[ "Toma, Augustin", "Xie, Ronald", "Palayew, Steven", "Lawler, Patrick", "Wang, Bo" ]
WangLab at MEDIQA-CORR 2024: Optimized LLM-based Programs for Medical Error Detection and Correction
clinicalnlp-1.59
Poster
2404.14544
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.clinicalnlp-1.60.bib
https://aclanthology.org/2024.clinicalnlp-1.60/
@inproceedings{toma-etal-2024-wanglab-mediqa, title = "{W}ang{L}ab at {MEDIQA}-{M}3{G} 2024: Multimodal Medical Answer Generation using Large Language Models", author = "Toma, Augustin and Xie, Ronald and Palayew, Steven and Bader, Gary and Wang, Bo", editor = "Naumann, Tristan and Ben Abacha, Asma and Bethard, Steven and Roberts, Kirk and Bitterman, Danielle", booktitle = "Proceedings of the 6th Clinical Natural Language Processing Workshop", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.clinicalnlp-1.60", doi = "10.18653/v1/2024.clinicalnlp-1.60", pages = "624--634", abstract = "This paper outlines our submission to the MEDIQA2024 Multilingual and Multimodal Medical Answer Generation (M3G) shared task. We report results for two standalone solutions under the English category of the task, the first involving two consecutive API calls to the Claude 3 Opus API and the second involving training an image-disease label joint embedding in the style of CLIP for image classification. These two solutions scored 1st and 2nd place respectively on the competition leaderboard, substantially outperforming the next best solution. Additionally, we discuss insights gained from post-competition experiments. While the performance of these two described solutions have significant room for improvement due to the difficulty of the shared task and the challenging nature of medical visual question answering in general, we identify the multi-stage LLM approach and the CLIP image classification approach as promising avenues for further investigation.", }
This paper outlines our submission to the MEDIQA2024 Multilingual and Multimodal Medical Answer Generation (M3G) shared task. We report results for two standalone solutions under the English category of the task, the first involving two consecutive API calls to the Claude 3 Opus API and the second involving training an image-disease label joint embedding in the style of CLIP for image classification. These two solutions scored 1st and 2nd place respectively on the competition leaderboard, substantially outperforming the next best solution. Additionally, we discuss insights gained from post-competition experiments. While the performance of these two described solutions has significant room for improvement due to the difficulty of the shared task and the challenging nature of medical visual question answering in general, we identify the multi-stage LLM approach and the CLIP image classification approach as promising avenues for further investigation.
[ "Toma, Augustin", "Xie, Ronald", "Palayew, Steven", "Bader, Gary", "Wang, Bo" ]
WangLab at MEDIQA-M3G 2024: Multimodal Medical Answer Generation using Large Language Models
clinicalnlp-1.60
Poster
2404.14567
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
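The WangLab MEDIQA-M3G abstract above describes training an image-disease label joint embedding "in the style of CLIP" and using it for image classification. Below is a minimal sketch of that classification pattern, using a stock pretrained CLIP checkpoint from `transformers` as a stand-in for the paper's fine-tuned joint embedding; the label set, image file name, and checkpoint are illustrative assumptions:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Stock checkpoint as a stand-in; the paper trains its own image/disease-label embedding.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

disease_labels = ["psoriasis", "eczema", "melanoma"]  # illustrative label set
image = Image.open("lesion.jpg")                      # hypothetical query image

inputs = processor(text=disease_labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores; the highest-scoring label is the predicted disease.
probs = outputs.logits_per_image.softmax(dim=-1)
print(disease_labels[int(probs.argmax())])
```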
https://aclanthology.org/2024.clinicalnlp-1.61.bib
https://aclanthology.org/2024.clinicalnlp-1.61/
@inproceedings{jo-etal-2024-lg, title = "{LG} {AI} Research {\&} {KAIST} at {EHRSQL} 2024: Self-Training Large Language Models with Pseudo-Labeled Unanswerable Questions for a Reliable Text-to-{SQL} System on {EHR}s", author = "Jo, Yongrae and Lee, Seongyun and Seo, Minju and Hwang, Sung Ju and Lee, Moontae", editor = "Naumann, Tristan and Ben Abacha, Asma and Bethard, Steven and Roberts, Kirk and Bitterman, Danielle", booktitle = "Proceedings of the 6th Clinical Natural Language Processing Workshop", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.clinicalnlp-1.61", doi = "10.18653/v1/2024.clinicalnlp-1.61", pages = "635--643", abstract = "Text-to-SQL models are pivotal for making Electronic Health Records (EHRs) accessible to healthcare professionals without SQL knowledge. With the advancements in large language models, these systems have become more adept at translating complex questions into SQL queries. Nonetheless, the critical need for reliability in healthcare necessitates these models to accurately identify unanswerable questions or uncertain predictions, preventing misinformation. To address this problem, we present a self-training strategy using pseudo-labeled unanswerable questions to enhance the reliability of text-to-SQL models for EHRs. This approach includes a two-stage training process followed by a filtering method based on the token entropy and query execution. Our methodology{'}s effectiveness is validated by our top performance in the EHRSQL 2024 shared task, showcasing the potential to improve healthcare decision-making through more reliable text-to-SQL systems.", }
Text-to-SQL models are pivotal for making Electronic Health Records (EHRs) accessible to healthcare professionals without SQL knowledge. With the advancements in large language models, these systems have become more adept at translating complex questions into SQL queries. Nonetheless, the critical need for reliability in healthcare necessitates these models to accurately identify unanswerable questions or uncertain predictions, preventing misinformation. To address this problem, we present a self-training strategy using pseudo-labeled unanswerable questions to enhance the reliability of text-to-SQL models for EHRs. This approach includes a two-stage training process followed by a filtering method based on the token entropy and query execution. Our methodology's effectiveness is validated by our top performance in the EHRSQL 2024 shared task, showcasing the potential to improve healthcare decision-making through more reliable text-to-SQL systems.
[ "Jo, Yongrae", "Lee, Seongyun", "Seo, Minju", "Hwang, Sung Ju", "Lee, Moontae" ]
LG AI Research & KAIST at EHRSQL 2024: Self-Training Large Language Models with Pseudo-Labeled Unanswerable Questions for a Reliable Text-to-SQL System on EHRs
clinicalnlp-1.61
Poster
2405.11162
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
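The LG AI Research & KAIST abstract above mentions filtering generated SQL based on token entropy and query execution. A rough sketch of the token-entropy half of that filter, assuming access to per-step logits such as the `scores` returned by `generate(..., return_dict_in_generate=True, output_scores=True)` in `transformers`; the threshold value and function names are hypothetical:

```python
import torch

def mean_token_entropy(step_logits):
    """Average per-token entropy of a generated sequence.

    step_logits: an iterable of logits over the vocabulary, one tensor per
    generated token (e.g. the `scores` tuple from transformers' generate
    with output_scores=True)."""
    entropies = []
    for logits in step_logits:
        probs = torch.softmax(logits.float(), dim=-1)
        entropies.append(-(probs * torch.log(probs + 1e-12)).sum(dim=-1).mean())
    return torch.stack(entropies).mean().item()

ENTROPY_THRESHOLD = 1.5  # illustrative value; tuned on a development set in practice

def sql_or_abstain(step_logits, sql_query):
    # High average entropy signals an uncertain generation: abstain ("unanswerable")
    # rather than return a possibly wrong query.
    if mean_token_entropy(step_logits) > ENTROPY_THRESHOLD:
        return None
    return sql_query
```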
https://aclanthology.org/2024.clinicalnlp-1.62.bib
https://aclanthology.org/2024.clinicalnlp-1.62/
@inproceedings{lee-etal-2024-overview, title = "Overview of the {EHRSQL} 2024 Shared Task on Reliable Text-to-{SQL} Modeling on Electronic Health Records", author = "Lee, Gyubok and Kweon, Sunjun and Bae, Seongsu and Choi, Edward", editor = "Naumann, Tristan and Ben Abacha, Asma and Bethard, Steven and Roberts, Kirk and Bitterman, Danielle", booktitle = "Proceedings of the 6th Clinical Natural Language Processing Workshop", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.clinicalnlp-1.62", doi = "10.18653/v1/2024.clinicalnlp-1.62", pages = "644--654", abstract = "Electronic Health Records (EHRs) are relational databases that store the entire medical histories of patients within hospitals. They record numerous aspects of patients{'} medical care, from hospital admission and diagnosis to treatment and discharge. While EHRs are vital sources of clinical data, exploring them beyond a predefined set of queries requires skills in query languages like SQL. To make information retrieval more accessible, one strategy is to build a question-answering system, possibly leveraging text-to-SQL models that can automatically translate natural language questions into corresponding SQL queries and use these queries to retrieve the answers. The EHRSQL 2024 shared task aims to advance and promote research in developing a question-answering system for EHRs using text-to-SQL modeling, capable of reliably providing requested answers to various healthcare professionals to improve their clinical work processes and satisfy their needs. Among more than 100 participants who applied to the shared task, eight teams completed the entire shared task processes and demonstrated a wide range of methods to effectively solve this task. In this paper, we describe the task of reliable text-to-SQL modeling, the dataset, and the methods and results of the participants. We hope this shared task will spur further research and insights into developing reliable question-answering systems for EHRs.", }
Electronic Health Records (EHRs) are relational databases that store the entire medical histories of patients within hospitals. They record numerous aspects of patients' medical care, from hospital admission and diagnosis to treatment and discharge. While EHRs are vital sources of clinical data, exploring them beyond a predefined set of queries requires skills in query languages like SQL. To make information retrieval more accessible, one strategy is to build a question-answering system, possibly leveraging text-to-SQL models that can automatically translate natural language questions into corresponding SQL queries and use these queries to retrieve the answers. The EHRSQL 2024 shared task aims to advance and promote research in developing a question-answering system for EHRs using text-to-SQL modeling, capable of reliably providing requested answers to various healthcare professionals to improve their clinical work processes and satisfy their needs. Among more than 100 participants who applied to the shared task, eight teams completed the entire shared task processes and demonstrated a wide range of methods to effectively solve this task. In this paper, we describe the task of reliable text-to-SQL modeling, the dataset, and the methods and results of the participants. We hope this shared task will spur further research and insights into developing reliable question-answering systems for EHRs.
[ "Lee, Gyubok", "Kweon, Sunjun", "Bae, Seongsu", "Choi, Edward" ]
Overview of the EHRSQL 2024 Shared Task on Reliable Text-to-SQL Modeling on Electronic Health Records
clinicalnlp-1.62
Poster
2405.06673
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.clinicalnlp-1.63.bib
https://aclanthology.org/2024.clinicalnlp-1.63/
@inproceedings{jabir-etal-2024-saama, title = "Saama Technologies at {EHRSQL} 2024: {SQL} Generation through Classification Answer Selector by {LLM}", author = "Jabir, Mohammed and Kanakarajan, Kamal and Sankarasubbu, Malaikannan", editor = "Naumann, Tristan and Ben Abacha, Asma and Bethard, Steven and Roberts, Kirk and Bitterman, Danielle", booktitle = "Proceedings of the 6th Clinical Natural Language Processing Workshop", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.clinicalnlp-1.63", doi = "10.18653/v1/2024.clinicalnlp-1.63", pages = "655--671", abstract = "The EHRSQL task aims to develop a dependable text-to-SQL model for Electronic Health Records (EHR) databases, which are crucial sources of clinical data that store patients{'} medical histories in hospitals. Large language models (LLM) have been proven to exhibit state-of-the-art performance for text-to-SQL tasks across various domains. To this end, we have developed a framework, SQL Generation through Classification Answer Selector by LLM (SCAS), which comprises two modules. The CAS module determines the answerability of the question, while the SG model generates the SQL query exclusively for answerable questions. Our system ranked 7th on the leaderboard with a Reliability Score of 53.21 on the official test set.", }
The EHRSQL task aims to develop a dependable text-to-SQL model for Electronic Health Records (EHR) databases, which are crucial sources of clinical data that store patients' medical histories in hospitals. Large language models (LLM) have been proven to exhibit state-of-the-art performance for text-to-SQL tasks across various domains. To this end, we have developed a framework, SQL Generation through Classification Answer Selector by LLM (SCAS), which comprises two modules. The CAS module determines the answerability of the question, while the SG model generates the SQL query exclusively for answerable questions. Our system ranked 7th on the leaderboard with a Reliability Score of 53.21 on the official test set.
[ "Jabir, Mohammed", "Kanakarajan, Kamal", "Sankarasubbu, Malaikannan" ]
Saama Technologies at EHRSQL 2024: SQL Generation through Classification Answer Selector by LLM
clinicalnlp-1.63
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.clinicalnlp-1.64.bib
https://aclanthology.org/2024.clinicalnlp-1.64/
@inproceedings{kim-etal-2024-ku, title = "{KU}-{DMIS} at {EHRSQL} 2024 : Generating {SQL} query via question templatization in {EHR}", author = "Kim, Hajung and Kim, Chanhwi and Lee, Hoonick and Jang, Kyochul and Lee, Jiwoo and Lee, Kyungjae and Kim, Gangwoo and Kang, Jaewoo", editor = "Naumann, Tristan and Ben Abacha, Asma and Bethard, Steven and Roberts, Kirk and Bitterman, Danielle", booktitle = "Proceedings of the 6th Clinical Natural Language Processing Workshop", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.clinicalnlp-1.64", doi = "10.18653/v1/2024.clinicalnlp-1.64", pages = "672--686", abstract = "Transforming natural language questions into SQL queries is crucial for precise data retrieval from electronic health record (EHR) databases. A significant challenge in this process is detecting and rejecting unanswerable questions that request information outside the database{'}s scope or exceed the system{'}s capabilities. In this paper, we introduce a novel text-to-SQL framework that focuses on standardizing the structure of questions into a templated format. Our framework begins by fine-tuning GPT-3.5-turbo, a powerful large language model (LLM), with detailed prompts involving the table schemas of the EHR database system. Our approach shows promising results on the EHRSQL-2024 benchmark dataset, part of the ClinicalNLP shared task. Although fine-tuning GPT achieves third place on the development set, it struggled with the diverse questions in the test set. With our framework, we improve our system{'}s adaptability and achieve fourth position in the official leaderboard of the EHRSQL-2024 challenge.", }
Transforming natural language questions into SQL queries is crucial for precise data retrieval from electronic health record (EHR) databases. A significant challenge in this process is detecting and rejecting unanswerable questions that request information outside the database's scope or exceed the system's capabilities. In this paper, we introduce a novel text-to-SQL framework that focuses on standardizing the structure of questions into a templated format. Our framework begins by fine-tuning GPT-3.5-turbo, a powerful large language model (LLM), with detailed prompts involving the table schemas of the EHR database system. Our approach shows promising results on the EHRSQL-2024 benchmark dataset, part of the ClinicalNLP shared task. Although fine-tuning GPT achieves third place on the development set, it struggled with the diverse questions in the test set. With our framework, we improve our system's adaptability and achieve fourth position in the official leaderboard of the EHRSQL-2024 challenge.
[ "Kim, Hajung", "Kim, Chanhwi", "Lee, Hoonick", "Jang, Kyochul", "Lee, Jiwoo", "Lee, Kyungjae", "Kim, Gangwoo", "Kang, Jaewoo" ]
KU-DMIS at EHRSQL 2024 : Generating SQL query via question templatization in EHR
clinicalnlp-1.64
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.clinicalnlp-1.65.bib
https://aclanthology.org/2024.clinicalnlp-1.65/
@inproceedings{kim-etal-2024-probgate, title = "{P}rob{G}ate at {EHRSQL} 2024: Enhancing {SQL} Query Generation Accuracy through Probabilistic Threshold Filtering and Error Handling", author = "Kim, Sangryul and Han, Donghee and Kim, Sehyun", editor = "Naumann, Tristan and Ben Abacha, Asma and Bethard, Steven and Roberts, Kirk and Bitterman, Danielle", booktitle = "Proceedings of the 6th Clinical Natural Language Processing Workshop", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.clinicalnlp-1.65", doi = "10.18653/v1/2024.clinicalnlp-1.65", pages = "687--696", abstract = "Recently, deep learning-based language models have significantly enhanced text-to-SQL tasks, with promising applications in retrieving patient records within the medical domain. One notable challenge in such applications is discerning unanswerable queries. Through fine-tuning model, we demonstrate the feasibility of converting medical record inquiries into SQL queries. Additionally, we introduce an entropy-based method to identify and filter out unanswerable results. We further enhance result quality by filtering low-confidence SQL through log probability-based distribution, while grammatical and schema errors are mitigated by executing queries on the actual database.We experimentally verified that our method can filter unanswerable questions, which can be widely utilized even when the parameters of the model are not accessible, and that it can be effectively utilized in practice.", }
Recently, deep learning-based language models have significantly enhanced text-to-SQL tasks, with promising applications in retrieving patient records within the medical domain. One notable challenge in such applications is discerning unanswerable queries. Through fine-tuning a model, we demonstrate the feasibility of converting medical record inquiries into SQL queries. Additionally, we introduce an entropy-based method to identify and filter out unanswerable results. We further enhance result quality by filtering low-confidence SQL through a log probability-based distribution, while grammatical and schema errors are mitigated by executing queries on the actual database. We experimentally verified that our method can filter unanswerable questions, which can be widely utilized even when the parameters of the model are not accessible, and that it can be effectively utilized in practice.
[ "Kim, Sangryul", "Han, Donghee", "Kim, Sehyun" ]
ProbGate at EHRSQL 2024: Enhancing SQL Query Generation Accuracy through Probabilistic Threshold Filtering and Error Handling
clinicalnlp-1.65
Poster
2404.16659
[ "https://github.com/venzino-han/probgate_ehrsql" ]
https://huggingface.co/papers/2404.16659
1
0
0
3
1
[]
[]
[]
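The ProbGate abstract above combines log-probability-based filtering of low-confidence SQL with execution against the actual database to catch grammatical and schema errors. A simplified sketch of that gating logic, assuming per-token log probabilities are available from the generator and a SQLite copy of the database; the threshold value and function names are illustrative:

```python
import sqlite3

def avg_log_prob(token_logprobs):
    """Mean log probability of the generated SQL tokens (per-token logprobs
    are assumed to be available from the generator, e.g. an LLM API)."""
    return sum(token_logprobs) / max(len(token_logprobs), 1)

def gate_query(sql, token_logprobs, db_path, threshold=-0.4):
    """Return the SQL only if it is confidently generated and executes cleanly;
    otherwise treat the question as unanswerable. The threshold is illustrative."""
    if avg_log_prob(token_logprobs) < threshold:
        return None  # low-confidence generation, filtered out
    try:
        with sqlite3.connect(db_path) as conn:
            conn.execute(sql)  # catches grammatical and schema errors on the real DB
    except sqlite3.Error:
        return None
    return sql
```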
https://aclanthology.org/2024.clinicalnlp-1.66.bib
https://aclanthology.org/2024.clinicalnlp-1.66/
@inproceedings{thomas-etal-2024-ltrc, title = "{LTRC}-{IIITH} at {EHRSQL} 2024: Enhancing Reliability of Text-to-{SQL} Systems through Abstention and Confidence Thresholding", author = "Thomas, Jerrin and Mishra, Pruthwik and Sharma, Dipti and Krishnamurthy, Parameswari", editor = "Naumann, Tristan and Ben Abacha, Asma and Bethard, Steven and Roberts, Kirk and Bitterman, Danielle", booktitle = "Proceedings of the 6th Clinical Natural Language Processing Workshop", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.clinicalnlp-1.66", doi = "10.18653/v1/2024.clinicalnlp-1.66", pages = "697--702", abstract = "In this paper, we present our work in the EHRSQL 2024 shared task which tackles reliable text-to-SQL modeling on Electronic Health Records. Our proposed system tackles the task with three modules - abstention module, text-to-SQL generation module, and reliability module. The abstention module identifies whether the question is answerable given the database schema. If the question is answerable, the text-to-SQL generation module generates the SQL query and associated confidence score. The reliability module has two key components - confidence score thresholding, which rejects generations with confidence below a pre-defined level, and error filtering, which identifies and excludes SQL queries that result in execution errors. In the official leaderboard for the task, our system ranks 6th. We have also made the source code public.", }
In this paper, we present our work in the EHRSQL 2024 shared task which tackles reliable text-to-SQL modeling on Electronic Health Records. Our proposed system tackles the task with three modules - abstention module, text-to-SQL generation module, and reliability module. The abstention module identifies whether the question is answerable given the database schema. If the question is answerable, the text-to-SQL generation module generates the SQL query and associated confidence score. The reliability module has two key components - confidence score thresholding, which rejects generations with confidence below a pre-defined level, and error filtering, which identifies and excludes SQL queries that result in execution errors. In the official leaderboard for the task, our system ranks 6th. We have also made the source code public.
[ "Thomas, Jerrin", "Mishra, Pruthwik", "Sharma, Dipti", "Krishnamurthy, Parameswari" ]
LTRC-IIITH at EHRSQL 2024: Enhancing Reliability of Text-to-SQL Systems through Abstention and Confidence Thresholding
clinicalnlp-1.66
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.clinicalnlp-1.67.bib
https://aclanthology.org/2024.clinicalnlp-1.67/
@inproceedings{thomas-etal-2024-ltrc-iiith, title = "{LTRC}-{IIITH} at {MEDIQA}-{M}3{G} 2024: Medical Visual Question Answering with Vision-Language Models", author = "Thomas, Jerrin and Marimuthu, Sushvin and Krishnamurthy, Parameswari", editor = "Naumann, Tristan and Ben Abacha, Asma and Bethard, Steven and Roberts, Kirk and Bitterman, Danielle", booktitle = "Proceedings of the 6th Clinical Natural Language Processing Workshop", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.clinicalnlp-1.67", doi = "10.18653/v1/2024.clinicalnlp-1.67", pages = "703--707", abstract = "In this paper, we present our work to the MEDIQA-M3G 2024 shared task, which tackles multilingual and multimodal medical answer generation. Our system consists of a lightweight Vision-and-Language Transformer (ViLT) model which is fine-tuned for the clinical dermatology visual question-answering task. In the official leaderboard for the task, our system ranks 6th. After the challenge, we experiment with training the ViLT model on more data. We also explore the capabilities of large Vision-Language Models (VLMs) such as Gemini and LLaVA.", }
In this paper, we present our work to the MEDIQA-M3G 2024 shared task, which tackles multilingual and multimodal medical answer generation. Our system consists of a lightweight Vision-and-Language Transformer (ViLT) model which is fine-tuned for the clinical dermatology visual question-answering task. In the official leaderboard for the task, our system ranks 6th. After the challenge, we experiment with training the ViLT model on more data. We also explore the capabilities of large Vision-Language Models (VLMs) such as Gemini and LLaVA.
[ "Thomas, Jerrin", "Marimuthu, Sushvin", "Krishnamurthy, Parameswari" ]
LTRC-IIITH at MEDIQA-M3G 2024: Medical Visual Question Answering with Vision-Language Models
clinicalnlp-1.67
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.dash-1.1.bib
https://aclanthology.org/2024.dash-1.1/
@inproceedings{qian-etal-2024-ape, title = "{APE}: Active Learning-based Tooling for Finding Informative Few-shot Examples for {LLM}-based Entity Matching", author = "Qian, Kun and Sang, Yisi and Bayat{\dag}, Farima and Belyi, Anton and Chu, Xianqi and Govind, Yash and Khorshidi, Samira and Khot, Rahul and Luna, Katherine and Nikfarjam, Azadeh and Qi, Xiaoguang and Wu, Fei and Zhang, Xianhan and Li, Yunyao", editor = "Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank", booktitle = "Proceedings of the Fifth Workshop on Data Science with Human-in-the-Loop (DaSH 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.dash-1.1", doi = "10.18653/v1/2024.dash-1.1", pages = "1--3", abstract = "Prompt engineering is an iterative procedure that often requires extensive manual effort to formulate suitable instructions for effectively directing large language models (LLMs) in specific tasks. Incorporating few-shot examples is a vital and effective approach to provide LLMs with precise instructions, leading to improved LLM performance. Nonetheless, identifying the most informative demonstrations for LLMs is labor-intensive, frequently entailing sifting through an extensive search space. In this demonstration, we showcase a human-in-the-loop tool called ool (Active Prompt Engineering) designed for refining prompts through active learning. Drawing inspiration from active learning, ool iteratively selects the most ambiguous examples for human feedback, which will be transformed into few-shot examples within the prompt.", }
Prompt engineering is an iterative procedure that often requires extensive manual effort to formulate suitable instructions for effectively directing large language models (LLMs) in specific tasks. Incorporating few-shot examples is a vital and effective approach to provide LLMs with precise instructions, leading to improved LLM performance. Nonetheless, identifying the most informative demonstrations for LLMs is labor-intensive, frequently entailing sifting through an extensive search space. In this demonstration, we showcase a human-in-the-loop tool called APE (Active Prompt Engineering) designed for refining prompts through active learning. Drawing inspiration from active learning, APE iteratively selects the most ambiguous examples for human feedback, which will be transformed into few-shot examples within the prompt.
[ "Qian, Kun", "Sang, Yisi", "Bayat{\\dag}, Farima", "Belyi, Anton", "Chu, Xianqi", "Govind, Yash", "Khorshidi, Samira", "Khot, Rahul", "Luna, Katherine", "Nikfarjam, Azadeh", "Qi, Xiaoguang", "Wu, Fei", "Zhang, Xianhan", "Li, Yunyao" ]
APE: Active Learning-based Tooling for Finding Informative Few-shot Examples for LLM-based Entity Matching
dash-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
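The APE abstract above describes an active-learning loop that repeatedly routes the most ambiguous examples to a human and folds the labels back into the prompt as few-shot demonstrations. A schematic sketch of such a loop, with `classify` (returns label probabilities under the current prompt) and `ask_human` left as hypothetical callables, and margin-based ambiguity as an assumed uncertainty measure:

```python
def ambiguity(label_probs):
    """Margin-based ambiguity: a small gap between the two most likely labels
    means the current prompt is unsure about this example."""
    top2 = sorted(label_probs.values(), reverse=True)[:2]
    return top2[0] - top2[1] if len(top2) == 2 else 1.0

def active_prompt_loop(pool, classify, ask_human, few_shot, rounds=3, k=5):
    """Each round: score the unlabeled pool with the current prompt, send the k
    most ambiguous items to a human, and add the labeled pairs as few-shot
    demonstrations. `classify(x, few_shot)` returns label probabilities and
    `ask_human(x)` returns a gold label; both are hypothetical callables."""
    pool = list(pool)
    for _ in range(rounds):
        ranked = sorted(pool, key=lambda x: ambiguity(classify(x, few_shot)))
        for x in ranked[:k]:                     # most ambiguous first
            few_shot.append((x, ask_human(x)))   # human feedback becomes a demonstration
            pool.remove(x)
    return few_shot
```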
https://aclanthology.org/2024.dash-1.2.bib
https://aclanthology.org/2024.dash-1.2/
@inproceedings{afzal-etal-2024-towards, title = "Towards Optimizing and Evaluating a Retrieval Augmented {QA} Chatbot using {LLM}s with Human-in-the-Loop", author = "Afzal, Anum and Kowsik, Alexander and Fani, Rajna and Matthes, Florian", editor = "Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank", booktitle = "Proceedings of the Fifth Workshop on Data Science with Human-in-the-Loop (DaSH 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.dash-1.2", doi = "10.18653/v1/2024.dash-1.2", pages = "4--16", abstract = "Large Language Models have found application in various mundane and repetitive tasks including Human Resource (HR) support. We worked with the domain experts of a large multinational company to develop an HR support chatbot as an efficient and effective tool for addressing employee inquiries. We inserted a human-in-the-loop in various parts of the development cycles such as dataset collection, prompt optimization, and evaluation of generated output. By enhancing the LLM-driven chatbot{'}s response quality and exploring alternative retrieval methods, we have created an efficient, scalable, and flexible tool for HR professionals to address employee inquiries effectively. Our experiments and evaluation conclude that GPT-4 outperforms other models and can overcome inconsistencies in data through internal reasoning capabilities. Additionally, through expert analysis, we infer that reference-free evaluation metrics such as G-Eval and Prometheus demonstrate reliability closely aligned with that of human evaluation.", }
Large Language Models have found application in various mundane and repetitive tasks including Human Resource (HR) support. We worked with the domain experts of a large multinational company to develop an HR support chatbot as an efficient and effective tool for addressing employee inquiries. We inserted a human-in-the-loop in various parts of the development cycles such as dataset collection, prompt optimization, and evaluation of generated output. By enhancing the LLM-driven chatbot{'}s response quality and exploring alternative retrieval methods, we have created an efficient, scalable, and flexible tool for HR professionals to address employee inquiries effectively. Our experiments and evaluation conclude that GPT-4 outperforms other models and can overcome inconsistencies in data through internal reasoning capabilities. Additionally, through expert analysis, we infer that reference-free evaluation metrics such as G-Eval and Prometheus demonstrate reliability closely aligned with that of human evaluation.
[ "Afzal, Anum", "Kowsik, Alex", "er", "Fani, Rajna", "Matthes, Florian" ]
Towards Optimizing and Evaluating a Retrieval Augmented QA Chatbot using LLMs with Human-in-the-Loop
dash-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.dash-1.3.bib
https://aclanthology.org/2024.dash-1.3/
@inproceedings{maharaj-etal-2024-evaluation, title = "Evaluation and Continual Improvement for an Enterprise {AI} Assistant", author = "Maharaj, Akash and Qian, Kun and Bhattacharya, Uttaran and Fang, Sally and Galatanu, Horia and Garg, Manas and Hanessian, Rachel and Kapoor, Nishant and Russell, Ken and Vaithyanathan, Shivakumar and Li, Yunyao", editor = "Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank", booktitle = "Proceedings of the Fifth Workshop on Data Science with Human-in-the-Loop (DaSH 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.dash-1.3", doi = "10.18653/v1/2024.dash-1.3", pages = "17--24", abstract = "The development of conversational AI assistants is an iterative process with many components involved. As such, the evaluation and continual improvement of these assistants is a complex and multifaceted problem. This paper introduces the challenges in evaluating and improving a generative AI assistant for enterprise that is under active development and how we address these challenges. We also share preliminary results and discuss lessons learned.", }
The development of conversational AI assistants is an iterative process with many components involved. As such, the evaluation and continual improvement of these assistants is a complex and multifaceted problem. This paper introduces the challenges in evaluating and improving a generative AI assistant for enterprise that is under active development and how we address these challenges. We also share preliminary results and discuss lessons learned.
[ "Maharaj, Akash", "Qian, Kun", "Bhattacharya, Uttaran", "Fang, Sally", "Galatanu, Horia", "Garg, Manas", "Hanessian, Rachel", "Kapoor, Nishant", "Russell, Ken", "Vaithyanathan, Shivakumar", "Li, Yunyao" ]
Evaluation and Continual Improvement for an Enterprise AI Assistant
dash-1.3
Poster
2407.12003
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.dash-1.4.bib
https://aclanthology.org/2024.dash-1.4/
@inproceedings{yang-etal-2024-mini, title = "Mini-{DA}: Improving Your Model Performance through Minimal Data Augmentation using {LLM}", author = "Yang, Shuangtao and Liu, Xiaoyi and Dong, Xiaozheng and Fu, Bo", editor = "Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank", booktitle = "Proceedings of the Fifth Workshop on Data Science with Human-in-the-Loop (DaSH 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.dash-1.4", doi = "10.18653/v1/2024.dash-1.4", pages = "25--30", abstract = "When performing data augmentation using large language models (LLMs), the common approach is to directly generate a large number of new samples based on the original dataset, and then model is trained on the integration of augmented dataset and the original dataset. However, data generation demands extensive computational resources. In this study, we propose Mini-DA, a minimized data augmentation method that leverages the feedback from the target model during the training process to select only the most challenging samples from the validation set for augmentation. Our experimental results show in text classification task, by using as little as 13 percent of the original augmentation volume, Mini-DA can achieve performance comparable to full data augmentation for intent detection task, significantly improving data and computational resource utilization efficiency.", }
When performing data augmentation using large language models (LLMs), the common approach is to directly generate a large number of new samples based on the original dataset, and then the model is trained on the combination of the augmented dataset and the original dataset. However, data generation demands extensive computational resources. In this study, we propose Mini-DA, a minimized data augmentation method that leverages the feedback from the target model during the training process to select only the most challenging samples from the validation set for augmentation. Our experimental results show that in a text classification task, by using as little as 13 percent of the original augmentation volume, Mini-DA can achieve performance comparable to full data augmentation for the intent detection task, significantly improving data and computational resource utilization efficiency.
[ "Yang, Shuangtao", "Liu, Xiaoyi", "Dong, Xiaozheng", "Fu, Bo" ]
Mini-DA: Improving Your Model Performance through Minimal Data Augmentation using LLM
dash-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
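The Mini-DA abstract above sends only the most challenging validation samples to the LLM for augmentation rather than augmenting everything. A minimal sketch of one plausible selection step, assuming the target model exposes per-label probabilities; the budget comment echoes the 13 percent figure in the abstract, and `llm_paraphrase` is a hypothetical augmentation call:

```python
def select_hard_examples(model_predict, validation_set, budget):
    """Pick the validation items the current model is least confident about on
    the gold label; only these are sent to the LLM for augmentation, instead of
    augmenting the whole training set. `model_predict(text)` is assumed to
    return a dict mapping labels to probabilities."""
    scored = []
    for text, gold in validation_set:
        confidence = model_predict(text).get(gold, 0.0)
        scored.append((confidence, text, gold))
    scored.sort(key=lambda item: item[0])  # lowest gold-label confidence first
    return [(text, gold) for _, text, gold in scored[:budget]]

# Usage sketch: pick a budget that keeps the augmented set at roughly 13 percent
# of what full augmentation would produce; llm_paraphrase is hypothetical.
# hard = select_hard_examples(model_predict, dev_data, budget=50)
# augmented = [llm_paraphrase(text, gold) for text, gold in hard]
```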
https://aclanthology.org/2024.dash-1.5.bib
https://aclanthology.org/2024.dash-1.5/
@inproceedings{nguyen-etal-2024-curatron, title = "{CURATRON}: Complete and Robust Preference Data for Rigorous Alignment of Large Language Models", author = "Nguyen, Son The and Naresh, Niranjan Uma and Tulabandhula, Theja", editor = "Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank", booktitle = "Proceedings of the Fifth Workshop on Data Science with Human-in-the-Loop (DaSH 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.dash-1.5", doi = "10.18653/v1/2024.dash-1.5", pages = "31--39", abstract = "This paper addresses the challenges of aligning large language models (LLMs) with human values via preference learning (PL), focusing on incomplete and corrupted data in preference datasets. We propose a novel method for robustly and completely recalibrating values within these datasets to enhance LLMs{'} resilience against the issues. In particular, we devise a guaranteed polynomial time ranking algorithm that robustifies several existing models, such as the classic Bradley{--}Terry{--}Luce (BTL) model and certain generalizations of it. To the best of our knowledge, our present work is the first to propose an algorithm that provably recovers an $\epsilon$-optimal ranking with high probability while allowing as large as $O(n)$ perturbed pairwise comparison results per model response. Furthermore, we show robust recovery results in the partially observed setting. Our experiments confirm that our algorithms handle adversarial noise and unobserved comparisons well in LLM preference dataset settings. This work contributes to the development and scaling of more reliable and ethically aligned AI models by equipping the dataset curation pipeline with the ability to handle missing and maliciously manipulated inputs.", }
This paper addresses the challenges of aligning large language models (LLMs) with human values via preference learning (PL), focusing on incomplete and corrupted data in preference datasets. We propose a novel method for robustly and completely recalibrating values within these datasets to enhance LLMs' resilience against the issues. In particular, we devise a guaranteed polynomial time ranking algorithm that robustifies several existing models, such as the classic Bradley–Terry–Luce (BTL) model and certain generalizations of it. To the best of our knowledge, our present work is the first to propose an algorithm that provably recovers an $\epsilon$-optimal ranking with high probability while allowing as large as $O(n)$ perturbed pairwise comparison results per model response. Furthermore, we show robust recovery results in the partially observed setting. Our experiments confirm that our algorithms handle adversarial noise and unobserved comparisons well in LLM preference dataset settings. This work contributes to the development and scaling of more reliable and ethically aligned AI models by equipping the dataset curation pipeline with the ability to handle missing and maliciously manipulated inputs.
[ "Nguyen, Son The", "Naresh, Niranjan Uma", "Tulab", "hula, Theja" ]
CURATRON: Complete and Robust Preference Data for Rigorous Alignment of Large Language Models
dash-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
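The CURATRON abstract above builds on the classic Bradley–Terry–Luce (BTL) model for pairwise preferences. The sketch below fits a plain BTL model by gradient ascent on its log-likelihood, purely to illustrate the base model the paper robustifies; it is not the paper's robust recovery algorithm:

```python
import numpy as np

def fit_btl(n_items, comparisons, lr=0.1, epochs=200):
    """Fit Bradley-Terry-Luce scores by gradient ascent on the log-likelihood.
    comparisons: list of (winner, loser) index pairs."""
    theta = np.zeros(n_items)
    for _ in range(epochs):
        grad = np.zeros(n_items)
        for w, l in comparisons:
            p_w = 1.0 / (1.0 + np.exp(theta[l] - theta[w]))  # P(w beats l)
            grad[w] += 1.0 - p_w
            grad[l] -= 1.0 - p_w
        theta += lr * grad
        theta -= theta.mean()  # scores are shift-invariant; center for identifiability
    return theta

# Toy preference data over three responses: 0 beats 1, 0 beats 2, 1 beats 2.
scores = fit_btl(3, [(0, 1), (0, 2), (1, 2)])
print(np.argsort(-scores))  # ranking from best to worst
```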
https://aclanthology.org/2024.figlang-1.1.bib
https://aclanthology.org/2024.figlang-1.1/
@inproceedings{jang-etal-2024-context, title = "Context vs. Human Disagreement in Sarcasm Detection", author = "Jang, Hyewon and Jakob, Moritz and Frassinelli, Diego", editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.1", doi = "10.18653/v1/2024.figlang-1.1", pages = "1--7", abstract = "Prior work has highlighted the importance of context in the identification of sarcasm by humans and language models. This work examines how much context is required for a better identification of sarcasm by both parties. We collect textual responses to dialogical prompts and sarcasm judgment to the responses placed after long contexts, short contexts, and no contexts. We find that both for humans and language models, the presence of context is generally important in identifying sarcasm in the response. But increasing the amount of context provides no added benefit to humans (long = short {\textgreater} none). This is the same for language models, but only on easily agreed-upon sentences; for sentences with disagreement among human evaluators, different models show different behavior. We also show how sarcasm detection patterns stay consistent as the amount of context is manipulated despite the low agreement in human evaluation.", }
Prior work has highlighted the importance of context in the identification of sarcasm by humans and language models. This work examines how much context is required for a better identification of sarcasm by both parties. We collect textual responses to dialogical prompts and sarcasm judgment to the responses placed after long contexts, short contexts, and no contexts. We find that both for humans and language models, the presence of context is generally important in identifying sarcasm in the response. But increasing the amount of context provides no added benefit to humans (long = short > none). This is the same for language models, but only on easily agreed-upon sentences; for sentences with disagreement among human evaluators, different models show different behavior. We also show how sarcasm detection patterns stay consistent as the amount of context is manipulated despite the low agreement in human evaluation.
[ "Jang, Hyewon", "Jakob, Moritz", "Frassinelli, Diego" ]
Context vs. Human Disagreement in Sarcasm Detection
figlang-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.figlang-1.2.bib
https://aclanthology.org/2024.figlang-1.2/
@inproceedings{hankins-2024-optimizing, title = "Optimizing Multilingual Euphemism Detection using Low-Rank Adaption Within and Across Languages", author = "Hankins, Nicholas", editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.2", doi = "10.18653/v1/2024.figlang-1.2", pages = "8--14", abstract = "This short paper presents an investigation into the effectiveness of various classification methods as a submission in the Multilingual Euphemism Detection Shared Task for the Fourth Workshop on Figurative Language Processing co-located with NAACL 2024. The process used by the participant utilizes pre-trained large language models combined with parameter efficient fine-tuning methods, specifically Low-Rank Adaptation (LoRA), in classifying euphemisms across four different languages - Mandarin Chinese, American English, Spanish, and Yor{\`u}b{\'a}. The study is comprised of three main components that aim to explore heuristic methods to navigate how base models can most efficiently be fine-tuned into classifiers to learn figurative language. Multilingual labeled training data was utilized to fine-tune classifiers for each language, and later combined for one large classifier, while unseen test data was finally used to evaluate the accuracy of the best performing classifiers. In addition, cross-lingual tests were conducted by applying each language{'}s data on each of the other language{'}s classifiers. All of the results provide insights into the potential of pre-trained base models combined with LoRA fine-tuning methods in accurately classifying euphemisms across and within different languages.", }
This short paper presents an investigation into the effectiveness of various classification methods as a submission in the Multilingual Euphemism Detection Shared Task for the Fourth Workshop on Figurative Language Processing co-located with NAACL 2024. The process used by the participant utilizes pre-trained large language models combined with parameter efficient fine-tuning methods, specifically Low-Rank Adaptation (LoRA), in classifying euphemisms across four different languages - Mandarin Chinese, American English, Spanish, and Yorùbá. The study is comprised of three main components that aim to explore heuristic methods to navigate how base models can most efficiently be fine-tuned into classifiers to learn figurative language. Multilingual labeled training data was utilized to fine-tune classifiers for each language, and later combined for one large classifier, while unseen test data was finally used to evaluate the accuracy of the best performing classifiers. In addition, cross-lingual tests were conducted by applying each language's data on each of the other language's classifiers. All of the results provide insights into the potential of pre-trained base models combined with LoRA fine-tuning methods in accurately classifying euphemisms across and within different languages.
[ "Hankins, Nicholas" ]
Optimizing Multilingual Euphemism Detection using Low-Rank Adaption Within and Across Languages
figlang-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
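The euphemism-detection abstract above fine-tunes pretrained language models with Low-Rank Adaptation (LoRA) for multilingual classification. A minimal sketch of that setup using the `peft` library with an assumed multilingual backbone; the rank, alpha, and target modules are illustrative defaults, not the paper's settings:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "xlm-roberta-base"  # assumed multilingual backbone; the paper's checkpoints may differ
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

lora_config = LoraConfig(
    r=8,                                 # rank of the low-rank update matrices
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],   # attention projections in RoBERTa-style models
    task_type="SEQ_CLS",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # only the small LoRA matrices are trained
```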
https://aclanthology.org/2024.figlang-1.3.bib
https://aclanthology.org/2024.figlang-1.3/
@inproceedings{khaliq-etal-2024-comparison, title = "Comparison of Image Generation Models for Abstract and Concrete Event Descriptions", author = "Khaliq, Mohammed and Frassinelli, Diego and Schulte Im Walde, Sabine", editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.3", doi = "10.18653/v1/2024.figlang-1.3", pages = "15--21", abstract = "With the advent of diffusion-based image generation models such as DALL-E, Stable Diffusion and Midjourney, high quality images can be easily generated using textual inputs. It is unclear, however, to what extent the generated images resemble human mental representations, especially regarding abstract event knowledge. We analyse the capability of four state-of-the-art models in generating images of verb-object event pairs when we systematically manipulate the degrees of abstractness of both the verbs and the object nouns. Human judgements assess the generated images and demonstrate that DALL-E is strongest for event pairs with concrete nouns (e.g., {``}pour water{''}; {``}believe person{''}), while Midjourney is preferred for event pairs with abstract nouns (e.g., {``}raise awareness{''}; {``}remain mystery{''}), irrespective of the concreteness of the verb. Across models, humans were most unsatisfied with images of events pairs that combined concrete verbs with abstract direct-object nouns (e.g., {``}speak truth{''}), and an additional ad-hoc annotation contributes this to its potential for figurative language.", }
With the advent of diffusion-based image generation models such as DALL-E, Stable Diffusion and Midjourney, high quality images can be easily generated using textual inputs. It is unclear, however, to what extent the generated images resemble human mental representations, especially regarding abstract event knowledge. We analyse the capability of four state-of-the-art models in generating images of verb-object event pairs when we systematically manipulate the degrees of abstractness of both the verbs and the object nouns. Human judgements assess the generated images and demonstrate that DALL-E is strongest for event pairs with concrete nouns (e.g., "pour water"; "believe person"), while Midjourney is preferred for event pairs with abstract nouns (e.g., "raise awareness"; "remain mystery"), irrespective of the concreteness of the verb. Across models, humans were most unsatisfied with images of event pairs that combined concrete verbs with abstract direct-object nouns (e.g., "speak truth"), and an additional ad-hoc annotation attributes this to its potential for figurative language.
[ "Khaliq, Mohammed", "Frassinelli, Diego", "Schulte Im Walde, Sabine" ]
Comparison of Image Generation Models for Abstract and Concrete Event Descriptions
figlang-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.figlang-1.4.bib
https://aclanthology.org/2024.figlang-1.4/
@inproceedings{hulsing-schulte-im-walde-2024-cross, title = "Cross-Lingual Metaphor Detection for Low-Resource Languages", author = {H{\"u}lsing, Anna and Schulte Im Walde, Sabine}, editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.4", doi = "10.18653/v1/2024.figlang-1.4", pages = "22--34", abstract = "Research on metaphor detection (MD) in a multilingual setup has recently gained momentum. As for many tasks, it is however unclear how the amount of data used to pretrain large language models affects the performance, and whether non-neural models might provide a reasonable alternative, especially for MD in low-resource languages. This paper compares neural and non-neural cross-lingual models for English as the source language and Russian, German and Latin as target languages. In a series of experiments we show that the neural cross-lingual adapter architecture MAD-X performs best across target languages. Zero-shot classification with mBERT achieves decent results above the majority baseline, while few-shot classification with mBERT heavily depends on shot-selection, which is inconvenient in a cross-lingual setup where no validation data for the target language exists. The non-neural model, a random forest classifier with conceptual features, is outperformed by the neural models. Overall, we recommend MAD-X for metaphor detection not only in high-resource but also in low-resource scenarios regarding the amounts of pretraining data for mBERT.", }
Research on metaphor detection (MD) in a multilingual setup has recently gained momentum. As for many tasks, it is however unclear how the amount of data used to pretrain large language models affects the performance, and whether non-neural models might provide a reasonable alternative, especially for MD in low-resource languages. This paper compares neural and non-neural cross-lingual models for English as the source language and Russian, German and Latin as target languages. In a series of experiments we show that the neural cross-lingual adapter architecture MAD-X performs best across target languages. Zero-shot classification with mBERT achieves decent results above the majority baseline, while few-shot classification with mBERT heavily depends on shot-selection, which is inconvenient in a cross-lingual setup where no validation data for the target language exists. The non-neural model, a random forest classifier with conceptual features, is outperformed by the neural models. Overall, we recommend MAD-X for metaphor detection not only in high-resource but also in low-resource scenarios regarding the amounts of pretraining data for mBERT.
[ "H{\\\"u}lsing, Anna", "Schulte Im Walde, Sabine" ]
Cross-Lingual Metaphor Detection for Low-Resource Languages
figlang-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.figlang-1.5.bib
https://aclanthology.org/2024.figlang-1.5/
@inproceedings{de-luca-fornaciari-etal-2024-hard, title = "A Hard Nut to Crack: Idiom Detection with Conversational Large Language Models", author = "De Luca Fornaciari, Francesca and Altuna, Bego{\~n}a and Gonzalez-Dios, Itziar and Melero, Maite", editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.5", doi = "10.18653/v1/2024.figlang-1.5", pages = "35--44", abstract = "In this work, we explore idiomatic language processing with Large Language Models (LLMs). We introduce the Idiomatic language Test Suite IdioTS, a dataset of difficult examples specifically designed by language experts to assess the capabilities of LLMs to process figurative language at sentence level. We propose a comprehensive evaluation methodology based on an idiom detection task, where LLMs are prompted with detecting an idiomatic expression in a given English sentence. We present a thorough automatic and manual evaluation of the results and a comprehensive error analysis.", }
In this work, we explore idiomatic language processing with Large Language Models (LLMs). We introduce the Idiomatic language Test Suite IdioTS, a dataset of difficult examples specifically designed by language experts to assess the capabilities of LLMs to process figurative language at sentence level. We propose a comprehensive evaluation methodology based on an idiom detection task, where LLMs are prompted with detecting an idiomatic expression in a given English sentence. We present a thorough automatic and manual evaluation of the results and a comprehensive error analysis.
[ "De Luca Fornaciari, Francesca", "Altuna, Bego{\\~n}a", "Gonzalez-Dios, Itziar", "Melero, Maite" ]
A Hard Nut to Crack: Idiom Detection with Conversational Large Language Models
figlang-1.5
Poster
2405.10579
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.figlang-1.6.bib
https://aclanthology.org/2024.figlang-1.6/
@inproceedings{kuhn-mitrovic-2024-elephant, title = "The Elephant in the Room: Ten Challenges of Computational Detection of Rhetorical Figures", author = {K{\"u}hn, Ramona and Mitrovi{\'c}, Jelena}, editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.6", doi = "10.18653/v1/2024.figlang-1.6", pages = "45--52", abstract = "Computational detection of rhetorical figures focuses mostly on figures such as metaphor, irony, or analogy. However, there exist many more figures that are neither less important nor less prevalent. We wanted to pinpoint the reasons why researchers often avoid other figures and to shed light on the challenges they struggle with when investigating those figures. In this comprehensive survey, we analyzed over 40 papers dealing with the computational detection of rhetorical figures other than metaphor, simile, sarcasm, and irony. We encountered recurrent challenges from which we compiled a ten point list. Furthermore, we suggest solutions for each challenge to encourage researchers to investigate a greater variety of rhetorical figures.", }
Computational detection of rhetorical figures focuses mostly on figures such as metaphor, irony, or analogy. However, there exist many more figures that are neither less important nor less prevalent. We wanted to pinpoint the reasons why researchers often avoid other figures and to shed light on the challenges they struggle with when investigating those figures. In this comprehensive survey, we analyzed over 40 papers dealing with the computational detection of rhetorical figures other than metaphor, simile, sarcasm, and irony. We encountered recurrent challenges from which we compiled a ten point list. Furthermore, we suggest solutions for each challenge to encourage researchers to investigate a greater variety of rhetorical figures.
[ "K{\\\"u}hn, Ramona", "Mitrovi{\\'c}, Jelena" ]
The Elephant in the Room: Ten Challenges of Computational Detection of Rhetorical Figures
figlang-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.figlang-1.7.bib
https://aclanthology.org/2024.figlang-1.7/
@inproceedings{dipper-etal-2024-guidelines, title = "Guidelines for the Annotation of Intentional Linguistic Metaphor", author = "Dipper, Stefanie and Roussel, Adam and Wiemann, Alexandra and Kim, Won and Nguyen, Tra-my", editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.7", doi = "10.18653/v1/2024.figlang-1.7", pages = "53--58", abstract = "This paper presents guidelines for the annotation of intentional (i.e. non-conventionalized) linguistic metaphors. Expressions that contribute to the same metaphorical image are annotated as a chain, additionally a semantically contrasting expression of the target domain is marked as an anchor. So far, a corpus of ten TEDx talks with a total of 20k tokens has been annotated according to these guidelines. 1.25{\%} of the tokens are intentional metaphorical expressions.", }
This paper presents guidelines for the annotation of intentional (i.e. non-conventionalized) linguistic metaphors. Expressions that contribute to the same metaphorical image are annotated as a chain; additionally, a semantically contrasting expression of the target domain is marked as an anchor. So far, a corpus of ten TEDx talks with a total of 20k tokens has been annotated according to these guidelines. 1.25{\%} of the tokens are intentional metaphorical expressions.
[ "Dipper, Stefanie", "Roussel, Adam", "Wiemann, Alex", "ra", "Kim, Won", "Nguyen, Tra-my" ]
Guidelines for the Annotation of Intentional Linguistic Metaphor
figlang-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.figlang-1.8.bib
https://aclanthology.org/2024.figlang-1.8/
@inproceedings{montero-etal-2024-evaluating, title = "Evaluating the Development of Linguistic Metaphor Annotation in {M}exican {S}panish Popular Science Tweets", author = "Montero, Alec and Bel-Enguix, Gemma and Ojeda-Trueba, Sergio-Luis and Col{\'\i}n Rodea, Marisela", editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.8", doi = "10.18653/v1/2024.figlang-1.8", pages = "59--64", abstract = "Following previous work on metaphor annotation and automatic metaphor processing, this study presents the evaluation of an initial phase in the novel area of linguistic metaphor detection in Mexican Spanish popular science tweets. Specifically, we examine the challenges posed by the annotation process stemming from disagreement among annotators. During this phase of our work, we conducted the annotation of a corpus comprising 3733 Mexican Spanish popular science tweets. This corpus was divided into two halves and each half was then assigned to two different pairs of native Mexican Spanish-speaking annotators. Despite rigorous methodology and continuous training, inter-annotator agreement as measured by Cohen{'}s kappa was found to be low, slightly above chance levels, although the concordance percentage exceeded 60{\%}. By elucidating the inherent complexity of metaphor annotation tasks, our evaluation emphasizes the implications of these findings and offers insights for future research in this field, with the aim of creating a robust dataset for machine learning.", }
Following previous work on metaphor annotation and automatic metaphor processing, this study presents the evaluation of an initial phase in the novel area of linguistic metaphor detection in Mexican Spanish popular science tweets. Specifically, we examine the challenges posed by the annotation process stemming from disagreement among annotators. During this phase of our work, we conducted the annotation of a corpus comprising 3733 Mexican Spanish popular science tweets. This corpus was divided into two halves and each half was then assigned to two different pairs of native Mexican Spanish-speaking annotators. Despite rigorous methodology and continuous training, inter-annotator agreement as measured by Cohen{'}s kappa was found to be low, slightly above chance levels, although the concordance percentage exceeded 60{\%}. By elucidating the inherent complexity of metaphor annotation tasks, our evaluation emphasizes the implications of these findings and offers insights for future research in this field, with the aim of creating a robust dataset for machine learning.
[ "Montero, Alec", "Bel-Enguix, Gemma", "Ojeda-Trueba, Sergio-Luis", "Col{\\'\\i}n Rodea, Marisela" ]
Evaluating the Development of Linguistic Metaphor Annotation in Mexican Spanish Popular Science Tweets
figlang-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
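Editorial aside (not part of the dataset record above): the Montero et al. abstract reports inter-annotator agreement via Cohen's kappa that is low yet above chance level. As a minimal illustration of how that statistic contrasts observed agreement with chance agreement, the Python sketch below uses hypothetical binary metaphor labels from two annotators; none of the numbers come from the paper.

# Minimal sketch: Cohen's kappa for two annotators on a binary metaphor label.
# The annotations below are hypothetical and only illustrate the computation.
from collections import Counter

annotator_a = ["met", "lit", "lit", "met", "lit", "lit", "met", "lit"]
annotator_b = ["met", "lit", "met", "lit", "lit", "lit", "met", "met"]

n = len(annotator_a)
observed = sum(a == b for a, b in zip(annotator_a, annotator_b)) / n  # p_o

# Chance agreement p_e: probability that both annotators pick the same label by chance.
counts_a, counts_b = Counter(annotator_a), Counter(annotator_b)
labels = set(annotator_a) | set(annotator_b)
expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)  # p_e

kappa = (observed - expected) / (1 - expected)
print(f"p_o={observed:.2f}  p_e={expected:.2f}  kappa={kappa:.2f}")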
https://aclanthology.org/2024.figlang-1.9.bib
https://aclanthology.org/2024.figlang-1.9/
@inproceedings{firsich-rios-2024-gpt4, title = "Can {GPT}4 Detect Euphemisms across Multiple Languages?", author = "Firsich, Todd and Rios, Anthony", editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.9", doi = "10.18653/v1/2024.figlang-1.9", pages = "65--72", abstract = "A euphemism is a word or phrase used in place of another word or phrase that might be considered harsh, blunt, unpleasant, or offensive. Euphemisms generally soften the impact of what is being said, making it more palatable or appropriate for the context or audience. Euphemisms can vary significantly between languages, reflecting cultural sensitivities and taboos, and what might be a mild expression in one language could carry a stronger connotation or be completely misunderstood in another. This paper uses prompting techniques to evaluate OpenAI{'}s GPT4 for detecting euphemisms across multiple languages as part of the 2024 FigLang shared task. We evaluate both zero-shot and few-shot approaches. Our method achieved an average macro F1 of .732, ranking first in the competition. Moreover, we found that GPT4 does not perform uniformly across all languages, with a difference of .233 between the best (English .831) and the worst (Spanish .598) languages.", }
A euphemism is a word or phrase used in place of another word or phrase that might be considered harsh, blunt, unpleasant, or offensive. Euphemisms generally soften the impact of what is being said, making it more palatable or appropriate for the context or audience. Euphemisms can vary significantly between languages, reflecting cultural sensitivities and taboos, and what might be a mild expression in one language could carry a stronger connotation or be completely misunderstood in another. This paper uses prompting techniques to evaluate OpenAI{'}s GPT4 for detecting euphemisms across multiple languages as part of the 2024 FigLang shared task. We evaluate both zero-shot and few-shot approaches. Our method achieved an average macro F1 of .732, ranking first in the competition. Moreover, we found that GPT4 does not perform uniformly across all languages, with a difference of .233 between the best (English .831) and the worst (Spanish .598) languages.
[ "Firsich, Todd", "Rios, Anthony" ]
Can GPT4 Detect Euphemisms across Multiple Languages?
figlang-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
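Editorial aside (not part of the dataset record above): the Firsich and Rios abstract ranks systems by macro F1 averaged over languages. The sketch below illustrates that metric with scikit-learn on hypothetical gold and predicted euphemism labels; the languages and scores are illustrative only, not the shared-task results.

# Minimal sketch: macro F1 per language, then averaged across languages.
# All labels here are hypothetical; only the computation mirrors the metric named above.
from sklearn.metrics import f1_score

predictions = {
    "english": {"gold": [1, 0, 1, 1, 0, 0], "pred": [1, 0, 1, 0, 0, 0]},
    "spanish": {"gold": [1, 1, 0, 0, 1, 0], "pred": [0, 1, 0, 1, 1, 0]},
}

per_language = {
    lang: f1_score(d["gold"], d["pred"], average="macro")
    for lang, d in predictions.items()
}
average_macro_f1 = sum(per_language.values()) / len(per_language)

for lang, score in per_language.items():
    print(f"{lang}: macro F1 = {score:.3f}")
print(f"average across languages = {average_macro_f1:.3f}")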
https://aclanthology.org/2024.figlang-1.10.bib
https://aclanthology.org/2024.figlang-1.10/
@inproceedings{vitiugin-paakki-2024-ensemble, title = "Ensemble-based Multilingual Euphemism Detection: a Behavior-Guided Approach", author = "Vitiugin, Fedor and Paakki, Henna", editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.10", doi = "10.18653/v1/2024.figlang-1.10", pages = "73--78", abstract = "This paper describes the system submitted by our team to the Multilingual Euphemism Detection Shared Task for the Fourth Workshop on Figurative Language Processing (FigLang 2024). We propose a novel model for multilingual euphemism detection, combining contextual and behavior-related features. The system classifies texts that potentially contain euphemistic terms with an ensemble classifier based on outputs from behavior-related fine-tuned models. Our results show that, for this kind of task, our model outperforms baselines and state-of-the-art euphemism detection methods. As for the leader-board, our classification model achieved a macro averaged F1 score of [anonymized], reaching the [anonymized] place.", }
This paper describes the system submitted by our team to the Multilingual Euphemism Detection Shared Task for the Fourth Workshop on Figurative Language Processing (FigLang 2024). We propose a novel model for multilingual euphemism detection, combining contextual and behavior-related features. The system classifies texts that potentially contain euphemistic terms with an ensemble classifier based on outputs from behavior-related fine-tuned models. Our results show that, for this kind of task, our model outperforms baselines and state-of-the-art euphemism detection methods. On the leaderboard, our classification model achieved a macro averaged F1 score of [anonymized], reaching the [anonymized] place.
[ "Vitiugin, Fedor", "Paakki, Henna" ]
Ensemble-based Multilingual Euphemism Detection: a Behavior-Guided Approach
figlang-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
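Editorial aside (not part of the dataset record above): the Vitiugin and Paakki abstract describes an ensemble classifier built on the outputs of several fine-tuned models. The sketch below shows one generic way such outputs can be combined, a simple majority vote over per-model probabilities; this is an assumed illustration, not the paper's actual ensembling method.

# Illustrative sketch: ensembling binary predictions from several fine-tuned models
# by majority vote. The component probabilities below are hypothetical; the paper
# above uses its own ensemble over behavior-related fine-tuned models.
from statistics import mean

def majority_vote(probabilities, threshold=0.5):
    # Each probability is one component model's score for the "euphemistic" class.
    votes = [int(p >= threshold) for p in probabilities]
    return int(sum(votes) > len(votes) / 2)

# One row per candidate text: euphemistic-class probabilities from three component models.
component_outputs = [
    [0.81, 0.64, 0.40],  # two of three models say "euphemistic"
    [0.22, 0.35, 0.58],  # only one model says "euphemistic"
]

for probs in component_outputs:
    print(f"mean score={mean(probs):.2f}  ensemble label={majority_vote(probs)}")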
https://aclanthology.org/2024.figlang-1.11.bib
https://aclanthology.org/2024.figlang-1.11/
@inproceedings{uduehi-bunescu-2024-expectation, title = "An Expectation-Realization Model for Metaphor Detection", author = "Uduehi, Oseremen and Bunescu, Razvan", editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.11", doi = "10.18653/v1/2024.figlang-1.11", pages = "79--84", abstract = "We propose a new model for metaphor detection in which an expectation component estimates representations of expected word meanings in a given context, whereas a realization component computes representations of target word meanings in context. We also introduce a systematic evaluation methodology that estimates generalization performance in three settings: within distribution, a new strong out of distribution setting, and a novel out-of-pretraining setting. Across all settings, the expectation-realization model obtains results that are competitive with or better than previous metaphor detection models.", }
We propose a new model for metaphor detection in which an expectation component estimates representations of expected word meanings in a given context, whereas a realization component computes representations of target word meanings in context. We also introduce a systematic evaluation methodology that estimates generalization performance in three settings: within distribution, a new strong out of distribution setting, and a novel out-of-pretraining setting. Across all settings, the expectation-realization model obtains results that are competitive with or better than previous metaphor detection models.
[ "Uduehi, Oseremen", "Bunescu, Razvan" ]
An Expectation-Realization Model for Metaphor Detection
figlang-1.11
Poster
2311.03963
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.figlang-1.12.bib
https://aclanthology.org/2024.figlang-1.12/
@inproceedings{chen-etal-2024-textual, title = "A Textual Modal Supplement Framework for Understanding Multi-Modal Figurative Language", author = "Chen, Jiale and Yang, Qihao and Dong, Xuelian and Mao, Xiaoling and Hao, Tianyong", editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.12", doi = "10.18653/v1/2024.figlang-1.12", pages = "85--91", abstract = "Figurative language in media such as memes, art, or comics has gained dramatic interest recently. However, the challenge remains in accurately justifying and explaining whether an image caption complements or contradicts the image it accompanies. To tackle this problem, we design a modal-supplement framework MAPPER consisting of a describer and thinker. The describer based on a frozen large vision model is designed to describe an image in detail to capture entailed semantic information. The thinker based on a finetuned large multi-modal model is designed to utilize description, claim and image to make prediction and explanation. Experiment results on a publicly available benchmark dataset from FigLang2024 Task 2 show that our method ranks at top 1 in overall evaluation, the performance exceeds the second place by 28.57{\%}. This indicates that MAPPER is highly effective in understanding, judging and explaining of the figurative language. The source code is available at https://github.com/Libv-Team/figlang2024.", }
Figurative language in media such as memes, art, or comics has gained dramatic interest recently. However, the challenge remains in accurately justifying and explaining whether an image caption complements or contradicts the image it accompanies. To tackle this problem, we design a modal-supplement framework MAPPER consisting of a describer and a thinker. The describer, based on a frozen large vision model, is designed to describe an image in detail to capture entailed semantic information. The thinker, based on a finetuned large multi-modal model, is designed to utilize the description, claim and image to make predictions and explanations. Experiment results on a publicly available benchmark dataset from FigLang2024 Task 2 show that our method ranks top 1 in the overall evaluation; its performance exceeds the second-place system by 28.57{\%}. This indicates that MAPPER is highly effective at understanding, judging and explaining figurative language. The source code is available at https://github.com/Libv-Team/figlang2024.
[ "Chen, Jiale", "Yang, Qihao", "Dong, Xuelian", "Mao, Xiaoling", "Hao, Tianyong" ]
A Textual Modal Supplement Framework for Understanding Multi-Modal Figurative Language
figlang-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.figlang-1.13.bib
https://aclanthology.org/2024.figlang-1.13/
@inproceedings{yang-wang-2024-figclip, title = "{F}ig{CLIP}: A Generative Multimodal Model with Bidirectional Cross-attention for Understanding Figurative Language via Visual Entailment", author = "Yang, Qihao and Wang, Xuelin", editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.13", doi = "10.18653/v1/2024.figlang-1.13", pages = "92--98", abstract = "This is a system paper for the FigLang-2024 Multimodal Figurative Language Shared Task. Figurative language is generally represented through multiple modalities, facilitating the expression of complex and abstract ideas. With the popularity of various text-to-image tools, a large number of images containing metaphors or ironies are created. Traditional recognizing textual entailment has been extended to the task of understanding figurative language via visual entailment. However, existing pre-trained multimodal models in open domains often struggle with this task due to the intertwining of counterfactuals, human culture, and imagination. To bridge this gap, we propose FigCLIP, an end-to-end model based on CLIP and GPT-2, to identify multimodal figurative semantics and generate explanations. It employs a bidirectional fusion module with cross-attention and leverages explanations to promote the alignment of figurative image-text representations. Experimental results on the benchmark demonstrate the effectiveness of our method, achieving 70{\%} F1-score, 67{\%} F1@50-score and 50{\%} F1@60-score. It outperforms GPT-4V, which has robust visual reasoning capabilities.", }
This is a system paper for the FigLang-2024 Multimodal Figurative Language Shared Task. Figurative language is generally represented through multiple modalities, facilitating the expression of complex and abstract ideas. With the popularity of various text-to-image tools, a large number of images containing metaphors or ironies have been created. The traditional task of recognizing textual entailment has been extended to understanding figurative language via visual entailment. However, existing pre-trained multimodal models in open domains often struggle with this task due to the intertwining of counterfactuals, human culture, and imagination. To bridge this gap, we propose FigCLIP, an end-to-end model based on CLIP and GPT-2, to identify multimodal figurative semantics and generate explanations. It employs a bidirectional fusion module with cross-attention and leverages explanations to promote the alignment of figurative image-text representations. Experimental results on the benchmark demonstrate the effectiveness of our method, achieving 70{\%} F1-score, 67{\%} F1@50-score and 50{\%} F1@60-score. It outperforms GPT-4V, which has robust visual reasoning capabilities.
[ "Yang, Qihao", "Wang, Xuelin" ]
FigCLIP: A Generative Multimodal Model with Bidirectional Cross-attention for Understanding Figurative Language via Visual Entailment
figlang-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.figlang-1.14.bib
https://aclanthology.org/2024.figlang-1.14/
@inproceedings{simon-2024-register, title = "The Register-specific Distribution of Personification in {H}ungarian: A Corpus-driven Analysis", author = "Simon, Gabor", editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.14", doi = "10.18653/v1/2024.figlang-1.14", pages = "99--109", abstract = "The aim of the paper is twofold: (i) to present an extended version of the PerSE corpus, the language resource for investigating personification in Hungarian; (ii) to explore the semantic and lexicogrammatical patterns of Hungarian personification in a corpus-driven analysis, based on the current version of the research corpus. PerSE corpus is compiled from online available Hungarian texts in different registers including journalistic (car reviews and reports on interstate relations) and academic discourse (original research papers from different fields). The paper provides the reader with the infrastructure and the protocol of the semi-automatic and manual annotation in the corpus. Then it gives an overview of the register-specific distribution of personifications and focuses on some of its lexicogrammatical patterns.", }
The aim of the paper is twofold: (i) to present an extended version of the PerSE corpus, the language resource for investigating personification in Hungarian; (ii) to explore the semantic and lexicogrammatical patterns of Hungarian personification in a corpus-driven analysis, based on the current version of the research corpus. The PerSE corpus is compiled from Hungarian texts available online in different registers, including journalistic (car reviews and reports on interstate relations) and academic discourse (original research papers from different fields). The paper provides the reader with the infrastructure and the protocol of the semi-automatic and manual annotation in the corpus. Then it gives an overview of the register-specific distribution of personifications and focuses on some of its lexicogrammatical patterns.
[ "Simon, Gabor" ]
The Register-specific Distribution of Personification in Hungarian: A Corpus-driven Analysis
figlang-1.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.figlang-1.15.bib
https://aclanthology.org/2024.figlang-1.15/
@inproceedings{lee-feldman-2024-report, title = "Report on the Multilingual Euphemism Detection Task", author = "Lee, Patrick and Feldman, Anna", editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.15", doi = "10.18653/v1/2024.figlang-1.15", pages = "110--114", abstract = "This paper presents the Multilingual Euphemism Detection Shared Task for the Fourth Workshop on Figurative Language Processing (FigLang 2024) held in conjunction with NAACL 2024. Participants were invited to attempt the euphemism detection task on four different languages (American English, global Spanish, Yor{\`u}b{\'a}, and Mandarin Chinese): given input text containing a potentially euphemistic term (PET), determine if its use is euphemistic or not. We present the expanded datasets used for the shared task, summarize each team{'}s methods and findings, and analyze potential implications for future research.", }
This paper presents the Multilingual Euphemism Detection Shared Task for the Fourth Workshop on Figurative Language Processing (FigLang 2024) held in conjunction with NAACL 2024. Participants were invited to attempt the euphemism detection task on four different languages (American English, global Spanish, Yor{\`u}b{\'a}, and Mandarin Chinese): given input text containing a potentially euphemistic term (PET), determine if its use is euphemistic or not. We present the expanded datasets used for the shared task, summarize each team{'}s methods and findings, and analyze potential implications for future research.
[ "Lee, Patrick", "Feldman, Anna" ]
Report on the Multilingual Euphemism Detection Task
figlang-1.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.figlang-1.16.bib
https://aclanthology.org/2024.figlang-1.16/
@inproceedings{kulkarni-etal-2024-report, title = "A Report on the {F}ig{L}ang 2024 Shared Task on Multimodal Figurative Language", author = "Kulkarni, Shreyas and Saakyan, Arkadiy and Chakrabarty, Tuhin and Muresan, Smaranda", editor = "Ghosh, Debanjan and Muresan, Smaranda and Feldman, Anna and Chakrabarty, Tuhin and Liu, Emmy", booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)", month = jun, year = "2024", address = "Mexico City, Mexico (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.figlang-1.16", doi = "10.18653/v1/2024.figlang-1.16", pages = "115--119", abstract = "We present the outcomes of the Multimodal Figurative Language Shared Task held at the 4th Workshop on Figurative Language Processing (FigLang 2024) co-located at NAACL 2024. The task utilized the V-FLUTE dataset which is comprised of $<$image, text$>$ pairs that use figurative language and includes detailed textual explanations for the entailment or contradiction relationship of each pair. The challenge for participants was to develop models capable of accurately identifying the visual entailment relationship in these multimodal instances and generating persuasive free-text explanations. The results showed that the participants{'} models significantly outperformed the initial baselines in both automated and human evaluations. We also provide an overview of the systems submitted and analyze the results of the evaluations. All participating systems outperformed the LLaVA-ZS baseline, provided by us in F1-score.", }
We present the outcomes of the Multimodal Figurative Language Shared Task held at the 4th Workshop on Figurative Language Processing (FigLang 2024) co-located at NAACL 2024. The task utilized the V-FLUTE dataset, which comprises $<$image, text$>$ pairs that use figurative language and includes detailed textual explanations for the entailment or contradiction relationship of each pair. The challenge for participants was to develop models capable of accurately identifying the visual entailment relationship in these multimodal instances and generating persuasive free-text explanations. The results showed that the participants{'} models significantly outperformed the initial baselines in both automated and human evaluations. We also provide an overview of the systems submitted and analyze the results of the evaluations. All participating systems outperformed the LLaVA-ZS baseline we provided, in terms of F1-score.
[ "Kulkarni, Shreyas", "Saakyan, Arkadiy", "Chakrabarty, Tuhin", "Muresan, Smar", "a" ]
A Report on the FigLang 2024 Shared Task on Multimodal Figurative Language
figlang-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.hcinlp-1.1.bib
https://aclanthology.org/2024.hcinlp-1.1/
@inproceedings{jiao-etal-2024-examining, title = "Examining Prosody in Spoken Navigation Instructions for People with Disabilities", author = "Jiao, Cathy and Steinfeld, Aaron and Eskenazi, Maxine", editor = "Blodgett, Su Lin and Curry, Amanda Cercas and Dey, Sunipa and Madaio, Michael and Nenkova, Ani and Yang, Diyi and Xiao, Ziang", booktitle = "Proceedings of the Third Workshop on Bridging Human--Computer Interaction and Natural Language Processing", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.hcinlp-1.1", doi = "10.18653/v1/2024.hcinlp-1.1", pages = "1--12", abstract = "The introduction of conversational systems have made synthesized speech technologies common tools for daily activities. However, not all synthetic speech systems are designed with the needs of people with disabilities in mind. This paper describes a study in which 198 people {--} 80 participants with self-reported disabilities and 118 participants without {--} were recruited to listen to navigation instructions from a spoken dialogue system with different prosodic features. Results showed that slowing down speech rate aids in participants{'} number recall, but not in noun recall. From our results, we provide suggestions for developers for building accessible synthetic speech systems.", }
The introduction of conversational systems has made synthesized speech technologies common tools for daily activities. However, not all synthetic speech systems are designed with the needs of people with disabilities in mind. This paper describes a study in which 198 people {--} 80 participants with self-reported disabilities and 118 participants without {--} were recruited to listen to navigation instructions from a spoken dialogue system with different prosodic features. Results showed that slowing down speech rate aids in participants{'} number recall, but not in noun recall. From our results, we provide suggestions for developers for building accessible synthetic speech systems.
[ "Jiao, Cathy", "Steinfeld, Aaron", "Eskenazi, Maxine" ]
Examining Prosody in Spoken Navigation Instructions for People with Disabilities
hcinlp-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.hcinlp-1.2.bib
https://aclanthology.org/2024.hcinlp-1.2/
@inproceedings{kunz-kuhlmann-2024-properties, title = "Properties and Challenges of {LLM}-Generated Explanations", author = "Kunz, Jenny and Kuhlmann, Marco", editor = "Blodgett, Su Lin and Curry, Amanda Cercas and Dey, Sunipa and Madaio, Michael and Nenkova, Ani and Yang, Diyi and Xiao, Ziang", booktitle = "Proceedings of the Third Workshop on Bridging Human--Computer Interaction and Natural Language Processing", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.hcinlp-1.2", doi = "10.18653/v1/2024.hcinlp-1.2", pages = "13--27", abstract = "The self-rationalising capabilities of large language models (LLMs) have been explored in restricted settings, using task-specific data sets.However, current LLMs do not (only) rely on specifically annotated data; nonetheless, they frequently explain their outputs.The properties of the generated explanations are influenced by the pre-training corpus and by the target data used for instruction fine-tuning.As the pre-training corpus includes a large amount of human-written explanations {``}in the wild{''}, we hypothesise that LLMs adopt common properties of human explanations.By analysing the outputs for a multi-domain instruction fine-tuning data set, we find that generated explanations show selectivity and contain illustrative elements, but less frequently are subjective or misleading.We discuss reasons and consequences of the properties{'} presence or absence. In particular, we outline positive and negative implications depending on the goals and user groups of the self-rationalising system.", }
The self-rationalising capabilities of large language models (LLMs) have been explored in restricted settings, using task-specific data sets. However, current LLMs do not (only) rely on specifically annotated data; nonetheless, they frequently explain their outputs. The properties of the generated explanations are influenced by the pre-training corpus and by the target data used for instruction fine-tuning. As the pre-training corpus includes a large amount of human-written explanations {``}in the wild{''}, we hypothesise that LLMs adopt common properties of human explanations. By analysing the outputs for a multi-domain instruction fine-tuning data set, we find that generated explanations show selectivity and contain illustrative elements, but less frequently are subjective or misleading. We discuss reasons and consequences of the properties{'} presence or absence. In particular, we outline positive and negative implications depending on the goals and user groups of the self-rationalising system.
[ "Kunz, Jenny", "Kuhlmann, Marco" ]
Properties and Challenges of LLM-Generated Explanations
hcinlp-1.2
Poster
2402.10532
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.hcinlp-1.3.bib
https://aclanthology.org/2024.hcinlp-1.3/
@inproceedings{byun-etal-2024-reference, title = "This Reference Does Not Exist: An Exploration of {LLM} Citation Accuracy and Relevance", author = "Byun, Courtni and Vasicek, Piper and Seppi, Kevin", editor = "Blodgett, Su Lin and Curry, Amanda Cercas and Dey, Sunipa and Madaio, Michael and Nenkova, Ani and Yang, Diyi and Xiao, Ziang", booktitle = "Proceedings of the Third Workshop on Bridging Human--Computer Interaction and Natural Language Processing", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.hcinlp-1.3", doi = "10.18653/v1/2024.hcinlp-1.3", pages = "28--39", abstract = "Citations are a fundamental and indispensable part of research writing. They provide support and lend credibility to research findings. Recent GPT-fueled interest in large language models (LLMs) has shone a spotlight on the capabilities and limitations of these models when generating relevant citations for a document. Recent work has focused largely on title and author accuracy. We underline this effort and expand on it with a preliminary exploration in relevance of model-recommended citations. We define three citation-recommendation tasks. We also collect and annotate a dataset of model-recommended citations for those tasks. We find that GPT-4 largely outperforms earlier models on both author and title accuracy in two markedly different CS venues, but may not recommend references that are more relevant than those recommended by the earlier models. The two venues we compare are CHI and EMNLP. All models appear to perform better at recommending EMNLP papers than CHI papers.", }
Citations are a fundamental and indispensable part of research writing. They provide support and lend credibility to research findings. Recent GPT-fueled interest in large language models (LLMs) has shone a spotlight on the capabilities and limitations of these models when generating relevant citations for a document. Recent work has focused largely on title and author accuracy. We underline this effort and expand on it with a preliminary exploration in relevance of model-recommended citations. We define three citation-recommendation tasks. We also collect and annotate a dataset of model-recommended citations for those tasks. We find that GPT-4 largely outperforms earlier models on both author and title accuracy in two markedly different CS venues, but may not recommend references that are more relevant than those recommended by the earlier models. The two venues we compare are CHI and EMNLP. All models appear to perform better at recommending EMNLP papers than CHI papers.
[ "Byun, Courtni", "Vasicek, Piper", "Seppi, Kevin" ]
This Reference Does Not Exist: An Exploration of LLM Citation Accuracy and Relevance
hcinlp-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.hcinlp-1.4.bib
https://aclanthology.org/2024.hcinlp-1.4/
@inproceedings{choi-etal-2024-combining, title = "Combining Multiple Metrics for Evaluating Retrieval-Augmented Conversations", author = "Choi, Jason Ingyu and Collins, Marcus and Agichtein, Eugene and Rokhlenko, Oleg and Malmasi, Shervin", editor = "Blodgett, Su Lin and Curry, Amanda Cercas and Dey, Sunipa and Madaio, Michael and Nenkova, Ani and Yang, Diyi and Xiao, Ziang", booktitle = "Proceedings of the Third Workshop on Bridging Human--Computer Interaction and Natural Language Processing", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.hcinlp-1.4", doi = "10.18653/v1/2024.hcinlp-1.4", pages = "40--50", abstract = "Conversational AI is a subtype of Human Computer Interaction that has gained wide adoption. These systems are typically powered by Large Language Models (LLMs) that use Retrieval Augmented Generation (RAG) to infuse external knowledge, which is effective against issues like hallucination. However, automatically evaluating retrieval augmented conversations with minimal human effort remains challenging, particularly in online settings. We address this challenge by proposing a lexical metric, and a novel method for combining it with other metrics, including semantic models. Our approach involves: (1) Conversational Information Utility (CIU), a new automated metric inspired by prior user studies on web search evaluation, to compute information overlap between conversation context and grounded information in an unsupervised, purely lexical way; and (2) a generalized reward model through Mixture-of-Experts (MoE-CIU) that dynamically ensembles CIU with other metrics, including learned ones, into a single reward. Evaluation against human ratings on two public datasets (Topical Chat and Persona Chat) shows that CIU improves correlation against human judgments by 2.0{\%} and 0.9{\%} respectively compared to the second best metric. When MoE is applied to combine lexical and learned semantic metrics, correlations further improve by 9.9{\%} and 5.0{\%}, suggesting that unified reward models are a promising approach.", }
Conversational AI is a subtype of Human Computer Interaction that has gained wide adoption. These systems are typically powered by Large Language Models (LLMs) that use Retrieval Augmented Generation (RAG) to infuse external knowledge, which is effective against issues like hallucination. However, automatically evaluating retrieval augmented conversations with minimal human effort remains challenging, particularly in online settings. We address this challenge by proposing a lexical metric, and a novel method for combining it with other metrics, including semantic models. Our approach involves: (1) Conversational Information Utility (CIU), a new automated metric inspired by prior user studies on web search evaluation, to compute information overlap between conversation context and grounded information in an unsupervised, purely lexical way; and (2) a generalized reward model through Mixture-of-Experts (MoE-CIU) that dynamically ensembles CIU with other metrics, including learned ones, into a single reward. Evaluation against human ratings on two public datasets (Topical Chat and Persona Chat) shows that CIU improves correlation against human judgments by 2.0{\%} and 0.9{\%} respectively compared to the second best metric. When MoE is applied to combine lexical and learned semantic metrics, correlations further improve by 9.9{\%} and 5.0{\%}, suggesting that unified reward models are a promising approach.
[ "Choi, Jason Ingyu", "Collins, Marcus", "Agichtein, Eugene", "Rokhlenko, Oleg", "Malmasi, Shervin" ]
Combining Multiple Metrics for Evaluating Retrieval-Augmented Conversations
hcinlp-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
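Editorial aside (not part of the dataset record above): the Choi et al. abstract describes CIU as an unsupervised, purely lexical measure of information overlap between the conversation context and the grounded (retrieved) information. The paper's exact formulation is not given here, so the sketch below only illustrates the general idea of token-overlap scoring; the tokenization, stopword list, and weighting are assumptions made for illustration, not the CIU metric itself.

# Illustrative sketch of a lexical-overlap score between a generated response and
# grounded (retrieved) text. This is NOT the CIU metric from the paper above; it only
# shows the generic idea of unsupervised lexical overlap that the abstract alludes to.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "it", "that"}

def content_tokens(text):
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    return Counter(t for t in tokens if t not in STOPWORDS)

def lexical_overlap(response, grounding):
    # Fraction of the response's content tokens that also appear in the grounding.
    resp, ground = content_tokens(response), content_tokens(grounding)
    if not resp:
        return 0.0
    covered = sum(min(count, ground[token]) for token, count in resp.items())
    return covered / sum(resp.values())

grounding = "The Golden Gate Bridge opened in 1937 and spans the Golden Gate strait."
response = "It opened in 1937 and crosses the Golden Gate strait."
print(f"overlap = {lexical_overlap(response, grounding):.2f}")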
https://aclanthology.org/2024.hcinlp-1.5.bib
https://aclanthology.org/2024.hcinlp-1.5/
@inproceedings{shaib-etal-2024-much, title = "How Much Annotation is Needed to Compare Summarization Models?", author = "Shaib, Chantal and Barrow, Joe and Siu, Alexa and Wallace, Byron and Nenkova, Ani", editor = "Blodgett, Su Lin and Curry, Amanda Cercas and Dey, Sunipa and Madaio, Michael and Nenkova, Ani and Yang, Diyi and Xiao, Ziang", booktitle = "Proceedings of the Third Workshop on Bridging Human--Computer Interaction and Natural Language Processing", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.hcinlp-1.5", doi = "10.18653/v1/2024.hcinlp-1.5", pages = "51--59", abstract = "Modern instruction-tuned models have become highly capable in text generation tasks such as summarization, and are expected to be released at a steady pace. In practice one may now wish to choose confidently, but with minimal effort, the best performing summarization model when applied to a new domain or purpose. In this work, we empirically investigate the test sample size necessary to select a preferred model in the context of news summarization. Empirical results reveal that comparative evaluation converges quickly for both automatic and human evaluation, with clear preferences for a system emerging from under 100 examples. The human preference data allows us to quantify how well automatic scores can reproduce preference rankings across a variety of downstream summarization tasks. We find that, while automatic metrics are stable at smaller sample sizes, only some automatic metrics are able to moderately predict model win rates according to human preference.", }
Modern instruction-tuned models have become highly capable in text generation tasks such as summarization, and are expected to be released at a steady pace. In practice one may now wish to choose confidently, but with minimal effort, the best performing summarization model when applied to a new domain or purpose. In this work, we empirically investigate the test sample size necessary to select a preferred model in the context of news summarization. Empirical results reveal that comparative evaluation converges quickly for both automatic and human evaluation, with clear preferences for a system emerging from under 100 examples. The human preference data allows us to quantify how well automatic scores can reproduce preference rankings across a variety of downstream summarization tasks. We find that, while automatic metrics are stable at smaller sample sizes, only some automatic metrics are able to moderately predict model win rates according to human preference.
[ "Shaib, Chantal", "Barrow, Joe", "Siu, Alexa", "Wallace, Byron", "Nenkova, Ani" ]
How Much Annotation is Needed to Compare Summarization Models?
hcinlp-1.5
Poster
2402.18756
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
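Editorial aside (not part of the dataset record above): the Shaib et al. abstract asks how many annotated comparisons are needed before a preference between two summarization systems stabilises. The simulation below illustrates how a pairwise win-rate estimate converges as the sample size grows; the assumed true win probability is hypothetical and not taken from the paper.

# Illustrative simulation: how the estimated win rate between two summarizers
# stabilises as more annotated pairwise comparisons are collected. The true win
# probability below is hypothetical and not taken from the paper above.
import random

random.seed(0)
TRUE_WIN_PROB = 0.65  # assumed probability that system A beats system B on one example

def estimated_win_rate(n_examples):
    wins = sum(random.random() < TRUE_WIN_PROB for _ in range(n_examples))
    return wins / n_examples

for n in (10, 25, 50, 100, 500):
    print(f"n={n:4d}  estimated win rate = {estimated_win_rate(n):.2f}")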
https://aclanthology.org/2024.hcinlp-1.6.bib
https://aclanthology.org/2024.hcinlp-1.6/
@inproceedings{nigam-etal-2024-interactive, title = "An Interactive Co-Pilot for Accelerated Research Ideation", author = "Nigam, Harshit and Patwardhan, Manasi and Vig, Lovekesh and Shroff, Gautam", editor = "Blodgett, Su Lin and Curry, Amanda Cercas and Dey, Sunipa and Madaio, Michael and Nenkova, Ani and Yang, Diyi and Xiao, Ziang", booktitle = "Proceedings of the Third Workshop on Bridging Human--Computer Interaction and Natural Language Processing", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.hcinlp-1.6", doi = "10.18653/v1/2024.hcinlp-1.6", pages = "60--73", abstract = "In the realm of research support tools, there exists a notable void in resources tailored specifically for aiding researchers during the crucial ideation phase of the research life-cycle. We address this gap by introducing {`}Acceleron{'}, a {`}Co-Pilot{'} for researchers, designed specifically to accelerate the ideation phase of the research life-cycle. Leveraging the reasoning and domain-specific skills of Large Language Models (LLMs) within an agent-based architecture with distinct personas, Acceleron aids researchers through the formulation of a comprehensive research proposals. It emulates the ideation process, engaging researchers in an interactive fashion to validate the novelty of the proposal and generate plausible set-of hypotheses. Notably, it addresses challenges inherent in LLMs, such as hallucinations, implements a two-stage aspect-based retrieval to manage precision-recall trade-offs, and tackles issues of unanswerability. Our observations and end-user evaluations illustrate the efficacy of Acceleron as an enhancer of researcher{'}s productivity.", }
In the realm of research support tools, there exists a notable void in resources tailored specifically for aiding researchers during the crucial ideation phase of the research life-cycle. We address this gap by introducing {`}Acceleron{'}, a {`}Co-Pilot{'} for researchers, designed specifically to accelerate the ideation phase of the research life-cycle. Leveraging the reasoning and domain-specific skills of Large Language Models (LLMs) within an agent-based architecture with distinct personas, Acceleron aids researchers through the formulation of comprehensive research proposals. It emulates the ideation process, engaging researchers in an interactive fashion to validate the novelty of the proposal and generate a plausible set of hypotheses. Notably, it addresses challenges inherent in LLMs, such as hallucinations, implements a two-stage aspect-based retrieval to manage precision-recall trade-offs, and tackles issues of unanswerability. Our observations and end-user evaluations illustrate the efficacy of Acceleron as an enhancer of researchers{'} productivity.
[ "Nigam, Harshit", "Patwardhan, Manasi", "Vig, Lovekesh", "Shroff, Gautam" ]
An Interactive Co-Pilot for Accelerated Research Ideation
hcinlp-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.hcinlp-1.7.bib
https://aclanthology.org/2024.hcinlp-1.7/
@inproceedings{koli-etal-2024-sensemaking, title = "Sensemaking of Socially-Mediated Crisis Information", author = "Koli, Vrushali and Yuan, Jun and Dasgupta, Aritra", editor = "Blodgett, Su Lin and Curry, Amanda Cercas and Dey, Sunipa and Madaio, Michael and Nenkova, Ani and Yang, Diyi and Xiao, Ziang", booktitle = "Proceedings of the Third Workshop on Bridging Human--Computer Interaction and Natural Language Processing", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.hcinlp-1.7", doi = "10.18653/v1/2024.hcinlp-1.7", pages = "74--81", abstract = "In times of crisis, the human mind is often a voracious information forager. It might not be immediately apparent what one wants or needs, and people frequently look for answers to their most pressing questions and worst fears. In that context, the pandemic has demonstrated that social media sources, like erstwhile Twitter, are a rich medium for data-driven communication between experts and the public.However, as lay users, we must find needles in a haystack to distinguish credible and actionable information signals from the noise. In this work, we leverage the literature on crisis communication to propose an AI-driven sensemaking model that bridges the gap between what people seek and what they need during a crisis. Our model learns to contrast social media messages concerning expert guidance with subjective opinion and enables semantic interpretation of message characteristics based on the communicative intent of the message author. We provide examples from our tweet collection and present a hypothetical social media usage scenario to demonstrate the efficacy of our proposed model.", }
In times of crisis, the human mind is often a voracious information forager. It might not be immediately apparent what one wants or needs, and people frequently look for answers to their most pressing questions and worst fears. In that context, the pandemic has demonstrated that social media sources, like erstwhile Twitter, are a rich medium for data-driven communication between experts and the public. However, as lay users, we must find needles in a haystack to distinguish credible and actionable information signals from the noise. In this work, we leverage the literature on crisis communication to propose an AI-driven sensemaking model that bridges the gap between what people seek and what they need during a crisis. Our model learns to contrast social media messages concerning expert guidance with subjective opinion and enables semantic interpretation of message characteristics based on the communicative intent of the message author. We provide examples from our tweet collection and present a hypothetical social media usage scenario to demonstrate the efficacy of our proposed model.
[ "Koli, Vrushali", "Yuan, Jun", "Dasgupta, Aritra" ]
Sensemaking of Socially-Mediated Crisis Information
hcinlp-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.hcinlp-1.8.bib
https://aclanthology.org/2024.hcinlp-1.8/
@inproceedings{gautam-srinath-2024-blind, title = "Blind Spots and Biases: Exploring the Role of Annotator Cognitive Biases in {NLP}", author = "Gautam, Sanjana and Srinath, Mukund", editor = "Blodgett, Su Lin and Curry, Amanda Cercas and Dey, Sunipa and Madaio, Michael and Nenkova, Ani and Yang, Diyi and Xiao, Ziang", booktitle = "Proceedings of the Third Workshop on Bridging Human--Computer Interaction and Natural Language Processing", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.hcinlp-1.8", doi = "10.18653/v1/2024.hcinlp-1.8", pages = "82--88", abstract = "With the rapid proliferation of artificial intelligence, there is growing concern over its potential to exacerbate existing biases and societal disparities and introduce novel ones. This issue has prompted widespread attention from academia, policymakers, industry, and civil society. While evidence suggests that integrating human perspectives can mitigate bias-related issues in AI systems, it also introduces challenges associated with cognitive biases inherent in human decision-making. Our research focuses on reviewing existing methodologies and ongoing investigations aimed at understanding annotation attributes that contribute to bias.", }
With the rapid proliferation of artificial intelligence, there is growing concern over its potential to exacerbate existing biases and societal disparities and introduce novel ones. This issue has prompted widespread attention from academia, policymakers, industry, and civil society. While evidence suggests that integrating human perspectives can mitigate bias-related issues in AI systems, it also introduces challenges associated with cognitive biases inherent in human decision-making. Our research focuses on reviewing existing methodologies and ongoing investigations aimed at understanding annotation attributes that contribute to bias.
[ "Gautam, Sanjana", "Srinath, Mukund" ]
Blind Spots and Biases: Exploring the Role of Annotator Cognitive Biases in NLP
hcinlp-1.8
Poster
2404.19071
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.hcinlp-1.9.bib
https://aclanthology.org/2024.hcinlp-1.9/
@inproceedings{wang-etal-2024-llmcheckup, title = "{LLMC}heckup: Conversational Examination of Large Language Models via Interpretability Tools and Self-Explanations", author = {Wang, Qianli and Anikina, Tatiana and Feldhus, Nils and Genabith, Josef and Hennig, Leonhard and M{\"o}ller, Sebastian}, editor = "Blodgett, Su Lin and Curry, Amanda Cercas and Dey, Sunipa and Madaio, Michael and Nenkova, Ani and Yang, Diyi and Xiao, Ziang", booktitle = "Proceedings of the Third Workshop on Bridging Human--Computer Interaction and Natural Language Processing", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.hcinlp-1.9", doi = "10.18653/v1/2024.hcinlp-1.9", pages = "89--104", abstract = "Interpretability tools that offer explanations in the form of a dialogue have demonstrated their efficacy in enhancing users{'} understanding (Slack et al., 2023; Shen et al., 2023), as one-off explanations may fall short in providing sufficient information to the user. Current solutions for dialogue-based explanations, however, often require external tools and modules and are not easily transferable to tasks they were not designed for. With $\texttt{LLMCheckup}$, we present an easily accessible tool that allows users to chat with any state-of-the-art large language model (LLM) about its behavior. We enable LLMs to generate explanations and perform user intent recognition without fine-tuning, by connecting them with a broad spectrum of Explainable AI (XAI) methods, including white-box explainability tools such as feature attributions, and self-explanations (e.g., for rationale generation). LLM-based (self-)explanations are presented as an interactive dialogue that supports follow-up questions and generates suggestions. $\texttt{LLMCheckup}$ provides tutorials for operations available in the system, catering to individuals with varying levels of expertise in XAI and supporting multiple input modalities. We introduce a new parsing strategy that substantially enhances the user intent recognition accuracy of the LLM. Finally, we showcase $\texttt{LLMCheckup}$ for the tasks of fact checking and commonsense question answering. Our code repository: https://github.com/DFKI-NLP/LLMCheckup", }
Interpretability tools that offer explanations in the form of a dialogue have demonstrated their efficacy in enhancing users{'} understanding (Slack et al., 2023; Shen et al., 2023), as one-off explanations may fall short in providing sufficient information to the user. Current solutions for dialogue-based explanations, however, often require external tools and modules and are not easily transferable to tasks they were not designed for. With $\texttt{LLMCheckup}$, we present an easily accessible tool that allows users to chat with any state-of-the-art large language model (LLM) about its behavior. We enable LLMs to generate explanations and perform user intent recognition without fine-tuning, by connecting them with a broad spectrum of Explainable AI (XAI) methods, including white-box explainability tools such as feature attributions, and self-explanations (e.g., for rationale generation). LLM-based (self-)explanations are presented as an interactive dialogue that supports follow-up questions and generates suggestions. $\texttt{LLMCheckup}$ provides tutorials for operations available in the system, catering to individuals with varying levels of expertise in XAI and supporting multiple input modalities. We introduce a new parsing strategy that substantially enhances the user intent recognition accuracy of the LLM. Finally, we showcase $\texttt{LLMCheckup}$ for the tasks of fact checking and commonsense question answering. Our code repository: https://github.com/DFKI-NLP/LLMCheckup
[ "Wang, Qianli", "Anikina, Tatiana", "Feldhus, Nils", "Genabith, Josef", "Hennig, Leonhard", "M{\\\"o}ller, Sebastian" ]
LLMCheckup: Conversational Examination of Large Language Models via Interpretability Tools and Self-Explanations
hcinlp-1.9
Poster
2401.12576
[ "https://github.com/dfki-nlp/llmcheckup" ]
https://huggingface.co/papers/2401.12576
0
2
0
6
1
[]
[]
[]
https://aclanthology.org/2024.insights-1.1.bib
https://aclanthology.org/2024.insights-1.1/
@inproceedings{ye-etal-2024-mosecrot, title = "{M}o{SEC}ro{T}: Model Stitching with Static Word Embeddings for Crosslingual Zero-shot Transfer", author = {Ye, Haotian and Liu, Yihong and Ma, Chunlan and Sch{\"u}tze, Hinrich}, editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.1", doi = "10.18653/v1/2024.insights-1.1", pages = "1--7", abstract = "Transformer-based pre-trained language models (PLMs) have achieved remarkable performance in various natural language processing (NLP) tasks. However, pre-training such models can take considerable resources that are almost only available to high-resource languages. On the contrary, static word embeddings are easier to train in terms of computing resources and the amount of data required. In this paper, we introduce MoSECroT (Model Stitching with Static Word Embeddings for Crosslingual Zero-shot Transfer), a novel and challenging task that is especially relevant to low-resource languages for which static word embeddings are available. To tackle the task, we present the first framework that leverages relative representations to construct a common space for the embeddings of a source language PLM and the static word embeddings of a target language. In this way, we can train the PLM on source-language training data and perform zero-shot transfer to the target language by simply swapping the embedding layer. However, through extensive experiments on two classification datasets, we show that although our proposed framework is competitive with weak baselines when addressing MoSECroT, it fails to achieve competitive results compared with some strong baselines. In this paper, we attempt to explain this negative result and provide several thoughts on possible improvement.", }
Transformer-based pre-trained language models (PLMs) have achieved remarkable performance in various natural language processing (NLP) tasks. However, pre-training such models can take considerable resources that are almost only available to high-resource languages. On the contrary, static word embeddings are easier to train in terms of computing resources and the amount of data required. In this paper, we introduce MoSECroT (Model Stitching with Static Word Embeddings for Crosslingual Zero-shot Transfer), a novel and challenging task that is especially relevant to low-resource languages for which static word embeddings are available. To tackle the task, we present the first framework that leverages relative representations to construct a common space for the embeddings of a source language PLM and the static word embeddings of a target language. In this way, we can train the PLM on source-language training data and perform zero-shot transfer to the target language by simply swapping the embedding layer. However, through extensive experiments on two classification datasets, we show that although our proposed framework is competitive with weak baselines when addressing MoSECroT, it fails to achieve competitive results compared with some strong baselines. In this paper, we attempt to explain this negative result and provide several thoughts on possible improvement.
[ "Ye, Haotian", "Liu, Yihong", "Ma, Chunlan", "Sch{\\\"u}tze, Hinrich" ]
MoSECroT: Model Stitching with Static Word Embeddings for Crosslingual Zero-shot Transfer
insights-1.1
Poster
2401.04821
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.2.bib
https://aclanthology.org/2024.insights-1.2/
@inproceedings{garcia-de-herreros-etal-2024-explains, title = "What explains the success of cross-modal fine-tuning with {ORCA}?", author = "Garcia De Herreros, Paloma and Gautam, Vagrant and Slusallek, Philipp and Klakow, Dietrich and Mosbach, Marius", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.2", doi = "10.18653/v1/2024.insights-1.2", pages = "8--16", abstract = "ORCA (Shen et al., 2023) is a recent technique for cross-modal fine-tuning, i.e., applying pre-trained transformer models to modalities beyond their training data. The technique consists primarily of training an embedder and fine-tuning the embedder and model. Despite its high performance on a variety of downstream tasks, we do not understand precisely how each of these components contribute to ORCA{'}s success. Therefore, we run a series of ablations and find that embedder training does not help 2D tasks at all, contrary to what the original paper posits. In 1D tasks, some amount of embedder training is necessary but more is not better. In 4 out of 6 datasets we experiment with, it is model fine-tuning that makes the biggest difference. Through our ablations and baselines, we contribute a better understanding of the individual components of ORCA.", }
ORCA (Shen et al., 2023) is a recent technique for cross-modal fine-tuning, i.e., applying pre-trained transformer models to modalities beyond their training data. The technique consists primarily of training an embedder and fine-tuning the embedder and model. Despite its high performance on a variety of downstream tasks, we do not understand precisely how each of these components contribute to ORCA{'}s success. Therefore, we run a series of ablations and find that embedder training does not help 2D tasks at all, contrary to what the original paper posits. In 1D tasks, some amount of embedder training is necessary but more is not better. In 4 out of 6 datasets we experiment with, it is model fine-tuning that makes the biggest difference. Through our ablations and baselines, we contribute a better understanding of the individual components of ORCA.
[ "Garcia De Herreros, Paloma", "Gautam, Vagrant", "Slusallek, Philipp", "Klakow, Dietrich", "Mosbach, Marius" ]
What explains the success of cross-modal fine-tuning with ORCA?
insights-1.2
Poster
2403.13537
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.3.bib
https://aclanthology.org/2024.insights-1.3/
@inproceedings{gonzalez-gutierrez-etal-2024-fine, title = "Does Fine-tuning a Classifier Help in Low-budget Scenarios? Not Much", author = "Gonzalez - Gutierrez, Cesar and Primadhanty, Audi and Cazzaro, Francesco and Quattoni, Ariadna", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.3", doi = "10.18653/v1/2024.insights-1.3", pages = "17--24", abstract = "In recent years, the two-step approach for text classification based on pre-training plus fine-tuning has led to significant improvements in classification performance. In this paper, we study the low-budget scenario, and we ask whether it is justified to allocate the additional resources needed for fine-tuning complex models. To do so, we isolate the gains obtained from pre-training from those obtained from fine-tuning. We find out that, when the gains from pre-training are factored out, the performance attained by using complex transformer models leads to marginal improvements over simpler models. Therefore, in this scenario, utilizing simpler classifiers on top of pre-trained representations proves to be a viable alternative.", }
In recent years, the two-step approach for text classification based on pre-training plus fine-tuning has led to significant improvements in classification performance. In this paper, we study the low-budget scenario, and we ask whether it is justified to allocate the additional resources needed for fine-tuning complex models. To do so, we isolate the gains obtained from pre-training from those obtained from fine-tuning. We find out that, when the gains from pre-training are factored out, the performance attained by using complex transformer models leads to marginal improvements over simpler models. Therefore, in this scenario, utilizing simpler classifiers on top of pre-trained representations proves to be a viable alternative.
[ "Gonzalez - Gutierrez, Cesar", "Primadhanty, Audi", "Cazzaro, Francesco", "Quattoni, Ariadna" ]
Does Fine-tuning a Classifier Help in Low-budget Scenarios? Not Much
insights-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.4.bib
https://aclanthology.org/2024.insights-1.4/
@inproceedings{sanchez-carmona-etal-2024-well, title = "How Well Can a Genetic Algorithm Fine-tune Transformer Encoders? A First Approach", author = "Sanchez Carmona, Vicente Ivan and Jiang, Shanshan and Dong, Bin", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.4", doi = "10.18653/v1/2024.insights-1.4", pages = "25--33", abstract = "Genetic Algorithms (GAs) have been studied across different fields such as engineering or medicine to optimize diverse problems such as network routing, or medical image segmentation. Moreover, they have been used to automatically find optimal architectures for deep neural networks. However, to our knowledge, they have not been applied as a weight optimizer for the Transformer model. While gradient descent has been the main paradigm for this task, we believe that GAs have advantages to bring to the table. In this paper, we will show that even though GAs are capable of fine-tuning Transformer encoders, their generalization ability is considerably poorer than that from Adam; however, on a closer look, GAs ability to exploit knowledge from 2 different pretraining datasets surpasses Adam{'}s ability to do so.", }
Genetic Algorithms (GAs) have been studied across different fields such as engineering or medicine to optimize diverse problems such as network routing, or medical image segmentation. Moreover, they have been used to automatically find optimal architectures for deep neural networks. However, to our knowledge, they have not been applied as a weight optimizer for the Transformer model. While gradient descent has been the main paradigm for this task, we believe that GAs have advantages to bring to the table. In this paper, we will show that even though GAs are capable of fine-tuning Transformer encoders, their generalization ability is considerably poorer than that from Adam; however, on a closer look, GAs ability to exploit knowledge from 2 different pretraining datasets surpasses Adam{'}s ability to do so.
[ "Sanchez Carmona, Vicente Ivan", "Jiang, Shanshan", "Dong, Bin" ]
How Well Can a Genetic Algorithm Fine-tune Transformer Encoders? A First Approach
insights-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.5.bib
https://aclanthology.org/2024.insights-1.5/
@inproceedings{mickus-etal-2024-attention, title = "{I} Have an Attention Bridge to Sell You: Generalization Capabilities of Modular Translation Architectures", author = "Mickus, Timothee and Vazquez, Raul and Attieh, Joseph", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.5", doi = "10.18653/v1/2024.insights-1.5", pages = "34--40", abstract = "Modularity is a paradigm of machine translation with the potential of bringing forth models that are large at training time and small during inference. Within this field of study, modular approaches, and in particular attention bridges, have been argued to improve the generalization capabilities of models by fostering language-independent representations. In the present paper, we study whether modularity affects translation quality; as well as how well modular architectures generalize across different evaluation scenarios. For a given computational budget, we find non-modular architectures to be always comparable or preferable to all modular designs we study.", }
Modularity is a paradigm of machine translation with the potential of bringing forth models that are large at training time and small during inference. Within this field of study, modular approaches, and in particular attention bridges, have been argued to improve the generalization capabilities of models by fostering language-independent representations. In the present paper, we study whether modularity affects translation quality; as well as how well modular architectures generalize across different evaluation scenarios. For a given computational budget, we find non-modular architectures to be always comparable or preferable to all modular designs we study.
[ "Mickus, Timothee", "Vazquez, Raul", "Attieh, Joseph" ]
I Have an Attention Bridge to Sell You: Generalization Capabilities of Modular Translation Architectures
insights-1.5
Poster
2404.17918
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.6.bib
https://aclanthology.org/2024.insights-1.6/
@inproceedings{bui-etal-2024-knowledge, title = "Knowledge Distillation vs. Pretraining from Scratch under a Fixed (Computation) Budget", author = "Bui, Minh Duc and Schmidt, Fabian and Glava{\v{s}}, Goran and Von Der Wense, Katharina", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.6", doi = "10.18653/v1/2024.insights-1.6", pages = "41--47", abstract = "Compared to standard language model (LM) pretraining (i.e., from scratch), Knowledge Distillation (KD) entails an additional forward pass through a teacher model that is typically substantially larger than the target student model. As such, KD in LM pretraining materially slows down throughput of pretraining instances vis-a-vis pretraining from scratch. Scaling laws of LM pretraining suggest that smaller models can close the gap to larger counterparts if trained on more data (i.e., processing more tokens){---}and under a fixed computation budget, smaller models are able to process more data than larger models. We thus hypothesize that KD might, in fact, be suboptimal to pretraining from scratch for obtaining smaller LMs, when appropriately accounting for the compute budget. To test this, we compare pretraining from scratch against several KD strategies for masked language modeling (MLM) in a fair experimental setup, with respect to amount of computation as well as pretraining data. Downstream results on GLUE, however, do not confirm our hypothesis: while pretraining from scratch performs comparably to ordinary KD under a fixed computation budget, more sophisticated KD strategies, namely TinyBERT and MiniLM, outperform it by a notable margin. We further find that KD yields larger gains over pretraining from scratch when the data can be repeated under the fixed computation budget.", }
Compared to standard language model (LM) pretraining (i.e., from scratch), Knowledge Distillation (KD) entails an additional forward pass through a teacher model that is typically substantially larger than the target student model. As such, KD in LM pretraining materially slows down throughput of pretraining instances vis-a-vis pretraining from scratch. Scaling laws of LM pretraining suggest that smaller models can close the gap to larger counterparts if trained on more data (i.e., processing more tokens){---}and under a fixed computation budget, smaller models are able to process more data than larger models. We thus hypothesize that KD might, in fact, be suboptimal to pretraining from scratch for obtaining smaller LMs, when appropriately accounting for the compute budget. To test this, we compare pretraining from scratch against several KD strategies for masked language modeling (MLM) in a fair experimental setup, with respect to amount of computation as well as pretraining data. Downstream results on GLUE, however, do not confirm our hypothesis: while pretraining from scratch performs comparably to ordinary KD under a fixed computation budget, more sophisticated KD strategies, namely TinyBERT and MiniLM, outperform it by a notable margin. We further find that KD yields larger gains over pretraining from scratch when the data can be repeated under the fixed computation budget.
[ "Bui, Minh Duc", "Schmidt, Fabian", "Glava{\\v{s}}, Goran", "Von Der Wense, Katharina" ]
Knowledge Distillation vs. Pretraining from Scratch under a Fixed (Computation) Budget
insights-1.6
Poster
2404.19319
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.7.bib
https://aclanthology.org/2024.insights-1.7/
@inproceedings{cognetta-etal-2024-analysis, title = "An Analysis of {BPE} Vocabulary Trimming in Neural Machine Translation", author = "Cognetta, Marco and Hiraoka, Tatsuya and Sennrich, Rico and Pinter, Yuval and Okazaki, Naoaki", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.7", doi = "10.18653/v1/2024.insights-1.7", pages = "48--50", abstract = "We explore threshold vocabulary trimming in Byte-Pair Encoding subword tokenization, a tokenization postprocessing step that replaces rare subwords with their component subwords. The technique is available in popular tokenization libraries but has not been subjected to rigorous scientific scrutiny. While the removal of rare subwords is suggested as best practice in model implementations, both as a means to reduce model size and for improving model performance through robustness, our experiments indicate that, across a large space of hyperparameter settings, vocabulary trimming fails to consistently improve model performance, and is even prone to incurring heavy degradation.", }
We explore threshold vocabulary trimming in Byte-Pair Encoding subword tokenization, a tokenization postprocessing step that replaces rare subwords with their component subwords. The technique is available in popular tokenization libraries but has not been subjected to rigorous scientific scrutiny. While the removal of rare subwords is suggested as best practice in model implementations, both as a means to reduce model size and for improving model performance through robustness, our experiments indicate that, across a large space of hyperparameter settings, vocabulary trimming fails to consistently improve model performance, and is even prone to incurring heavy degradation.
[ "Cognetta, Marco", "Hiraoka, Tatsuya", "Sennrich, Rico", "Pinter, Yuval", "Okazaki, Naoaki" ]
An Analysis of BPE Vocabulary Trimming in Neural Machine Translation
insights-1.7
Poster
2404.00397
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.8.bib
https://aclanthology.org/2024.insights-1.8/
@inproceedings{armengol-estape-etal-2024-limits, title = "On the Limits of Multi-modal Meta-Learning with Auxiliary Task Modulation Using Conditional Batch Normalization", author = "Armengol - Estape, Jordi and Michalski, Vincent and Kumar, Ramnath and St-Charles, Pierre - Luc and Precup, Doina and Ebrahimi Kahou, Samira", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.8", doi = "10.18653/v1/2024.insights-1.8", pages = "51--59", abstract = "Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples. Recent studies show that cross-modal learning can improve representations for few-shot classification. More specifically, language is a rich modality that can be used to guide visual learning. In this work, we experiment with a multi-modal architecture for few-shot learning that consists of three components: a classifier, an auxiliary network, and a bridge network. While the classifier performs the main classification task, the auxiliary network learns to predict language representations from the same input, and the bridge network transforms high-level features of the auxiliary network into modulation parameters for layers of the few-shot classifier using conditional batch normalization. The bridge should encourage a form of lightweight semantic alignment between language and vision which could be useful for the classifier. However, after evaluating the proposed approach on two popular few-shot classification benchmarks we find that a) the improvements do not reproduce across benchmarks, and b) when they do, the improvements are due to the additional compute and parameters introduced by the bridge network. We contribute insights and recommendations for future work in multi-modal meta-learning, especially when using language representations.", }
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples. Recent studies show that cross-modal learning can improve representations for few-shot classification. More specifically, language is a rich modality that can be used to guide visual learning. In this work, we experiment with a multi-modal architecture for few-shot learning that consists of three components: a classifier, an auxiliary network, and a bridge network. While the classifier performs the main classification task, the auxiliary network learns to predict language representations from the same input, and the bridge network transforms high-level features of the auxiliary network into modulation parameters for layers of the few-shot classifier using conditional batch normalization. The bridge should encourage a form of lightweight semantic alignment between language and vision which could be useful for the classifier. However, after evaluating the proposed approach on two popular few-shot classification benchmarks we find that a) the improvements do not reproduce across benchmarks, and b) when they do, the improvements are due to the additional compute and parameters introduced by the bridge network. We contribute insights and recommendations for future work in multi-modal meta-learning, especially when using language representations.
[ "Armengol - Estape, Jordi", "Michalski, Vincent", "Kumar, Ramnath", "St-Charles, Pierre - Luc", "Precup, Doina", "Ebrahimi Kahou, Samira" ]
On the Limits of Multi-modal Meta-Learning with Auxiliary Task Modulation Using Conditional Batch Normalization
insights-1.8
Poster
2405.18751
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.9.bib
https://aclanthology.org/2024.insights-1.9/
@inproceedings{bafna-etal-2024-pointer, title = "Pointer-Generator Networks for Low-Resource Machine Translation: Don{'}t Copy That!", author = "Bafna, Niyati and Koehn, Philipp and Yarowsky, David", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.9", doi = "10.18653/v1/2024.insights-1.9", pages = "60--72", abstract = "While Transformer-based neural machine translation (NMT) is very effective in high-resource settings, many languages lack the necessary large parallel corpora to benefit from it. In the context of low-resource (LR) MT between two closely-related languages, a natural intuition is to seek benefits from structural {``}shortcuts{''}, such as copying subwords from the source to the target, given that such language pairs often share a considerable number of identical words, cognates, and borrowings. We test Pointer-Generator Networks for this purpose for six language pairs over a variety of resource ranges, and find weak improvements for most settings. However, analysis shows that the model does not show greater improvements for closely-related vs. more distant language pairs, or for lower resource ranges, and that the models do not exhibit the expected usage of the mechanism for shared subwords. Our discussion of the reasons for this behaviour highlights several general challenges for LR NMT, such as modern tokenization strategies, noisy real-world conditions, and linguistic complexities. We call for better scrutiny of linguistically motivated improvements to NMT given the blackbox nature of Transformer models, as well as for a focus on the above problems in the field.", }
While Transformer-based neural machine translation (NMT) is very effective in high-resource settings, many languages lack the necessary large parallel corpora to benefit from it. In the context of low-resource (LR) MT between two closely-related languages, a natural intuition is to seek benefits from structural {``}shortcuts{''}, such as copying subwords from the source to the target, given that such language pairs often share a considerable number of identical words, cognates, and borrowings. We test Pointer-Generator Networks for this purpose for six language pairs over a variety of resource ranges, and find weak improvements for most settings. However, analysis shows that the model does not show greater improvements for closely-related vs. more distant language pairs, or for lower resource ranges, and that the models do not exhibit the expected usage of the mechanism for shared subwords. Our discussion of the reasons for this behaviour highlights several general challenges for LR NMT, such as modern tokenization strategies, noisy real-world conditions, and linguistic complexities. We call for better scrutiny of linguistically motivated improvements to NMT given the blackbox nature of Transformer models, as well as for a focus on the above problems in the field.
[ "Bafna, Niyati", "Koehn, Philipp", "Yarowsky, David" ]
Pointer-Generator Networks for Low-Resource Machine Translation: Don't Copy That!
insights-1.9
Poster
2403.10963
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.10.bib
https://aclanthology.org/2024.insights-1.10/
@inproceedings{cunha-etal-2024-imaginary, title = "Imaginary Numbers! Evaluating Numerical Referring Expressions by Neural End-to-End Surface Realization Systems", author = "Cunha, Rossana and Chinonso, Osuji and Campos, Jo{\~a}o and Timoney, Brian and Davis, Brian and Cozman, Fabio and Pagano, Adriana and Castro Ferreira, Thiago", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.10", doi = "10.18653/v1/2024.insights-1.10", pages = "73--81", abstract = "Neural end-to-end surface realizers output more fluent texts than classical architectures. However, they tend to suffer from adequacy problems, in particular hallucinations in numerical referring expression generation. This poses a problem to language generation in sensitive domains, as is the case of robot journalism covering COVID-19 and Amazon deforestation. We propose an approach whereby numerical referring expressions are converted from digits to plain word form descriptions prior to being fed to state-of-the-art Large Language Models. We conduct automatic and human evaluations to report the best strategy to numerical superficial realization. Code and data are publicly available.", }
Neural end-to-end surface realizers output more fluent texts than classical architectures. However, they tend to suffer from adequacy problems, in particular hallucinations in numerical referring expression generation. This poses a problem to language generation in sensitive domains, as is the case of robot journalism covering COVID-19 and Amazon deforestation. We propose an approach whereby numerical referring expressions are converted from digits to plain word form descriptions prior to being fed to state-of-the-art Large Language Models. We conduct automatic and human evaluations to report the best strategy to numerical superficial realization. Code and data are publicly available.
[ "Cunha, Rossana", "Chinonso, Osuji", "Campos, Jo{\\~a}o", "Timoney, Brian", "Davis, Brian", "Cozman, Fabio", "Pagano, Adriana", "Castro Ferreira, Thiago" ]
Imaginary Numbers! Evaluating Numerical Referring Expressions by Neural End-to-End Surface Realization Systems
insights-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.11.bib
https://aclanthology.org/2024.insights-1.11/
@inproceedings{breidenstein-labeau-2024-using, title = "Using Locally Learnt Word Representations for better Textual Anomaly Detection", author = "Breidenstein, Alicia and Labeau, Matthieu", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.11", doi = "10.18653/v1/2024.insights-1.11", pages = "82--91", abstract = "The literature on general purpose textual Anomaly Detection is quite sparse, as most textual anomaly detection methods are implemented as out of domain detection in the context of pre-established classification tasks. Notably, in a field where pre-trained representations and models are of common use, the impact of the pre-training data on a task that lacks supervision has not been studied. In this paper, we use the simple setting of k-classes out anomaly detection and search for the best pairing of representation and classifier. We show that well-chosen embeddings allow a simple anomaly detection baseline such as OC-SVM to achieve similar results and even outperform deep state-of-the-art models.", }
The literature on general purpose textual Anomaly Detection is quite sparse, as most textual anomaly detection methods are implemented as out of domain detection in the context of pre-established classification tasks. Notably, in a field where pre-trained representations and models are of common use, the impact of the pre-training data on a task that lacks supervision has not been studied. In this paper, we use the simple setting of k-classes out anomaly detection and search for the best pairing of representation and classifier. We show that well-chosen embeddings allow a simple anomaly detection baseline such as OC-SVM to achieve similar results and even outperform deep state-of-the-art models.
[ "Breidenstein, Alicia", "Labeau, Matthieu" ]
Using Locally Learnt Word Representations for better Textual Anomaly Detection
insights-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.12.bib
https://aclanthology.org/2024.insights-1.12/
@inproceedings{nathan-etal-2024-probing, title = "Can probing classifiers reveal the learning by contact center large language models?: No, it doesn{'}t!", author = "Nathan, Varun and Kumar, Ayush and Ingle, Digvijay", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.12", doi = "10.18653/v1/2024.insights-1.12", pages = "92--100", abstract = "Fine-tuning large language models (LLMs) with domain-specific instruction dataset has emerged as an effective method to enhance their domain-specific understanding. Yet, there is limited work that examines the core characteristics acquired during this process. In this study, we benchmark the fundamental characteristics learned by contact-center (CC) domain specific instruction fine-tuned LLMs with out-of-the-box (OOB) LLMs via probing tasks encompassing conversational, channel, and automatic speech recognition (ASR) properties. We explore different LLM architectures (Flan-T5 and Llama) and sizes (3B, 7B, 11B, 13B). Our findings reveal remarkable effectiveness of CC-LLMs on the in-domain downstream tasks, with improvement in response acceptability by over 48{\%} compared to OOB-LLMs. However, we observe that the performance of probing classifiers are relatively similar and does not reflect the performance of in-domain downstream tasks. A similar observation is also noted on SentEval dataset that assess capabilities of models in terms of surface, syntactic, and semantic information through probing tasks. Our study challenges the premise that probing classifiers can reveal the fundamental characteristics learned by large language models and is reflective of the downstream task performance, via a case-study of LLMs tuned for contact center domain.", }
Fine-tuning large language models (LLMs) with domain-specific instruction dataset has emerged as an effective method to enhance their domain-specific understanding. Yet, there is limited work that examines the core characteristics acquired during this process. In this study, we benchmark the fundamental characteristics learned by contact-center (CC) domain specific instruction fine-tuned LLMs with out-of-the-box (OOB) LLMs via probing tasks encompassing conversational, channel, and automatic speech recognition (ASR) properties. We explore different LLM architectures (Flan-T5 and Llama) and sizes (3B, 7B, 11B, 13B). Our findings reveal remarkable effectiveness of CC-LLMs on the in-domain downstream tasks, with improvement in response acceptability by over 48{\%} compared to OOB-LLMs. However, we observe that the performance of probing classifiers are relatively similar and does not reflect the performance of in-domain downstream tasks. A similar observation is also noted on SentEval dataset that assess capabilities of models in terms of surface, syntactic, and semantic information through probing tasks. Our study challenges the premise that probing classifiers can reveal the fundamental characteristics learned by large language models and is reflective of the downstream task performance, via a case-study of LLMs tuned for contact center domain.
[ "Nathan, Varun", "Kumar, Ayush", "Ingle, Digvijay" ]
Can probing classifiers reveal the learning by contact center large language models?: No, it doesn't!
insights-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.13.bib
https://aclanthology.org/2024.insights-1.13/
@inproceedings{vijay-hershcovich-2024-abstract, title = "Can {A}bstract {M}eaning {R}epresentation Facilitate Fair Legal Judgement Predictions?", author = "Vijay, Supriti and Hershcovich, Daniel", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.13", doi = "10.18653/v1/2024.insights-1.13", pages = "101--109", abstract = "Legal judgment prediction encompasses the automated prediction of case outcomes by leveraging historical facts and opinions. While this approach holds the potential to enhance the efficiency of the legal system, it also raises critical concerns regarding the perpetuation of biases. Abstract Meaning Representation has shown promise as an intermediate text representation in various downstream NLP tasks due to its ability to capture semantically meaningful information in a graph-like structure. In this paper, we employ this ability of AMR in the legal judgement prediction task and assess to what extent it encodes biases, or conversely, abstracts away from them. Our study reveals that while AMR-based models exhibit worse overall performance than transformer-based models, they are less biased for attributes like age and defendant state compared to gender. By shedding light on these findings, this paper contributes to a more nuanced understanding of AMR{'}s potential benefits and limitations in legal NLP.", }
Legal judgment prediction encompasses the automated prediction of case outcomes by leveraging historical facts and opinions. While this approach holds the potential to enhance the efficiency of the legal system, it also raises critical concerns regarding the perpetuation of biases. Abstract Meaning Representation has shown promise as an intermediate text representation in various downstream NLP tasks due to its ability to capture semantically meaningful information in a graph-like structure. In this paper, we employ this ability of AMR in the legal judgement prediction task and assess to what extent it encodes biases, or conversely, abstracts away from them. Our study reveals that while AMR-based models exhibit worse overall performance than transformer-based models, they are less biased for attributes like age and defendant state compared to gender. By shedding light on these findings, this paper contributes to a more nuanced understanding of AMR{'}s potential benefits and limitations in legal NLP.
[ "Vijay, Supriti", "Hershcovich, Daniel" ]
Can Abstract Meaning Representation Facilitate Fair Legal Judgement Predictions?
insights-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.14.bib
https://aclanthology.org/2024.insights-1.14/
@inproceedings{jin-etal-2024-winoviz, title = "{WINOVIZ}: Probing Visual Properties of Objects Under Different States", author = "Jin, Woojeong and Srinivasan, Tejas and Thomason, Jesse and Ren, Xiang", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.14", doi = "10.18653/v1/2024.insights-1.14", pages = "110--123", abstract = "Humans interpret visual aspects of objects based on contexts. For example, a banana appears brown when rotten and green when unripe. Previous studies focused on language models{'} grasp of typical object properties. We introduce WINOVIZ, a text-only dataset with 1,380 examples of probing language models{'} reasoning about diverse visual properties under different contexts. Our task demands pragmatic and visual knowledge reasoning. We also present multi-hop data, a more challenging version requiring multi-step reasoning chains. Experimental findings include: a) GPT-4 excels overall but struggles with multi-hop data. b) Large models perform well in pragmatic reasoning but struggle with visual knowledge reasoning. c) Vision-language models outperform language-only models.", }
Humans interpret visual aspects of objects based on contexts. For example, a banana appears brown when rotten and green when unripe. Previous studies focused on language models{'} grasp of typical object properties. We introduce WINOVIZ, a text-only dataset with 1,380 examples of probing language models{'} reasoning about diverse visual properties under different contexts. Our task demands pragmatic and visual knowledge reasoning. We also present multi-hop data, a more challenging version requiring multi-step reasoning chains. Experimental findings include: a) GPT-4 excels overall but struggles with multi-hop data. b) Large models perform well in pragmatic reasoning but struggle with visual knowledge reasoning. c) Vision-language models outperform language-only models.
[ "Jin, Woojeong", "Srinivasan, Tejas", "Thomason, Jesse", "Ren, Xiang" ]
WINOVIZ: Probing Visual Properties of Objects Under Different States
insights-1.14
Poster
2402.13584
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.15.bib
https://aclanthology.org/2024.insights-1.15/
@inproceedings{srivatsa-etal-2024-harnessing, title = "Harnessing the Power of Multiple Minds: Lessons Learned from {LLM} Routing", author = "Srivatsa, Kv Aditya and Maurya, Kaushal and Kochmar, Ekaterina", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.15", doi = "10.18653/v1/2024.insights-1.15", pages = "124--134", abstract = "With the rapid development of LLMs, it is natural to ask how to harness their capabilities efficiently. In this paper, we explore whether it is feasible to direct each input query to a single most suitable LLM. To this end, we propose LLM routing for challenging reasoning tasks. Our extensive experiments suggest that such routing shows promise but is not feasible in all scenarios, so more robust approaches should be investigated to fill this gap.", }
With the rapid development of LLMs, it is natural to ask how to harness their capabilities efficiently. In this paper, we explore whether it is feasible to direct each input query to a single most suitable LLM. To this end, we propose LLM routing for challenging reasoning tasks. Our extensive experiments suggest that such routing shows promise but is not feasible in all scenarios, so more robust approaches should be investigated to fill this gap.
[ "Srivatsa, Kv Aditya", "Maurya, Kaushal", "Kochmar, Ekaterina" ]
Harnessing the Power of Multiple Minds: Lessons Learned from LLM Routing
insights-1.15
Poster
2405.00467
[ "https://github.com/kvadityasrivatsa/llm-routing" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.16.bib
https://aclanthology.org/2024.insights-1.16/
@inproceedings{devanathan-etal-2024-paradox, title = "The Paradox of Preference: A Study on {LLM} Alignment Algorithms and Data Acquisition Methods", author = "Devanathan, Rishikesh and Nathan, Varun and Kumar, Ayush", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.16", doi = "10.18653/v1/2024.insights-1.16", pages = "135--147", abstract = "This research investigates the impact of preference annotation acquisition methods on the performance of LLM alignment algorithms, including Direct Preference Optimization (DPO), Identity Preference Optimization (IPO), and Conservative DPO (cDPO), compared to Supervised Fine-Tuning (SFT) in NLP tasks. We analyze the influence of LLM and human-based preferences on algorithm performance, considering data volume and quality. Additionally, we assess DPO{'}s vulnerability to overfitting and IPO{'}s resilience against it, addressing four main research questions. Using the GAIR dataset and Zephyr-7b as the SFT model, we reveal unexpected negative outcomes. Specifically, DPO trained on LLM preferences outperforms human preferences, contrary to expectations. Moreover, there{'}s no correlation between preference data volume or quality and algorithm performance. Contrary to expectations, DPO shows no overfitting in both human and LLM preference datasets. Surprisingly, cDPO doesn{'}t fare better than DPO under flip noise. Our findings highlight the complexities of preference annotation methods and underscore the importance of scrutinizing negative results in NLP algorithm research.", }
This research investigates the impact of preference annotation acquisition methods on the performance of LLM alignment algorithms, including Direct Preference Optimization (DPO), Identity Preference Optimization (IPO), and Conservative DPO (cDPO), compared to Supervised Fine-Tuning (SFT) in NLP tasks. We analyze the influence of LLM and human-based preferences on algorithm performance, considering data volume and quality. Additionally, we assess DPO{'}s vulnerability to overfitting and IPO{'}s resilience against it, addressing four main research questions. Using the GAIR dataset and Zephyr-7b as the SFT model, we reveal unexpected negative outcomes. Specifically, DPO trained on LLM preferences outperforms human preferences, contrary to expectations. Moreover, there{'}s no correlation between preference data volume or quality and algorithm performance. Contrary to expectations, DPO shows no overfitting in both human and LLM preference datasets. Surprisingly, cDPO doesn{'}t fare better than DPO under flip noise. Our findings highlight the complexities of preference annotation methods and underscore the importance of scrutinizing negative results in NLP algorithm research.
[ "Devanathan, Rishikesh", "Nathan, Varun", "Kumar, Ayush" ]
The Paradox of Preference: A Study on LLM Alignment Algorithms and Data Acquisition Methods
insights-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.17.bib
https://aclanthology.org/2024.insights-1.17/
@inproceedings{bogoychev-etal-2024-ups, title = "The Ups and Downs of Large Language Model Inference with Vocabulary Trimming by Language Heuristics", author = "Bogoychev, Nikolay and Chen, Pinzhen and Haddow, Barry and Birch, Alexandra", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.17", doi = "10.18653/v1/2024.insights-1.17", pages = "148--153", abstract = "Deploying large language models (LLMs) encounters challenges due to intensive computational and memory requirements. Our research examines vocabulary trimming (VT) inspired by restricting embedding entries to the language of interest to bolster time and memory efficiency. While such modifications have been proven effective in tasks like machine translation, tailoring them to LLMs demands specific modifications given the diverse nature of LLM applications. We apply two language heuristics to trim the full vocabulary{---}Unicode-based script filtering and corpus-based selection{---}to different LLM families and sizes. The methods are straightforward, interpretable, and easy to implement. It is found that VT reduces the memory usage of small models by nearly 50{\%} and has an upper bound of 25{\%} improvement in generation speed. Yet, we reveal the limitations of these methods in that they do not perform consistently well for each language with diminishing returns in larger models.", }
Deploying large language models (LLMs) encounters challenges due to intensive computational and memory requirements. Our research examines vocabulary trimming (VT) inspired by restricting embedding entries to the language of interest to bolster time and memory efficiency. While such modifications have been proven effective in tasks like machine translation, tailoring them to LLMs demands specific modifications given the diverse nature of LLM applications. We apply two language heuristics to trim the full vocabulary{---}Unicode-based script filtering and corpus-based selection{---}to different LLM families and sizes. The methods are straightforward, interpretable, and easy to implement. It is found that VT reduces the memory usage of small models by nearly 50{\%} and has an upper bound of 25{\%} improvement in generation speed. Yet, we reveal the limitations of these methods in that they do not perform consistently well for each language with diminishing returns in larger models.
[ "Bogoychev, Nikolay", "Chen, Pinzhen", "Haddow, Barry", "Birch, Alex", "ra" ]
The Ups and Downs of Large Language Model Inference with Vocabulary Trimming by Language Heuristics
insights-1.17
Poster
2311.09709
[ "" ]
https://huggingface.co/papers/2311.09709
1
0
0
4
1
[]
[]
[]
https://aclanthology.org/2024.insights-1.18.bib
https://aclanthology.org/2024.insights-1.18/
@inproceedings{eichel-schulte-im-walde-2024-multi, title = "Multi-Task Learning with Adapters for Plausibility Prediction: Bridging the Gap or Falling into the Trenches?", author = "Eichel, Annerose and Schulte Im Walde, Sabine", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.18", doi = "10.18653/v1/2024.insights-1.18", pages = "154--168", abstract = "We present a multi-task learning approach to predicting semantic plausibility by leveraging 50+ adapters categorized into 17 tasks within an efficient training framework. Across four plausibility datasets in English of varying size and linguistic constructions, we compare how models provided with knowledge from a range of NLP tasks perform in contrast to models without external information. Our results show that plausibility prediction benefits from complementary knowledge (e.g., provided by syntactic tasks) are significant but non-substantial, while performance may be hurt when injecting knowledge from an unsuitable task. Similarly important, we find that knowledge transfer may be hindered by class imbalance, and demonstrate the positive yet minor effect of balancing training data, even at the expense of size.", }
We present a multi-task learning approach to predicting semantic plausibility by leveraging 50+ adapters categorized into 17 tasks within an efficient training framework. Across four plausibility datasets in English of varying size and linguistic constructions, we compare how models provided with knowledge from a range of NLP tasks perform in contrast to models without external information. Our results show that plausibility prediction benefits from complementary knowledge (e.g., provided by syntactic tasks) are significant but non-substantial, while performance may be hurt when injecting knowledge from an unsuitable task. Similarly important, we find that knowledge transfer may be hindered by class imbalance, and demonstrate the positive yet minor effect of balancing training data, even at the expense of size.
[ "Eichel, Annerose", "Schulte Im Walde, Sabine" ]
Multi-Task Learning with Adapters for Plausibility Prediction: Bridging the Gap or Falling into the Trenches?
insights-1.18
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.insights-1.19.bib
https://aclanthology.org/2024.insights-1.19/
@inproceedings{mohammadshahi-etal-2024-investigating, title = "Investigating Multi-Pivot Ensembling with Massively Multilingual Machine Translation Models", author = "Mohammadshahi, Alireza and Vamvas, Jannis and Sennrich, Rico", editor = "Tafreshi, Shabnam and Akula, Arjun and Sedoc, Jo{\~a}o and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna", booktitle = "Proceedings of the Fifth Workshop on Insights from Negative Results in NLP", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.insights-1.19", doi = "10.18653/v1/2024.insights-1.19", pages = "169--180", abstract = "Massively multilingual machine translation models allow for the translation of a large number of languages with a single model, but have limited performance on low- and very-low-resource translation directions. Pivoting via high-resource languages remains a strong strategy for low-resource directions, and in this paper we revisit ways of pivoting through multiple languages. Previous work has used a simple averaging of probability distributions from multiple paths, but we find that this performs worse than using a single pivot, and exacerbates the hallucination problem because the same hallucinations can be probable across different paths. We also propose MaxEns, a novel combination strategy that makes the output biased towards the most confident predictions, hypothesising that confident predictions are less prone to be hallucinations. We evaluate different strategies on the FLORES benchmark for 20 low-resource language directions, demonstrating that MaxEns improves translation quality for low-resource languages while reducing hallucination in translations, compared to both direct translation and an averaging approach. On average, multi-pivot strategies still lag behind using English as a single pivot language, raising the question of how to identify the best pivoting strategy for a given translation direction.", }
Massively multilingual machine translation models allow for the translation of a large number of languages with a single model, but have limited performance on low- and very-low-resource translation directions. Pivoting via high-resource languages remains a strong strategy for low-resource directions, and in this paper we revisit ways of pivoting through multiple languages. Previous work has used a simple averaging of probability distributions from multiple paths, but we find that this performs worse than using a single pivot, and exacerbates the hallucination problem because the same hallucinations can be probable across different paths. We also propose MaxEns, a novel combination strategy that makes the output biased towards the most confident predictions, hypothesising that confident predictions are less prone to be hallucinations. We evaluate different strategies on the FLORES benchmark for 20 low-resource language directions, demonstrating that MaxEns improves translation quality for low-resource languages while reducing hallucination in translations, compared to both direct translation and an averaging approach. On average, multi-pivot strategies still lag behind using English as a single pivot language, raising the question of how to identify the best pivoting strategy for a given translation direction.
[ "Mohammadshahi, Alireza", "Vamvas, Jannis", "Sennrich, Rico" ]
Investigating Multi-Pivot Ensembling with Massively Multilingual Machine Translation Models
insights-1.19
Poster
2311.07439
[ "https://github.com/zurichnlp/multipivotnmt" ]
https://huggingface.co/papers/2311.07439
1
1
0
3
1
[]
[]
[]
https://aclanthology.org/2024.nlpcss-1.1.bib
https://aclanthology.org/2024.nlpcss-1.1/
@inproceedings{vasilets-etal-2024-detecting, title = "Detecting Perspective-Getting in {W}ikipedia Discussions", author = "Vasilets, Evgeny and Broek, Tijs and Wegmann, Anna and Abadi, David and Nguyen, Dong", editor = "Card, Dallas and Field, Anjalie and Hovy, Dirk and Keith, Katherine", booktitle = "Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlpcss-1.1", doi = "10.18653/v1/2024.nlpcss-1.1", pages = "1--15", abstract = "Perspective-getting (i.e., the effort to obtain information about the other person{'}s perspective) can lead to more accurate interpersonal understanding. In this paper, we develop an approach to measure perspective-getting and apply it to English Wikipedia discussions. First, we develop a codebook based on perspective-getting theory to operationalize perspective-getting into two categories: asking questions about and attending the other{'}s perspective. Second, we use the codebook to annotate perspective-getting in Wikipedia discussion pages. Third, we fine-tune a RoBERTa model that achieves an average F-1 score of 0.76 on the two perspective-getting categories. Last, we test whether perspective-getting is associated with discussion outcomes. Perspective-getting was not higher in non-escalated discussions. However, discussions starting with a post attending the other{'}s perspective are followed by responses that are more likely to also attend the other{'}s perspective. Future research may use our model to study the influence of perspective-getting on the dynamics and outcomes of online discussions.", }
Perspective-getting (i.e., the effort to obtain information about the other person{'}s perspective) can lead to more accurate interpersonal understanding. In this paper, we develop an approach to measure perspective-getting and apply it to English Wikipedia discussions. First, we develop a codebook based on perspective-getting theory to operationalize perspective-getting into two categories: asking questions about and attending the other{'}s perspective. Second, we use the codebook to annotate perspective-getting in Wikipedia discussion pages. Third, we fine-tune a RoBERTa model that achieves an average F-1 score of 0.76 on the two perspective-getting categories. Last, we test whether perspective-getting is associated with discussion outcomes. Perspective-getting was not higher in non-escalated discussions. However, discussions starting with a post attending the other{'}s perspective are followed by responses that are more likely to also attend the other{'}s perspective. Future research may use our model to study the influence of perspective-getting on the dynamics and outcomes of online discussions.
[ "Vasilets, Evgeny", "Broek, Tijs", "Wegmann, Anna", "Abadi, David", "Nguyen, Dong" ]
Detecting Perspective-Getting in Wikipedia Discussions
nlpcss-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlpcss-1.2.bib
https://aclanthology.org/2024.nlpcss-1.2/
@inproceedings{vallejo-etal-2024-connecting, title = "Connecting the Dots in News Analysis: Bridging the Cross-Disciplinary Disparities in Media Bias and Framing", author = "Vallejo, Gisela and Baldwin, Timothy and Frermann, Lea", editor = "Card, Dallas and Field, Anjalie and Hovy, Dirk and Keith, Katherine", booktitle = "Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlpcss-1.2", doi = "10.18653/v1/2024.nlpcss-1.2", pages = "16--31", abstract = "The manifestation and effect of bias in news reporting have been central topics in the social sciences for decades, and have received increasing attention in the NLP community recently. While NLP can help to scale up analyses or contribute automatic procedures to investigate the impact of biased news in society, we argue that methodologies that are currently dominant fall short of capturing the complex questions and effects addressed in theoretical media studies. This is problematic because it diminishes the validity and safety of the resulting tools and applications. Here, we review and critically compare task formulations, methods and evaluation schemes in the social sciences and NLP. We discuss open questions and suggest possible directions to close identified gaps between theory and predictive models, and their evaluation. These include model transparency, considering document-external information, and cross-document reasoning.", }
The manifestation and effect of bias in news reporting have been central topics in the social sciences for decades, and have received increasing attention in the NLP community recently. While NLP can help to scale up analyses or contribute automatic procedures to investigate the impact of biased news in society, we argue that methodologies that are currently dominant fall short of capturing the complex questions and effects addressed in theoretical media studies. This is problematic because it diminishes the validity and safety of the resulting tools and applications. Here, we review and critically compare task formulations, methods and evaluation schemes in the social sciences and NLP. We discuss open questions and suggest possible directions to close identified gaps between theory and predictive models, and their evaluation. These include model transparency, considering document-external information, and cross-document reasoning.
[ "Vallejo, Gisela", "Baldwin, Timothy", "Frermann, Lea" ]
Connecting the Dots in News Analysis: Bridging the Cross-Disciplinary Disparities in Media Bias and Framing
nlpcss-1.2
Poster
2309.08069
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlpcss-1.3.bib
https://aclanthology.org/2024.nlpcss-1.3/
@inproceedings{curto-etal-2024-crime, title = "The Crime of Being Poor: Associations between Crime and Poverty on Social Media in Eight Countries", author = "Curto, Georgina and Kiritchenko, Svetlana and Fraser, Kathleen and Nejadgholi, Isar", editor = "Card, Dallas and Field, Anjalie and Hovy, Dirk and Keith, Katherine", booktitle = "Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlpcss-1.3", doi = "10.18653/v1/2024.nlpcss-1.3", pages = "32--45", abstract = "Negative public perceptions of people living in poverty can hamper policies and programs that aim to help the poor. One prominent example of social bias and discrimination against people in need is the persistent association of poverty with criminality. The phenomenon has two facets: first, the belief that poor people are more likely to engage in crime (e.g., stealing, mugging, violence) and second, the view that certain behaviors directly resulting from poverty (e.g., living outside, panhandling) warrant criminal punishment. In this paper, we use large language models (LLMs) to identify examples of crime{--}poverty association (CPA) in English social media texts. We analyze the online discourse on CPA across eight geographically-diverse countries, and find evidence that the CPA rates are higher within the sample obtained from the U.S. and Canada, as compared to the other countries such as South Africa, despite the latter having higher poverty, criminality, and inequality indexes. We further uncover and analyze the most common themes in CPA posts and find more negative and biased attitudes toward people living in poverty in posts from the U.S. and Canada. These results could partially be explained by cultural factors related to the tendency to overestimate the equality of opportunities and social mobility in the U.S. and Canada. These findings have consequences for policy-making and open a new path of research for poverty mitigation with the focus not only on the redistribution of wealth but also on the mitigation of bias and discrimination against people in need.", }
Negative public perceptions of people living in poverty can hamper policies and programs that aim to help the poor. One prominent example of social bias and discrimination against people in need is the persistent association of poverty with criminality. The phenomenon has two facets: first, the belief that poor people are more likely to engage in crime (e.g., stealing, mugging, violence) and second, the view that certain behaviors directly resulting from poverty (e.g., living outside, panhandling) warrant criminal punishment. In this paper, we use large language models (LLMs) to identify examples of crime{--}poverty association (CPA) in English social media texts. We analyze the online discourse on CPA across eight geographically-diverse countries, and find evidence that the CPA rates are higher within the sample obtained from the U.S. and Canada, as compared to the other countries such as South Africa, despite the latter having higher poverty, criminality, and inequality indexes. We further uncover and analyze the most common themes in CPA posts and find more negative and biased attitudes toward people living in poverty in posts from the U.S. and Canada. These results could partially be explained by cultural factors related to the tendency to overestimate the equality of opportunities and social mobility in the U.S. and Canada. These findings have consequences for policy-making and open a new path of research for poverty mitigation with the focus not only on the redistribution of wealth but also on the mitigation of bias and discrimination against people in need.
[ "Curto, Georgina", "Kiritchenko, Svetlana", "Fraser, Kathleen", "Nejadgholi, Isar" ]
The Crime of Being Poor: Associations between Crime and Poverty on Social Media in Eight Countries
nlpcss-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlpcss-1.4.bib
https://aclanthology.org/2024.nlpcss-1.4/
@inproceedings{acharya-etal-2024-discovering, title = "Discovering Implicit Meanings of Cultural Motifs from Text", author = "Acharya, Anurag and Estrada, Diego and Dahal, Shreeja and Yarlott, W. Victor H. and Gomez, Diana and Finlayson, Mark", editor = "Card, Dallas and Field, Anjalie and Hovy, Dirk and Keith, Katherine", booktitle = "Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlpcss-1.4", doi = "10.18653/v1/2024.nlpcss-1.4", pages = "46--56", abstract = "Motifs are distinctive, recurring, widely used idiom-like words or phrases, often originating in folklore and usually strongly anchored to a particular cultural or national group. Motifs are significant communicative devices across a wide range of media{---}including news, literature, and propaganda{---}because they can concisely imply a large set of culturally relevant associations. One difficulty of understanding motifs is that their meaning is usually implicit, so for an out-group person the meaning is inaccessible. We present the Motif Implicit Meaning Extractor (MIME), a proof-of-concept system designed to automatically identify a motif{'}s implicit meaning, as evidenced by textual uses of the motif across a large set data. MIME uses several sources (including motif indices, Wikipedia pages on the motifs, explicit explanations of motifs from in-group informants, and news/social media posts where the motif is used) and can generate a structured report of information about a motif understandable to an out-group person. In addition to a variety of examples and information drawn from structured sources, the report includes implicit information about a motif such as the type of reference (e.g., a person, an organization, etc.), it{'}s general connotation (strongly negative, slightly negative, neutral, etc.), and it{'}s associations (typically adjectives). We describe how MIME works and demonstrate its operation on a small set of manually curated motifs. We perform a qualitative evaluation of the output, and assess the difficulty of the problem, showing that explicit motif information provided by cultural informants is critical to high quality output, although mining motif usages in news and social media provides useful additional depth. A system such as MIME, appropriately scaled up, would potentially be quite useful to an out-group person trying to understand in-group usages of motifs, and has wide potential applications in domains such as literary criticism, cultural heritage, marketed and branding, and intelligence analysis.", }
Motifs are distinctive, recurring, widely used idiom-like words or phrases, often originating in folklore and usually strongly anchored to a particular cultural or national group. Motifs are significant communicative devices across a wide range of media{---}including news, literature, and propaganda{---}because they can concisely imply a large set of culturally relevant associations. One difficulty of understanding motifs is that their meaning is usually implicit, so for an out-group person the meaning is inaccessible. We present the Motif Implicit Meaning Extractor (MIME), a proof-of-concept system designed to automatically identify a motif{'}s implicit meaning, as evidenced by textual uses of the motif across a large data set. MIME uses several sources (including motif indices, Wikipedia pages on the motifs, explicit explanations of motifs from in-group informants, and news/social media posts where the motif is used) and can generate a structured report of information about a motif understandable to an out-group person. In addition to a variety of examples and information drawn from structured sources, the report includes implicit information about a motif such as the type of reference (e.g., a person, an organization, etc.), its general connotation (strongly negative, slightly negative, neutral, etc.), and its associations (typically adjectives). We describe how MIME works and demonstrate its operation on a small set of manually curated motifs. We perform a qualitative evaluation of the output, and assess the difficulty of the problem, showing that explicit motif information provided by cultural informants is critical to high quality output, although mining motif usages in news and social media provides useful additional depth. A system such as MIME, appropriately scaled up, would potentially be quite useful to an out-group person trying to understand in-group usages of motifs, and has wide potential applications in domains such as literary criticism, cultural heritage, marketing and branding, and intelligence analysis.
[ "Acharya, Anurag", "Estrada, Diego", "Dahal, Shreeja", "Yarlott, W. Victor H.", "Gomez, Diana", "Finlayson, Mark" ]
Discovering Implicit Meanings of Cultural Motifs from Text
nlpcss-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlpcss-1.5.bib
https://aclanthology.org/2024.nlpcss-1.5/
@inproceedings{pieuchon-etal-2024-large, title = "Can Large Language Models (or Humans) Disentangle Text?", author = "Pieuchon, Nicolas and Daoud, Adel and Jerzak, Connor and Johansson, Moa and Johansson, Richard", editor = "Card, Dallas and Field, Anjalie and Hovy, Dirk and Keith, Katherine", booktitle = "Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlpcss-1.5", doi = "10.18653/v1/2024.nlpcss-1.5", pages = "57--67", abstract = "We investigate the potential of large language models (LLMs) to disentangle text variables{---}to remove the textual traces of an undesired forbidden variable in a task sometimes known as text distillation and closely related to the fairness in AI and causal inference literature. We employ a range of various LLM approaches in an attempt to disentangle text by identifying and removing information about a target variable while preserving other relevant signals. We show that in the strong test of removing sentiment, the statistical association between the processed text and sentiment is still detectable to machine learning classifiers post-LLM-disentanglement. Furthermore, we find that human annotators also struggle to disentangle sentiment while preserving other semantic content. This suggests there may be limited separability between concept variables in some text contexts, highlighting limitations of methods relying on text-level transformations and also raising questions about the robustness of disentanglement methods that achieve statistical independence in representation space.", }
We investigate the potential of large language models (LLMs) to disentangle text variables{---}to remove the textual traces of an undesired forbidden variable in a task sometimes known as text distillation and closely related to the fairness in AI and causal inference literature. We employ a range of various LLM approaches in an attempt to disentangle text by identifying and removing information about a target variable while preserving other relevant signals. We show that in the strong test of removing sentiment, the statistical association between the processed text and sentiment is still detectable to machine learning classifiers post-LLM-disentanglement. Furthermore, we find that human annotators also struggle to disentangle sentiment while preserving other semantic content. This suggests there may be limited separability between concept variables in some text contexts, highlighting limitations of methods relying on text-level transformations and also raising questions about the robustness of disentanglement methods that achieve statistical independence in representation space.
[ "Pieuchon, Nicolas", "Daoud, Adel", "Jerzak, Connor", "Johansson, Moa", "Johansson, Richard" ]
Can Large Language Models (or Humans) Disentangle Text?
nlpcss-1.5
Poster
2403.16584
[ "https://github.com/aiandglobaldevelopmentlab/textdisentanglement" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlpcss-1.6.bib
https://aclanthology.org/2024.nlpcss-1.6/
@inproceedings{dumitru-etal-2024-retrieval, title = "Retrieval Augmented Generation of Subjective Explanations for Socioeconomic Scenarios", author = "Dumitru, Razvan-Gabriel and Alexeeva, Maria and Alcock, Keith and Ludgate, Nargiza and Jeong, Cheonkam and Abdurahaman, Zara Fatima and Puri, Prateek and Kirchhoff, Brian and Sadhu, Santadarshan and Surdeanu, Mihai", editor = "Card, Dallas and Field, Anjalie and Hovy, Dirk and Keith, Katherine", booktitle = "Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlpcss-1.6", doi = "10.18653/v1/2024.nlpcss-1.6", pages = "68--85", abstract = "We introduce a novel retrieval augmented generation approach that explicitly models causality and subjectivity. We use it to generate explanations for socioeconomic scenarios that capture beliefs of local populations. Through intrinsic and extrinsic evaluation, we show that our explanations, contextualized using causal and subjective information retrieved from local news sources, are rated higher than those produced by other large language models both in terms of mimicking the real population and the explanations quality. We also provide a discussion of the role subjectivity plays in evaluation of this natural language generation task.", }
We introduce a novel retrieval augmented generation approach that explicitly models causality and subjectivity. We use it to generate explanations for socioeconomic scenarios that capture beliefs of local populations. Through intrinsic and extrinsic evaluation, we show that our explanations, contextualized using causal and subjective information retrieved from local news sources, are rated higher than those produced by other large language models both in terms of mimicking the real population and the quality of the explanations. We also provide a discussion of the role subjectivity plays in evaluation of this natural language generation task.
[ "Dumitru, Razvan-Gabriel", "Alexeeva, Maria", "Alcock, Keith", "Ludgate, Nargiza", "Jeong, Cheonkam", "Abdurahaman, Zara Fatima", "Puri, Prateek", "Kirchhoff, Brian", "Sadhu, Santadarshan", "Surdeanu, Mihai" ]
Retrieval Augmented Generation of Subjective Explanations for Socioeconomic Scenarios
nlpcss-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlpcss-1.7.bib
https://aclanthology.org/2024.nlpcss-1.7/
@inproceedings{masis-oconnor-2024-earth, title = "Where on Earth Do Users Say They Are?: Geo-Entity Linking for Noisy Multilingual User Input", author = "Masis, Tessa and O{'}Connor, Brendan", editor = "Card, Dallas and Field, Anjalie and Hovy, Dirk and Keith, Katherine", booktitle = "Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlpcss-1.7", doi = "10.18653/v1/2024.nlpcss-1.7", pages = "86--98", abstract = "Geo-entity linking is the task of linking a location mention to the real-world geographic location. In this we explore the challenging task of geo-entity linking for noisy, multilingual social media data. There are few open-source multilingual geo-entity linking tools available and existing ones are often rule-based, which break easily in social media settings, or LLM-based, which are too expensive for large-scale datasets. We present a method which represents real-world locations as averaged embeddings from labeled user-input location names and allows for selective prediction via an interpretable confidence score. We show that our approach improves geo-entity linking on a global and multilingual social media dataset, and discuss progress and problems with evaluating at different geographic granularities.", }
Geo-entity linking is the task of linking a location mention to the real-world geographic location. In this paper, we explore the challenging task of geo-entity linking for noisy, multilingual social media data. There are few open-source multilingual geo-entity linking tools available and existing ones are often rule-based, which break easily in social media settings, or LLM-based, which are too expensive for large-scale datasets. We present a method which represents real-world locations as averaged embeddings from labeled user-input location names and allows for selective prediction via an interpretable confidence score. We show that our approach improves geo-entity linking on a global and multilingual social media dataset, and discuss progress and problems with evaluating at different geographic granularities.
[ "Masis, Tessa", "O{'}Connor, Brendan" ]
Where on Earth Do Users Say They Are?: Geo-Entity Linking for Noisy Multilingual User Input
nlpcss-1.7
Poster
2404.18784
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlpcss-1.8.bib
https://aclanthology.org/2024.nlpcss-1.8/
@inproceedings{franklin-etal-2024-news, title = "News Deja Vu: Connecting Past and Present with Semantic Search", author = "Franklin, Brevin and Silcock, Emily and Arora, Abhishek and Bryan, Tom and Dell, Melissa", editor = "Card, Dallas and Field, Anjalie and Hovy, Dirk and Keith, Katherine", booktitle = "Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlpcss-1.8", doi = "10.18653/v1/2024.nlpcss-1.8", pages = "99--112", abstract = "Social scientists and the general public often analyze contemporary events by drawing parallels with the past, a process complicated by the vast, noisy, and unstructured nature of historical texts. For example, hundreds of millions of page scans from historical newspapers have been noisily transcribed. Traditional sparse methods for searching for relevant material in these vast corpora, e.g., with keywords, can be brittle given complex vocabularies and OCR noise. This study introduces News Deja Vu, a novel semantic search tool that leverages transformer large language models and a bi-encoder approach to identify historical news articles that are most similar to modern news queries. News Deja Vu first recognizes and masks entities, in order to focus on broader parallels rather than the specific named entities being discussed. Then, a contrastively trained, lightweight bi-encoder retrieves historical articles that are most similar semantically to a modern query, illustrating how phenomena that might seem unique to the present have varied historical precedents. Aimed at social scientists, the user-friendly News Deja Vu package is designed to be accessible for those who lack extensive familiarity with deep learning. It works with large text datasets, and we show how it can be deployed to a massive scale corpus of historical, open-source news articles. While human expertise remains important for drawing deeper insights, News Deja Vu provides a powerful tool for exploring parallels in how people have perceived past and present.", }
Social scientists and the general public often analyze contemporary events by drawing parallels with the past, a process complicated by the vast, noisy, and unstructured nature of historical texts. For example, hundreds of millions of page scans from historical newspapers have been noisily transcribed. Traditional sparse methods for searching for relevant material in these vast corpora, e.g., with keywords, can be brittle given complex vocabularies and OCR noise. This study introduces News Deja Vu, a novel semantic search tool that leverages transformer large language models and a bi-encoder approach to identify historical news articles that are most similar to modern news queries. News Deja Vu first recognizes and masks entities, in order to focus on broader parallels rather than the specific named entities being discussed. Then, a contrastively trained, lightweight bi-encoder retrieves historical articles that are most similar semantically to a modern query, illustrating how phenomena that might seem unique to the present have varied historical precedents. Aimed at social scientists, the user-friendly News Deja Vu package is designed to be accessible for those who lack extensive familiarity with deep learning. It works with large text datasets, and we show how it can be deployed to a massive scale corpus of historical, open-source news articles. While human expertise remains important for drawing deeper insights, News Deja Vu provides a powerful tool for exploring parallels in how people have perceived past and present.
[ "Franklin, Brevin", "Silcock, Emily", "Arora, Abhishek", "Bryan, Tom", "Dell, Melissa" ]
News Deja Vu: Connecting Past and Present with Semantic Search
nlpcss-1.8
Poster
2406.15593
[ "" ]
https://huggingface.co/papers/2406.15593
0
0
0
5
1
[ "dell-research-harvard/historical_newspaper_ner" ]
[]
[ "dell-research-harvard/newsdejavu" ]
https://aclanthology.org/2024.nlpcss-1.9.bib
https://aclanthology.org/2024.nlpcss-1.9/
@inproceedings{pangakis-wolken-2024-knowledge, title = "Knowledge Distillation in Automated Annotation: Supervised Text Classification with {LLM}-Generated Training Labels", author = "Pangakis, Nicholas and Wolken, Sam", editor = "Card, Dallas and Field, Anjalie and Hovy, Dirk and Keith, Katherine", booktitle = "Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlpcss-1.9", doi = "10.18653/v1/2024.nlpcss-1.9", pages = "113--131", abstract = "Computational social science (CSS) practitioners often rely on human-labeled data to fine-tune supervised text classifiers. We assess the potential for researchers to augment or replace human-generated training data with surrogate training labels from generative large language models (LLMs). We introduce a recommended workflow and test this LLM application by replicating 14 classification tasks and measuring performance. We employ a novel corpus of English-language text classification data sets from recent CSS articles in high-impact journals. Because these data sets are stored in password-protected archives, our analyses are less prone to issues of contamination. For each task, we compare supervised classifiers fine-tuned using GPT-4 labels against classifiers fine-tuned with human annotations and against labels from GPT-4 and Mistral-7B with few-shot in-context learning. Our findings indicate that supervised classification models fine-tuned on LLM-generated labels perform comparably to models fine-tuned with labels from human annotators. Fine-tuning models using LLM-generated labels can be a fast, efficient and cost-effective method of building supervised text classifiers.", }
Computational social science (CSS) practitioners often rely on human-labeled data to fine-tune supervised text classifiers. We assess the potential for researchers to augment or replace human-generated training data with surrogate training labels from generative large language models (LLMs). We introduce a recommended workflow and test this LLM application by replicating 14 classification tasks and measuring performance. We employ a novel corpus of English-language text classification data sets from recent CSS articles in high-impact journals. Because these data sets are stored in password-protected archives, our analyses are less prone to issues of contamination. For each task, we compare supervised classifiers fine-tuned using GPT-4 labels against classifiers fine-tuned with human annotations and against labels from GPT-4 and Mistral-7B with few-shot in-context learning. Our findings indicate that supervised classification models fine-tuned on LLM-generated labels perform comparably to models fine-tuned with labels from human annotators. Fine-tuning models using LLM-generated labels can be a fast, efficient and cost-effective method of building supervised text classifiers.
[ "Pangakis, Nicholas", "Wolken, Sam" ]
Knowledge Distillation in Automated Annotation: Supervised Text Classification with LLM-Generated Training Labels
nlpcss-1.9
Poster
2406.17633
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlpcss-1.10.bib
https://aclanthology.org/2024.nlpcss-1.10/
@inproceedings{wang-rambow-2024-clustering, title = "Clustering Document Parts: Detecting and Characterizing Influence Campaigns from Documents", author = "Wang, Zhengxiang and Rambow, Owen", editor = "Card, Dallas and Field, Anjalie and Hovy, Dirk and Keith, Katherine", booktitle = "Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlpcss-1.10", doi = "10.18653/v1/2024.nlpcss-1.10", pages = "132--143", abstract = "We propose a novel clustering pipeline to detect and characterize influence campaigns from documents. This approach clusters parts of document, detects clusters that likely reflect an influence campaign, and then identifies documents linked to an influence campaign via their association with the high-influence clusters. Our approach outperforms both the direct document-level classification and the direct document-level clustering approach in predicting if a document is part of an influence campaign. We propose various novel techniques to enhance our pipeline, including using an existing event factuality prediction system to obtain document parts, and aggregating multiple clustering experiments to improve the performance of both cluster and document classification. Classifying documents after clustering not only accurately extracts the parts of the documents that are relevant to influence campaigns, but also captures influence campaigns as a coordinated and holistic phenomenon. Our approach makes possible more fine-grained and interpretable characterizations of influence campaigns from documents.", }
We propose a novel clustering pipeline to detect and characterize influence campaigns from documents. This approach clusters parts of documents, detects clusters that likely reflect an influence campaign, and then identifies documents linked to an influence campaign via their association with the high-influence clusters. Our approach outperforms both the direct document-level classification and the direct document-level clustering approach in predicting if a document is part of an influence campaign. We propose various novel techniques to enhance our pipeline, including using an existing event factuality prediction system to obtain document parts, and aggregating multiple clustering experiments to improve the performance of both cluster and document classification. Classifying documents after clustering not only accurately extracts the parts of the documents that are relevant to influence campaigns, but also captures influence campaigns as a coordinated and holistic phenomenon. Our approach makes possible more fine-grained and interpretable characterizations of influence campaigns from documents.
[ "Wang, Zhengxiang", "Rambow, Owen" ]
Clustering Document Parts: Detecting and Characterizing Influence Campaigns from Documents
nlpcss-1.10
Poster
2402.17151
[ "https://github.com/jaaack-wang/detect-influence-campaigns" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlpcss-1.11.bib
https://aclanthology.org/2024.nlpcss-1.11/
@inproceedings{leto-etal-2024-first, title = "A First Step towards Measuring Interdisciplinary Engagement in Scientific Publications: A Case Study on {NLP} + {CSS} Research", author = "Leto, Alexandria and Roy, Shamik and Hoyle, Alexander and Acuna, Daniel and Pacheco, Maria Leonor", editor = "Card, Dallas and Field, Anjalie and Hovy, Dirk and Keith, Katherine", booktitle = "Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlpcss-1.11", doi = "10.18653/v1/2024.nlpcss-1.11", pages = "144--158", abstract = "With the rise in the prevalence of cross-disciplinary research, there is a need to develop methods to characterize its practices. Current computational methods to evaluate interdisciplinary engagement{---}such as affiliation diversity, keywords, and citation patterns{---}are insufficient to model the degree of engagement between disciplines, as well as the way in which the complementary expertise of co-authors is harnessed. In this paper, we propose an automated framework to address some of these issues on a large scale. Our framework tracks interdisciplinary citations in scientific articles and models: 1) the section and position in which they appear, and 2) the argumentative role that they play in the writing. To showcase our framework, we perform a preliminary analysis of interdisciplinary engagement in published work at the intersection of natural language processing and computational social science in the last decade.", }
With the rise in the prevalence of cross-disciplinary research, there is a need to develop methods to characterize its practices. Current computational methods to evaluate interdisciplinary engagement{---}such as affiliation diversity, keywords, and citation patterns{---}are insufficient to model the degree of engagement between disciplines, as well as the way in which the complementary expertise of co-authors is harnessed. In this paper, we propose an automated framework to address some of these issues on a large scale. Our framework tracks interdisciplinary citations in scientific articles and models: 1) the section and position in which they appear, and 2) the argumentative role that they play in the writing. To showcase our framework, we perform a preliminary analysis of interdisciplinary engagement in published work at the intersection of natural language processing and computational social science in the last decade.
[ "Leto, Alex", "ria", "Roy, Shamik", "Hoyle, Alex", "er", "Acuna, Daniel", "Pacheco, Maria Leonor" ]
A First Step towards Measuring Interdisciplinary Engagement in Scientific Publications: A Case Study on NLP + CSS Research
nlpcss-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.1.bib
https://aclanthology.org/2024.semeval-1.1/
@inproceedings{aggarwal-sachdeva-2024-cunlp, title = "{CUNLP} at {S}em{E}val-2024 Task 8: Classify Human and {AI} Generated Text", author = "Aggarwal, Pranjal and Sachdeva, Deepanshu", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.1", doi = "10.18653/v1/2024.semeval-1.1", pages = "1--6", abstract = "This task is a sub-part of SemEval-2024 competition which aims to classify AI vs Human Generated Text. In this paper we have experimented on an approach to automatically classify an artificially generated text and a human written text. With the advent of generative models like GPT-3.5 and GPT-4 it has become increasingly necessary to classify between the two texts due to various applications like detecting plagiarism and in tasks like fake news detection that can heavily impact real world problems, for instance stock manipulation through AI generated news articles. To achieve this, we start by using some basic models like Logistic Regression and move our way up to more complex models like transformers and GPTs for classification. This is a binary classification task where the label 1 represents AI generated text and 0 represents human generated text. The dataset was given in JSON style format which was converted to comma separated file (CSV) for better processing using the pandas library in Python as CSV files provides more readability than JSON format files. Approaches like Bagging Classifier and Voting classifier were also used.", }
This task is a sub-part of the SemEval-2024 competition which aims to classify AI vs Human Generated Text. In this paper we have experimented with an approach to automatically classify an artificially generated text and a human written text. With the advent of generative models like GPT-3.5 and GPT-4 it has become increasingly necessary to classify between the two texts due to various applications like detecting plagiarism and in tasks like fake news detection that can heavily impact real world problems, for instance stock manipulation through AI generated news articles. To achieve this, we start by using some basic models like Logistic Regression and move our way up to more complex models like transformers and GPTs for classification. This is a binary classification task where the label 1 represents AI generated text and 0 represents human generated text. The dataset was given in JSON style format which was converted to a comma separated file (CSV) for better processing using the pandas library in Python, as CSV files provide more readability than JSON format files. Approaches like Bagging Classifier and Voting classifier were also used.
[ "Aggarwal, Pranjal", "Sachdeva, Deepanshu" ]
CUNLP at SemEval-2024 Task 8: Classify Human and AI Generated Text
semeval-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.2.bib
https://aclanthology.org/2024.semeval-1.2/
@inproceedings{takahashi-etal-2024-ozemi, title = "{OZ}emi at {S}em{E}val-2024 Task 1: A Simplistic Approach to Textual Relatedness Evaluation Using Transformers and Machine Translation", author = "Takahashi, Hidetsune and Lu, Xingru and Ishijima, Sean and Seo, Deokgyu and Kim, Yongju and Park, Sehoon and Song, Min and Marante, Kathylene and Iso, Keitaro-luke and Tokura, Hirotaka and Ohman, Emily", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.2", doi = "10.18653/v1/2024.semeval-1.2", pages = "7--12", abstract = "In this system paper for SemEval-2024 Task 1 subtask A, we present our approach to evaluating the semantic relatedness of sentence pairs in nine languages. We use a mix of statistical methods combined with fine-tuned BERT transformer models for English and use the same model and machine-translated data for the other languages. This simplistic approach shows consistently reliable scores and achieves above-average rank in all languages.", }
In this system paper for SemEval-2024 Task 1 subtask A, we present our approach to evaluating the semantic relatedness of sentence pairs in nine languages. We use a mix of statistical methods combined with fine-tuned BERT transformer models for English and use the same model and machine-translated data for the other languages. This simplistic approach shows consistently reliable scores and achieves above-average rank in all languages.
[ "Takahashi, Hidetsune", "Lu, Xingru", "Ishijima, Sean", "Seo, Deokgyu", "Kim, Yongju", "Park, Sehoon", "Song, Min", "Marante, Kathylene", "Iso, Keitaro-luke", "Tokura, Hirotaka", "Ohman, Emily" ]
OZemi at SemEval-2024 Task 1: A Simplistic Approach to Textual Relatedness Evaluation Using Transformers and Machine Translation
semeval-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.3.bib
https://aclanthology.org/2024.semeval-1.3/
@inproceedings{tran-etal-2024-l3i, title = "L3i++ at {S}em{E}val-2024 Task 8: Can Fine-tuned Large Language Model Detect Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text?", author = "Tran, Hanh Thi Hong and Nguyen, Tien Nam and Doucet, Antoine and Pollak, Senja", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.3", doi = "10.18653/v1/2024.semeval-1.3", pages = "13--21", abstract = "This paper summarizes our participation in SemEval-2024 Task 8: Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection. In this task, we aim to solve two over three Subtasks: (1) Monolingual and Multilingual Binary Human-Written vs. Machine-Generated Text Classification; and (2) Multi-Way Machine-Generated Text Classification. We conducted a comprehensive comparative study across three methodological groups: Five metric-based models (Log-Likelihood, Rank, Log-Rank, Entropy, and MFDMetric), two fine-tuned sequence-labeling language models (RoBERTA and XLM-R); and a fine-tuned large-scale language model (LS-LLaMA). Our findings suggest that our LLM outperformed both traditional sequence-labeling LM benchmarks and metric-based approaches. Furthermore, our fine-tuned classifier excelled in detecting machine-generated multilingual texts and accurately classifying machine-generated texts within a specific category, (e.g., ChatGPT, bloomz, dolly). However, they do exhibit challenges in detecting them in other categories (e.g., cohere, and davinci). This is due to potential overlap in the distribution of the metric among various LLMs. Overall, we achieved a 6th rank in both Multilingual Binary Human-Written vs. Machine-Generated Text Classification and Multi-Way Machine-Generated Text Classification on the leaderboard.", }
This paper summarizes our participation in SemEval-2024 Task 8: Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection. In this task, we aim to solve two of the three subtasks: (1) Monolingual and Multilingual Binary Human-Written vs. Machine-Generated Text Classification; and (2) Multi-Way Machine-Generated Text Classification. We conducted a comprehensive comparative study across three methodological groups: five metric-based models (Log-Likelihood, Rank, Log-Rank, Entropy, and MFDMetric), two fine-tuned sequence-labeling language models (RoBERTA and XLM-R); and a fine-tuned large-scale language model (LS-LLaMA). Our findings suggest that our LLM outperformed both traditional sequence-labeling LM benchmarks and metric-based approaches. Furthermore, our fine-tuned classifier excelled in detecting machine-generated multilingual texts and accurately classifying machine-generated texts within a specific category (e.g., ChatGPT, bloomz, dolly). However, it does exhibit challenges in detecting them in other categories (e.g., cohere and davinci). This is due to potential overlap in the distribution of the metric among various LLMs. Overall, we achieved a 6th rank in both Multilingual Binary Human-Written vs. Machine-Generated Text Classification and Multi-Way Machine-Generated Text Classification on the leaderboard.
[ "Tran, Hanh Thi Hong", "Nguyen, Tien Nam", "Doucet, Antoine", "Pollak, Senja" ]
L3i++ at SemEval-2024 Task 8: Can Fine-tuned Large Language Model Detect Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text?
semeval-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.4.bib
https://aclanthology.org/2024.semeval-1.4/
@inproceedings{rusnachenko-liang-2024-nicolay, title = "nicolay-r at {S}em{E}val-2024 Task 3: Using Flan-T5 for Reasoning Emotion Cause in Conversations with Chain-of-Thought on Emotion States", author = "Rusnachenko, Nicolay and Liang, Huizhi", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.4", doi = "10.18653/v1/2024.semeval-1.4", pages = "22--27", abstract = "Emotion expression is one of the essential traits of conversations. It may be self-related or caused by another speaker. The variety of reasons may serve as a source of the further emotion causes: conversation history, speaker{'}s emotional state, etc. Inspired by the most recent advances in Chain-of-Thought, in this work, we exploit the existing three-hop reasoning approach (THOR) to perform large language model instruction-tuning for answering: emotion states (THOR-state), and emotion caused by one speaker to the other (THOR-cause). We equip THORcause with the reasoning revision (RR) for devising a reasoning path in fine-tuning. In particular, we rely on the annotated speaker emotion states to revise reasoning path. Our final submission, based on Flan-T5-base (250M) and the rule-based span correction technique, preliminary tuned with THOR-state and fine-tuned with THOR-cause-rr on competition training data, results in 3rd and 4th places (F1-proportional) and 5th place (F1-strict) among 15 participating teams. Our THOR implementation fork is publicly available: https://github.com/nicolay-r/THOR-ECAC", }
Emotion expression is one of the essential traits of conversations. It may be self-related or caused by another speaker. The variety of reasons may serve as a source of the further emotion causes: conversation history, speaker{'}s emotional state, etc. Inspired by the most recent advances in Chain-of-Thought, in this work, we exploit the existing three-hop reasoning approach (THOR) to perform large language model instruction-tuning for answering: emotion states (THOR-state), and emotion caused by one speaker to the other (THOR-cause). We equip THOR-cause with the reasoning revision (RR) for devising a reasoning path in fine-tuning. In particular, we rely on the annotated speaker emotion states to revise the reasoning path. Our final submission, based on Flan-T5-base (250M) and the rule-based span correction technique, preliminarily tuned with THOR-state and fine-tuned with THOR-cause-rr on competition training data, results in 3rd and 4th places (F1-proportional) and 5th place (F1-strict) among 15 participating teams. Our THOR implementation fork is publicly available: https://github.com/nicolay-r/THOR-ECAC
[ "Rusnachenko, Nicolay", "Liang, Huizhi" ]
nicolay-r at SemEval-2024 Task 3: Using Flan-T5 for Reasoning Emotion Cause in Conversations with Chain-of-Thought on Emotion States
semeval-1.4
Poster
2404.03361
[ "https://github.com/nicolay-r/semeval2024-task3" ]
https://huggingface.co/papers/2404.03361
1
0
0
2
1
[ "nicolay-r/flan-t5-emotion-cause-thor-base" ]
[]
[]
https://aclanthology.org/2024.semeval-1.5.bib
https://aclanthology.org/2024.semeval-1.5/
@inproceedings{heavey-etal-2024-stfx, title = "{S}t{FX}-{NLP} at {S}em{E}val-2024 Task 9: {BRAINTEASER}: Three Unsupervised Riddle-Solvers", author = "Heavey, Ethan and Hughes, James and King, Milton", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.5", doi = "10.18653/v1/2024.semeval-1.5", pages = "28--33", abstract = "In this paper, we explore three unsupervised learning models that we applied to Task 9: BRAINTEASER of SemEval 2024. Two of these models incorporate word sense disambiguation and part-of-speech tagging, specifically leveraging SensEmBERT and the Stanford log-linear part-of-speech tagger. Our third model relies on a more traditional language modelling approach. The best performing model, a bag-of-words model leveraging word sense disambiguation and part-of-speech tagging, secured the 10th spot out of 11 places on both the sentence puzzle and word puzzle subtasks.", }
In this paper, we explore three unsupervised learning models that we applied to Task 9: BRAINTEASER of SemEval 2024. Two of these models incorporate word sense disambiguation and part-of-speech tagging, specifically leveraging SensEmBERT and the Stanford log-linear part-of-speech tagger. Our third model relies on a more traditional language modelling approach. The best performing model, a bag-of-words model leveraging word sense disambiguation and part-of-speech tagging, secured the 10th spot out of 11 places on both the sentence puzzle and word puzzle subtasks.
[ "Heavey, Ethan", "Hughes, James", "King, Milton" ]
StFX-NLP at SemEval-2024 Task 9: BRAINTEASER: Three Unsupervised Riddle-Solvers
semeval-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.6.bib
https://aclanthology.org/2024.semeval-1.6/
@inproceedings{crum-bethard-2024-hinoki, title = "hinoki at {S}em{E}val-2024 Task 7: Numeral-Aware Headline Generation ({E}nglish)", author = "Crum, Hinoki and Bethard, Steven", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.6", doi = "10.18653/v1/2024.semeval-1.6", pages = "34--39", abstract = "Numerical reasoning is challenging even for large pre-trained language models. We show that while T5 models are capable of generating relevant headlines with proper numerical values, they can also make mistakes in reading comprehension and miscalculate numerical values. To overcome these issues, we propose a two-step training process: first train models to read text and generate formal representations of calculations, then train models to read calculations and generate numerical values. On the SemEval 2024 Task 7 headline fill-in-the-blank task, our two-stage Flan-T5-based approach achieved 88{\%} accuracy. On the headline generation task, our T5-based approach achieved RougeL of 0.390, BERT F1 Score of 0.453, and MoverScore of 0.587.", }
Numerical reasoning is challenging even for large pre-trained language models. We show that while T5 models are capable of generating relevant headlines with proper numerical values, they can also make mistakes in reading comprehension and miscalculate numerical values. To overcome these issues, we propose a two-step training process: first train models to read text and generate formal representations of calculations, then train models to read calculations and generate numerical values. On the SemEval 2024 Task 7 headline fill-in-the-blank task, our two-stage Flan-T5-based approach achieved 88{\%} accuracy. On the headline generation task, our T5-based approach achieved RougeL of 0.390, BERT F1 Score of 0.453, and MoverScore of 0.587.
[ "Crum, Hinoki", "Bethard, Steven" ]
hinoki at SemEval-2024 Task 7: Numeral-Aware Headline Generation (English)
semeval-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.7.bib
https://aclanthology.org/2024.semeval-1.7/
@inproceedings{siino-2024-t5, title = "T5-Medical at {S}em{E}val-2024 Task 2: Using T5 Medical Embedding for Natural Language Inference on Clinical Trial Data", author = "Siino, Marco", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.7", doi = "10.18653/v1/2024.semeval-1.7", pages = "40--46", abstract = "In this work, we address the challenge of identifying the inference relation between a plain language statement and Clinical Trial Reports (CTRs) by using a T5-large model embedding. The task, hosted at SemEval-2024, involves the use of the NLI4CT dataset. Each instance in the dataset has one or two CTRs, along with an annotation from domain experts, a section marker, a statement, and an entailment/contradiction label. The goal is to determine if a statement entails or contradicts the given information within a trial description. Our submission consists of a T5-large model pre-trained on the medical domain. Then the pre-trained model embedding output provides the embedding representation of the text. Eventually, after a fine-tuning phase, the provided embeddings are used to determine the CTRs{'} and the statements{'} cosine similarity to perform the classification. On the official test set, our submitted approach is able to reach an F1 score of 0.63, and a faithfulness and consistency score of 0.30 and 0.50 respectively.", }
In this work, we address the challenge of identifying the inference relation between a plain language statement and Clinical Trial Reports (CTRs) by using a T5-large model embedding. The task, hosted at SemEval-2024, involves the use of the NLI4CT dataset. Each instance in the dataset has one or two CTRs, along with an annotation from domain experts, a section marker, a statement, and an entailment/contradiction label. The goal is to determine if a statement entails or contradicts the given information within a trial description. Our submission consists of a T5-large model pre-trained on the medical domain. Then the pre-trained model embedding output provides the embedding representation of the text. Eventually, after a fine-tuning phase, the provided embeddings are used to determine the CTRs{'} and the statements{'} cosine similarity to perform the classification. On the official test set, our submitted approach is able to reach an F1 score of 0.63, and a faithfulness and consistency score of 0.30 and 0.50 respectively.
[ "Siino, Marco" ]
T5-Medical at SemEval-2024 Task 2: Using T5 Medical Embedding for Natural Language Inference on Clinical Trial Data
semeval-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
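The T5-Medical abstract above classifies statement/CTR pairs by comparing embeddings with cosine similarity. The sketch below shows that general pattern with a T5 encoder; the checkpoint name and the 0.5 decision threshold are assumptions for illustration, not the submission's actual configuration.

```python
# Sketch: embed a CTR section and a statement with a T5 encoder and compare by cosine similarity.
# The checkpoint name and the 0.5 decision threshold are illustrative assumptions.
import torch
from transformers import AutoTokenizer, T5EncoderModel

MODEL_NAME = "t5-base"  # the submission used a medical-domain T5-large variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = T5EncoderModel.from_pretrained(MODEL_NAME)
encoder.eval()

@torch.no_grad()
def embed(text: str) -> torch.Tensor:
    """Mean-pool the encoder's last hidden state into a single sentence vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    hidden = encoder(**inputs).last_hidden_state          # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)          # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # (1, dim)

def predict(ctr_text: str, statement: str, threshold: float = 0.5) -> str:
    sim = torch.nn.functional.cosine_similarity(embed(ctr_text), embed(statement)).item()
    return "Entailment" if sim >= threshold else "Contradiction"

print(predict("All participants received 20 mg of the study drug daily.",
              "Patients in the trial were given the study drug every day."))
```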
https://aclanthology.org/2024.semeval-1.8.bib
https://aclanthology.org/2024.semeval-1.8/
@inproceedings{fan-etal-2024-ctyun, title = "{CTYUN}-{AI} at {S}em{E}val-2024 Task 7: Boosting Numerical Understanding with Limited Data Through Effective Data Alignment", author = "Fan, Yuming and Yang, Dongming and He, Xu", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.8", doi = "10.18653/v1/2024.semeval-1.8", pages = "47--52", abstract = "Large language models (LLMs) have demonstrated remarkable capabilities in pushing the boundaries of natural language understanding. Nevertheless, the majority of existing open-source LLMs still fall short of meeting satisfactory standards when it comes to addressing numerical problems, especially as the enhancement of their numerical capabilities heavily relies on extensive data.To bridge the gap, we aim to improve the numerical understanding of LLMs by means of efficient data alignment, utilizing only a limited amount of necessary data.Specifically, we first use a data discovery strategy to obtain the most effective portion of numerical data from large datasets. Then, self-augmentation is performed to maximize the potential of the training data. Thirdly, answers of all traning samples are aligned based on some simple rules. Finally, our method achieves the first place in the competition, offering new insights and methodologies for numerical understanding research in LLMs.", }
Large language models (LLMs) have demonstrated remarkable capabilities in pushing the boundaries of natural language understanding. Nevertheless, the majority of existing open-source LLMs still fall short of meeting satisfactory standards when it comes to addressing numerical problems, especially as the enhancement of their numerical capabilities heavily relies on extensive data. To bridge the gap, we aim to improve the numerical understanding of LLMs by means of efficient data alignment, utilizing only a limited amount of necessary data. Specifically, we first use a data discovery strategy to obtain the most effective portion of numerical data from large datasets. Then, self-augmentation is performed to maximize the potential of the training data. Third, the answers of all training samples are aligned based on a set of simple rules. Finally, our method achieves first place in the competition, offering new insights and methodologies for numerical understanding research in LLMs.
[ "Fan, Yuming", "Yang, Dongming", "He, Xu" ]
CTYUN-AI at SemEval-2024 Task 7: Boosting Numerical Understanding with Limited Data Through Effective Data Alignment
semeval-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
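The CTYUN-AI abstract above mentions aligning the answers of all training samples with simple rules. The snippet below sketches the kind of rule-based numeric normalization that could serve that purpose; the specific rules are assumptions, not the team's actual implementation.

```python
# Sketch of rule-based answer alignment for numerical training samples.
# These particular normalization rules are illustrative assumptions, not the team's actual rules.
import re

def normalize_answer(raw: str) -> str:
    """Map differently formatted numeric answers onto one canonical string."""
    text = raw.strip().lower()
    text = text.replace(",", "")                 # "1,200" -> "1200"
    text = re.sub(r"\s*(percent|per cent)\b", "%", text)
    match = re.search(r"-?\d+(?:\.\d+)?", text)  # keep the first number found
    if match is None:
        return text
    value = float(match.group())
    # Drop a trailing ".0" so "88.0" and "88" align to the same target.
    canonical = str(int(value)) if value.is_integer() else str(value)
    return canonical + ("%" if "%" in text else "")

for raw in ["1,200", "88.0 percent", "about 88%", "88"]:
    print(raw, "->", normalize_answer(raw))
```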
https://aclanthology.org/2024.semeval-1.9.bib
https://aclanthology.org/2024.semeval-1.9/
@inproceedings{siino-2024-mcrock, title = "{M}c{R}ock at {S}em{E}val-2024 Task 4: Mistral 7{B} for Multilingual Detection of Persuasion Techniques In Memes", author = "Siino, Marco", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.9", doi = "10.18653/v1/2024.semeval-1.9", pages = "53--59", abstract = "One of the most widely used content types in internet misinformation campaigns is memes. Since they can readily reach a big number of users on social media sites, they are most successful there. Memes used in a disinformation campaign include a variety of rhetorical and psychological strategies, including smearing, name-calling, and causal oversimplification, to achieve their goal of influencing the users. The shared task{'}s objective is to develop models for recognizing these strategies solely in a meme{'}s textual content (Subtask 1) and in a multimodal context where both the textual and visual material must be analysed simultaneously (Subtasks two and three). In this paper, we discuss the application of a Mistral 7B model to address the Subtask one in English. Find the persuasive strategy that a meme employs from a hierarchy of twenty based just on its {``}textual content.{''} Only a portion of the reward is awarded if the technique{'}s ancestor node is chosen. This classification issue is multilabel hierarchical. Our approach based on the use of a Mistral 7B model obtains a Hierarchical F1 of 0.42 a Hierarchical Precision of 0.30 and a Hierarchical Recall of 0.71. Our selected approach is able to outperform the baseline provided for the competition.", }
Memes are one of the most widely used content types in internet misinformation campaigns, and they are most successful on social media platforms, where they can readily reach a large number of users. Memes used in a disinformation campaign employ a variety of rhetorical and psychological strategies, including smearing, name-calling, and causal oversimplification, to achieve their goal of influencing users. The shared task{'}s objective is to develop models for recognizing these strategies solely in a meme{'}s textual content (Subtask 1) and in a multimodal context where both the textual and visual material must be analysed simultaneously (Subtasks 2 and 3). In this paper, we discuss the application of a Mistral 7B model to address Subtask 1 in English: identifying, from a hierarchy of twenty techniques, which persuasion technique a meme employs based solely on its textual content. Only a portion of the reward is awarded if the technique{'}s ancestor node is chosen, making this a hierarchical multilabel classification problem. Our approach based on a Mistral 7B model obtains a Hierarchical F1 of 0.42, a Hierarchical Precision of 0.30, and a Hierarchical Recall of 0.71, outperforming the baseline provided for the competition.
[ "Siino, Marco" ]
McRock at SemEval-2024 Task 4: Mistral 7B for Multilingual Detection of Persuasion Techniques In Memes
semeval-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
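The hierarchical scoring described in the McRock abstract above awards partial credit when an ancestor of the gold technique is predicted. Below is a small, self-contained sketch of hierarchical precision, recall, and F1 computed by expanding each label with its ancestors; the toy hierarchy is an assumption, not the task's real 20-technique taxonomy.

```python
# Sketch of hierarchical precision/recall/F1 with ancestor credit.
# The toy PARENT hierarchy below is illustrative; the real task uses a 20-technique taxonomy.
PARENT = {
    "Name calling/Labeling": "Ad Hominem",
    "Smears": "Ad Hominem",
    "Ad Hominem": "Ethos",
    "Causal Oversimplification": "Simplification",
}

def with_ancestors(labels):
    """Expand a set of labels with all of their ancestors in the hierarchy."""
    expanded = set()
    for label in labels:
        while label is not None:
            expanded.add(label)
            label = PARENT.get(label)
    return expanded

def hierarchical_scores(gold_labels, predicted_labels):
    gold = with_ancestors(gold_labels)
    pred = with_ancestors(predicted_labels)
    overlap = len(gold & pred)
    precision = overlap / len(pred) if pred else 0.0
    recall = overlap / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0, 0.0, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)

# Predicting the parent "Ad Hominem" for a gold "Smears" label earns partial credit.
print(hierarchical_scores({"Smears"}, {"Ad Hominem"}))
```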
https://aclanthology.org/2024.semeval-1.10.bib
https://aclanthology.org/2024.semeval-1.10/
@inproceedings{rasheed-zarkoosh-2024-mashee, title = "Mashee at {S}em{E}val-2024 Task 8: The Impact of Samples Quality on the Performance of In-Context Learning for Machine Text Classification", author = "Rasheed, Areeg Fahad and Zarkoosh, M.", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.10", doi = "10.18653/v1/2024.semeval-1.10", pages = "60--63", abstract = "Within few-shot learning, in-context learning(ICL) has become a potential method for lever-aging contextual information to improve modelperformance on small amounts of data or inresource-constrained environments where train-ing models on large datasets is prohibitive.However, the quality of the selected samplein a few shots severely limits the usefulnessof ICL. The primary goal of this paper is toenhance the performance of evaluation metricsfor in-context learning by selecting high-qualitysamples in few-shot learning scenarios. We em-ploy the chi-square test to identify high-qualitysamples and compare the results with those ob-tained using low-quality samples. Our findingsdemonstrate that utilizing high-quality samplesleads to improved performance with respect toall evaluated metrics.", }
Within few-shot learning, in-context learning (ICL) has become a potential method for leveraging contextual information to improve model performance on small amounts of data or in resource-constrained environments where training models on large datasets is prohibitive. However, the quality of the selected samples in a few shots severely limits the usefulness of ICL. The primary goal of this paper is to enhance the performance of evaluation metrics for in-context learning by selecting high-quality samples in few-shot learning scenarios. We employ the chi-square test to identify high-quality samples and compare the results with those obtained using low-quality samples. Our findings demonstrate that utilizing high-quality samples leads to improved performance with respect to all evaluated metrics.
[ "Rasheed, Areeg Fahad", "Zarkoosh, M." ]
Mashee at SemEval-2024 Task 8: The Impact of Samples Quality on the Performance of In-Context Learning for Machine Text Classification
semeval-1.10
Poster
2406.17790
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
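The Mashee abstract above uses a chi-square test to pick high-quality few-shot examples. One plausible way to turn that into a per-sample score is to rate each candidate by the chi-square statistics of its terms with respect to the class labels; the sketch below follows that assumed proxy and is not necessarily the authors' exact procedure.

```python
# Sketch: rank candidate few-shot examples by a chi-square-based quality score.
# Scoring a sample by the mean chi2 statistic of its terms is an assumed proxy,
# not necessarily the authors' exact use of the chi-square test.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

texts = [
    "The model generates fluent but generic sentences.",
    "Humans wrote this short note quickly.",
    "Generated text often repeats common phrases verbatim.",
    "A human author added personal anecdotes here.",
]
labels = [1, 0, 1, 0]  # 1 = machine-generated, 0 = human-written

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
chi2_scores, _ = chi2(X, labels)          # one chi2 statistic per vocabulary term

def sample_quality(row) -> float:
    """Average chi2 score of the terms present in one sample."""
    indices = row.nonzero()[1]
    return float(chi2_scores[indices].mean()) if len(indices) else 0.0

ranked = sorted(range(len(texts)), key=lambda i: sample_quality(X[i]), reverse=True)
print("Samples ranked as few-shot candidates (best first):", ranked)
```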
https://aclanthology.org/2024.semeval-1.11.bib
https://aclanthology.org/2024.semeval-1.11/
@inproceedings{dao-etal-2024-puer, title = "Puer at {S}em{E}val-2024 Task 4: Fine-tuning Pre-trained Language Models for Meme Persuasion Technique Detection", author = "Dao, Jiaxu and Li, Zhuoying and Su, Youbang and Gong, Wensheng", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.11", doi = "10.18653/v1/2024.semeval-1.11", pages = "64--69", abstract = "The paper summarizes our research on multilingual detection of persuasion techniques in memes for the SemEval-2024 Task 4. Our work focused on English-Subtask 1, implemented based on a roberta-large pre-trained model provided by the transforms tool that was fine-tuned into a corpus of social media posts. Our method significantly outperforms the officially released baseline method, and ranked 7th in English-Subtask 1 for the test set. This paper also compares the performances of different deep learning model architectures, such as BERT, ALBERT, and XLM-RoBERTa, on multilingual detection of persuasion techniques in memes. The experimental source code covered in the paper will later be sourced from Github.", }
The paper summarizes our research on multilingual detection of persuasion techniques in memes for the SemEval-2024 Task 4. Our work focused on English-Subtask 1, implemented with a roberta-large pre-trained model from the Transformers library, fine-tuned on a corpus of social media posts. Our method significantly outperforms the officially released baseline method, and ranked 7th in English-Subtask 1 on the test set. This paper also compares the performances of different deep learning model architectures, such as BERT, ALBERT, and XLM-RoBERTa, on multilingual detection of persuasion techniques in memes. The experimental source code will later be released on GitHub.
[ "Dao, Jiaxu", "Li, Zhuoying", "Su, Youbang", "Gong, Wensheng" ]
Puer at SemEval-2024 Task 4: Fine-tuning Pre-trained Language Models for Meme Persuasion Technique Detection
semeval-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.12.bib
https://aclanthology.org/2024.semeval-1.12/
@inproceedings{dao-etal-2024-puer-semeval, title = "Puer at {S}em{E}val-2024 Task 2: A {B}io{L}ink{BERT} Approach to Biomedical Natural Language Inference", author = "Dao, Jiaxu and Li, Zhuoying and Tang, Xiuzhong and Lan, Xiaoli and Wang, Junde", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.12", doi = "10.18653/v1/2024.semeval-1.12", pages = "70--75", abstract = "This paper delineates our investigation into the application of BioLinkBERT for enhancing clinical trials, presented at SemEval-2024 Task 2. Centering on the medical biomedical NLI task, our approach utilized the BioLinkBERT-large model, refined with a pioneering mixed loss function that amalgamates contrastive learning and cross-entropy loss. This methodology demonstrably surpassed the established benchmark, securing an impressive F1 score of 0.72 and positioning our work prominently in the field. Additionally, we conducted a comparative analysis of various deep learning architectures, including BERT, ALBERT, and XLM-RoBERTa, within the context of medical text mining. The findings not only showcase our method{'}s superior performance but also chart a course for future research in biomedical data processing. Our experiment source code is available on GitHub at: https://github.com/daojiaxu/semeval2024{\_}task2.", }
This paper delineates our investigation into the application of BioLinkBERT for enhancing clinical trials, presented at SemEval-2024 Task 2. Centering on the medical biomedical NLI task, our approach utilized the BioLinkBERT-large model, refined with a pioneering mixed loss function that amalgamates contrastive learning and cross-entropy loss. This methodology demonstrably surpassed the established benchmark, securing an impressive F1 score of 0.72 and positioning our work prominently in the field. Additionally, we conducted a comparative analysis of various deep learning architectures, including BERT, ALBERT, and XLM-RoBERTa, within the context of medical text mining. The findings not only showcase our method{'}s superior performance but also chart a course for future research in biomedical data processing. Our experiment source code is available on GitHub at: https://github.com/daojiaxu/semeval2024{\_}task2.
[ "Dao, Jiaxu", "Li, Zhuoying", "Tang, Xiuzhong", "Lan, Xiaoli", "Wang, Junde" ]
Puer at SemEval-2024 Task 2: A BioLinkBERT Approach to Biomedical Natural Language Inference
semeval-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
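The Puer abstract above combines a contrastive term with cross-entropy in a mixed loss. The module below is a minimal sketch of one such combination (cross-entropy on the classifier logits plus a cosine-based contrastive term on pooled embeddings); the 0.5 weight and the exact contrastive formulation are assumptions, not the paper's definition.

```python
# Sketch of a mixed loss: cross-entropy on classifier logits plus a simple contrastive
# term on pooled sentence embeddings. Weighting and the use of CosineEmbeddingLoss
# are illustrative assumptions, not the paper's exact formulation.
import torch
import torch.nn as nn

class MixedLoss(nn.Module):
    def __init__(self, contrastive_weight: float = 0.5, margin: float = 0.2):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.contrastive = nn.CosineEmbeddingLoss(margin=margin)
        self.weight = contrastive_weight

    def forward(self, logits, labels, emb_statement, emb_ctr):
        # Cross-entropy over entailment/contradiction logits.
        ce_loss = self.ce(logits, labels)
        # Pull entailed pairs together, push contradicting pairs apart (+1 / -1 targets).
        targets = torch.where(labels == 1, torch.tensor(1.0), torch.tensor(-1.0))
        contrastive_loss = self.contrastive(emb_statement, emb_ctr, targets)
        return ce_loss + self.weight * contrastive_loss

# Toy usage with random tensors standing in for model outputs.
batch, dim = 4, 8
loss_fn = MixedLoss()
loss = loss_fn(torch.randn(batch, 2), torch.randint(0, 2, (batch,)),
               torch.randn(batch, dim), torch.randn(batch, dim))
print(loss.item())
```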
https://aclanthology.org/2024.semeval-1.13.bib
https://aclanthology.org/2024.semeval-1.13/
@inproceedings{kiet-thin-2024-nrk, title = "{NRK} at {S}em{E}val-2024 Task 1: Semantic Textual Relatedness through Domain Adaptation and Ensemble Learning on {BERT}-based models", author = "Kiet, Nguyen Tuan and Thin, Dang Van", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.13", doi = "10.18653/v1/2024.semeval-1.13", pages = "76--81", abstract = "This paper describes the system of the team NRK for Task A in the SemEval-2024 Task 1: Semantic Textual Relatedness (STR). We focus on exploring the performance of ensemble architectures based on the voting technique and different pre-trained transformer-based language models, including the multilingual and monolingual BERTology models. The experimental results show that our system has achieved competitive performance in some languages in Track A: Supervised, where our submissions rank in the Top 3 and Top 4 for Algerian Arabic and Amharic languages. Our source code is released on the GitHub site.", }
This paper describes the system of the team NRK for Task A in the SemEval-2024 Task 1: Semantic Textual Relatedness (STR). We focus on exploring the performance of ensemble architectures based on the voting technique and different pre-trained transformer-based language models, including the multilingual and monolingual BERTology models. The experimental results show that our system has achieved competitive performance in some languages in Track A: Supervised, where our submissions rank in the Top 3 and Top 4 for Algerian Arabic and Amharic languages. Our source code is released on the GitHub site.
[ "Kiet, Nguyen Tuan", "Thin, Dang Van" ]
NRK at SemEval-2024 Task 1: Semantic Textual Relatedness through Domain Adaptation and Ensemble Learning on BERT-based models
semeval-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
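The NRK abstract above ensembles BERT-based models with a voting technique. The sketch below shows two generic ensembling strategies (majority vote over discrete labels and averaging of continuous relatedness scores); which variant the team applied per track is an assumption here.

```python
# Sketch of two simple ensembling strategies over per-model predictions.
# Which variant (voting vs. averaging) the team used per track is an assumption here.
from collections import Counter
from statistics import mean

def hard_vote(label_predictions):
    """Majority vote over discrete labels, e.g. ['related', 'related', 'unrelated'] -> 'related'."""
    return Counter(label_predictions).most_common(1)[0][0]

def average_scores(score_predictions):
    """Average continuous relatedness scores from several models."""
    return mean(score_predictions)

# Three hypothetical fine-tuned models scoring the same sentence pair.
per_model_scores = [0.71, 0.65, 0.80]
print("ensembled relatedness:", round(average_scores(per_model_scores), 3))
print("ensembled label:", hard_vote(["related", "related", "unrelated"]))
```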
https://aclanthology.org/2024.semeval-1.14.bib
https://aclanthology.org/2024.semeval-1.14/
@inproceedings{siino-2024-brainllama, title = "{B}rain{L}lama at {S}em{E}val-2024 Task 6: Prompting Llama to detect hallucinations and related observable overgeneration mistakes", author = "Siino, Marco", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.14", doi = "10.18653/v1/2024.semeval-1.14", pages = "82--87", abstract = "Participants in the SemEval-2024 Task 6 were tasked with executing binary classification aimed at discerning instances of fluent overgeneration hallucinations across two distinct setups: the model-aware and model-agnostic tracks. That is, participants must detect grammatically sound outputs which contain incorrect or unsupported semantic information, regardless of whether they had access to the model responsible for producing the output or not, within the model-aware and model-agnostic tracks. Two tracks were proposed for the task: a model-aware track, where organizers provided a checkpoint to a model publicly available on HuggingFace for every data point considered, and a model-agnostic track where the organizers do not. In this paper, we discuss the application of a Llama model to address both the tracks. Find the persuasive strategy that a meme employs from a hierarchy of twenty based just on its {``}textual content.{''} Only a portion of the reward is awarded if the technique{'}s ancestor node is chosen. This classification issue is multilabel hierarchical. Our approach reaches an accuracy of 0.62 on the agnostic track and of 0.67 on the aware track.", }
Participants in SemEval-2024 Task 6 were tasked with binary classification aimed at discerning instances of fluent overgeneration hallucinations across two distinct setups: the model-aware and model-agnostic tracks. That is, participants must detect grammatically sound outputs which contain incorrect or unsupported semantic information, regardless of whether they had access to the model responsible for producing the output. Two tracks were proposed for the task: a model-aware track, where organizers provided a checkpoint to a model publicly available on HuggingFace for every data point considered, and a model-agnostic track where they did not. In this paper, we discuss the application of a Llama model to address both tracks. Our approach reaches an accuracy of 0.62 on the agnostic track and of 0.67 on the aware track.
[ "Siino, Marco" ]
BrainLlama at SemEval-2024 Task 6: Prompting Llama to detect hallucinations and related observable overgeneration mistakes
semeval-1.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
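The BrainLlama abstract above prompts a Llama model to judge whether an output is a hallucination. Below is a minimal sketch of that prompting pattern; the checkpoint name, prompt wording, and answer parsing are all assumptions rather than the submission's actual setup.

```python
# Sketch: prompt an instruction-tuned causal LM to flag hallucinated outputs.
# The checkpoint name, prompt wording, and answer parsing are illustrative assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # assumed checkpoint; any chat-tuned Llama works similarly
)

def is_hallucination(source: str, hypothesis: str) -> bool:
    prompt = (
        "You are checking a model output for hallucinations.\n"
        f"Source: {source}\n"
        f"Output: {hypothesis}\n"
        "Does the output contain information that is not supported by the source? "
        "Answer Yes or No.\nAnswer:"
    )
    completion = generator(prompt, max_new_tokens=5, do_sample=False)[0]["generated_text"]
    return completion[len(prompt):].strip().lower().startswith("yes")

print(is_hallucination("The cat sat on the mat.", "The dog slept on the sofa."))
```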
https://aclanthology.org/2024.semeval-1.15.bib
https://aclanthology.org/2024.semeval-1.15/
@inproceedings{wang-etal-2024-dke, title = "{DKE}-Research at {S}em{E}val-2024 Task 2: Incorporating Data Augmentation with Generative Models and Biomedical Knowledge to Enhance Inference Robustness", author = "Wang, Yuqi and Wang, Zeqiang and Wang, Wei and Chen, Qi and Huang, Kaizhu and Nguyen, Anh and De, Suparna", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.15", doi = "10.18653/v1/2024.semeval-1.15", pages = "88--94", abstract = "Safe and reliable natural language inference is critical for extracting insights from clinical trial reports but poses challenges due to biases in large pre-trained language models. This paper presents a novel data augmentation technique to improve model robustness for biomedical natural language inference in clinical trials. By generating synthetic examples through semantic perturbations and domain-specific vocabulary replacement and adding a new task for numerical and quantitative reasoning, we introduce greater diversity and reduce shortcut learning. Our approach, combined with multi-task learning and the DeBERTa architecture, achieved significant performance gains on the NLI4CT 2024 benchmark compared to the original language models. Ablation studies validate the contribution of each augmentation method in improving robustness. Our best-performing model ranked 12th in terms of faithfulness and 8th in terms of consistency, respectively, out of the 32 participants.", }
Safe and reliable natural language inference is critical for extracting insights from clinical trial reports but poses challenges due to biases in large pre-trained language models. This paper presents a novel data augmentation technique to improve model robustness for biomedical natural language inference in clinical trials. By generating synthetic examples through semantic perturbations and domain-specific vocabulary replacement and adding a new task for numerical and quantitative reasoning, we introduce greater diversity and reduce shortcut learning. Our approach, combined with multi-task learning and the DeBERTa architecture, achieved significant performance gains on the NLI4CT 2024 benchmark compared to the original language models. Ablation studies validate the contribution of each augmentation method in improving robustness. Our best-performing model ranked 12th in terms of faithfulness and 8th in terms of consistency, respectively, out of the 32 participants.
[ "Wang, Yuqi", "Wang, Zeqiang", "Wang, Wei", "Chen, Qi", "Huang, Kaizhu", "Nguyen, Anh", "De, Suparna" ]
DKE-Research at SemEval-2024 Task 2: Incorporating Data Augmentation with Generative Models and Biomedical Knowledge to Enhance Inference Robustness
semeval-1.15
Poster
2404.09206
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
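The DKE-Research abstract above generates synthetic examples partly through domain-specific vocabulary replacement. The sketch below shows that augmentation idea with a tiny synonym table standing in for a real biomedical vocabulary resource; the table and probabilities are assumptions.

```python
# Sketch of domain-specific vocabulary replacement for data augmentation.
# The small synonym table is a toy stand-in for a real biomedical vocabulary resource.
import random
import re

DOMAIN_SYNONYMS = {
    "myocardial infarction": "heart attack",
    "hypertension": "high blood pressure",
    "adverse event": "side effect",
}

def augment(sentence: str, replace_probability: float = 0.8, seed: int = 0) -> str:
    """Return a perturbed copy of the sentence with some domain terms swapped for synonyms."""
    rng = random.Random(seed)
    augmented = sentence
    for term, synonym in DOMAIN_SYNONYMS.items():
        if term in augmented.lower() and rng.random() < replace_probability:
            augmented = re.sub(term, synonym, augmented, flags=re.IGNORECASE)
    return augmented

original = "Patients with hypertension reported one adverse event during the trial."
print(augment(original))
```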
https://aclanthology.org/2024.semeval-1.16.bib
https://aclanthology.org/2024.semeval-1.16/
@inproceedings{bestgen-2024-satlab, title = "{SATL}ab at {S}em{E}val-2024 Task 1: A Fully Instance-Specific Approach for Semantic Textual Relatedness Prediction", author = "Bestgen, Yves", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.16", doi = "10.18653/v1/2024.semeval-1.16", pages = "95--100", abstract = "This paper presents the SATLab participation in SemEval 2024 Task 1 on Semantic Textual Relatedness. The proposed system predicts semantic relatedness by means of the Euclidean distance between the character ngram frequencies in the two sentences to evaluate. It employs no external resources, nor information from other instances present in the material. The system performs well, coming first in five of the twelve languages. However, there is little difference between the best systems.", }
This paper presents the SATLab participation in SemEval 2024 Task 1 on Semantic Textual Relatedness. The proposed system predicts semantic relatedness by means of the Euclidean distance between the character ngram frequencies in the two sentences to evaluate. It employs no external resources, nor information from other instances present in the material. The system performs well, coming first in five of the twelve languages. However, there is little difference between the best systems.
[ "Bestgen, Yves" ]
SATLab at SemEval-2024 Task 1: A Fully Instance-Specific Approach for Semantic Textual Relatedness Prediction
semeval-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
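The SATLab abstract above scores relatedness from the Euclidean distance between character n-gram frequency vectors of the two sentences. The sketch below implements that core idea; the n-gram range and the mapping from distance to a relatedness score are assumptions, not the system's exact settings.

```python
# Sketch: character n-gram frequency vectors compared by Euclidean distance.
# The n-gram range and the mapping from distance to a relatedness score are assumptions.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def relatedness(sentence_a: str, sentence_b: str, ngram_range=(1, 3)) -> float:
    vectorizer = CountVectorizer(analyzer="char", ngram_range=ngram_range)
    counts = vectorizer.fit_transform([sentence_a, sentence_b]).toarray().astype(float)
    # Normalize counts to frequencies so sentence length does not dominate the distance.
    frequencies = counts / counts.sum(axis=1, keepdims=True)
    distance = np.linalg.norm(frequencies[0] - frequencies[1])
    return 1.0 / (1.0 + distance)  # smaller distance -> higher relatedness

print(relatedness("A man is playing a guitar.", "Someone plays the guitar."))
print(relatedness("A man is playing a guitar.", "The stock market fell sharply."))
```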
https://aclanthology.org/2024.semeval-1.17.bib
https://aclanthology.org/2024.semeval-1.17/
@inproceedings{sarvazyan-etal-2024-genaios, title = "Genaios at {S}em{E}val-2024 Task 8: Detecting Machine-Generated Text by Mixing Language Model Probabilistic Features", author = "Sarvazyan, Areg Mikael and Gonz{\'a}lez, Jos{\'e} {\'A}ngel and Franco-salvador, Marc", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.17", doi = "10.18653/v1/2024.semeval-1.17", pages = "101--107", abstract = "This paper describes the participation of the Genaios team in the monolingual track of Subtask A at SemEval-2024 Task 8. Our best system, LLMixtic, is a Transformer Encoder that mixes token-level probabilistic features extracted from four LLaMA-2 models. We obtained the best results in the official ranking (96.88{\%} accuracy), showing a false positive ratio of 4.38{\%} and a false negative ratio of 1.97{\%} on the test set. We further study LLMixtic through ablation, probabilistic, and attention analyses, finding that (i) performance improves as more LLMs and probabilistic features are included, (ii) LLMixtic puts most attention on the features of the last tokens, (iii) it fails on samples where human text probabilities become consistently higher than for generated text, and (iv) LLMixtic{'}s false negatives exhibit a bias towards text with newlines.", }
This paper describes the participation of the Genaios team in the monolingual track of Subtask A at SemEval-2024 Task 8. Our best system, LLMixtic, is a Transformer Encoder that mixes token-level probabilistic features extracted from four LLaMA-2 models. We obtained the best results in the official ranking (96.88{\%} accuracy), showing a false positive ratio of 4.38{\%} and a false negative ratio of 1.97{\%} on the test set. We further study LLMixtic through ablation, probabilistic, and attention analyses, finding that (i) performance improves as more LLMs and probabilistic features are included, (ii) LLMixtic puts most attention on the features of the last tokens, (iii) it fails on samples where human text probabilities become consistently higher than for generated text, and (iv) LLMixtic{'}s false negatives exhibit a bias towards text with newlines.
[ "Sarvazyan, Areg Mikael", "Gonz{\\'a}lez, Jos{\\'e} {\\'A}ngel", "Franco-salvador, Marc" ]
Genaios at SemEval-2024 Task 8: Detecting Machine-Generated Text by Mixing Language Model Probabilistic Features
semeval-1.17
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
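The Genaios abstract above feeds token-level probabilistic features from several LLaMA-2 models into a Transformer encoder. The sketch below shows how per-token log-probabilities, the kind of feature being mixed, can be extracted from one causal LM; GPT-2 stands in for LLaMA-2 here, and the downstream encoder is omitted.

```python
# Sketch: per-token log-probabilities from a causal LM, the kind of probabilistic
# feature LLMixtic mixes across several models. GPT-2 stands in for LLaMA-2 here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

@torch.no_grad()
def token_log_probs(text: str) -> torch.Tensor:
    """Log-probability the model assigns to each token given its preceding tokens."""
    input_ids = tokenizer(text, return_tensors="pt").input_ids       # (1, seq_len)
    logits = model(input_ids).logits                                  # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)          # predictions for tokens 1..n-1
    targets = input_ids[:, 1:]                                        # the tokens actually observed
    return log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)    # (1, seq_len - 1)

features = token_log_probs("Machine-generated text can be surprisingly predictable.")
print(features.shape, features.mean().item())
```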
https://aclanthology.org/2024.semeval-1.18.bib
https://aclanthology.org/2024.semeval-1.18/
@inproceedings{opper-narayanaswamy-2024-self, title = "Self-{S}tr{AE} at {S}em{E}val-2024 Task 1: Making Self-Structuring {A}uto{E}ncoders Learn More With Less", author = "Opper, Mattia and Narayanaswamy, Siddharth", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.18", doi = "10.18653/v1/2024.semeval-1.18", pages = "108--115", abstract = "We present two simple improvements to the Self-Structuring AutoEncoder (Self-StrAE). Firstly, we show that including reconstruction to the vocabulary as an auxiliary objective improves representation quality. Secondly, we demonstrate that increasing the number of independent channels leads to significant improvements in embedding quality, while simultaneously reducing the number of parameters. Surprisingly, we demonstrate that this trend can be followed to the extreme, even to point of reducing the total number of non-embedding parameters to seven. Our system can be pre-trained from scratch with as little as 10M tokens of input data, and proves effective across English, Spanish and Afrikaans.", }
We present two simple improvements to the Self-Structuring AutoEncoder (Self-StrAE). Firstly, we show that including reconstruction to the vocabulary as an auxiliary objective improves representation quality. Secondly, we demonstrate that increasing the number of independent channels leads to significant improvements in embedding quality, while simultaneously reducing the number of parameters. Surprisingly, we demonstrate that this trend can be followed to the extreme, even to the point of reducing the total number of non-embedding parameters to seven. Our system can be pre-trained from scratch with as little as 10M tokens of input data, and proves effective across English, Spanish and Afrikaans.
[ "Opper, Mattia", "Narayanaswamy, Siddharth" ]
Self-StrAE at SemEval-2024 Task 1: Making Self-Structuring AutoEncoders Learn More With Less
semeval-1.18
Poster
2404.01860
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.19.bib
https://aclanthology.org/2024.semeval-1.19/
@inproceedings{chakraborty-2024-rgat, title = "{RGAT} at {S}em{E}val-2024 Task 2: Biomedical Natural Language Inference using Graph Attention Network", author = "Chakraborty, Abir", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.19", doi = "10.18653/v1/2024.semeval-1.19", pages = "116--122", abstract = "In this work, we (team RGAT) describe our approaches for the SemEval 2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials (NLI4CT). The objective of this task is multi-evidence natural language inference based on different sections of clinical trial reports. We have explored various approaches, (a) dependency tree of the input query as additional features in a Graph Attention Network (GAT) along with the token and parts-of-speech features, (b) sequence-to-sequence approach using various models and synthetic data and finally, (c) in-context learning using large language models (LLMs) like GPT-4. Amongs these three approaches the best result is obtained from the LLM with 0.76 F1-score (the highest being 0.78), 0.86 in faithfulness and 0.74 in consistence.", }
In this work, we (team RGAT) describe our approaches for the SemEval 2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials (NLI4CT). The objective of this task is multi-evidence natural language inference based on different sections of clinical trial reports. We have explored various approaches: (a) the dependency tree of the input query as additional features in a Graph Attention Network (GAT) along with the token and parts-of-speech features, (b) a sequence-to-sequence approach using various models and synthetic data, and finally, (c) in-context learning using large language models (LLMs) like GPT-4. Among these three approaches the best result is obtained from the LLM, with a 0.76 F1-score (the highest being 0.78), 0.86 in faithfulness and 0.74 in consistency.
[ "Chakraborty, Abir" ]
RGAT at SemEval-2024 Task 2: Biomedical Natural Language Inference using Graph Attention Network
semeval-1.19
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
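The RGAT abstract above runs graph attention over the dependency tree of the input. Below is a minimal single-head graph attention layer over a dependency adjacency matrix written in plain PyTorch; the real system also uses POS features and more heads, so this is a simplified assumption rather than the authors' architecture.

```python
# Minimal single-head graph attention layer over a dependency-tree adjacency matrix.
# A simplified sketch in plain PyTorch; the actual system also uses POS features and more heads.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGraphAttention(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.project = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, node_features, adjacency):
        h = self.project(node_features)                        # (n_nodes, out_dim)
        n = h.size(0)
        # Pairwise attention logits from concatenated node pairs.
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        logits = torch.relu(self.attn(pairs)).squeeze(-1)      # (n_nodes, n_nodes)
        # Mask out pairs that are not connected in the dependency tree.
        logits = logits.masked_fill(adjacency == 0, float("-inf"))
        weights = F.softmax(logits, dim=-1)
        return weights @ h                                     # aggregated neighbour features

# Toy dependency tree over 4 tokens (self-loops included so every row has a neighbour).
adjacency = torch.tensor([[1, 1, 0, 0],
                          [1, 1, 1, 1],
                          [0, 1, 1, 0],
                          [0, 1, 0, 1]])
layer = SimpleGraphAttention(in_dim=16, out_dim=8)
print(layer(torch.randn(4, 16), adjacency).shape)  # torch.Size([4, 8])
```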
https://aclanthology.org/2024.semeval-1.20.bib
https://aclanthology.org/2024.semeval-1.20/
@inproceedings{sherratt-etal-2024-bda, title = "{BDA} at {S}em{E}val-2024 Task 4: Detection of Persuasion in Memes Across Languages with Ensemble Learning and External Knowledge", author = "Sherratt, Victoria and Dogan, Sedat and Wuraola, Ifeoluwa and Bryan-smith, Lydia and Onwuchekwa, Oyinkansola and Dethlefs, Nina", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.20", doi = "10.18653/v1/2024.semeval-1.20", pages = "123--132", abstract = "This paper outlines our multimodal ensemble learning system for identifying persuasion techniques in memes. We contribute an approach which utilises the novel inclusion of consistent named visual entities extracted using Google Vision{'}s API as an external knowledge source, joined to our multimodal ensemble via late fusion. As well as detailing our experiments in ensemble combinations, fusion methods and data augmentation, we explore the impact of including external data and summarise post-evaluation improvements to our architecture based on analysis of the task results.", }
This paper outlines our multimodal ensemble learning system for identifying persuasion techniques in memes. We contribute an approach which utilises the novel inclusion of consistent named visual entities extracted using Google Vision{'}s API as an external knowledge source, joined to our multimodal ensemble via late fusion. As well as detailing our experiments in ensemble combinations, fusion methods and data augmentation, we explore the impact of including external data and summarise post-evaluation improvements to our architecture based on analysis of the task results.
[ "Sherratt, Victoria", "Dogan, Sedat", "Wuraola, Ifeoluwa", "Bryan-smith, Lydia", "Onwuchekwa, Oyinkansola", "Dethlefs, Nina" ]
BDA at SemEval-2024 Task 4: Detection of Persuasion in Memes Across Languages with Ensemble Learning and External Knowledge
semeval-1.20
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.21.bib
https://aclanthology.org/2024.semeval-1.21/
@inproceedings{chowdhury-ptaszynski-2024-nowhash, title = "nowhash at {S}em{E}val-2024 Task 4: Exploiting Fusion of Transformers for Detecting Persuasion Techniques in Multilingual Memes", author = "Chowdhury, Abu Nowhash and Ptaszynski, Michal", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.21", doi = "10.18653/v1/2024.semeval-1.21", pages = "133--138", abstract = "Nowadays, memes are considered one of the most prominent forms of medium to disseminate information on social media. Memes are typically constructed in multilingual settings using visuals with texts. Sometimes people use memes to influence mass audiences through rhetorical and psychological techniques, such as causal oversimplification, name-calling, and smear. It is a challenging task to identify those techniques considering memes{'} multimodal characteristics. To address these challenges, SemEval-2024 Task 4 introduced a shared task focusing on detecting persuasion techniques in multilingual memes. This paper presents our participation in subtasks 1 and 2(b). We use a finetuned language-agnostic BERT sentence embedding (LaBSE) model to extract effective contextual features from meme text to address the challenge of identifying persuasion techniques in subtask 1. For subtask 2(b), We finetune the vision transformer and XLM-RoBERTa to extract effective contextual information from meme image and text data. Finally, we unify those features and employ a single feed-forward linear layer on top to obtain the prediction label. Experimental results on the SemEval 2024 Task 4 benchmark dataset manifested the potency of our proposed methods for subtasks 1 and 2(b).", }
Nowadays, memes are considered one of the most prominent forms of medium to disseminate information on social media. Memes are typically constructed in multilingual settings using visuals with texts. Sometimes people use memes to influence mass audiences through rhetorical and psychological techniques, such as causal oversimplification, name-calling, and smear. It is a challenging task to identify those techniques considering memes{'} multimodal characteristics. To address these challenges, SemEval-2024 Task 4 introduced a shared task focusing on detecting persuasion techniques in multilingual memes. This paper presents our participation in subtasks 1 and 2(b). We use a fine-tuned language-agnostic BERT sentence embedding (LaBSE) model to extract effective contextual features from meme text to address the challenge of identifying persuasion techniques in subtask 1. For subtask 2(b), we fine-tune the vision transformer and XLM-RoBERTa to extract effective contextual information from meme image and text data. Finally, we unify those features and employ a single feed-forward linear layer on top to obtain the prediction label. Experimental results on the SemEval 2024 Task 4 benchmark dataset demonstrate the effectiveness of our proposed methods for subtasks 1 and 2(b).
[ "Chowdhury, Abu Nowhash", "Ptaszynski, Michal" ]
nowhash at SemEval-2024 Task 4: Exploiting Fusion of Transformers for Detecting Persuasion Techniques in Multilingual Memes
semeval-1.21
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
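The nowhash abstract above unifies text and image features and applies a single linear layer on top. The sketch below shows that fusion head in isolation; the embedding sizes and random tensors are placeholders standing in for LaBSE / XLM-RoBERTa and ViT outputs.

```python
# Sketch of the fusion head: concatenate text and image embeddings and apply one linear layer.
# Embedding sizes and the random tensors below are placeholders for LaBSE / ViT outputs.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, text_dim: int, image_dim: int, num_labels: int):
        super().__init__()
        self.classifier = nn.Linear(text_dim + image_dim, num_labels)

    def forward(self, text_embedding, image_embedding):
        fused = torch.cat([text_embedding, image_embedding], dim=-1)
        return self.classifier(fused)  # multi-label logits, one per persuasion technique

model = FusionClassifier(text_dim=768, image_dim=768, num_labels=22)
logits = model(torch.randn(4, 768), torch.randn(4, 768))
probabilities = torch.sigmoid(logits)       # independent per-label probabilities
print(probabilities.shape)                   # torch.Size([4, 22])
```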
https://aclanthology.org/2024.semeval-1.22.bib
https://aclanthology.org/2024.semeval-1.22/
@inproceedings{rahimi-etal-2024-hallusafe, title = "{H}allu{S}afe at {S}em{E}val-2024 Task 6: An {NLI}-based Approach to Make {LLM}s Safer by Better Detecting Hallucinations and Overgeneration Mistakes", author = "Rahimi, Zahra and Amirzadeh, Hamidreza and Sohrabi, Alireza and Taghavi, Zeinab and Sameti, Hossein", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.22", doi = "10.18653/v1/2024.semeval-1.22", pages = "139--147", abstract = "The advancement of large language models (LLMs), their ability to produce eloquent and fluent content, and their vast knowledge have resulted in their usage in various tasks and applications. Despite generating fluent content, this content can contain fabricated or false information. This problem is known as hallucination and has reduced the confidence in the output of LLMs. In this work, we have used Natural Language Inference to train classifiers for hallucination detection to tackle SemEval-2024 Task 6-SHROOM (Mickus et al., 2024) which is defined in three sub-tasks: Paraphrase Generation, Machine Translation, and Definition Modeling. We have also conducted experiments on LLMs to evaluate their ability to detect hallucinated outputs. We have achieved 75.93{\%} and 78.33{\%} accuracy for the modelaware and model-agnostic tracks, respectively. The shared links of our models and the codes are available on GitHub.", }
The advancement of large language models (LLMs), their ability to produce eloquent and fluent content, and their vast knowledge have resulted in their usage in various tasks and applications. Despite generating fluent content, this content can contain fabricated or false information. This problem is known as hallucination and has reduced the confidence in the output of LLMs. In this work, we have used Natural Language Inference to train classifiers for hallucination detection to tackle SemEval-2024 Task 6-SHROOM (Mickus et al., 2024), which is defined in three sub-tasks: Paraphrase Generation, Machine Translation, and Definition Modeling. We have also conducted experiments on LLMs to evaluate their ability to detect hallucinated outputs. We have achieved 75.93{\%} and 78.33{\%} accuracy for the model-aware and model-agnostic tracks, respectively. Links to our models and code are available on GitHub.
[ "Rahimi, Zahra", "Amirzadeh, Hamidreza", "Sohrabi, Alireza", "Taghavi, Zeinab", "Sameti, Hossein" ]
HalluSafe at SemEval-2024 Task 6: An NLI-based Approach to Make LLMs Safer by Better Detecting Hallucinations and Overgeneration Mistakes
semeval-1.22
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
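The HalluSafe abstract above frames hallucination detection as Natural Language Inference. The sketch below shows the basic idea with an off-the-shelf NLI checkpoint, treating the source as premise and the generated output as hypothesis; the checkpoint choice and the entailment-based decision rule are assumptions, not the team's trained classifiers.

```python
# Sketch: treat hallucination detection as NLI between the source (premise)
# and the generated output (hypothesis). Checkpoint and decision rule are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # assumed off-the-shelf NLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

@torch.no_grad()
def is_hallucinated(source: str, generated: str) -> bool:
    inputs = tokenizer(source, generated, return_tensors="pt", truncation=True)
    probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze(0)
    label = model.config.id2label[int(probs.argmax())]
    return label.upper() != "ENTAILMENT"   # unsupported content counts as hallucination

print(is_hallucinated("The river flooded the village in 1998.",
                      "The village was flooded by the river."))
```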
https://aclanthology.org/2024.semeval-1.23.bib
https://aclanthology.org/2024.semeval-1.23/
@inproceedings{rahimi-etal-2024-nimz, title = "{NIMZ} at {S}em{E}val-2024 Task 9: Evaluating Methods in Solving Brainteasers Defying Commonsense", author = "Rahimi, Zahra and Shirzady, Mohammad Moein and Taghavi, Zeinab and Sameti, Hossein", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.23", doi = "10.18653/v1/2024.semeval-1.23", pages = "148--154", abstract = "The goal and dream of the artificial intelligence field have long been the development of intelligent systems or agents that mimic human behavior and thinking. Creativity is an essential trait in humans that is closely related to lateral thinking. The remarkable advancements in Language Models have led to extensive research on question-answering and explicit and implicit reasoning involving vertical thinking. However, there is an increasing need to shift focus towards research and development of models that can think laterally. One must step outside the traditional frame of commonsense concepts in lateral thinking to conclude. Task 9 of SemEval-2024 is Brainteaser (Jiang et al.,2024), which requires lateral thinking to answer riddle-like multiple-choice questions. In our study, we assessed the performance of various models for the Brainteaser task. We achieved an overall accuracy of 75{\%} for the Sentence Puzzle subtask and 66.7{\%} for the Word Puzzle subtask. All the codes, along with the links to our saved models, are available on our GitHub.", }
The goal and dream of the artificial intelligence field have long been the development of intelligent systems or agents that mimic human behavior and thinking. Creativity is an essential trait in humans that is closely related to lateral thinking. The remarkable advancements in Language Models have led to extensive research on question-answering and explicit and implicit reasoning involving vertical thinking. However, there is an increasing need to shift focus towards research and development of models that can think laterally. Lateral thinking requires stepping outside the traditional frame of commonsense concepts to reach a conclusion. Task 9 of SemEval-2024 is Brainteaser (Jiang et al., 2024), which requires lateral thinking to answer riddle-like multiple-choice questions. In our study, we assessed the performance of various models for the Brainteaser task. We achieved an overall accuracy of 75{\%} for the Sentence Puzzle subtask and 66.7{\%} for the Word Puzzle subtask. All the codes, along with the links to our saved models, are available on our GitHub.
[ "Rahimi, Zahra", "Shirzady, Mohammad Moein", "Taghavi, Zeinab", "Sameti, Hossein" ]
NIMZ at SemEval-2024 Task 9: Evaluating Methods in Solving Brainteasers Defying Commonsense
semeval-1.23
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.24.bib
https://aclanthology.org/2024.semeval-1.24/
@inproceedings{siino-2024-mistral, title = "Mistral at {S}em{E}val-2024 Task 5: Mistral 7{B} for argument reasoning in Civil Procedure", author = "Siino, Marco", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.24", doi = "10.18653/v1/2024.semeval-1.24", pages = "155--162", abstract = "At the SemEval-2024 Task 5, the organizers introduce a novel natural language processing (NLP) challenge and dataset within the realm of the United States civil procedure. Each datum within the dataset comprises a comprehensive overview of a legal case, a specific inquiry associated with it, and a potential argument in support of a solution, supplemented with an in-depth rationale elucidating the applicability of the argument within the given context. Derived from a text designed for legal education purposes, this dataset presents a multifaceted benchmarking task for contemporary legal language models. Our manuscript delineates the approach we adopted for participation in this competition. Specifically, we detail the use of a Mistral 7B model to answer the question provided. Our only and best submission reach an F1-score equal to 0.5597 and an Accuracy of 0.5714, outperforming the baseline provided for the task.", }
At the SemEval-2024 Task 5, the organizers introduce a novel natural language processing (NLP) challenge and dataset within the realm of the United States civil procedure. Each datum within the dataset comprises a comprehensive overview of a legal case, a specific inquiry associated with it, and a potential argument in support of a solution, supplemented with an in-depth rationale elucidating the applicability of the argument within the given context. Derived from a text designed for legal education purposes, this dataset presents a multifaceted benchmarking task for contemporary legal language models. Our manuscript delineates the approach we adopted for participation in this competition. Specifically, we detail the use of a Mistral 7B model to answer the question provided. Our only and best submission reaches an F1-score of 0.5597 and an accuracy of 0.5714, outperforming the baseline provided for the task.
[ "Siino, Marco" ]
Mistral at SemEval-2024 Task 5: Mistral 7B for argument reasoning in Civil Procedure
semeval-1.24
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.25.bib
https://aclanthology.org/2024.semeval-1.25/
@inproceedings{xiong-etal-2024-ncl, title = "{NCL}-{U}o{R} at {S}em{E}val-2024 Task 8: Fine-tuning Large Language Models for Multigenerator, Multidomain, and Multilingual Machine-Generated Text Detection", author = "Xiong, Feng and Markchom, Thanet and Zheng, Ziwei and Jung, Subin and Ojha, Varun and Liang, Huizhi", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.25", doi = "10.18653/v1/2024.semeval-1.25", pages = "163--169", abstract = "SemEval-2024 Task 8 introduces the challenge of identifying machine-generated texts from diverse Large Language Models (LLMs) in various languages and domains. The task comprises three subtasks: binary classification in monolingual and multilingual (Subtask A), multi-class classification (Subtask B), and mixed text detection (Subtask C). This paper focuses on Subtask A {\&} B. To tackle this task, this paper proposes two methods: 1) using traditional machine learning (ML) with natural language preprocessing (NLP) for feature extraction, and 2) fine-tuning LLMs for text classification. For fine-tuning, we use the train datasets provided by the task organizers. The results show that transformer models like LoRA-RoBERTa and XLM-RoBERTa outperform traditional ML models, particularly in multilingual subtasks. However, traditional ML models performed better than transformer models for the monolingual task, demonstrating the importance of considering the specific characteristics of each subtask when selecting an appropriate approach.", }
SemEval-2024 Task 8 introduces the challenge of identifying machine-generated texts from diverse Large Language Models (LLMs) in various languages and domains. The task comprises three subtasks: binary classification in monolingual and multilingual settings (Subtask A), multi-class classification (Subtask B), and mixed text detection (Subtask C). This paper focuses on Subtasks A {\&} B. To tackle this task, this paper proposes two methods: 1) using traditional machine learning (ML) with natural language preprocessing (NLP) for feature extraction, and 2) fine-tuning LLMs for text classification. For fine-tuning, we use the training datasets provided by the task organizers. The results show that transformer models like LoRA-RoBERTa and XLM-RoBERTa outperform traditional ML models, particularly in multilingual subtasks. However, traditional ML models performed better than transformer models for the monolingual task, demonstrating the importance of considering the specific characteristics of each subtask when selecting an appropriate approach.
[ "Xiong, Feng", "Markchom, Thanet", "Zheng, Ziwei", "Jung, Subin", "Ojha, Varun", "Liang, Huizhi" ]
NCL-UoR at SemEval-2024 Task 8: Fine-tuning Large Language Models for Multigenerator, Multidomain, and Multilingual Machine-Generated Text Detection
semeval-1.25
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.26.bib
https://aclanthology.org/2024.semeval-1.26/
@inproceedings{akkasi-etal-2024-iml, title = "i{ML} at {S}em{E}val-2024 Task 2: Safe Biomedical Natural Language Interference for Clinical Trials with {LLM} Based Ensemble Inferencing", author = "Akkasi, Abbas and Khan, Adnan and Shaaban, Mai A. and Komeili, Majid and Yaqub, Mohammad", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.26", doi = "10.18653/v1/2024.semeval-1.26", pages = "170--174", abstract = "We engaged in the shared task 2 at SenEval-2024, employing a diverse set of solutions with a particular emphasis on leveraging a Large Language Model (LLM) based zero-shot inference approach to address the challenge.", }
We engaged in shared task 2 at SemEval-2024, employing a diverse set of solutions with a particular emphasis on leveraging a Large Language Model (LLM) based zero-shot inference approach to address the challenge.
[ "Akkasi, Abbas", "Khan, Adnan", "Shaaban, Mai A.", "Komeili, Majid", "Yaqub, Mohammad" ]
iML at SemEval-2024 Task 2: Safe Biomedical Natural Language Interference for Clinical Trials with LLM Based Ensemble Inferencing
semeval-1.26
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.27.bib
https://aclanthology.org/2024.semeval-1.27/
@inproceedings{nayak-kosseim-2024-clac, title = "{CL}a{C} at {S}em{E}val-2024 Task 4: Decoding Persuasion in Memes {--} An Ensemble of Language Models with Paraphrase Augmentation", author = "Nayak, Kota Shamanth Ramanath and Kosseim, Leila", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.27", doi = "10.18653/v1/2024.semeval-1.27", pages = "175--180", abstract = "This paper describes our approach to SemEval-2024 Task 4 subtask 1, focusing on hierarchical multi-label detection of persuasion techniques in meme texts. Our approach was based on fine-tuning individual language models (BERT, XLM-RoBERTa, and mBERT) and leveraging a mean-based ensemble model. Additional strategies included dataset augmentation through the TC dataset and paraphrase generation as well as the fine-tuning of individual classification thresholds for each class. During testing, our system outperformed the baseline in all languages except for Arabic, where no significant improvement was reached. Analysis of the results seem to indicate that our dataset augmentation strategy and per-class threshold fine-tuning may have introduced noise and exacerbated the dataset imbalance.", }
This paper describes our approach to SemEval-2024 Task 4 subtask 1, focusing on hierarchical multi-label detection of persuasion techniques in meme texts. Our approach was based on fine-tuning individual language models (BERT, XLM-RoBERTa, and mBERT) and leveraging a mean-based ensemble model. Additional strategies included dataset augmentation through the TC dataset and paraphrase generation as well as the fine-tuning of individual classification thresholds for each class. During testing, our system outperformed the baseline in all languages except for Arabic, where no significant improvement was reached. Analysis of the results seems to indicate that our dataset augmentation strategy and per-class threshold fine-tuning may have introduced noise and exacerbated the dataset imbalance.
[ "Nayak, Kota Shamanth Ramanath", "Kosseim, Leila" ]
CLaC at SemEval-2024 Task 4: Decoding Persuasion in Memes – An Ensemble of Language Models with Paraphrase Augmentation
semeval-1.27
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
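The CLaC abstract above tunes an individual classification threshold for each class. The sketch below shows one common way to do that, sweeping a grid of candidate thresholds on development data and keeping the F1-maximizing value per class; the grid and the F1 criterion are assumptions rather than the team's exact procedure.

```python
# Sketch: tune one decision threshold per class on development data by maximizing F1.
# The candidate grid of thresholds and the F1 criterion are illustrative assumptions.
import numpy as np
from sklearn.metrics import f1_score

def tune_thresholds(dev_probabilities: np.ndarray, dev_labels: np.ndarray) -> np.ndarray:
    """dev_probabilities, dev_labels: (n_examples, n_classes) arrays; labels are 0/1."""
    candidates = np.linspace(0.05, 0.95, 19)
    thresholds = np.full(dev_probabilities.shape[1], 0.5)
    for class_index in range(dev_probabilities.shape[1]):
        scores = [f1_score(dev_labels[:, class_index],
                           dev_probabilities[:, class_index] >= t,
                           zero_division=0) for t in candidates]
        thresholds[class_index] = candidates[int(np.argmax(scores))]
    return thresholds

rng = np.random.default_rng(0)
probs = rng.random((100, 3))
labels = (probs + rng.normal(0, 0.2, probs.shape) > 0.6).astype(int)
print(tune_thresholds(probs, labels))
```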
https://aclanthology.org/2024.semeval-1.28.bib
https://aclanthology.org/2024.semeval-1.28/
@inproceedings{zhu-2024-rdproj, title = "{RD}proj at {S}em{E}val-2024 Task 4: An Ensemble Learning Approach for Multilingual Detection of Persuasion Techniques in Memes", author = "Zhu, Yuhang", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.28", doi = "10.18653/v1/2024.semeval-1.28", pages = "181--187", abstract = "This paper introduces our bagging-based ensemble learning approach for the SemEval-2024 Task 4 Subtask 1, focusing on multilingual persuasion detection within meme texts. This task aims to identify persuasion techniques employed within meme texts, which is a hierarchical multilabel classification task. The given text may apply multiple techniques, and persuasion techniques have a hierarchical structure. However, only a few prior persuasion detection systems have utilized the hierarchical structure of persuasion techniques. In that case, we designed a multilingual bagging-based ensemble approach, incorporating a soft voting ensemble strategy to effectively exploit persuasion techniques{'} hierarchical structure. Our methodology achieved the second position in Bulgarian and North Macedonian, third in Arabic, and eleventh in English.", }
This paper introduces our bagging-based ensemble learning approach for the SemEval-2024 Task 4 Subtask 1, focusing on multilingual persuasion detection within meme texts. This task aims to identify persuasion techniques employed within meme texts, which is a hierarchical multilabel classification task. The given text may apply multiple techniques, and persuasion techniques have a hierarchical structure. However, only a few prior persuasion detection systems have utilized the hierarchical structure of persuasion techniques. To address this, we designed a multilingual bagging-based ensemble approach, incorporating a soft voting ensemble strategy to effectively exploit persuasion techniques{'} hierarchical structure. Our methodology achieved the second position in Bulgarian and North Macedonian, third in Arabic, and eleventh in English.
[ "Zhu, Yuhang" ]
RDproj at SemEval-2024 Task 4: An Ensemble Learning Approach for Multilingual Detection of Persuasion Techniques in Memes
semeval-1.28
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.29.bib
https://aclanthology.org/2024.semeval-1.29/
@inproceedings{salahudeen-etal-2024-hausanlp, title = "{H}ausa{NLP} at {S}em{E}val-2024 Task 1: Textual Relatedness Analysis for Semantic Representation of Sentences", author = "Salahudeen, Saheed Abdullahi and Lawan, Falalu Ibrahim and Aliyu, Yusuf and Abubakar, Amina and Aliyu, Lukman and Rabiu, Nur and Ahmad, Mahmoud and Shuaibu, Aliyu Rabiu and Musa, Alamin", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.29", doi = "10.18653/v1/2024.semeval-1.29", pages = "188--192", abstract = "Semantic Text Relatedness (STR), a measure of meaning similarity between text elements, has become a key focus in the field of Natural Language Processing (NLP). We describe SemEval-2024 Task 1 on Semantic Textual Relatedness featuring three tracks: supervised learning, unsupervised learning and cross-lingual learning across African and Asian languages including Afrikaans, Algerian Arabic, Amharic, Hausa, Hindi, Indonesian, Kinyarwanda, Marathi, Moroccan Arabic, Modern Standard Arabic, Punjabi, Spanish, and Telugu. Our goal is to analyse the semantic representation of sentence textual relatedness using mBERT, all-MiniLM-L6-v2, and bert-base-uncased. The effectiveness of these models is evaluated using the Spearman Correlation metric, which assesses the strength of the relationship between paired data. The findings reveal the viability of transformer models in multilingual STR tasks.", }
Semantic Text Relatedness (STR), a measure of meaning similarity between text elements, has become a key focus in the field of Natural Language Processing (NLP). We describe SemEval-2024 Task 1 on Semantic Textual Relatedness featuring three tracks: supervised learning, unsupervised learning and cross-lingual learning across African and Asian languages including Afrikaans, Algerian Arabic, Amharic, Hausa, Hindi, Indonesian, Kinyarwanda, Marathi, Moroccan Arabic, Modern Standard Arabic, Punjabi, Spanish, and Telugu. Our goal is to analyse the semantic representation of sentence textual relatedness using mBERT, all-MiniLM-L6-v2, and bert-base-uncased. The effectiveness of these models is evaluated using the Spearman Correlation metric, which assesses the strength of the relationship between paired data. The findings reveal the viability of transformer models in multilingual STR tasks.
[ "Salahudeen, Saheed Abdullahi", "Lawan, Falalu Ibrahim", "Aliyu, Yusuf", "Abubakar, Amina", "Aliyu, Lukman", "Rabiu, Nur", "Ahmad, Mahmoud", "Shuaibu, Aliyu Rabiu", "Musa, Alamin" ]
HausaNLP at SemEval-2024 Task 1: Textual Relatedness Analysis for Semantic Representation of Sentences
semeval-1.29
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]