bibtex_url | proceedings | bibtext | abstract | authors | title | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | paper_page_exists_pre_conf | Models | Datasets | Spaces |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2024.semeval-1.130.bib | https://aclanthology.org/2024.semeval-1.130/ | @inproceedings{babu-g-etal-2024-techssn,
title = "{TECHSSN} at {S}em{E}val-2024 Task 1: Multilingual Analysis for Semantic Textual Relatedness using Boosted Transformer Models",
author = "Babu G, Shreejith and
V, Ravindran and
Jetti, Aashika and
Sivanaiah, Rajalakshmi and
Deborah, Angel",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.130",
doi = "10.18653/v1/2024.semeval-1.130",
pages = "907--912",
abstract = "This paper presents our approach to SemEval- 2024 Task 1: Semantic Textual Relatedness (STR). Out of the 14 languages provided, we specifically focused on English and Telugu. Our proposal employs advanced natural language processing techniques and leverages the Sentence Transformers library for sentence embeddings. For English, a Gradient Boosting Regressor trained on DistilBERT embeddingsachieves competitive results, while for Telugu, a multilingual model coupled with hyperparameter tuning yields enhanced performance. The paper discusses the significance of semantic relatedness in various languages, highlighting the challenges and nuances encountered. Our findings contribute to the understanding of semantic textual relatedness across diverse linguistic landscapes, providing valuable insights for future research in multilingual natural language processing.",
}
| This paper presents our approach to SemEval-2024 Task 1: Semantic Textual Relatedness (STR). Out of the 14 languages provided, we specifically focused on English and Telugu. Our proposal employs advanced natural language processing techniques and leverages the Sentence Transformers library for sentence embeddings. For English, a Gradient Boosting Regressor trained on DistilBERT embeddings achieves competitive results, while for Telugu, a multilingual model coupled with hyperparameter tuning yields enhanced performance. The paper discusses the significance of semantic relatedness in various languages, highlighting the challenges and nuances encountered. Our findings contribute to the understanding of semantic textual relatedness across diverse linguistic landscapes, providing valuable insights for future research in multilingual natural language processing. | [
"Babu G, Shreejith",
"V, Ravindran",
"Jetti, Aashika",
"Sivanaiah, Rajalakshmi",
"Deborah, Angel"
] | TECHSSN at SemEval-2024 Task 1: Multilingual Analysis for Semantic Textual Relatedness using Boosted Transformer Models | semeval-1.130 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.131.bib | https://aclanthology.org/2024.semeval-1.131/ | @inproceedings{bahad-etal-2024-noot,
title = "Noot Noot at {S}em{E}val-2024 Task 7: Numerical Reasoning and Headline Generation",
author = "Bahad, Sankalp and
Bhaskar, Yash and
Krishnamurthy, Parameswari",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.131",
doi = "10.18653/v1/2024.semeval-1.131",
pages = "913--917",
abstract = "Natural language processing (NLP) modelshave achieved remarkable progress in recentyears, particularly in tasks related to semanticanalysis. However, many existing benchmarksprimarily focus on lexical and syntactic un-derstanding, often overlooking the importanceof numerical reasoning abilities. In this pa-per, we argue for the necessity of incorporatingnumeral-awareness into NLP evaluations andpropose two distinct tasks to assess this capabil-ity: Numerical Reasoning and Headline Gener-ation. We present datasets curated for each taskand evaluate various approaches using both au-tomatic and human evaluation metrics. Ourresults demonstrate the diverse strategies em-ployed by participating teams and highlight thepromising performance of emerging modelslike Mixtral 8x7b instruct. We discuss the im-plications of our findings and suggest avenuesfor future research in advancing numeral-awarelanguage understanding and generation.",
}
| Natural language processing (NLP) models have achieved remarkable progress in recent years, particularly in tasks related to semantic analysis. However, many existing benchmarks primarily focus on lexical and syntactic understanding, often overlooking the importance of numerical reasoning abilities. In this paper, we argue for the necessity of incorporating numeral-awareness into NLP evaluations and propose two distinct tasks to assess this capability: Numerical Reasoning and Headline Generation. We present datasets curated for each task and evaluate various approaches using both automatic and human evaluation metrics. Our results demonstrate the diverse strategies employed by participating teams and highlight the promising performance of emerging models like Mixtral 8x7b instruct. We discuss the implications of our findings and suggest avenues for future research in advancing numeral-aware language understanding and generation. | [
"Bahad, Sankalp",
"Bhaskar, Yash",
"Krishnamurthy, Parameswari"
] | Noot Noot at SemEval-2024 Task 7: Numerical Reasoning and Headline Generation | semeval-1.131 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.132.bib | https://aclanthology.org/2024.semeval-1.132/ | @inproceedings{bahad-etal-2024-fine-tuning,
title = "Fine-tuning Language Models for {AI} vs Human Generated Text detection",
author = "Bahad, Sankalp and
Bhaskar, Yash and
Krishnamurthy, Parameswari",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.132",
doi = "10.18653/v1/2024.semeval-1.132",
pages = "918--921",
abstract = "In this paper, we introduce a machine-generated text detection system designed totackle the challenges posed by the prolifera-tion of large language models (LLMs). Withthe rise of LLMs such as ChatGPT and GPT-4,there is a growing concern regarding the po-tential misuse of machine-generated content,including misinformation dissemination. Oursystem addresses this issue by automating theidentification of machine-generated text acrossmultiple subtasks: binary human-written vs.machine-generated text classification, multi-way machine-generated text classification, andhuman-machine mixed text detection. We em-ploy the RoBERTa Base model and fine-tuneit on a diverse dataset encompassing variousdomains, languages, and sources. Throughrigorous evaluation, we demonstrate the effec-tiveness of our system in accurately detectingmachine-generated text, contributing to effortsaimed at mitigating its potential misuse.",
}
| In this paper, we introduce a machine-generated text detection system designed to tackle the challenges posed by the proliferation of large language models (LLMs). With the rise of LLMs such as ChatGPT and GPT-4, there is a growing concern regarding the potential misuse of machine-generated content, including misinformation dissemination. Our system addresses this issue by automating the identification of machine-generated text across multiple subtasks: binary human-written vs. machine-generated text classification, multi-way machine-generated text classification, and human-machine mixed text detection. We employ the RoBERTa Base model and fine-tune it on a diverse dataset encompassing various domains, languages, and sources. Through rigorous evaluation, we demonstrate the effectiveness of our system in accurately detecting machine-generated text, contributing to efforts aimed at mitigating its potential misuse. | [
"Bahad, Sankalp",
"Bhaskar, Yash",
"Krishnamurthy, Parameswari"
] | Fine-tuning Language Models for AI vs Human Generated Text detection | semeval-1.132 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.133.bib | https://aclanthology.org/2024.semeval-1.133/ | @inproceedings{sabzevari-etal-2024-eagerlearners,
title = "eagerlearners at {S}em{E}val2024 Task 5: The Legal Argument Reasoning Task in Civil Procedure",
author = "Sabzevari, Hoorieh and
Rostamkhani, Mohammadmostafa and
Eetemadi, Sauleh",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.133",
doi = "10.18653/v1/2024.semeval-1.133",
pages = "922--926",
abstract = "This study investigates the performance of the zero-shot method in classifying data using three large language models, alongside two models with large input token sizes and the two pre-trained models on legal data. Our main dataset comes from the domain of U.S. civil procedure. It includes summaries of legal cases, specific questions, potential answers, and detailed explanations for why each solution is relevant, all sourced from a book aimed at law students. By comparing different methods, we aimed to understand how effectively they handle the complexities found in legal datasets. Our findings show how well the zero-shot method of large language models can understand complicated data. We achieved our highest F1 score of 64{\%} in these experiments.",
}
| This study investigates the performance of the zero-shot method in classifying data using three large language models, alongside two models with large input token sizes and two models pre-trained on legal data. Our main dataset comes from the domain of U.S. civil procedure. It includes summaries of legal cases, specific questions, potential answers, and detailed explanations for why each solution is relevant, all sourced from a book aimed at law students. By comparing different methods, we aimed to understand how effectively they handle the complexities found in legal datasets. Our findings show how well the zero-shot method of large language models can understand complicated data. We achieved our highest F1 score of 64% in these experiments. | [
"Sabzevari, Hoorieh",
"Rostamkhani, Mohammadmostafa",
"Eetemadi, Sauleh"
] | eagerlearners at SemEval2024 Task 5: The Legal Argument Reasoning Task in Civil Procedure | semeval-1.133 | Poster | 2406.16490 | [
"https://github.com/lhoorie/SemEval2024-Task5"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.134.bib | https://aclanthology.org/2024.semeval-1.134/ | @inproceedings{urlana-etal-2024-trustai,
title = "{T}rust{AI} at {S}em{E}val-2024 Task 8: A Comprehensive Analysis of Multi-domain Machine Generated Text Detection Techniques",
author = "Urlana, Ashok and
Saibewar, Aditya and
Garlapati, Bala Mallikarjunarao and
Vinayak Kumar, Charaka and
Singh, Ajeet and
Chalamala, Srinivasa Rao",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.134",
doi = "10.18653/v1/2024.semeval-1.134",
pages = "927--934",
abstract = "The Large Language Models (LLMs) exhibit remarkable ability to generate fluent content across a wide spectrum of user queries. However, this capability has raised concerns regarding misinformation and personal information leakage. In this paper, we present our methods for the SemEval2024 Task8, aiming to detect machine-generated text across various domains in both mono-lingual and multi-lingual contexts. Our study comprehensively analyzes various methods to detect machine-generated text, including statistical, neural, and pre-trained model approaches. We also detail our experimental setup and perform a in-depth error analysis to evaluate the effectiveness of these methods. Our methods obtain an accuracy of 86.9{\%} on the test set of subtask-A mono and 83.7{\%} for subtask-B. Furthermore, we also highlight the challenges and essential factors for consideration in future studies.",
}
| Large Language Models (LLMs) exhibit a remarkable ability to generate fluent content across a wide spectrum of user queries. However, this capability has raised concerns regarding misinformation and personal information leakage. In this paper, we present our methods for SemEval-2024 Task 8, aiming to detect machine-generated text across various domains in both mono-lingual and multi-lingual contexts. Our study comprehensively analyzes various methods to detect machine-generated text, including statistical, neural, and pre-trained model approaches. We also detail our experimental setup and perform an in-depth error analysis to evaluate the effectiveness of these methods. Our methods obtain an accuracy of 86.9% on the test set of subtask-A mono and 83.7% for subtask-B. Furthermore, we also highlight the challenges and essential factors for consideration in future studies. | [
"Urlana, Ashok",
"Saibewar, Aditya",
"Garlapati, Bala Mallikarjunarao",
"Vinayak Kumar, Charaka",
"Singh, Ajeet",
"Chalamala, Srinivasa Rao"
] | TrustAI at SemEval-2024 Task 8: A Comprehensive Analysis of Multi-domain Machine Generated Text Detection Techniques | semeval-1.134 | Poster | 2403.16592 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.135.bib | https://aclanthology.org/2024.semeval-1.135/ | @inproceedings{eponon-ramos-perez-2024-pinealai,
title = "Pinealai at {S}em{E}val-2024 Task 1: Exploring Semantic Relatedness Prediction using Syntactic, {TF}-{IDF}, and Distance-Based Features.",
author = "Eponon, Alex and
Ramos Perez, Luis",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.135",
doi = "10.18653/v1/2024.semeval-1.135",
pages = "935--939",
abstract = "The central aim of this experiment is to establish a system proficient in predicting semantic relatedness between pairs of English texts. Additionally, the study seeks to delve into diverse features capable of enhancing the ability of models to identify semantic relatedness within given sentences. Several strategies have been used that combine TF-IDF, syntactic features, and similarity measures to train machine learning to predict semantic relatedness between pairs of sentences. The results obtained were above the baseline with an approximate Spearman score of 0.84.",
}
| The central aim of this experiment is to establish a system proficient in predicting semantic relatedness between pairs of English texts. Additionally, the study seeks to delve into diverse features capable of enhancing the ability of models to identify semantic relatedness within given sentences. Several strategies have been used that combine TF-IDF, syntactic features, and similarity measures to train machine learning models to predict semantic relatedness between pairs of sentences. The results obtained were above the baseline with an approximate Spearman score of 0.84. | [
"Eponon, Alex",
"Ramos Perez, Luis"
] | Pinealai at SemEval-2024 Task 1: Exploring Semantic Relatedness Prediction using Syntactic, TF-IDF, and Distance-Based Features. | semeval-1.135 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.136.bib | https://aclanthology.org/2024.semeval-1.136/ | @inproceedings{he-etal-2024-infrrd,
title = "Infrrd.ai at {S}em{E}val-2024 Task 7: {RAG}-based end-to-end training to generate headlines and numbers",
author = "He, Jianglong and
Tallam, Saiteja and
Nakshathri, Srirama and
Amarnath, Navaneeth and
Kr, Pratiba and
Kumar, Deepak",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.136",
doi = "10.18653/v1/2024.semeval-1.136",
pages = "940--951",
abstract = "We propose a training algorithm based on retrieval-augmented generation (RAG) to obtain the most similar training samples. The training samples obtained are used as a reference to perform contextual learning-based fine-tuning of large language models (LLMs). We use the proposed method to generate headlines and extract numerical values from unstructured text. Models are made aware of the presence of numbers in the unstructured text with extended markup language (XML) tags specifically designed to capture the numbers. The headlines of unstructured text are preprocessed to wrap the number and then presented to the model. A number of mathematical operations are also passed as references to cover the chain-of-thought (COT) approach. Therefore, the model can calculate the final value passed to a mathematical operation. We perform the validation of numbers as a post-processing step to verify whether the numerical value calculated by the model is correct or not. The automatic validation of numbers in the generated headline helped the model achieve the best results in human evaluation among the methods involved.",
}
| We propose a training algorithm based on retrieval-augmented generation (RAG) to obtain the most similar training samples. The training samples obtained are used as a reference to perform contextual learning-based fine-tuning of large language models (LLMs). We use the proposed method to generate headlines and extract numerical values from unstructured text. Models are made aware of the presence of numbers in the unstructured text with Extensible Markup Language (XML) tags specifically designed to capture the numbers. The headlines of unstructured text are preprocessed to wrap the number and then presented to the model. A number of mathematical operations are also passed as references to cover the chain-of-thought (COT) approach. Therefore, the model can calculate the final value passed to a mathematical operation. We perform the validation of numbers as a post-processing step to verify whether the numerical value calculated by the model is correct or not. The automatic validation of numbers in the generated headline helped the model achieve the best results in human evaluation among the methods involved. | [
"He, Jianglong",
"Tallam, Saiteja",
"Nakshathri, Srirama",
"Amarnath, Navaneeth",
"Kr, Pratiba",
"Kumar, Deepak"
] | Infrrd.ai at SemEval-2024 Task 7: RAG-based end-to-end training to generate headlines and numbers | semeval-1.136 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.137.bib | https://aclanthology.org/2024.semeval-1.137/ | @inproceedings{choudhury-etal-2024-alphaintellect,
title = "{A}lpha{I}ntellect at {S}em{E}val-2024 Task 6: Detection of Hallucinations in Generated Text",
author = "Choudhury, Sohan and
Saha, Priyam and
Ray, Subharthi and
Das, Shankha and
Das, Dipankar",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.137",
doi = "10.18653/v1/2024.semeval-1.137",
pages = "952--958",
abstract = "One major issue in natural language generation (NLG) models is detecting hallucinations (semantically inaccurate outputs). This study investigates a hallucination detection system designed for three distinct NLG tasks: definition modeling, paraphrase generation, and machine translation. The system uses feedforward neural networks for classification and SentenceTransformer models for similarity scores and sentence embeddings. Even though the SemEval-2024 benchmark shows good results, there is still room for improvement. Promising paths toward improving performance include considering multi-task learning methods, including strategies for handling out-of-domain data minimizing bias, and investigating sophisticated architectures.",
}
| One major issue in natural language generation (NLG) models is detecting hallucinations (semantically inaccurate outputs). This study investigates a hallucination detection system designed for three distinct NLG tasks: definition modeling, paraphrase generation, and machine translation. The system uses feedforward neural networks for classification and SentenceTransformer models for similarity scores and sentence embeddings. Even though the SemEval-2024 benchmark shows good results, there is still room for improvement. Promising paths toward improving performance include considering multi-task learning methods, including strategies for handling out-of-domain data, minimizing bias, and investigating sophisticated architectures. | [
"Choudhury, Sohan",
"Saha, Priyam",
"Ray, Subharthi",
"Das, Shankha",
"Das, Dipankar"
] | AlphaIntellect at SemEval-2024 Task 6: Detection of Hallucinations in Generated Text | semeval-1.137 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.138.bib | https://aclanthology.org/2024.semeval-1.138/ | @inproceedings{aali-etal-2024-ysp,
title = "{YSP} at {S}em{E}val-2024 Task 1: Enhancing Sentence Relatedness Assessment using {S}iamese Networks",
author = "Aali, Yasamin and
Hamidian, Sardar and
Farinneya, Parsa",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.138",
doi = "10.18653/v1/2024.semeval-1.138",
pages = "959--963",
abstract = "In this paper we present the system for Track A in the SemEval-2024 Task 1: Semantic Textual Relatedness for African and Asian Languages (STR). The proposed system integrates a Siamese Network architecture with pre-trained language models, including BERT, RoBERTa, and the Universal Sentence Encoder (USE). Through rigorous experimentation and analysis, we evaluate the performance of these models across multiple languages. Our findings reveal that the Universal Sentence Encoder excels in capturing semantic similarities, outperforming BERT and RoBERTa in most scenarios. Particularly notable is the USE{'}s exceptional performance in English and Marathi. These results emphasize the importance of selecting appropriate pre-trained models based on linguistic considerations and task requirements.",
}
| In this paper we present the system for Track A in the SemEval-2024 Task 1: Semantic Textual Relatedness for African and Asian Languages (STR). The proposed system integrates a Siamese Network architecture with pre-trained language models, including BERT, RoBERTa, and the Universal Sentence Encoder (USE). Through rigorous experimentation and analysis, we evaluate the performance of these models across multiple languages. Our findings reveal that the Universal Sentence Encoder excels in capturing semantic similarities, outperforming BERT and RoBERTa in most scenarios. Particularly notable is the USE{'}s exceptional performance in English and Marathi. These results emphasize the importance of selecting appropriate pre-trained models based on linguistic considerations and task requirements. | [
"Aali, Yasamin",
"Hamidian, Sardar",
"Farinneya, Parsa"
] | YSP at SemEval-2024 Task 1: Enhancing Sentence Relatedness Assessment using Siamese Networks | semeval-1.138 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.139.bib | https://aclanthology.org/2024.semeval-1.139/ | @inproceedings{bahad-etal-2024-nootnoot,
title = "{N}oot{N}oot At {S}em{E}val-2024 Task 6: Hallucinations and Related Observable Overgeneration Mistakes Detection",
author = "Bahad, Sankalp and
Bhaskar, Yash and
Krishnamurthy, Parameswari",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.139",
doi = "10.18653/v1/2024.semeval-1.139",
pages = "964--968",
abstract = "Semantic hallucinations in neural language gen-eration systems pose a significant challenge tothe reliability and accuracy of natural languageprocessing applications. Current neural mod-els often produce fluent but incorrect outputs,undermining the usefulness of generated text.In this study, we address the task of detectingsemantic hallucinations through the SHROOM(Semantic Hallucinations Real Or Mistakes)dataset, encompassing data from diverse NLGtasks such as definition modeling, machinetranslation, and paraphrase generation. We in-vestigate three methodologies: fine-tuning onlabelled training data, fine-tuning on labelledvalidation data, and a zero-shot approach usingthe Mixtral 8x7b instruct model. Our resultsdemonstrate the effectiveness of these method-ologies in identifying semantic hallucinations,with the zero-shot approach showing compet-itive performance without additional training.Our findings highlight the importance of robustdetection mechanisms for ensuring the accu-racy and reliability of neural language genera-tion systems.",
}
| Semantic hallucinations in neural language generation systems pose a significant challenge to the reliability and accuracy of natural language processing applications. Current neural models often produce fluent but incorrect outputs, undermining the usefulness of generated text. In this study, we address the task of detecting semantic hallucinations through the SHROOM (Semantic Hallucinations Real Or Mistakes) dataset, encompassing data from diverse NLG tasks such as definition modeling, machine translation, and paraphrase generation. We investigate three methodologies: fine-tuning on labelled training data, fine-tuning on labelled validation data, and a zero-shot approach using the Mixtral 8x7b instruct model. Our results demonstrate the effectiveness of these methodologies in identifying semantic hallucinations, with the zero-shot approach showing competitive performance without additional training. Our findings highlight the importance of robust detection mechanisms for ensuring the accuracy and reliability of neural language generation systems. | [
"Bahad, Sankalp",
"Bhaskar, Yash",
"Krishnamurthy, Parameswari"
] | NootNoot At SemEval-2024 Task 6: Hallucinations and Related Observable Overgeneration Mistakes Detection | semeval-1.139 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.140.bib | https://aclanthology.org/2024.semeval-1.140/ | @inproceedings{singhal-bedi-2024-transformers-semeval,
title = "Transformers at {S}em{E}val-2024 Task 5: Legal Argument Reasoning Task in Civil Procedure using {R}o{BERT}a",
author = "Singhal, Kriti and
Bedi, Jatin",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.140",
doi = "10.18653/v1/2024.semeval-1.140",
pages = "969--972",
abstract = "Legal argument reasoning task in civil procedure is a new NLP task utilizing a dataset from the domain of the U.S. civil procedure. The task aims at identifying whether the solution to a question in the legal domain is correct or not. This paper describes the team {``}Transformers{''} submission to the Legal Argument Reasoning Task in Civil Procedure shared task at SemEval-2024 Task 5. We use a BERT-based architecture for the shared task. The highest F1-score score and accuracy achieved was 0.6172 and 0.6531 respectively. We secured the 13th rank in the Legal Argument Reasoning Task in Civil Procedure shared task.",
}
| The legal argument reasoning task in civil procedure is a new NLP task utilizing a dataset from the domain of U.S. civil procedure. The task aims at identifying whether the solution to a question in the legal domain is correct or not. This paper describes the team "Transformers" submission to the Legal Argument Reasoning Task in Civil Procedure shared task at SemEval-2024 Task 5. We use a BERT-based architecture for the shared task. The highest F1-score and accuracy achieved were 0.6172 and 0.6531, respectively. We secured the 13th rank in the Legal Argument Reasoning Task in Civil Procedure shared task. | [
"Singhal, Kriti",
"Bedi, Jatin"
] | Transformers at SemEval-2024 Task 5: Legal Argument Reasoning Task in Civil Procedure using RoBERTa | semeval-1.140 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.141.bib | https://aclanthology.org/2024.semeval-1.141/ | @inproceedings{chen-etal-2024-ynu,
title = "{YNU}-{HPCC} at {S}em{E}val-2024 Task 7: Instruction Fine-tuning Models for Numerical Understanding and Generation",
author = "Chen, Kaiyuan and
Wang, Jin and
Zhang, Xuejie",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.141",
doi = "10.18653/v1/2024.semeval-1.141",
pages = "973--979",
abstract = "This paper presents our systems for Task 7, Numeral-Aware Language Understanding and Generation of SemEval 2024. As participants of Task 7, we engage in all subtasks and implement corresponding systems for each subtask. All subtasks cover three aspects: Quantitative understanding (English), Reading Comprehension of the Numbers in the text (Chinese), and Numeral-Aware Headline Generation (English). Our approach explores employing instruction-tuned models (Flan-T5) or text-to-text models (T5) to accomplish the respective subtasks. We implement the instruction fine-tuning with or without demonstrations and employ similarity-based retrieval or manual methods to construct demonstrations for each example in instruction fine-tuning. Moreover, we reformulate the model{'}s output into a chain-of-thought format with calculation expressions to enhance its reasoning performance for reasoning subtasks. The competitive results in all subtasks demonstrate the effectiveness of our systems.",
}
| This paper presents our systems for Task 7, Numeral-Aware Language Understanding and Generation of SemEval 2024. As participants of Task 7, we engage in all subtasks and implement corresponding systems for each subtask. All subtasks cover three aspects: Quantitative understanding (English), Reading Comprehension of the Numbers in the text (Chinese), and Numeral-Aware Headline Generation (English). Our approach explores employing instruction-tuned models (Flan-T5) or text-to-text models (T5) to accomplish the respective subtasks. We implement the instruction fine-tuning with or without demonstrations and employ similarity-based retrieval or manual methods to construct demonstrations for each example in instruction fine-tuning. Moreover, we reformulate the model's output into a chain-of-thought format with calculation expressions to enhance its reasoning performance for reasoning subtasks. The competitive results in all subtasks demonstrate the effectiveness of our systems. | [
"Chen, Kaiyuan",
"Wang, Jin",
"Zhang, Xuejie"
] | YNU-HPCC at SemEval-2024 Task 7: Instruction Fine-tuning Models for Numerical Understanding and Generation | semeval-1.141 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.142.bib | https://aclanthology.org/2024.semeval-1.142/ | @inproceedings{sonavane-etal-2024-cailmd,
title = "{CAILMD}-23 at {S}em{E}val-2024 Task 1: Multilingual Evaluation of Semantic Textual Relatedness",
author = "Sonavane, Srushti and
Endait, Sharvi and
Sinare, Ridhima and
Rohera, Pritika and
Naik, Advait and
Kadam, Dipali",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.142",
doi = "10.18653/v1/2024.semeval-1.142",
pages = "980--985",
abstract = "The explosive growth of online content demands robust Natural Language Processing (NLP) techniques that can capture nuanced meanings and cultural context across diverse languages. Semantic Textual Relatedness (STR) goes beyond superficial word overlap, considering linguistic elements and non-linguistic factors like topic, sentiment, and perspective. Despite its pivotal role, prior NLP research has predominantly focused on English, limiting its applicability across languages. Addressing this gap, our paper dives into capturing deeper connections between sentences beyond simple word overlap. Going beyond English-centric NLP research, we explore STR in Marathi, Hindi, Spanish, and English, unlocking the potential for information retrieval, machine translation, and more. Leveraging the SemEval-2024 shared task, we explore various language models across three learning paradigms: supervised, unsupervised, and cross-lingual. Our comprehensive methodology gains promising results, demonstrating the effectiveness of our approach. This work aims to not only showcase our achievements but also inspire further research in multilingual STR, particularly for low-resourced languages.",
}
| The explosive growth of online content demands robust Natural Language Processing (NLP) techniques that can capture nuanced meanings and cultural context across diverse languages. Semantic Textual Relatedness (STR) goes beyond superficial word overlap, considering linguistic elements and non-linguistic factors like topic, sentiment, and perspective. Despite its pivotal role, prior NLP research has predominantly focused on English, limiting its applicability across languages. Addressing this gap, our paper dives into capturing deeper connections between sentences beyond simple word overlap. Going beyond English-centric NLP research, we explore STR in Marathi, Hindi, Spanish, and English, unlocking the potential for information retrieval, machine translation, and more. Leveraging the SemEval-2024 shared task, we explore various language models across three learning paradigms: supervised, unsupervised, and cross-lingual. Our comprehensive methodology gains promising results, demonstrating the effectiveness of our approach. This work aims to not only showcase our achievements but also inspire further research in multilingual STR, particularly for low-resourced languages. | [
"Sonavane, Srushti",
"Endait, Sharvi",
"Sinare, Ridhima",
"Rohera, Pritika",
"Naik, Advait",
"Kadam, Dipali"
] | CAILMD-23 at SemEval-2024 Task 1: Multilingual Evaluation of Semantic Textual Relatedness | semeval-1.142 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.143.bib | https://aclanthology.org/2024.semeval-1.143/ | @inproceedings{aguiar-etal-2024-seme,
title = "{SEME} at {S}em{E}val-2024 Task 2: Comparing Masked and Generative Language Models on Natural Language Inference for Clinical Trials",
author = "Aguiar, Mathilde and
Zweigenbaum, Pierre and
Naderi, Nona",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.143",
doi = "10.18653/v1/2024.semeval-1.143",
pages = "986--996",
abstract = "This paper describes our submission to Task 2 of SemEval-2024: Safe Biomedical Natural Language Inference for Clinical Trials. The Multi-evidence Natural Language Inference for Clinical Trial Data (NLI4CT) consists of a Textual Entailment (TE) task focused on the evaluation of the consistency and faithfulness of Natural Language Inference (NLI) models applied to Clinical Trial Reports (CTR). We test 2 distinct approaches, one based on finetuning and ensembling Masked Language Models and the other based on prompting Large Language Models using templates, in particular, using Chain-Of-Thought and Contrastive Chain-Of-Thought. Prompting Flan-T5-large in a 2-shot setting leads to our best system that achieves 0.57 F1 score, 0.64 Faithfulness, and 0.56 Consistency.",
}
| This paper describes our submission to Task 2 of SemEval-2024: Safe Biomedical Natural Language Inference for Clinical Trials. The Multi-evidence Natural Language Inference for Clinical Trial Data (NLI4CT) consists of a Textual Entailment (TE) task focused on the evaluation of the consistency and faithfulness of Natural Language Inference (NLI) models applied to Clinical Trial Reports (CTR). We test 2 distinct approaches, one based on finetuning and ensembling Masked Language Models and the other based on prompting Large Language Models using templates, in particular, using Chain-Of-Thought and Contrastive Chain-Of-Thought. Prompting Flan-T5-large in a 2-shot setting leads to our best system that achieves 0.57 F1 score, 0.64 Faithfulness, and 0.56 Consistency. | [
"Aguiar, Mathilde",
"Zweigenbaum, Pierre",
"Naderi, Nona"
] | SEME at SemEval-2024 Task 2: Comparing Masked and Generative Language Models on Natural Language Inference for Clinical Trials | semeval-1.143 | Poster | 2404.03977 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.144.bib | https://aclanthology.org/2024.semeval-1.144/ | @inproceedings{benedetto-etal-2024-maindz,
title = "{MAINDZ} at {S}em{E}val-2024 Task 5: {CLUEDO} - Choosing Legal o{U}tcome by Explaining Decision through Oversight",
author = "Benedetto, Irene and
Koudounas, Alkis and
Vaiani, Lorenzo and
Pastor, Eliana and
Cagliero, Luca and
Tarasconi, Francesco",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.144",
doi = "10.18653/v1/2024.semeval-1.144",
pages = "997--1005",
abstract = "Large language models (LLMs) have recently obtained strong performance on complex reasoning tasks. However, their capabilities in specialized domains like law remain relatively unexplored. We present CLUEDO, a system to tackle a novel legal reasoning task that involves determining if a provided answer correctly addresses a legal question derived from U.S. civil procedure cases. CLUEDO utilizes multiple collaborator models that are trained using multiple-choice prompting to choose the right label and generate explanations. These collaborators are overseen by a final {``}detective{''} model that identifies the most accurate answer in a zero-shot manner. Our approach achieves an F1 macro score of 0.74 on the development set and 0.76 on the test set, outperforming individual models. Unlike the powerful GPT-4, CLUEDO provides more stable predictions thanks to the ensemble approach. Our results showcase the promise of tailored frameworks to enhance legal reasoning capabilities in LLMs.",
}
| Large language models (LLMs) have recently obtained strong performance on complex reasoning tasks. However, their capabilities in specialized domains like law remain relatively unexplored. We present CLUEDO, a system to tackle a novel legal reasoning task that involves determining if a provided answer correctly addresses a legal question derived from U.S. civil procedure cases. CLUEDO utilizes multiple collaborator models that are trained using multiple-choice prompting to choose the right label and generate explanations. These collaborators are overseen by a final "detective" model that identifies the most accurate answer in a zero-shot manner. Our approach achieves an F1 macro score of 0.74 on the development set and 0.76 on the test set, outperforming individual models. Unlike the powerful GPT-4, CLUEDO provides more stable predictions thanks to the ensemble approach. Our results showcase the promise of tailored frameworks to enhance legal reasoning capabilities in LLMs. | [
"Benedetto, Irene",
"Koudounas, Alkis",
"Vaiani, Lorenzo",
"Pastor, Eliana",
"Cagliero, Luca",
"Tarasconi, Francesco"
] | MAINDZ at SemEval-2024 Task 5: CLUEDO - Choosing Legal oUtcome by Explaining Decision through Oversight | semeval-1.144 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.145.bib | https://aclanthology.org/2024.semeval-1.145/ | @inproceedings{darwinkel-etal-2024-groningen,
title = "{G}roningen Group {E} at {S}em{E}val-2024 Task 8: Detecting machine-generated texts through pre-trained language models augmented with explicit linguistic-stylistic features",
author = "Darwinkel, Patrick and
Van Vaals, Sijbren and
Van Der Holt, Marieke and
Van Houten, Jarno",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.145",
doi = "10.18653/v1/2024.semeval-1.145",
pages = "1006--1014",
abstract = "Our approach to detecting machine-generated text for the SemEval-2024 Task 8 combines a wide range of linguistic-stylistic features with pre-trained language models (PLM). Experiments using random forests and PLMs resulted in an augmented DistilBERT system for subtask A and B and an augmented Longformer for subtask C. These systems achieved accuracies of 0.63 and 0.77 for the mono- and multilingual tracks of subtask A, 0.64 for subtask B and a MAE of 26.07 for subtask C. Although lower than the task organizer{'}s baselines, we demonstrate that linguistic-stylistic features are predictors for whether a text was authored by a model (and if so, which one).",
}
| Our approach to detecting machine-generated text for the SemEval-2024 Task 8 combines a wide range of linguistic-stylistic features with pre-trained language models (PLMs). Experiments using random forests and PLMs resulted in an augmented DistilBERT system for subtasks A and B and an augmented Longformer for subtask C. These systems achieved accuracies of 0.63 and 0.77 for the mono- and multilingual tracks of subtask A, 0.64 for subtask B, and a MAE of 26.07 for subtask C. Although lower than the task organizer's baselines, we demonstrate that linguistic-stylistic features are predictors for whether a text was authored by a model (and if so, which one). | [
"Darwinkel, Patrick",
"Van Vaals, Sijbren",
"Van Der Holt, Marieke",
"Van Houten, Jarno"
] | Groningen Group E at SemEval-2024 Task 8: Detecting machine-generated texts through pre-trained language models augmented with explicit linguistic-stylistic features | semeval-1.145 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.146.bib | https://aclanthology.org/2024.semeval-1.146/ | @inproceedings{khurshid-das-2024-magnum,
title = "Magnum {JUCSE} at {S}em{E}val-2024 Task 4: Multilingual Detection of Persuasion Techniques in Memes",
author = "Khurshid, Adnan and
Das, Dipankar",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.146",
doi = "10.18653/v1/2024.semeval-1.146",
pages = "1015--1018",
abstract = "This paper focuses on the task of detecting persuasion techniques organised in a hierarchy within meme text in multiple languages like English, North Macedonian, Arabic and Bulgarian, exploring the ways in which textual elements contribute to the dissemination of persuasive messages.The main strategy of the system is to train a binary classifier for each node in the hierarchy and predict labels in a top down fashion by seeing the confidence value of the prediction at any node. For each unique label in the hierarchy, a dataset is created from the original dataset which is then used to train the binary classifier for that label",
}
| This paper focuses on the task of detecting persuasion techniques organised in a hierarchy within meme text in multiple languages like English, North Macedonian, Arabic and Bulgarian, exploring the ways in which textual elements contribute to the dissemination of persuasive messages. The main strategy of the system is to train a binary classifier for each node in the hierarchy and predict labels in a top-down fashion by seeing the confidence value of the prediction at any node. For each unique label in the hierarchy, a dataset is created from the original dataset which is then used to train the binary classifier for that label. | [
"Khurshid, Adnan",
"Das, Dipankar"
] | Magnum JUCSE at SemEval-2024 Task 4: Multilingual Detection of Persuasion Techniques in Memes | semeval-1.146 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.147.bib | https://aclanthology.org/2024.semeval-1.147/ | @inproceedings{zhang-coltekin-2024-tubingen,
title = {{T}{\"u}bingen-{CL} at {S}em{E}val-2024 Task 1: Ensemble Learning for Semantic Relatedness Estimation},
author = {Zhang, Leixin and
{\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}},
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.147",
doi = "10.18653/v1/2024.semeval-1.147",
pages = "1019--1025",
abstract = "The paper introduces our system for SemEval-2024 Task 1, which aims to predict the relatedness of sentence pairs. Operating under the hypothesis that semantic relatedness is a broader concept that extends beyond mere similarity of sentences, our approach seeks to identify useful features for relatedness estimation. We employ an ensemble approach integrating various systems, including statistical textual features and outputs of deep learning models to predict relatedness scores. The findings suggest that semantic relatedness can be inferred from various sources and ensemble models outperform many individual systems in estimating semantic relatedness.",
}
| The paper introduces our system for SemEval-2024 Task 1, which aims to predict the relatedness of sentence pairs. Operating under the hypothesis that semantic relatedness is a broader concept that extends beyond mere similarity of sentences, our approach seeks to identify useful features for relatedness estimation. We employ an ensemble approach integrating various systems, including statistical textual features and outputs of deep learning models to predict relatedness scores. The findings suggest that semantic relatedness can be inferred from various sources and ensemble models outperform many individual systems in estimating semantic relatedness. | [
"Zhang, Leixin",
"{\\c{C}}{\\\"o}ltekin, {\\c{C}}a{\\u{g}}r{\\i}"
] | Tübingen-CL at SemEval-2024 Task 1: Ensemble Learning for Semantic Relatedness Estimation | semeval-1.147 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.148.bib | https://aclanthology.org/2024.semeval-1.148/ | @inproceedings{pulipaka-etal-2024-semeval,
title = "{S}em{E}val Task 8: A Comparison of Traditional and Neural Models for Detecting Machine Authored Text",
author = {Pulipaka, Srikar Kashyap and
Mhalgi, Shrirang and
Larson, Joseph and
K{\"u}bler, Sandra},
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.148",
doi = "10.18653/v1/2024.semeval-1.148",
pages = "1026--1031",
    abstract = {Since Large Language Models have reached a stage where it is becoming more and more difficult to distinguish between human and machine written text, there is an increasing need for automated systems to distinguish between them. As part of SemEval Task 8, Subtask A: Binary Human-Written vs. Machine-Generated Text Classification, we explore a variety of machine learning classifiers, from traditional statistical methods, such as Na{\"\i}ve Bayes and Decision Trees, to fine-tuned transformer models, such as RoBERTa and ALBERT. Our findings show that using a fine-tuned RoBERTa model with optimized hyperparameters yields the best accuracy. However, the improvement does not translate to the test set because of the differences in distribution in the development and test sets.},
}
| Since Large Language Models have reached a stage where it is becoming more and more difficult to distinguish between human and machine written text, there is an increasing need for automated systems to distinguish between them. As part of SemEval Task 8, Subtask A: Binary Human-Written vs. Machine-Generated Text Classification, we explore a variety of machine learning classifiers, from traditional statistical methods, such as Naïve Bayes and Decision Trees, to fine-tuned transformer models, such as RoBERTa and ALBERT. Our findings show that using a fine-tuned RoBERTa model with optimized hyperparameters yields the best accuracy. However, the improvement does not translate to the test set because of the differences in distribution in the development and test sets. | [
"Pulipaka, Srikar Kashyap",
"Mhalgi, Shrirang",
"Larson, Joseph",
"K{\\\"u}bler, S",
"ra"
] | SemEval Task 8: A Comparison of Traditional and Neural Models for Detecting Machine Authored Text | semeval-1.148 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.149.bib | https://aclanthology.org/2024.semeval-1.149/ | @inproceedings{nita-pais-2024-racai,
title = "{RACAI} at {S}em{E}val-2024 Task 10: Combining algorithms for code-mixed Emotion Recognition in Conversation",
author = "Ni{\textcommabelow{t}}{\u{a}}, Sara and
P{\u{a}}i{\textcommabelow{s}}, Vasile",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.149",
doi = "10.18653/v1/2024.semeval-1.149",
pages = "1032--1037",
abstract = "Code-mixed emotion recognition constitutes a challenge for NLP research due to the text{'}s deviation from the traditional grammatical structure of the original languages. This paper describes the system submitted by the RACAI Team for the SemEval 2024 Task 10 - EDiReF subtasks 1: Emotion Recognition in Conversation (ERC) in Hindi-English code-mixed conversations. We propose a system that combines a transformer-based model with two simple neural networks.",
}
| Code-mixed emotion recognition constitutes a challenge for NLP research due to the text's deviation from the traditional grammatical structure of the original languages. This paper describes the system submitted by the RACAI Team for the SemEval 2024 Task 10 - EDiReF subtask 1: Emotion Recognition in Conversation (ERC) in Hindi-English code-mixed conversations. We propose a system that combines a transformer-based model with two simple neural networks. | [
"Ni{\\textcommabelow{t}}{\\u{a}}, Sara",
"P{\\u{a}}i{\\textcommabelow{s}}, Vasile"
] | RACAI at SemEval-2024 Task 10: Combining algorithms for code-mixed Emotion Recognition in Conversation | semeval-1.149 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.150.bib | https://aclanthology.org/2024.semeval-1.150/ | @inproceedings{rostamkhani-etal-2024-rosha,
title = "{ROSHA} at {S}em{E}val-2024 Task 9: {BRAINTEASER} A Novel Task Defying Common Sense",
author = "Rostamkhani, Mohammadmostafa and
Mousavinia, Shayan and
Eetemadi, Sauleh",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.150",
doi = "10.18653/v1/2024.semeval-1.150",
pages = "1038--1042",
abstract = "In our exploration of SemEval 2024 Task 9, specifically the challenging BRAINTEASER: A Novel Task Defying Common Sense, we employed various strategies for the BRAINTEASER QA task, which encompasses both sentence and word puzzles. In the initial approach, we applied the XLM-RoBERTa model both to the original training dataset and concurrently to the original dataset alongside the BiRdQA dataset and the original dataset alongside RiddleSense for comprehensive model training.Another strategy involved expanding each word within our BiRdQA dataset into a full sentence. This unique perspective aimed to enhance the semantic impact of individual words in our training regimen for word puzzle (WP) riddles. Utilizing ChatGPT-3.5, we extended each word into an extensive sentence, applying this process to all options within each riddle.Furthermore, we explored the implementation of RECONCILE (Round-table conference) using three prominent large language models{---}ChatGPT, Gemini, and the Mixtral-8x7B Large Language Model (LLM). As a final approach, we leveraged GPT-4 results. Remarkably, our most successful experiment yielded noteworthy results, achieving a score of 0.900 for sentence puzzles (S{\_}ori) and 0.906 for word puzzles (W{\_}ori).",
}
| In our exploration of SemEval 2024 Task 9, specifically the challenging BRAINTEASER: A Novel Task Defying Common Sense, we employed various strategies for the BRAINTEASER QA task, which encompasses both sentence and word puzzles. In the initial approach, we applied the XLM-RoBERTa model both to the original training dataset and concurrently to the original dataset alongside the BiRdQA dataset and the original dataset alongside RiddleSense for comprehensive model training. Another strategy involved expanding each word within our BiRdQA dataset into a full sentence. This unique perspective aimed to enhance the semantic impact of individual words in our training regimen for word puzzle (WP) riddles. Utilizing ChatGPT-3.5, we extended each word into an extensive sentence, applying this process to all options within each riddle. Furthermore, we explored the implementation of RECONCILE (Round-table conference) using three prominent large language models{---}ChatGPT, Gemini, and the Mixtral-8x7B Large Language Model (LLM). As a final approach, we leveraged GPT-4 results. Remarkably, our most successful experiment yielded noteworthy results, achieving a score of 0.900 for sentence puzzles (S{\_}ori) and 0.906 for word puzzles (W{\_}ori). | [
"Rostamkhani, Mohammadmostafa",
"Mousavinia, Shayan",
"Eetemadi, Sauleh"
] | ROSHA at SemEval-2024 Task 9: BRAINTEASER A Novel Task Defying Common Sense | semeval-1.150 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.151.bib | https://aclanthology.org/2024.semeval-1.151/ | @inproceedings{ebrahimi-etal-2024-sharif-str,
title = "Sharif-{STR} at {S}em{E}val-2024 Task 1: Transformer as a Regression Model for Fine-Grained Scoring of Textual Semantic Relations",
author = "Ebrahimi, Seyedeh Fatemeh and
Akhavan Azari, Karim and
Iravani, Amirmasoud and
Alizadeh, Hadi and
Taghavi, Zeinab and
Sameti, Hossein",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.151",
doi = "10.18653/v1/2024.semeval-1.151",
pages = "1043--1052",
abstract = "This paper explores semantic textual relatedness (STR) using fine-tuning techniques on the RoBERTa transformer model, focusing on sentence-level STR within Track A (Supervised). The study evaluates the effectiveness of this approach across different languages, with promising results in English and Spanish but encountering challenges in Arabic.",
}
| This paper explores semantic textual relatedness (STR) using fine-tuning techniques on the RoBERTa transformer model, focusing on sentence-level STR within Track A (Supervised). The study evaluates the effectiveness of this approach across different languages, with promising results in English and Spanish but encountering challenges in Arabic. | [
"Ebrahimi, Seyedeh Fatemeh",
"Akhavan Azari, Karim",
"Iravani, Amirmasoud",
"Alizadeh, Hadi",
"Taghavi, Zeinab",
"Sameti, Hossein"
] | Sharif-STR at SemEval-2024 Task 1: Transformer as a Regression Model for Fine-Grained Scoring of Textual Semantic Relations | semeval-1.151 | Poster | 2407.12426 | [
"https://github.com/sharif-slpl/sharif-str"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.152.bib | https://aclanthology.org/2024.semeval-1.152/ | @inproceedings{maslaris-arampatzis-2024-duth,
title = "{DUT}h at {S}em{E}val 2024 Task 5: A multi-task learning approach for the Legal Argument Reasoning Task in Civil Procedure",
author = "Maslaris, Ioannis and
Arampatzis, Avi",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.152",
doi = "10.18653/v1/2024.semeval-1.152",
pages = "1053--1057",
abstract = "Text-generative models have proven to be good reasoners. Although reasoning abilities are mostly observed in larger language models, a number of strategies try to transfer this skill to smaller language models. This paper presents our approach to SemEval 2024 Task-5: The Legal Argument Reasoning Task in Civil Procedure. This shared task aims to develop a system that efficiently handles a multiple-choice question-answering task in the context of the US civil procedure domain. The dataset provides a human-generated rationale for each answer. Given the complexity of legal issues, this task certainly challenges the reasoning abilities of LLMs and AI systems in general. Our work explores fine-tuning an LLM as a correct/incorrect answer classifier. In this context, we are making use of multi-task learning toincorporate the rationales into the fine-tuning process.",
}
| Text-generative models have proven to be good reasoners. Although reasoning abilities are mostly observed in larger language models, a number of strategies try to transfer this skill to smaller language models. This paper presents our approach to SemEval 2024 Task-5: The Legal Argument Reasoning Task in Civil Procedure. This shared task aims to develop a system that efficiently handles a multiple-choice question-answering task in the context of the US civil procedure domain. The dataset provides a human-generated rationale for each answer. Given the complexity of legal issues, this task certainly challenges the reasoning abilities of LLMs and AI systems in general. Our work explores fine-tuning an LLM as a correct/incorrect answer classifier. In this context, we are making use of multi-task learning to incorporate the rationales into the fine-tuning process. | [
"Maslaris, Ioannis",
"Arampatzis, Avi"
] | DUTh at SemEval 2024 Task 5: A multi-task learning approach for the Legal Argument Reasoning Task in Civil Procedure | semeval-1.152 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.153.bib | https://aclanthology.org/2024.semeval-1.153/ | @inproceedings{kalantari-etal-2024-mamet,
title = "{MAMET} at {S}em{E}val-2024 Task 7: Supervised Enhanced Reasoning Agent Model",
author = "Kalantari, Mahmood and
Feghhi, Mehdi and
Khany Alamooti, Taha",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.153",
doi = "10.18653/v1/2024.semeval-1.153",
pages = "1058--1063",
abstract = "In the intersection of language understanding and numerical reasoning, a formidable challenge arises in natural language processing (NLP). Our study delves into the realm of NumEval, focusing on numeral-aware language understanding and generation using the QP, QQA and QNLI datasets. We harness the potential of the Orca2 model, Fine-tuning it in both normal and Chain-of-Thought modes with prompt tuning to enhance accuracy. Despite initial conjectures, our findings reveal intriguing disparities in model performance. While standard training methodologies yield commendable accuracy rates. The core contribution of this work lies in its elucidation of the intricate interplay between dataset sequencing and model performance. We expected to achieve a general model with the Fine Tuning model on the QP and QNLI datasets respectively, which has good accuracy in all three datasets. However, this goal was not achieved, and in order to achieve this goal, we introduce our structure.",
}
| In the intersection of language understanding and numerical reasoning, a formidable challenge arises in natural language processing (NLP). Our study delves into the realm of NumEval, focusing on numeral-aware language understanding and generation using the QP, QQA and QNLI datasets. We harness the potential of the Orca2 model, fine-tuning it in both normal and Chain-of-Thought modes with prompt tuning to enhance accuracy. Despite initial conjectures, our findings reveal intriguing disparities in model performance, even though standard training methodologies yield commendable accuracy rates. The core contribution of this work lies in its elucidation of the intricate interplay between dataset sequencing and model performance. We expected that fine-tuning the model on the QP and QNLI datasets, respectively, would yield a general model with good accuracy on all three datasets. However, this goal was not achieved, and in order to achieve it, we introduce our structure. | [
"Kalantari, Mahmood",
"Feghhi, Mehdi",
"Khany Alamooti, Taha"
] | MAMET at SemEval-2024 Task 7: Supervised Enhanced Reasoning Agent Model | semeval-1.153 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.154.bib | https://aclanthology.org/2024.semeval-1.154/ | @inproceedings{iordanidou-etal-2024-duth,
title = "{DUT}h at {S}em{E}val-2024 Task 6: Comparing Pre-trained Models on Sentence Similarity Evaluation for Detecting of Hallucinations and Related Observable Overgeneration Mistakes",
author = "Iordanidou, Ioanna and
Maslaris, Ioannis and
Arampatzis, Avi",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.154",
doi = "10.18653/v1/2024.semeval-1.154",
pages = "1064--1070",
abstract = "In this paper, we present our approach toSemEval-2024 Task 6: SHROOM, a Sharedtask on Hallucinations and Related ObservableOvergeneration Mistakes, which aims to determine weather AI generated text is semanticallycorrect or incorrect. This work is a comparative study of Large Language Models (LLMs)in the context of the task, shedding light ontheir effectiveness and nuances. We present asystem that leverages pre-trained LLMs, suchas LaBSE, T5, and DistilUSE, for binary classification of given sentences into {`}Hallucination{'}or {`}Not Hallucination{'} classes by evaluatingthe model{'}s output against the reference correct text. Moreover, beyond utilizing labeleddatasets, our methodology integrates syntheticlabel creation in unlabeled datasets, followedby the prediction of test labels.",
}
| In this paper, we present our approach to SemEval-2024 Task 6: SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes, which aims to determine whether AI-generated text is semantically correct or incorrect. This work is a comparative study of Large Language Models (LLMs) in the context of the task, shedding light on their effectiveness and nuances. We present a system that leverages pre-trained LLMs, such as LaBSE, T5, and DistilUSE, for binary classification of given sentences into {`}Hallucination{'} or {`}Not Hallucination{'} classes by evaluating the model{'}s output against the reference correct text. Moreover, beyond utilizing labeled datasets, our methodology integrates synthetic label creation in unlabeled datasets, followed by the prediction of test labels. | [
"Iordanidou, Ioanna",
"Maslaris, Ioannis",
"Arampatzis, Avi"
] | DUTh at SemEval-2024 Task 6: Comparing Pre-trained Models on Sentence Similarity Evaluation for Detecting of Hallucinations and Related Observable Overgeneration Mistakes | semeval-1.154 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.155.bib | https://aclanthology.org/2024.semeval-1.155/ | @inproceedings{ortiz-barajas-etal-2024-mbzuai,
title = "{MBZUAI}-{UNAM} at {S}em{E}val-2024 Task 1: Sentence-{CROBI}, a Simple Cross-Bi-Encoder-Based Neural Network Architecture for Semantic Textual Relatedness",
author = "Ortiz Barajas, Jesus German and
Bel-enguix, Gemma and
Gom{\'e}z-adorno, Helena",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.155",
doi = "10.18653/v1/2024.semeval-1.155",
pages = "1071--1079",
abstract = "The Semantic Textual Relatedness (STR) shared task aims at detecting the degree of semantic relatedness between pairs of sentences on low-resource languages from Afroasiatic, Indoeuropean, Austronesian, Dravidian, and Nigercongo families. We use the Sentence-CROBI architecture to tackle this problem. The model is adapted from its original purpose of paraphrase detection to explore its capacities in a related task with limited resources and in multilingual and monolingual settings. Our approach combines the vector representation of cross-encoders and bi-encoders and possesses high adaptable capacity by combining several pre-trained models. Our system obtained good results on the low-resource languages of the dataset using a multilingual fine-tuning approach.",
}
| The Semantic Textual Relatedness (STR) shared task aims at detecting the degree of semantic relatedness between pairs of sentences in low-resource languages from the Afroasiatic, Indo-European, Austronesian, Dravidian, and Niger-Congo families. We use the Sentence-CROBI architecture to tackle this problem. The model is adapted from its original purpose of paraphrase detection to explore its capacities in a related task with limited resources and in multilingual and monolingual settings. Our approach combines the vector representations of cross-encoders and bi-encoders and is highly adaptable, as it can combine several pre-trained models. Our system obtained good results on the low-resource languages of the dataset using a multilingual fine-tuning approach. | [
"Ortiz Barajas, Jesus German",
"Bel-enguix, Gemma",
"Gom{\\'e}z-adorno, Helena"
] | MBZUAI-UNAM at SemEval-2024 Task 1: Sentence-CROBI, a Simple Cross-Bi-Encoder-Based Neural Network Architecture for Semantic Textual Relatedness | semeval-1.155 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.156.bib | https://aclanthology.org/2024.semeval-1.156/ | @inproceedings{kyriakou-etal-2024-duth,
title = "{DUT}h at {S}em{E}val 2024 Task 8: Comparing classic Machine Learning Algorithms and {LLM} based methods for Multigenerator, Multidomain and Multilingual Machine-Generated Text Detection",
author = "Kyriakou, Theodora and
Maslaris, Ioannis and
Arampatzis, Avi",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.156",
doi = "10.18653/v1/2024.semeval-1.156",
pages = "1080--1086",
abstract = "Text-generative models evolve rapidly nowadays. Although, they are very useful tools for a lot of people, they have also raised concerns for different reasons. This paper presents our work for SemEval2024 Task-8 on 2 out of the 3 subtasks. This shared task aims at finding automatic models for making AI vs. human written text classification easier. Our team, after trying different preprocessing, several Machine Learning algorithms, and some LLMs, ended up with mBERT, XLM-RoBERTa, and BERT for the tasks we submitted. We present both positive and negative methods, so that future researchers are informed about what works and what doesn{'}t.",
}
| Text-generative models evolve rapidly nowadays. Although they are very useful tools for many people, they have also raised concerns for various reasons. This paper presents our work for SemEval-2024 Task 8 on 2 out of the 3 subtasks. This shared task aims at finding automatic models for making AI vs. human-written text classification easier. Our team, after trying different preprocessing steps, several Machine Learning algorithms, and some LLMs, ended up with mBERT, XLM-RoBERTa, and BERT for the tasks we submitted. We present both positive and negative methods, so that future researchers are informed about what works and what doesn{'}t. | [
"Kyriakou, Theodora",
"Maslaris, Ioannis",
"Arampatzis, Avi"
] | DUTh at SemEval 2024 Task 8: Comparing classic Machine Learning Algorithms and LLM based methods for Multigenerator, Multidomain and Multilingual Machine-Generated Text Detection | semeval-1.156 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.157.bib | https://aclanthology.org/2024.semeval-1.157/ | @inproceedings{alinejad-moosavi-monazzah-2024-sina,
title = "Sina Alinejad at {S}em{E}val-2024 Task 7: Numeral Prediction using gpt3.5",
author = "Alinejad, Sina and
Moosavi Monazzah, Erfan",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.157",
doi = "10.18653/v1/2024.semeval-1.157",
pages = "1087--1091",
abstract = "",
}
| [
"Alinejad, Sina",
"Moosavi Monazzah, Erfan"
] | Sina Alinejad at SemEval-2024 Task 7: Numeral Prediction using gpt3.5 | semeval-1.157 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|||
https://aclanthology.org/2024.semeval-1.158.bib | https://aclanthology.org/2024.semeval-1.158/ | @inproceedings{osoolian-etal-2024-iustnlplab,
title = "{IUSTNLPLAB} at {S}em{E}val-2024 Task 4: Multilingual Detection of Persuasion Techniques in Memes",
author = "Osoolian, Mohammad and
Moosavi Monazzah, Erfan and
Eetemadi, Sauleh",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.158",
doi = "10.18653/v1/2024.semeval-1.158",
pages = "1092--1096",
abstract = "This paper outlines our approach to SemEval-2024 Task 4: Multilingual Detection of Persuasion Techniques in Memes, specifically addressing subtask 1. The study focuses on model fine-tuning using language models, including BERT, GPT-2, and RoBERTa, with the experiment results demonstrating optimal performance with GPT-2. Our system submission achieved a competitive ranking of 17th out of 33 teams in subtask 1, showcasing the effectiveness of the employed methodology in the context of persuasive technique identification within meme texts.",
}
| This paper outlines our approach to SemEval-2024 Task 4: Multilingual Detection of Persuasion Techniques in Memes, specifically addressing subtask 1. The study focuses on model fine-tuning using language models, including BERT, GPT-2, and RoBERTa, with the experiment results demonstrating optimal performance with GPT-2. Our system submission achieved a competitive ranking of 17th out of 33 teams in subtask 1, showcasing the effectiveness of the employed methodology in the context of persuasive technique identification within meme texts. | [
"Osoolian, Mohammad",
"Moosavi Monazzah, Erfan",
"Eetemadi, Sauleh"
] | IUSTNLPLAB at SemEval-2024 Task 4: Multilingual Detection of Persuasion Techniques in Memes | semeval-1.158 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.159.bib | https://aclanthology.org/2024.semeval-1.159/ | @inproceedings{levchenko-etal-2024-pweitinlp,
title = "{PWEITINLP} at {S}em{E}val-2024 Task 3: Two Step Emotion Cause Analysis",
author = "Levchenko, Sofiia and
Wolert, Rafa{\l} and
Andruszkiewicz, Piotr",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.159",
doi = "10.18653/v1/2024.semeval-1.159",
pages = "1097--1105",
abstract = "ECPE (emotion cause pair extraction) task was introduced to solve the shortcomings of ECE (emotion cause extraction). Models with sequential data processing abilities or complex architecture can be utilized to solve this task. Our contribution to solving Subtask 1: Textual Emotion-Cause Pair Extraction in Conversations defined in the SemEval-2024 Task 3: The Competition of Multimodal Emotion Cause Analysis in Conversations is to create a two-step solution to the ECPE task utilizing GPT-3 for emotion classification and SpanBERT for extracting the cause utterances.",
}
| The ECPE (emotion-cause pair extraction) task was introduced to address the shortcomings of ECE (emotion cause extraction). Models with sequential data processing abilities or complex architectures can be utilized to solve this task. Our contribution to solving Subtask 1: Textual Emotion-Cause Pair Extraction in Conversations, defined in SemEval-2024 Task 3: The Competition of Multimodal Emotion Cause Analysis in Conversations, is a two-step solution to the ECPE task utilizing GPT-3 for emotion classification and SpanBERT for extracting the cause utterances. | [
"Levchenko, Sofiia",
"Wolert, Rafa{\\l}",
"Andruszkiewicz, Piotr"
] | PWEITINLP at SemEval-2024 Task 3: Two Step Emotion Cause Analysis | semeval-1.159 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.160.bib | https://aclanthology.org/2024.semeval-1.160/ | @inproceedings{abbaspour-etal-2024-iust,
title = "{IUST}-{NLPLAB} at {S}em{E}val-2024 Task 9: {BRAINTEASER} By {MPN}et (Sentence Puzzle)",
author = "Abbaspour, Mohammad Hossein and
Moosavi Monazzah, Erfan and
Eetemadi, Sauleh",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.160",
doi = "10.18653/v1/2024.semeval-1.160",
pages = "1106--1109",
abstract = "This study addresses a task encompassing two distinct subtasks: Sentence-puzzle and Word-puzzle. Our primary focus lies within the Sentence-puzzle subtask, which involves discerning the correct answer from a set of three options for a given riddle constructed from sentence fragments. We propose four distinct methodologies tailored to address this subtask effectively. Firstly, we introduce a zero-shot approach leveraging the capabilities of the GPT-3.5 model. Additionally, we present three fine-tuning methodologies utilizing MPNet as the underlying architecture, each employing a different loss function. We conduct comprehensive evaluations of these methodologies on the designated task dataset and meticulously document the obtained results. Furthermore, we conduct an in-depth analysis to ascertain the respective strengths and weaknesses of each method. Through this analysis, we aim to provide valuable insights into the challenges inherent to this task domain.",
}
| This study addresses a task encompassing two distinct subtasks: Sentence-puzzle and Word-puzzle. Our primary focus lies within the Sentence-puzzle subtask, which involves discerning the correct answer from a set of three options for a given riddle constructed from sentence fragments. We propose four distinct methodologies tailored to address this subtask effectively. Firstly, we introduce a zero-shot approach leveraging the capabilities of the GPT-3.5 model. Additionally, we present three fine-tuning methodologies utilizing MPNet as the underlying architecture, each employing a different loss function. We conduct comprehensive evaluations of these methodologies on the designated task dataset and meticulously document the obtained results. Furthermore, we conduct an in-depth analysis to ascertain the respective strengths and weaknesses of each method. Through this analysis, we aim to provide valuable insights into the challenges inherent to this task domain. | [
"Abbaspour, Mohammad Hossein",
"Moosavi Monazzah, Erfan",
"Eetemadi, Sauleh"
] | IUST-NLPLAB at SemEval-2024 Task 9: BRAINTEASER By MPNet (Sentence Puzzle) | semeval-1.160 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.161.bib | https://aclanthology.org/2024.semeval-1.161/ | @inproceedings{valdez-etal-2024-iimasnlp,
title = "iimas{NLP} at {S}em{E}val-2024 Task 8: Unveiling structure-aware language models for automatic generated text identification",
author = "Valdez, Andric and
M{\'a}rquez, Fernando and
Pantale{\'o}n, Jorge and
G{\'o}mez, Helena and
Bel-enguix, Gemma",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.161",
doi = "10.18653/v1/2024.semeval-1.161",
pages = "1110--1114",
abstract = "Large language models (LLMs) are artificial intelligence systems that can generate text, translate languages, and answer questions in a human-like way. While these advances are impressive, there is concern that LLMs could also be used to generate fake or misleading content. In this work, as a part of our participation in SemEval-2024 Task-8, we investigate the ability of LLMs to identify whether a given text was written by a human or by a specific AI. We believe that human and machine writing style patterns are different from each other, so integrating features at different language levels can help in this classification task. For this reason, we evaluate several LLMs that aim to extract valuable multilevel information (such as lexical, semantic, and syntactic) from the text in their training processing. Our best scores on Sub- taskA (monolingual) and SubtaskB were 71.5{\%} and 38.2{\%} in accuracy, respectively (both using the ConvBERT LLM); for both subtasks, the baseline (RoBERTa) achieved an accuracy of 74{\%}.",
}
| Large language models (LLMs) are artificial intelligence systems that can generate text, translate languages, and answer questions in a human-like way. While these advances are impressive, there is concern that LLMs could also be used to generate fake or misleading content. In this work, as a part of our participation in SemEval-2024 Task 8, we investigate the ability of LLMs to identify whether a given text was written by a human or by a specific AI. We believe that human and machine writing style patterns are different from each other, so integrating features at different language levels can help in this classification task. For this reason, we evaluate several LLMs that aim to extract valuable multilevel information (such as lexical, semantic, and syntactic) from the text during their training process. Our best scores on Subtask A (monolingual) and Subtask B were 71.5{\%} and 38.2{\%} in accuracy, respectively (both using the ConvBERT LLM); for both subtasks, the baseline (RoBERTa) achieved an accuracy of 74{\%}. | [
"Valdez, Andric",
"M{\\'a}rquez, Fern",
"o",
"Pantale{\\'o}n, Jorge",
"G{\\'o}mez, Helena",
"Bel-enguix, Gemma"
] | iimasNLP at SemEval-2024 Task 8: Unveiling structure-aware language models for automatic generated text identification | semeval-1.161 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.162.bib | https://aclanthology.org/2024.semeval-1.162/ | @inproceedings{moctezuma-etal-2024-ingeotec,
title = "{INGEOTEC} at {S}em{E}val-2024 Task 10: Bag of Words Classifiers",
author = "Moctezuma, Daniela and
Tellez, Eric and
Ortiz Bejar, Jose and
Paredes, Mireya",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.162",
doi = "10.18653/v1/2024.semeval-1.162",
pages = "1115--1120",
abstract = "The Emotion Recognition in Conversation subtask aims to predict the emotions of the utterance of a conversation. In its most basic form, one can treat each utterance separately without considering that it is part of a conversation. Using this simplification, one can use any text classification algorithm to tackle this problem. This contribution follows this approach by solving the problem with different text classifiers based on Bag of Words. Nonetheless, the best approach takes advantage of the dynamics of the conversation; however, this algorithm is not statistically different than a Bag of Words with a Linear Support Vector Machine.",
}
| The Emotion Recognition in Conversation subtask aims to predict the emotion of each utterance in a conversation. In its most basic form, one can treat each utterance separately without considering that it is part of a conversation. Using this simplification, one can use any text classification algorithm to tackle this problem. This contribution follows this approach by solving the problem with different text classifiers based on Bag of Words. Nonetheless, the best approach takes advantage of the dynamics of the conversation; however, this algorithm is not statistically different from a Bag of Words with a Linear Support Vector Machine. | [
"Moctezuma, Daniela",
"Tellez, Eric",
"Ortiz Bejar, Jose",
"Paredes, Mireya"
] | INGEOTEC at SemEval-2024 Task 10: Bag of Words Classifiers | semeval-1.162 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.163.bib | https://aclanthology.org/2024.semeval-1.163/ | @inproceedings{reyes-etal-2024-iimas,
title = "{IIMAS} at {S}em{E}val-2024 Task 9: A Comparative Approach for Brainteaser Solutions",
author = "Reyes, Cecilia and
Ramos-flores, Orlando and
Mart{\'\i}nez-maqueda, Diego",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.163",
doi = "10.18653/v1/2024.semeval-1.163",
pages = "1121--1126",
abstract = "In this document, we detail our participation experience in SemEval-2024 Task 9: BRAINTEASER-A Novel Task Defying Common Sense. We tackled this challenge by applying fine-tuning techniques with pre-trained models (BERT and RoBERTa Winogrande), while also augmenting the dataset with the LLMs ChatGPT and Gemini. We achieved an accuracy of 0.93 with our best model, along with an F1 score of 0.87 for the Entailment class, 0.94 for the Contradiction class, and 0.96 for the Neutral class",
}
| In this document, we detail our participation experience in SemEval-2024 Task 9: BRAINTEASER-A Novel Task Defying Common Sense. We tackled this challenge by applying fine-tuning techniques with pre-trained models (BERT and RoBERTa Winogrande), while also augmenting the dataset with the LLMs ChatGPT and Gemini. We achieved an accuracy of 0.93 with our best model, along with an F1 score of 0.87 for the Entailment class, 0.94 for the Contradiction class, and 0.96 for the Neutral class. | [
"Reyes, Cecilia",
"Ramos-flores, Orl",
"o",
"Mart{\\'\\i}nez-maqueda, Diego"
] | IIMAS at SemEval-2024 Task 9: A Comparative Approach for Brainteaser Solutions | semeval-1.163 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.164.bib | https://aclanthology.org/2024.semeval-1.164/ | @inproceedings{kazakov-etal-2024-petkaz,
title = "{P}et{K}az at {S}em{E}val-2024 Task 3: Advancing Emotion Classification with an {LLM} for Emotion-Cause Pair Extraction in Conversations",
author = "Kazakov, Roman and
Petukhova, Kseniia and
Kochmar, Ekaterina",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.164",
doi = "10.18653/v1/2024.semeval-1.164",
pages = "1127--1134",
abstract = "In this paper, we present our submission to the SemEval-2023 Task 3 {``}The Competition of Multimodal Emotion Cause Analysis in Conversations{''}, focusing on extracting emotion-cause pairs from dialogs. Specifically, our approach relies on combining fine-tuned GPT-3.5 for emotion classification and using a BiLSTM-based neural network to detect causes. We score 2nd in the ranking for Subtask 1, demonstrating the effectiveness of our approach through one of the highest weighted-average proportional F1 scores recorded at 0.264.",
}
| In this paper, we present our submission to the SemEval-2024 Task 3 {``}The Competition of Multimodal Emotion Cause Analysis in Conversations{''}, focusing on extracting emotion-cause pairs from dialogs. Specifically, our approach relies on combining fine-tuned GPT-3.5 for emotion classification and using a BiLSTM-based neural network to detect causes. We score 2nd in the ranking for Subtask 1, demonstrating the effectiveness of our approach through one of the highest weighted-average proportional F1 scores recorded at 0.264. | [
"Kazakov, Roman",
"Petukhova, Kseniia",
"Kochmar, Ekaterina"
] | PetKaz at SemEval-2024 Task 3: Advancing Emotion Classification with an LLM for Emotion-Cause Pair Extraction in Conversations | semeval-1.164 | Poster | 2404.05502 | [
"https://github.com/sachertort/petkaz-semeval-ecac"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.165.bib | https://aclanthology.org/2024.semeval-1.165/ | @inproceedings{kumar-etal-2024-scalar,
title = "{SC}a{LAR} at {S}em{E}val-2024 Task 8: Unmasking the machine : Exploring the power of {R}o{BERT}a Ensemble for Detecting Machine Generated Text",
author = "Kumar, Anand and
B, Abhin and
Murali, Sidhaarth",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.165",
doi = "10.18653/v1/2024.semeval-1.165",
pages = "1135--1139",
abstract = "SemEval SubtaskB, a shared task that is concerned with the detection of text generated by one out of the 5 different models - davinci, bloomz, chatGPT, cohere and dolly. This is an important task considering the boom of generative models in the current day scenario and their ability to draft mails, formal documents, write and qualify exams and many more which keep evolving every passing day. The purpose of classifying text as generated by which pre-trained model helps in analyzing how each of the training data has affected the ability of the model in performing a certain given task. In the proposed approach, data augmentation was done in order to handle lengthier sentences and also labelling them with the same parent label. Upon the augmented data three RoBERTa models were trained on different segments of data which were then ensembled using a voting classifier based on their R2 score to achieve a higher accuracy than the individual models itself. The proposed model achieved an overall validation accuracy of 97.05{\%} and testing accuracy of 76.25{\%}. and our standing was 18th position on the leaderboard.",
}
| SemEval Subtask B is a shared task concerned with detecting text generated by one of 5 different models - davinci, bloomz, chatGPT, cohere and dolly. This is an important task considering the boom of generative models, whose ability to draft mails and formal documents, write and pass exams, and much more keeps evolving every passing day. Classifying which pre-trained model generated a given text helps in analyzing how the training data of each model has affected its ability to perform a certain task. In the proposed approach, data augmentation was done in order to handle lengthier sentences, labelling the resulting segments with the same parent label. On the augmented data, three RoBERTa models were trained on different segments, which were then ensembled using a voting classifier based on their R2 scores to achieve a higher accuracy than the individual models themselves. The proposed model achieved an overall validation accuracy of 97.05{\%} and testing accuracy of 76.25{\%}, and we placed 18th on the leaderboard. | [
"Kumar, An",
"",
"B, Abhin",
"Murali, Sidhaarth"
] | SCaLAR at SemEval-2024 Task 8: Unmasking the machine : Exploring the power of RoBERTa Ensemble for Detecting Machine Generated Text | semeval-1.165 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.166.bib | https://aclanthology.org/2024.semeval-1.166/ | @inproceedings{petukhova-etal-2024-petkaz,
title = "{P}et{K}az at {S}em{E}val-2024 Task 8: Can Linguistics Capture the Specifics of {LLM}-generated Text?",
author = "Petukhova, Kseniia and
Kazakov, Roman and
Kochmar, Ekaterina",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.166",
doi = "10.18653/v1/2024.semeval-1.166",
pages = "1140--1147",
abstract = "In this paper, we present our submission to the SemEval-2024 Task 8 {``}Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection{''}, focusing on the detection of machine-generated texts (MGTs) in English. Specifically, our approach relies on combining embeddings from the RoBERTa-base with diversity features and uses a resampled training set. We score 16th from 139 in the ranking for Subtask A, and our results show that our approach is generalizable across unseen models and domains, achieving an accuracy of 0.91.",
}
| In this paper, we present our submission to the SemEval-2024 Task 8 {``}Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection{''}, focusing on the detection of machine-generated texts (MGTs) in English. Specifically, our approach relies on combining embeddings from RoBERTa-base with diversity features and uses a resampled training set. We place 16th out of 139 in the ranking for Subtask A, and our results show that our approach is generalizable across unseen models and domains, achieving an accuracy of 0.91. | [
"Petukhova, Kseniia",
"Kazakov, Roman",
"Kochmar, Ekaterina"
] | PetKaz at SemEval-2024 Task 8: Can Linguistics Capture the Specifics of LLM-generated Text? | semeval-1.166 | Poster | 2404.05483 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.167.bib | https://aclanthology.org/2024.semeval-1.167/ | @inproceedings{fallah-etal-2024-slpl,
title = "{SLPL} {SHROOM} at {S}em{E}val2024 Task 06 : A comprehensive study on models ability to detect hallucination",
author = "Fallah, Pouya and
Gooran, Soroush and
Jafarinasab, Mohammad and
Sadeghi, Pouya and
Farnia, Reza and
Tarabkhah, Amirreza and
Taghavi, Zeinab Sadat and
Sameti, Hossein",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.167",
doi = "10.18653/v1/2024.semeval-1.167",
pages = "1148--1154",
abstract = "Language models, particularly generative models, are susceptible to hallucinations, generating outputs that contradict factual knowledgeor the source text. This study explores methodsfor detecting hallucinations in three SemEval2024 Task 6 tasks: Machine Translation, Definition Modeling, and Paraphrase Generation.We evaluate two methods: semantic similaritybetween the generated text and factual references, and an ensemble of language modelsthat judge each other{'}s outputs. Our resultsshow that semantic similarity achieves moderate accuracy and correlation scores in trial data,while the ensemble method offers insights intothe complexities of hallucination detection butfalls short of expectations. This work highlights the challenges of hallucination detectionand underscores the need for further researchin this critical area.",
}
| Language models, particularly generative models, are susceptible to hallucinations, generating outputs that contradict factual knowledge or the source text. This study explores methods for detecting hallucinations in three SemEval-2024 Task 6 tasks: Machine Translation, Definition Modeling, and Paraphrase Generation. We evaluate two methods: semantic similarity between the generated text and factual references, and an ensemble of language models that judge each other{'}s outputs. Our results show that semantic similarity achieves moderate accuracy and correlation scores in trial data, while the ensemble method offers insights into the complexities of hallucination detection but falls short of expectations. This work highlights the challenges of hallucination detection and underscores the need for further research in this critical area. | [
"Fallah, Pouya",
"Gooran, Soroush",
"Jafarinasab, Mohammad",
"Sadeghi, Pouya",
"Farnia, Reza",
"Tarabkhah, Amirreza",
"Taghavi, Zeinab Sadat",
"Sameti, Hossein"
] | SLPL SHROOM at SemEval2024 Task 06 : A comprehensive study on models ability to detect hallucination | semeval-1.167 | Poster | [
"https://github.com/sharif-slpl/se-2024-task-06-shroom"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.168.bib | https://aclanthology.org/2024.semeval-1.168/ | @inproceedings{moctezuma-etal-2024-ingeotec-semeval,
title = "{INGEOTEC} at {S}em{E}val-2024 Task 1: Bag of Words and Transformers",
author = "Moctezuma, Daniela and
Tellez, Eric and
Graff, Mario",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.168",
doi = "10.18653/v1/2024.semeval-1.168",
pages = "1155--1159",
abstract = "Understanding the meaning of a written message is crucial in solving problems related to Natural Language Processing; the relatedness of two or more messages is a semantic problem tackled with supervised and unsupervised learning. This paper outlines our submissions to the Semantic Textual Relatedness (STR) challenge at SemEval 2024, which is devoted to evaluating the degree of semantic similarity and relatedness between two sentences across multiple languages. We use two main strategies in our submissions. The first approach is based on the Bag-of-Word scheme, while the second one uses pre-trained Transformers for text representation. We found some attractive results, especially in cases where different models adjust better to certain languages over others.",
}
| Understanding the meaning of a written message is crucial in solving problems related to Natural Language Processing; the relatedness of two or more messages is a semantic problem tackled with supervised and unsupervised learning. This paper outlines our submissions to the Semantic Textual Relatedness (STR) challenge at SemEval 2024, which is devoted to evaluating the degree of semantic similarity and relatedness between two sentences across multiple languages. We use two main strategies in our submissions. The first approach is based on the Bag-of-Words scheme, while the second one uses pre-trained Transformers for text representation. We found some promising results, especially in cases where certain models fit some languages better than others. | [
"Moctezuma, Daniela",
"Tellez, Eric",
"Graff, Mario"
] | INGEOTEC at SemEval-2024 Task 1: Bag of Words and Transformers | semeval-1.168 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.169.bib | https://aclanthology.org/2024.semeval-1.169/ | @inproceedings{brodoceanu-2024-octavianb,
title = "{O}ctavian{B} at {S}em{E}val-2024 Task 6: An exploration of humanlike qualities of hallucinated {LLM} texts",
author = "Brodoceanu, Octavian",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.169",
doi = "10.18653/v1/2024.semeval-1.169",
pages = "1160--1165",
abstract = "The tested method for detection involves utilizing models, trained for differentiating machine-generated text, in order to distinguish between regular and hallucinated sequences. The hypothesis under investigation is that the patterns learned in pretraining will be transferable to the task at hand. The rationale is as follows: the training data of the model is human-written text, therefore deviations from the training set could be detected in this manner.A second method has been added post competition as a further exploration of the dataset involving using the loss of the generation as determined by a pretrained LLM.",
}
| The tested detection method involves utilizing models trained for differentiating machine-generated text in order to distinguish between regular and hallucinated sequences. The hypothesis under investigation is that the patterns learned in pretraining will be transferable to the task at hand. The rationale is as follows: the training data of the model is human-written text, therefore deviations from the training set could be detected in this manner. A second method, added post-competition as a further exploration of the dataset, uses the loss of the generation as determined by a pretrained LLM. | [
"Brodoceanu, Octavian"
] | OctavianB at SemEval-2024 Task 6: An exploration of humanlike qualities of hallucinated LLM texts | semeval-1.169 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.170.bib | https://aclanthology.org/2024.semeval-1.170/ | @inproceedings{ben-fares-etal-2024-fi,
title = "{FI} Group at {S}em{E}val-2024 Task 8: A Syntactically Motivated Architecture for Multilingual Machine-Generated Text Detection",
author = "Ben-fares, Maha and
Zaratiana, Urchade and
Hernandez, Simon and
Holat, Pierre",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.170",
doi = "10.18653/v1/2024.semeval-1.170",
pages = "1166--1171",
abstract = "In this paper, we present the description of our proposed system for Subtask A - multilingual track at SemEval-2024 Task 8, which aims to classify if text has been generated by an AI or Human. Our approach treats binary text classification as token-level prediction, with the final classification being the average of token-level predictions. Through the use of rich representations of pre-trained transformers, our model is trained to selectively aggregate information from across different layers to score individual tokens, given that each layer may contain distinct information. Notably, our model demonstrates competitive performance on the test dataset, achieving an accuracy score of 95.8{\%}. Furthermore, it secures the 2nd position in the multilingual track of Subtask A, with a mere 0.1{\%} behind the leading system.",
}
| In this paper, we present the description of our proposed system for Subtask A - multilingual track at SemEval-2024 Task 8, which aims to classify whether text has been generated by an AI or a human. Our approach treats binary text classification as token-level prediction, with the final classification being the average of token-level predictions. Through the use of rich representations of pre-trained transformers, our model is trained to selectively aggregate information from across different layers to score individual tokens, given that each layer may contain distinct information. Notably, our model demonstrates competitive performance on the test dataset, achieving an accuracy score of 95.8{\%}. Furthermore, it secures the 2nd position in the multilingual track of Subtask A, a mere 0.1{\%} behind the leading system. | [
"Ben-fares, Maha",
"Zaratiana, Urchade",
"Hern",
"ez, Simon",
"Holat, Pierre"
] | FI Group at SemEval-2024 Task 8: A Syntactically Motivated Architecture for Multilingual Machine-Generated Text Detection | semeval-1.170 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.171.bib | https://aclanthology.org/2024.semeval-1.171/ | @inproceedings{sharma-mansuri-2024-team,
title = "Team Innovative at {S}em{E}val-2024 Task 8: Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection",
author = "Sharma, Surbhi and
Mansuri, Irfan",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.171",
doi = "10.18653/v1/2024.semeval-1.171",
pages = "1172--1176",
abstract = "With the widespread adoption of large language models (LLMs), such as ChatGPT and GPT-4, in various domains, concerns regarding their potential misuse, including spreading misinformation and disrupting education, have escalated. The need to discern between human-generated and machine-generated text has become increasingly crucial. This paper addresses the challenge of automatic text classification with a focus on distinguishing between human-written and machine-generated text. Leveraging the robust capabilities of the RoBERTa model, we propose an approach for text classification, termed as RoBERTa hybrid, which involves fine-tuning the pre-trained Roberta model coupled with additional dense layers and softmax activation for authorship attribution. In this paper, we present an approach that leverages Stylometric features, hybrid features, and the output probabilities of a fine-tuned RoBERTa model. Our method achieves a test accuracy of 73{\%} and a validation accuracy of 89{\%}, demonstrating promising advancements in the field of machine-generated text detection. These results mark significant progress in the domain of machine-generated text detection, as evidenced by our 74th position on the leaderboard for Subtask-A of SemEval-2024 Task 8.",
}
| With the widespread adoption of large language models (LLMs), such as ChatGPT and GPT-4, in various domains, concerns regarding their potential misuse, including spreading misinformation and disrupting education, have escalated. The need to discern between human-generated and machine-generated text has become increasingly crucial. This paper addresses the challenge of automatic text classification with a focus on distinguishing between human-written and machine-generated text. Leveraging the robust capabilities of the RoBERTa model, we propose an approach for text classification, termed RoBERTa hybrid, which involves fine-tuning the pre-trained RoBERTa model coupled with additional dense layers and softmax activation for authorship attribution. In this paper, we present an approach that leverages stylometric features, hybrid features, and the output probabilities of a fine-tuned RoBERTa model. Our method achieves a test accuracy of 73{\%} and a validation accuracy of 89{\%}. These results mark promising progress in the domain of machine-generated text detection, as evidenced by our 74th position on the leaderboard for Subtask A of SemEval-2024 Task 8. | [
"Sharma, Surbhi",
"Mansuri, Irfan"
] | Team Innovative at SemEval-2024 Task 8: Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection | semeval-1.171 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.172.bib | https://aclanthology.org/2024.semeval-1.172/ | @inproceedings{peskine-etal-2024-eurecom,
title = "{EURECOM} at {S}em{E}val-2024 Task 4: Hierarchical Loss and Model Ensembling in Detecting Persuasion Techniques",
author = "Peskine, Youri and
Troncy, Raphael and
Papotti, Paolo",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.172",
doi = "10.18653/v1/2024.semeval-1.172",
pages = "1177--1182",
abstract = "This paper describes the submission of team EURECOM at SemEval-2024 Task 4: Multilingual Detection of Persuasion Techniques in Memes. We only tackled the first sub-task, consisting of detecting 20 named persuasion techniques in the textual content of memes. We trained multiple BERT-based models (BERT, RoBERTa, BERT pre-trained on harmful detection) using different losses (Cross Entropy, Binary Cross Entropy, Focal Loss and a custom-made hierarchical loss). The best results were obtained by leveraging the hierarchical nature of the data, by outputting ancestor classes and with a hierarchical loss. Our final submission consist of an ensembling of our top-3 best models for each persuasion techniques. We obtain hierarchical F1 scores of 0.655 (English), 0.345 (Bulgarian), 0.442 (North Macedonian) and 0.178 (Arabic) on the test set.",
}
| This paper describes the submission of team EURECOM at SemEval-2024 Task 4: Multilingual Detection of Persuasion Techniques in Memes. We only tackled the first sub-task, consisting of detecting 20 named persuasion techniques in the textual content of memes. We trained multiple BERT-based models (BERT, RoBERTa, BERT pre-trained on harmful detection) using different losses (Cross Entropy, Binary Cross Entropy, Focal Loss and a custom-made hierarchical loss). The best results were obtained by leveraging the hierarchical nature of the data, by outputting ancestor classes and with a hierarchical loss. Our final submission consist of an ensembling of our top-3 best models for each persuasion techniques. We obtain hierarchical F1 scores of 0.655 (English), 0.345 (Bulgarian), 0.442 (North Macedonian) and 0.178 (Arabic) on the test set. | [
"Peskine, Youri",
"Troncy, Raphael",
"Papotti, Paolo"
] | EURECOM at SemEval-2024 Task 4: Hierarchical Loss and Model Ensembling in Detecting Persuasion Techniques | semeval-1.172 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.173.bib | https://aclanthology.org/2024.semeval-1.173/ | @inproceedings{arzt-etal-2024-tu,
title = "{TU} {W}ien at {S}em{E}val-2024 Task 6: Unifying Model-Agnostic and Model-Aware Techniques for Hallucination Detection",
author = "Arzt, Varvara and
Azarbeik, Mohammad Mahdi and
Lasy, Ilya and
Kerl, Tilman and
Recski, G{\'a}bor",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.173",
doi = "10.18653/v1/2024.semeval-1.173",
pages = "1183--1196",
abstract = "This paper discusses challenges in Natural Language Generation (NLG), specifically addressing neural networks producing output that is fluent but incorrect, leading to {``}hallucinations{''}. The SHROOM shared task involves Large Language Models in various tasks, and our methodology employs both model-agnostic and model-aware approaches for hallucination detection. The limited availability of labeled training data is addressed through automatic label generation strategies. Model-agnostic methods include word alignment and fine-tuning a BERT-based pretrained model, while model-aware methods leverage separate classifiers trained on LLMs{'} internal data (layer activations and attention values). Ensemble methods combine outputs through various techniques such as regression metamodels, voting, and probability fusion. Our best performing systems achieved an accuracy of 80.6{\%} on the model-aware track and 81.7{\%} on the model-agnostic track, ranking 3rd and 8th among all systems, respectively.",
}
| This paper discusses challenges in Natural Language Generation (NLG), specifically addressing neural networks producing output that is fluent but incorrect, leading to {``}hallucinations{''}. The SHROOM shared task involves Large Language Models in various tasks, and our methodology employs both model-agnostic and model-aware approaches for hallucination detection. The limited availability of labeled training data is addressed through automatic label generation strategies. Model-agnostic methods include word alignment and fine-tuning a BERT-based pretrained model, while model-aware methods leverage separate classifiers trained on LLMs{'} internal data (layer activations and attention values). Ensemble methods combine outputs through various techniques such as regression metamodels, voting, and probability fusion. Our best performing systems achieved an accuracy of 80.6{\%} on the model-aware track and 81.7{\%} on the model-agnostic track, ranking 3rd and 8th among all systems, respectively. | [
"Arzt, Varvara",
"Azarbeik, Mohammad Mahdi",
"Lasy, Ilya",
"Kerl, Tilman",
"Recski, G{\\'a}bor"
] | TU Wien at SemEval-2024 Task 6: Unifying Model-Agnostic and Model-Aware Techniques for Hallucination Detection | semeval-1.173 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.174.bib | https://aclanthology.org/2024.semeval-1.174/ | @inproceedings{singh-etal-2024-silp,
title = "silp{\_}nlp at {S}em{E}val-2024 Task 1: Cross-lingual Knowledge Transfer for Mono-lingual Learning",
author = "Singh, Sumit and
Goyal, Pankaj and
Tiwary, Uma",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.174",
doi = "10.18653/v1/2024.semeval-1.174",
pages = "1197--1203",
abstract = "Our team, silp{\_}nlp, participated in all three tracks of SemEval2024 Task 1: Semantic Textual Relatedness (STR). We created systems for a total of 29 subtasks across all tracks: nine subtasks for track A, 10 subtasks for track B, and ten subtasks for track C. To make the most of our knowledge across all subtasks, we used transformer-based pre-trained models, which are known for their strong cross-lingual transferability. For track A, we trained our model in two stages. In the first stage, we focused on multi-lingual learning from all tracks. In the second stage, we fine-tuned the model for individual tracks. For track B, we used a unigram and bigram representation with suport vector regression (SVR) and eXtreme Gradient Boosting (XGBoost) regression. For track C, we again utilized cross-lingual transferability without the use of targeted subtask data. Our work highlights the fact that knowledge gained from all subtasks can be transferred to an individual subtask if the base language model has strong cross-lingual characteristics. Our system ranked first in the Indonesian subtask of Track B (C7) and in the top three for four other subtasks.",
}
| Our team, silp{\_}nlp, participated in all three tracks of SemEval2024 Task 1: Semantic Textual Relatedness (STR). We created systems for a total of 29 subtasks across all tracks: nine subtasks for track A, 10 subtasks for track B, and ten subtasks for track C. To make the most of our knowledge across all subtasks, we used transformer-based pre-trained models, which are known for their strong cross-lingual transferability. For track A, we trained our model in two stages. In the first stage, we focused on multi-lingual learning from all tracks. In the second stage, we fine-tuned the model for individual tracks. For track B, we used a unigram and bigram representation with suport vector regression (SVR) and eXtreme Gradient Boosting (XGBoost) regression. For track C, we again utilized cross-lingual transferability without the use of targeted subtask data. Our work highlights the fact that knowledge gained from all subtasks can be transferred to an individual subtask if the base language model has strong cross-lingual characteristics. Our system ranked first in the Indonesian subtask of Track B (C7) and in the top three for four other subtasks. | [
"Singh, Sumit",
"Goyal, Pankaj",
"Tiwary, Uma"
] | silp_nlp at SemEval-2024 Task 1: Cross-lingual Knowledge Transfer for Mono-lingual Learning | semeval-1.174 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.175.bib | https://aclanthology.org/2024.semeval-1.175/ | @inproceedings{mathur-etal-2024-lastresort,
title = "{L}ast{R}esort at {S}em{E}val-2024 Task 3: Exploring Multimodal Emotion Cause Pair Extraction as Sequence Labelling Task",
author = "Mathur, Suyash Vardhan and
Jindal, Akshett and
Mittal, Hardik and
Shrivastava, Manish",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.175",
doi = "10.18653/v1/2024.semeval-1.175",
pages = "1204--1211",
abstract = "Conversation is the most natural form of human communication, where each utterance can range over a variety of possible emotions. While significant work has been done towards the detection of emotions in text, relatively little work has been done towards finding the cause of the said emotions, especially in multimodal settings. SemEval 2024 introduces the task of Multimodal Emotion Cause Analysis in Conversations, which aims to extract emotions reflected in individual utterances in a conversation involving multiple modalities (textual, audio, and visual modalities) along with the corresponding utterances that were the cause for the emotion. In this paper, we propose models that tackle this task as an utterance labeling and a sequence labeling problem and perform a comparative study of these models, involving baselines using different encoders, using BiLSTM for adding contextual information of the conversation, and finally adding a CRF layer to try to model the inter-dependencies between adjacent utterances more effectively. In the official leaderboard for the task, our architecture was ranked 8th, achieving an F1-score of 0.1759 on the leaderboard.",
}
| Conversation is the most natural form of human communication, where each utterance can range over a variety of possible emotions. While significant work has been done towards the detection of emotions in text, relatively little work has been done towards finding the cause of the said emotions, especially in multimodal settings. SemEval 2024 introduces the task of Multimodal Emotion Cause Analysis in Conversations, which aims to extract emotions reflected in individual utterances in a conversation involving multiple modalities (textual, audio, and visual modalities) along with the corresponding utterances that were the cause for the emotion. In this paper, we propose models that tackle this task as an utterance labeling and a sequence labeling problem and perform a comparative study of these models, involving baselines using different encoders, using BiLSTM for adding contextual information of the conversation, and finally adding a CRF layer to try to model the inter-dependencies between adjacent utterances more effectively. In the official leaderboard for the task, our architecture was ranked 8th, achieving an F1-score of 0.1759 on the leaderboard. | [
"Mathur, Suyash Vardhan",
"Jindal, Akshett",
"Mittal, Hardik",
"Shrivastava, Manish"
] | LastResort at SemEval-2024 Task 3: Exploring Multimodal Emotion Cause Pair Extraction as Sequence Labelling Task | semeval-1.175 | Poster | 2404.02088 | [
"https://github.com/akshettrj/semeval2024_task03"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.176.bib | https://aclanthology.org/2024.semeval-1.176/ | @inproceedings{mathur-etal-2024-davinci,
title = "{D}a{V}inci at {S}em{E}val-2024 Task 9: Few-shot prompting {GPT}-3.5 for Unconventional Reasoning",
author = "Mathur, Suyash Vardhan and
Jindal, Akshett and
Shrivastava, Manish",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.176",
doi = "10.18653/v1/2024.semeval-1.176",
pages = "1212--1216",
abstract = "While significant work has been done in the field of NLP on vertical thinking, which involves primarily logical thinking, little work has been done towards lateral thinking, which involves looking at problems from an unconventional perspective defying existing conceptions and notions. Towards this direction, SemEval 2024 introduces the task of BRAINTEASER, which involves two types of questions {--} Sentence Puzzle and Word Puzzle that defy conventional common-sense reasoning and constraints. In this paper, we tackle both the questions using few-shot prompting on GPT-3.5 and gain insights regarding the difference in the nature of the two types of questions. Our prompting strategy placed us 26th on the leaderboard for the Sentence Puzzle and 15th on the Word Puzzle task.",
}
| While significant work has been done in the field of NLP on vertical thinking, which involves primarily logical thinking, little work has been done towards lateral thinking, which involves looking at problems from an unconventional perspective defying existing conceptions and notions. Towards this direction, SemEval 2024 introduces the task of BRAINTEASER, which involves two types of questions {--} Sentence Puzzle and Word Puzzle that defy conventional common-sense reasoning and constraints. In this paper, we tackle both the questions using few-shot prompting on GPT-3.5 and gain insights regarding the difference in the nature of the two types of questions. Our prompting strategy placed us 26th on the leaderboard for the Sentence Puzzle and 15th on the Word Puzzle task. | [
"Mathur, Suyash Vardhan",
"Jindal, Akshett",
"Shrivastava, Manish"
] | DaVinci at SemEval-2024 Task 9: Few-shot prompting GPT-3.5 for Unconventional Reasoning | semeval-1.176 | Poster | 2405.11559 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.177.bib | https://aclanthology.org/2024.semeval-1.177/ | @inproceedings{vyas-2024-morphingminds,
title = "{M}orphing{M}inds at {S}em{E}val-2024 Task 10: Emotion Recognition in Conversation in {H}indi-{E}nglish Code-Mixed Conversations",
author = "Vyas, Monika",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.177",
doi = "10.18653/v1/2024.semeval-1.177",
pages = "1217--1221",
abstract = "The research focuses on emotion detection in multilingual conversations, particularly in Romanized Hindi and English, with applications in sentiment analysis and mental health assessments. The study employs Machine learning, deep learning techniques, including Transformer-based models like XLM-RoBERTa, for feature extraction and emotion classification. Various experiments are conducted to evaluate model performance, including fine-tuning, data augmentation, and addressing dataset imbalances. The findings highlight challenges and opportunities in emotion detection across languages and emphasize culturally sensitive approaches. The study contributes to advancing emotion analysis in multilingual contexts and provides practical guidance for developing more accurate emotion detection systems.",
}
| The research focuses on emotion detection in multilingual conversations, particularly in Romanized Hindi and English, with applications in sentiment analysis and mental health assessments. The study employs Machine learning, deep learning techniques, including Transformer-based models like XLM-RoBERTa, for feature extraction and emotion classification. Various experiments are conducted to evaluate model performance, including fine-tuning, data augmentation, and addressing dataset imbalances. The findings highlight challenges and opportunities in emotion detection across languages and emphasize culturally sensitive approaches. The study contributes to advancing emotion analysis in multilingual contexts and provides practical guidance for developing more accurate emotion detection systems. | [
"Vyas, Monika"
] | MorphingMinds at SemEval-2024 Task 10: Emotion Recognition in Conversation in Hindi-English Code-Mixed Conversations | semeval-1.177 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.178.bib | https://aclanthology.org/2024.semeval-1.178/ | @inproceedings{hossain-etal-2024-semanticcuetsync,
title = "{S}emantic{CUETS}ync at {S}em{E}val-2024 Task 1: Finetuning Sentence Transformer to Find Semantic Textual Relatedness",
author = "Hossain, Md. Sajjad and
Paran, Ashraful Islam and
Shohan, Symom Hossain and
Hossain, Jawad and
Hoque, Mohammed Moshiul",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.178",
doi = "10.18653/v1/2024.semeval-1.178",
pages = "1222--1228",
abstract = "Semantic textual relatedness is crucial to Natural Language Processing (NLP). Methodologies often exhibit superior performance in high-resource languages such as English compared to low-resource ones like Marathi, Telugu, and Spanish. This study leverages various machine learning (ML) approaches, including Support Vector Regression (SVR) and Random Forest, deep learning (DL) techniques such as Siamese Neural Networks, and transformer-based models such as MiniLM-L6-v2, Marathi-sbert, Telugu-sentence-bert-nli, and Roberta-bne-sentiment-analysis-es, to assess semantic relatedness across English, Marathi, Telugu, and Spanish. The developed transformer-based methods notably outperformed other models in determining semantic textual relatedness across these languages, achieving a Spearman correlation coefficient of 0.822 (for English), 0.870 (for Marathi), 0.820 (for Telugu), and 0.677 (for Spanish). These results led to our work attaining rankings of 22th (for English), 11th (for Marathi), 11th (for Telegu) and 14th (for Spanish), respectively.",
}
| Semantic textual relatedness is crucial to Natural Language Processing (NLP). Methodologies often exhibit superior performance in high-resource languages such as English compared to low-resource ones like Marathi, Telugu, and Spanish. This study leverages various machine learning (ML) approaches, including Support Vector Regression (SVR) and Random Forest, deep learning (DL) techniques such as Siamese Neural Networks, and transformer-based models such as MiniLM-L6-v2, Marathi-sbert, Telugu-sentence-bert-nli, and Roberta-bne-sentiment-analysis-es, to assess semantic relatedness across English, Marathi, Telugu, and Spanish. The developed transformer-based methods notably outperformed other models in determining semantic textual relatedness across these languages, achieving a Spearman correlation coefficient of 0.822 (for English), 0.870 (for Marathi), 0.820 (for Telugu), and 0.677 (for Spanish). These results led to our work attaining rankings of 22th (for English), 11th (for Marathi), 11th (for Telegu) and 14th (for Spanish), respectively. | [
"Hossain, Md. Sajjad",
"Paran, Ashraful Islam",
"Shohan, Symom Hossain",
"Hossain, Jawad",
"Hoque, Mohammed Moshiul"
] | SemanticCUETSync at SemEval-2024 Task 1: Finetuning Sentence Transformer to Find Semantic Textual Relatedness | semeval-1.178 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.179.bib | https://aclanthology.org/2024.semeval-1.179/ | @inproceedings{tareh-etal-2024-iasbs,
title = "{IASBS} at {S}em{E}val-2024 Task 10: Delving into Emotion Discovery and Reasoning in Code-Mixed Conversations",
author = "Tareh, Mehrzad and
Mohandesi, Aydin and
Ansari, Ebrahim",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.179",
doi = "10.18653/v1/2024.semeval-1.179",
pages = "1229--1238",
abstract = "In this paper, we detail the IASBS team{'}s approach and findings from participating in SemEval-2024 Task 10, {``}Emotion Discovery and Reasoning in Hindi-English Code-mixed Conversations (EDiReF).{''} This task encompasses three critical subtasks: Emotion Recognition in Conversation (ERC), and Emotion Flip Reasoning (EFR) in both Hindi-English code-mixed and English dialogues. Our methodology integrates advanced NLP and machine learning techniques, focusing on the unique challenges of code-mixing, such as linguistic diversity and shifts in emotional context. By implementing a robust framework that includes data preprocessing, and feature engineering using models like GPT-4 and DistilBERT, we extend our analysis beyond mere emotion identification to explore the triggers behind emotion flips. This endeavor not only achieved third place on the leaderboard, demonstrating a high proficiency in emotion and flip detection with an F1-Score of 0.70 but also significantly contributed to the advancement of emotional AI. Our findings offer valuable insights into the complex interplay of emotions in communication, showcasing the potential for enhancing applications across various domains, from social media analytics to healthcare, and underscore the importance of understanding emotional dynamics in code-mixed conversations for future research and practical applications.",
}
| In this paper, we detail the IASBS team{'}s approach and findings from participating in SemEval-2024 Task 10, {``}Emotion Discovery and Reasoning in Hindi-English Code-mixed Conversations (EDiReF).{''} This task encompasses three critical subtasks: Emotion Recognition in Conversation (ERC), and Emotion Flip Reasoning (EFR) in both Hindi-English code-mixed and English dialogues. Our methodology integrates advanced NLP and machine learning techniques, focusing on the unique challenges of code-mixing, such as linguistic diversity and shifts in emotional context. By implementing a robust framework that includes data preprocessing, and feature engineering using models like GPT-4 and DistilBERT, we extend our analysis beyond mere emotion identification to explore the triggers behind emotion flips. This endeavor not only achieved third place on the leaderboard, demonstrating a high proficiency in emotion and flip detection with an F1-Score of 0.70 but also significantly contributed to the advancement of emotional AI. Our findings offer valuable insights into the complex interplay of emotions in communication, showcasing the potential for enhancing applications across various domains, from social media analytics to healthcare, and underscore the importance of understanding emotional dynamics in code-mixed conversations for future research and practical applications. | [
"Tareh, Mehrzad",
"Moh",
"esi, Aydin",
"Ansari, Ebrahim"
] | IASBS at SemEval-2024 Task 10: Delving into Emotion Discovery and Reasoning in Code-Mixed Conversations | semeval-1.179 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.180.bib | https://aclanthology.org/2024.semeval-1.180/ | @inproceedings{chakraborty-etal-2024-deja,
title = "Deja Vu at {S}em{E}val 2024 Task 9: A Comparative Study of Advanced Language Models for Commonsense Reasoning",
author = "Chakraborty, Trina and
Rahman, Marufur and
Riyad, Omar",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.180",
doi = "10.18653/v1/2024.semeval-1.180",
pages = "1239--1244",
abstract = "This research systematically forms an impression of the capabilities of advanced language models in addressing the BRAINTEASER task introduced at SemEval 2024, which is specifically designed to explore the models{'} proficiency in lateral commonsense reasoning. The task sets forth an array of Sentence and Word Puzzles, carefully crafted to challenge the models with scenarios requiring unconventional thought processes. Our methodology encompasses a holistic approach, incorporating pre-processing of data, fine-tuning of transformer-based language models, and strategic data augmentation to explore the depth and flexibility of each model{'}s understanding. The preliminary results of our analysis are encouraging, highlighting significant potential for advancements in the models{'} ability to engage in lateral reasoning. Further insights gained from post-competition evaluations suggest scopes for notable enhancements in model performance, emphasizing the continuous evolution of the models in mastering complex reasoning tasks.",
}
| This research systematically forms an impression of the capabilities of advanced language models in addressing the BRAINTEASER task introduced at SemEval 2024, which is specifically designed to explore the models{'} proficiency in lateral commonsense reasoning. The task sets forth an array of Sentence and Word Puzzles, carefully crafted to challenge the models with scenarios requiring unconventional thought processes. Our methodology encompasses a holistic approach, incorporating pre-processing of data, fine-tuning of transformer-based language models, and strategic data augmentation to explore the depth and flexibility of each model{'}s understanding. The preliminary results of our analysis are encouraging, highlighting significant potential for advancements in the models{'} ability to engage in lateral reasoning. Further insights gained from post-competition evaluations suggest scopes for notable enhancements in model performance, emphasizing the continuous evolution of the models in mastering complex reasoning tasks. | [
"Chakraborty, Trina",
"Rahman, Marufur",
"Riyad, Omar"
] | Deja Vu at SemEval 2024 Task 9: A Comparative Study of Advanced Language Models for Commonsense Reasoning | semeval-1.180 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.181.bib | https://aclanthology.org/2024.semeval-1.181/ | @inproceedings{zhang-etal-2024-ftg,
title = "{F}t{G}-{C}o{T} at {S}em{E}val-2024 Task 9: Solving Sentence Puzzles Using Fine-Tuned Language Models and Zero-Shot {C}o{T} Prompting",
author = "Zhang, Micah and
Ahmed, Shafiuddin Rehan and
Martin, James H.",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.181",
doi = "10.18653/v1/2024.semeval-1.181",
pages = "1245--1251",
abstract = "Recent large language models (LLMs) can solve puzzles that require creativity and lateral thinking. To advance this front of research, we tackle SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense. We approach this task by introducing a technique that we call Fine-tuned Generated Chain-of-Thought (FtG-CoT). It is a novel few-shot prompting method that combines a fine-tuned BERT classifier encoder with zero-shot chain-of-thought generation and a fine-tuned LLM. The fine-tuned BERT classifier provides a context-rich encoding of each example question and choice list. Zero-shot chain-of-thought generation leverages the benefits of chain-of-thought prompting without requiring manual creation of the reasoning chains. We fine-tune the LLM on the generated chains-of-thought and include a set of generated reasoning chains in the final few-shot LLM prompt to maximize the relevance and correctness of the final generated response. In this paper, we show that FtG-CoT outperforms the zero-shot prompting baseline presented in the task paper and is highly effective at solving challenging sentence puzzles achieving a perfect score on the practice set and a 0.9 score on the evaluation set.",
}
| Recent large language models (LLMs) can solve puzzles that require creativity and lateral thinking. To advance this front of research, we tackle SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense. We approach this task by introducing a technique that we call Fine-tuned Generated Chain-of-Thought (FtG-CoT). It is a novel few-shot prompting method that combines a fine-tuned BERT classifier encoder with zero-shot chain-of-thought generation and a fine-tuned LLM. The fine-tuned BERT classifier provides a context-rich encoding of each example question and choice list. Zero-shot chain-of-thought generation leverages the benefits of chain-of-thought prompting without requiring manual creation of the reasoning chains. We fine-tune the LLM on the generated chains-of-thought and include a set of generated reasoning chains in the final few-shot LLM prompt to maximize the relevance and correctness of the final generated response. In this paper, we show that FtG-CoT outperforms the zero-shot prompting baseline presented in the task paper and is highly effective at solving challenging sentence puzzles achieving a perfect score on the practice set and a 0.9 score on the evaluation set. | [
"Zhang, Micah",
"Ahmed, Shafiuddin Rehan",
"Martin, James H."
] | FtG-CoT at SemEval-2024 Task 9: Solving Sentence Puzzles Using Fine-Tuned Language Models and Zero-Shot CoT Prompting | semeval-1.181 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.182.bib | https://aclanthology.org/2024.semeval-1.182/ | @inproceedings{ezquerro-vilares-2024-lys,
title = "{L}y{S} at {S}em{E}val-2024 Task 3: An Early Prototype for End-to-End Multimodal Emotion Linking as Graph-Based Parsing",
author = "Ezquerro, Ana and
Vilares, David",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.182",
doi = "10.18653/v1/2024.semeval-1.182",
pages = "1252--1259",
abstract = "This paper describes our participation in SemEval 2024 Task 3, which focused on Multimodal Emotion Cause Analysis in Conversations. We developed an early prototype for an end-to-end system that uses graph-based methods from dependency parsing to identify causal emotion relations in multi-party conversations. Our model comprises a neural transformer-based encoder for contextualizing multimodal conversation data and a graph-based decoder for generating the adjacency matrix scores of the causal graph. We ranked 7th out of 15 valid and official submissions for Subtask 1, using textual inputs only. We also discuss our participation in Subtask 2 during post-evaluation using multi-modal inputs.",
}
| This paper describes our participation in SemEval 2024 Task 3, which focused on Multimodal Emotion Cause Analysis in Conversations. We developed an early prototype for an end-to-end system that uses graph-based methods from dependency parsing to identify causal emotion relations in multi-party conversations. Our model comprises a neural transformer-based encoder for contextualizing multimodal conversation data and a graph-based decoder for generating the adjacency matrix scores of the causal graph. We ranked 7th out of 15 valid and official submissions for Subtask 1, using textual inputs only. We also discuss our participation in Subtask 2 during post-evaluation using multi-modal inputs. | [
"Ezquerro, Ana",
"Vilares, David"
] | LyS at SemEval-2024 Task 3: An Early Prototype for End-to-End Multimodal Emotion Linking as Graph-Based Parsing | semeval-1.182 | Poster | 2405.06483 | [
"https://github.com/anaezquerro/semeval24-task3"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.183.bib | https://aclanthology.org/2024.semeval-1.183/ | @inproceedings{gonzalez-etal-2024-numdecoders,
title = "{N}um{D}ecoders at {S}em{E}val-2024 Task 7: {F}lan{T}5 and {GPT} enhanced with {C}o{T} for Numerical Reasoning",
author = "Gonzalez, Andres and
Hossain, Md Zobaer and
Junaed, Jahedul Alam",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.183",
doi = "10.18653/v1/2024.semeval-1.183",
pages = "1260--1268",
abstract = "In this paper we present a Chain-of-Thought enhanced solution for large language models, including flanT5 and GPT 3.5 Turbo, aimed at solving mathematical problems to fill in blanks from news headlines. Our approach builds on adata augmentation strategy that incorporates additional mathematical reasoning observations into the original dataset sourced from another mathematical corpus. Both automatic and manual annotations are applied to explicitly describe the reasoning steps required for models to reach the target answer. We employ an ensemble majority voting method to generate finalpredictions across our best-performing models. Our analysis reveals that while larger models trained with our enhanced dataset achieve significant gains (91{\%} accuracy, ranking 5th on the NumEval Task 3 leaderboard), smaller models do not experience improvements and may even see a decrease in overall accuracy. We conclude that improving our automatic an-notations via crowdsourcing methods can be a worthwhile endeavor to train larger models than the ones from this study to see the most accurate results.",
}
| In this paper we present a Chain-of-Thought enhanced solution for large language models, including flanT5 and GPT 3.5 Turbo, aimed at solving mathematical problems to fill in blanks from news headlines. Our approach builds on adata augmentation strategy that incorporates additional mathematical reasoning observations into the original dataset sourced from another mathematical corpus. Both automatic and manual annotations are applied to explicitly describe the reasoning steps required for models to reach the target answer. We employ an ensemble majority voting method to generate finalpredictions across our best-performing models. Our analysis reveals that while larger models trained with our enhanced dataset achieve significant gains (91{\%} accuracy, ranking 5th on the NumEval Task 3 leaderboard), smaller models do not experience improvements and may even see a decrease in overall accuracy. We conclude that improving our automatic an-notations via crowdsourcing methods can be a worthwhile endeavor to train larger models than the ones from this study to see the most accurate results. | [
"Gonzalez, Andres",
"Hossain, Md Zobaer",
"Junaed, Jahedul Alam"
] | NumDecoders at SemEval-2024 Task 7: FlanT5 and GPT enhanced with CoT for Numerical Reasoning | semeval-1.183 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.184.bib | https://aclanthology.org/2024.semeval-1.184/ | @inproceedings{liu-thoma-2024-fzi,
title = "{FZI}-{WIM} at {S}em{E}val-2024 Task 2: Self-Consistent {C}o{T} for Complex {NLI} in Biomedical Domain",
author = "Liu, Jin and
Thoma, Steffen",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.184",
doi = "10.18653/v1/2024.semeval-1.184",
pages = "1269--1279",
abstract = "This paper describes the inference system of FZI-WIM at the SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials. Our system utilizes the chain of thought (CoT) paradigm to tackle this complex reasoning problem and further improve the CoT performance with self-consistency. Instead of greedy decoding, we sample multiple reasoning chains with the same prompt and make thefinal verification with majority voting. The self-consistent CoT system achieves a baseline F1 score of 0.80 (1st), faithfulness score of 0.90 (3rd), and consistency score of 0.73 (12th). We release the code and data publicly.",
}
| This paper describes the inference system of FZI-WIM at the SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials. Our system utilizes the chain of thought (CoT) paradigm to tackle this complex reasoning problem and further improve the CoT performance with self-consistency. Instead of greedy decoding, we sample multiple reasoning chains with the same prompt and make thefinal verification with majority voting. The self-consistent CoT system achieves a baseline F1 score of 0.80 (1st), faithfulness score of 0.90 (3rd), and consistency score of 0.73 (12th). We release the code and data publicly. | [
"Liu, Jin",
"Thoma, Steffen"
] | FZI-WIM at SemEval-2024 Task 2: Self-Consistent CoT for Complex NLI in Biomedical Domain | semeval-1.184 | Poster | 2406.10040 | [
"https://github.com/jens5588/fzi-wim-nli4ct"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.185.bib | https://aclanthology.org/2024.semeval-1.185/ | @inproceedings{guimaraes-etal-2024-lisbon,
title = "Lisbon Computational Linguists at {S}em{E}val-2024 Task 2: Using a Mistral-7{B} Model and Data Augmentation",
author = "Guimar{\~a}es, Artur and
Martins, Bruno and
Magalh{\~a}es, Jo{\~a}o",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.185",
doi = "10.18653/v1/2024.semeval-1.185",
pages = "1280--1287",
abstract = "ABSTRACT: This paper describes our approach to the SemEval-2024 safe biomedical Natural Language Inference for Clinical Trials (NLI4CT) task, which concerns classifying statements about Clinical Trial Reports (CTRs). We explored the capabilities of Mistral-7B, a generalistic open-source Large Language Model (LLM). We developed a prompt for the NLI4CT task, and fine-tuning a quantized version of the model using a slightly augmented version of the training dataset. The experimental results show that this approach can produce notable results in terms of the macro F1-score, while having limitations in terms of faithfulness and consistency. All the developed code is publicly available on a GitHub repository.",
}
| ABSTRACT: This paper describes our approach to the SemEval-2024 safe biomedical Natural Language Inference for Clinical Trials (NLI4CT) task, which concerns classifying statements about Clinical Trial Reports (CTRs). We explored the capabilities of Mistral-7B, a generalistic open-source Large Language Model (LLM). We developed a prompt for the NLI4CT task, and fine-tuning a quantized version of the model using a slightly augmented version of the training dataset. The experimental results show that this approach can produce notable results in terms of the macro F1-score, while having limitations in terms of faithfulness and consistency. All the developed code is publicly available on a GitHub repository. | [
"Guimar{\\~a}es, Artur",
"Martins, Bruno",
"Magalh{\\~a}es, Jo{\\~a}o"
] | Lisbon Computational Linguists at SemEval-2024 Task 2: Using a Mistral-7B Model and Data Augmentation | semeval-1.185 | Poster | [
"https://github.com/araag2/SemEval2024-Task2"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.186.bib | https://aclanthology.org/2024.semeval-1.186/ | @inproceedings{lopez-ponce-etal-2024-gil,
title = "{GIL}-{IIMAS} {UNAM} at {S}em{E}val-2024 Task 1: {SAND}: An In Depth Analysis of Semantic Relatedness Using Regression and Similarity Characteristics",
author = "Lopez-ponce, Francisco and
Cadena, {\'A}ngel and
Salas-jimenez, Karla and
Bel-enguix, Gemma and
Preciado-m{\'a}rquez, David",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.186",
doi = "10.18653/v1/2024.semeval-1.186",
pages = "1288--1292",
abstract = "The STR shared task aims at detecting the degree of semantic relatedness between sentence pairs in multiple languages. Semantic relatedness relies on elements such as topic similarity, point of view agreement, entailment, and even human intuition, making it a broader field than sentence similarity. The GIL-IIMAS UNAM team proposes a model based in the SAND characteristics composition (Sentence Transformers, AnglE Embeddings, N-grams, Sentence Length Difference coefficient) and classical regression algorithms. This model achieves a 0.83 Spearman Correlation score in the English test, and a 0.73 in the Spanish counterpart, finishing just above the SemEval baseline in English, and second place in Spanish.",
}
| The STR shared task aims at detecting the degree of semantic relatedness between sentence pairs in multiple languages. Semantic relatedness relies on elements such as topic similarity, point of view agreement, entailment, and even human intuition, making it a broader field than sentence similarity. The GIL-IIMAS UNAM team proposes a model based in the SAND characteristics composition (Sentence Transformers, AnglE Embeddings, N-grams, Sentence Length Difference coefficient) and classical regression algorithms. This model achieves a 0.83 Spearman Correlation score in the English test, and a 0.73 in the Spanish counterpart, finishing just above the SemEval baseline in English, and second place in Spanish. | [
"Lopez-ponce, Francisco",
"Cadena, {\\'A}ngel",
"Salas-jimenez, Karla",
"Bel-enguix, Gemma",
"Preciado-m{\\'a}rquez, David"
] | GIL-IIMAS UNAM at SemEval-2024 Task 1: SAND: An In Depth Analysis of Semantic Relatedness Using Regression and Similarity Characteristics | semeval-1.186 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.187.bib | https://aclanthology.org/2024.semeval-1.187/ | @inproceedings{schumacher-rios-2024-team,
title = "Team {UTSA}-{NLP} at {S}em{E}val 2024 Task 5: Prompt Ensembling for Argument Reasoning in Civil Procedures with {GPT}4",
author = "Schumacher, Dan and
Rios, Anthony",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.187",
doi = "10.18653/v1/2024.semeval-1.187",
pages = "1293--1301",
abstract = "In this paper, we present our system for the SemEval Task 5, The Legal Argument Reasoning Task in Civil Procedure Challenge. Legal argument reasoning is an essential skill that all law students must master. Moreover, it is important to develop natural language processing solutions that can reason about a question given terse domain-specific contextual information. Our system explores a prompt-based solution using GPT4 to reason over legal arguments. We also evaluate an ensemble of prompting strategies, including chain-of-thought reasoning and in-context learning. Overall, our system results in a Macro F1 of .8095 on the validation dataset and .7315 (5th out of 21 teams) on the final test set. Code for this project is available at https://github.com/danschumac1/CivilPromptReasoningGPT4.",
}
| In this paper, we present our system for the SemEval Task 5, The Legal Argument Reasoning Task in Civil Procedure Challenge. Legal argument reasoning is an essential skill that all law students must master. Moreover, it is important to develop natural language processing solutions that can reason about a question given terse domain-specific contextual information. Our system explores a prompt-based solution using GPT4 to reason over legal arguments. We also evaluate an ensemble of prompting strategies, including chain-of-thought reasoning and in-context learning. Overall, our system results in a Macro F1 of .8095 on the validation dataset and .7315 (5th out of 21 teams) on the final test set. Code for this project is available at https://github.com/danschumac1/CivilPromptReasoningGPT4. | [
"Schumacher, Dan",
"Rios, Anthony"
] | Team UTSA-NLP at SemEval 2024 Task 5: Prompt Ensembling for Argument Reasoning in Civil Procedures with GPT4 | semeval-1.187 | Poster | 2404.01961 | [
"https://github.com/danschumac1/civilpromptreasoninggpt4"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.188.bib | https://aclanthology.org/2024.semeval-1.188/ | @inproceedings{nath-samin-2024-bd,
title = "{BD}-{NLP} at {S}em{E}val-2024 Task 2: Investigating Generative and Discriminative Models for Clinical Inference with Knowledge Augmentation",
author = "Nath, Shantanu and
Samin, Ahnaf Mozib",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.188",
doi = "10.18653/v1/2024.semeval-1.188",
pages = "1302--1308",
abstract = "Healthcare professionals rely on evidence from clinical trial records (CTRs) to devise treatment plans. However, the increasing quantity of CTRs poses challenges in efficiently assimilating the latest evidence to provide personalized evidence-based care. In this paper, we present our solution to the SemEval- 2024 Task 2 titled {``}Safe Biomedical Natural Language Inference for Clinical Trials{''}. Given a statement and one/two CTRs as inputs, the task is to determine whether or not the statement entails or contradicts the CTRs. We explore both generative and discriminative large language models (LLM) to investigate their performance for clinical inference. Moreover, we contrast the general-purpose LLMs with the ones specifically tailored for the clinical domain to study the potential advantage in mitigating distributional shifts. Furthermore, the benefit of augmenting additional knowledge within the prompt/statement is examined in this work. Our empirical study suggests that DeBERTa-lg, a discriminative general-purpose natural language inference model, obtains the highest F1 score of 0.77 on the test set, securing the fourth rank on the leaderboard. Intriguingly, the augmentation of knowledge yields subpar results across most cases.",
}
| Healthcare professionals rely on evidence from clinical trial records (CTRs) to devise treatment plans. However, the increasing quantity of CTRs poses challenges in efficiently assimilating the latest evidence to provide personalized evidence-based care. In this paper, we present our solution to the SemEval- 2024 Task 2 titled {``}Safe Biomedical Natural Language Inference for Clinical Trials{''}. Given a statement and one/two CTRs as inputs, the task is to determine whether or not the statement entails or contradicts the CTRs. We explore both generative and discriminative large language models (LLM) to investigate their performance for clinical inference. Moreover, we contrast the general-purpose LLMs with the ones specifically tailored for the clinical domain to study the potential advantage in mitigating distributional shifts. Furthermore, the benefit of augmenting additional knowledge within the prompt/statement is examined in this work. Our empirical study suggests that DeBERTa-lg, a discriminative general-purpose natural language inference model, obtains the highest F1 score of 0.77 on the test set, securing the fourth rank on the leaderboard. Intriguingly, the augmentation of knowledge yields subpar results across most cases. | [
"Nath, Shantanu",
"Samin, Ahnaf Mozib"
] | BD-NLP at SemEval-2024 Task 2: Investigating Generative and Discriminative Models for Clinical Inference with Knowledge Augmentation | semeval-1.188 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.189.bib | https://aclanthology.org/2024.semeval-1.189/ | @inproceedings{pahilajani-etal-2024-nlp,
title = "{NLP} at {UC} {S}anta {C}ruz at {S}em{E}val-2024 Task 5: Legal Answer Validation using Few-Shot Multi-Choice {QA}",
author = "Pahilajani, Anish and
Jain, Samyak and
Trivedi, Devasha",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.189",
doi = "10.18653/v1/2024.semeval-1.189",
pages = "1309--1314",
abstract = "This paper presents our submission to the SemEval 2024 Task 5: The Legal Argument Reasoning Task in Civil Procedure. We present two approaches to solving the task of legal answer validation, given an introduction to the case, a question and an answer candidate. Firstly, we fine-tuned pre-trained BERT-based models and found that models trained on domain knowledge perform better. Secondly, we performed few-shot prompting on GPT models and found that reformulating the answer validation task to be a multiple-choice QA task remarkably improves the performance of the model. Our best submission is a BERT-based model that achieved the 7th place out of 20.",
}
| This paper presents our submission to the SemEval 2024 Task 5: The Legal Argument Reasoning Task in Civil Procedure. We present two approaches to solving the task of legal answer validation, given an introduction to the case, a question and an answer candidate. Firstly, we fine-tuned pre-trained BERT-based models and found that models trained on domain knowledge perform better. Secondly, we performed few-shot prompting on GPT models and found that reformulating the answer validation task to be a multiple-choice QA task remarkably improves the performance of the model. Our best submission is a BERT-based model that achieved the 7th place out of 20. | [
"Pahilajani, Anish",
"Jain, Samyak",
"Trivedi, Devasha"
] | NLP at UC Santa Cruz at SemEval-2024 Task 5: Legal Answer Validation using Few-Shot Multi-Choice QA | semeval-1.189 | Poster | 2404.03150 | [
"https://github.com/devashat/ucsc-nlp-semeval-2024-task-5"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.190.bib | https://aclanthology.org/2024.semeval-1.190/ | @inproceedings{li-etal-2024-cot,
title = "{C}o{T}-based Data Augmentation Strategy for Persuasion Techniques Detection",
author = "Li, Dailin and
Wang, Chuhan and
Zou, Xin and
Wang, Junlong and
Chen, Peng and
Wang, Jian and
Yang, Liang and
Lin, Hongfei",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.190",
doi = "10.18653/v1/2024.semeval-1.190",
pages = "1315--1321",
abstract = "Detecting persuasive communication is an important topic in Natural Language Processing (NLP), as it can be useful in identifying fake information on social media. We have developed a system to identify applied persuasion techniques in text fragments across four languages: English, Bulgarian, North Macedonian, and Arabic. Our system uses data augmentation methods and employs an ensemble strategy that combines the strengths of both RoBERTa and DeBERTa models. Due to limited resources, we concentrated solely on task 1, and our solution achieved the top ranking in the English track during the official assessments. We also analyse the impact of architectural decisions, data constructionand training strategies.",
}
| Detecting persuasive communication is an important topic in Natural Language Processing (NLP), as it can be useful in identifying fake information on social media. We have developed a system to identify applied persuasion techniques in text fragments across four languages: English, Bulgarian, North Macedonian, and Arabic. Our system uses data augmentation methods and employs an ensemble strategy that combines the strengths of both RoBERTa and DeBERTa models. Due to limited resources, we concentrated solely on task 1, and our solution achieved the top ranking in the English track during the official assessments. We also analyse the impact of architectural decisions, data constructionand training strategies. | [
"Li, Dailin",
"Wang, Chuhan",
"Zou, Xin",
"Wang, Junlong",
"Chen, Peng",
"Wang, Jian",
"Yang, Liang",
"Lin, Hongfei"
] | CoT-based Data Augmentation Strategy for Persuasion Techniques Detection | semeval-1.190 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.191.bib | https://aclanthology.org/2024.semeval-1.191/ | @inproceedings{obiso-etal-2024-harmonee,
title = "{H}a{RM}o{NEE} at {S}em{E}val-2024 Task 6: Tuning-based Approaches to Hallucination Recognition",
author = "Obiso, Timothy and
Tu, Jingxuan and
Pustejovsky, James",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.191",
doi = "10.18653/v1/2024.semeval-1.191",
pages = "1322--1331",
abstract = "This paper presents the Hallucination Recognition Model for New Experiment Evaluation (HaRMoNEE) team{'}s winning ({\#}1) and {\#}10 submissions for SemEval-2024 Task 6: Shared- task on Hallucinations and Related Observable Overgeneration Mistakes (SHROOM){'}s two subtasks. This task challenged its participants to design systems to detect hallucinations in Large Language Model (LLM) outputs. Team HaRMoNEE proposes two architectures: (1) fine-tuning an off-the-shelf transformer-based model and (2) prompt tuning large-scale Large Language Models (LLMs). One submission from the fine-tuning approach outperformed all other submissions for the model-aware subtask; one submission from the prompt-tuning approach is the 10th-best submission on the leaderboard for the model-agnostic subtask. Our systems also include pre-processing, system-specific tuning, post-processing, and evaluation.",
}
| This paper presents the Hallucination Recognition Model for New Experiment Evaluation (HaRMoNEE) team{'}s winning ({\#}1) and {\#}10 submissions for SemEval-2024 Task 6: Shared- task on Hallucinations and Related Observable Overgeneration Mistakes (SHROOM){'}s two subtasks. This task challenged its participants to design systems to detect hallucinations in Large Language Model (LLM) outputs. Team HaRMoNEE proposes two architectures: (1) fine-tuning an off-the-shelf transformer-based model and (2) prompt tuning large-scale Large Language Models (LLMs). One submission from the fine-tuning approach outperformed all other submissions for the model-aware subtask; one submission from the prompt-tuning approach is the 10th-best submission on the leaderboard for the model-agnostic subtask. Our systems also include pre-processing, system-specific tuning, post-processing, and evaluation. | [
"Obiso, Timothy",
"Tu, Jingxuan",
"Pustejovsky, James"
] | HaRMoNEE at SemEval-2024 Task 6: Tuning-based Approaches to Hallucination Recognition | semeval-1.191 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.192.bib | https://aclanthology.org/2024.semeval-1.192/ | @inproceedings{garcia-etal-2024-verbanexai,
title = "{V}erba{N}ex{AI} Lab at {S}em{E}val-2024 Task 10: Emotion recognition and reasoning in mixed-coded conversations based on an {NRC} {VAD} approach",
author = "Garcia, Santiago and
Martinez, Elizabeth and
Cuadrado, Juan and
Martinez-santos, Juan and
Puertas, Edwin",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.192",
doi = "10.18653/v1/2024.semeval-1.192",
pages = "1332--1338",
abstract = "This study introduces an innovative approach to emotion recognition and reasoning about emotional shifts in code-mixed conversations, leveraging the NRC VAD Lexicon and computational models such as Transformer and GRU. Our methodology systematically identifies and categorizes emotional triggers, employing Emotion Flip Reasoning (EFR) and Emotion Recognition in Conversation (ERC). Through experiments with the MELD and MaSaC datasets, we demonstrate the model{'}s precision in accurately identifying emotional shift triggers and classifying emotions, evidenced by a significant improvement in accuracy as shown by an increase in the F1 score when including VAD analysis. These results underscore the importance of incorporating complex emotional dimensions into conversation analysis, paving new pathways for understanding emotional dynamics in code-mixed texts.",
}
| This study introduces an innovative approach to emotion recognition and reasoning about emotional shifts in code-mixed conversations, leveraging the NRC VAD Lexicon and computational models such as Transformer and GRU. Our methodology systematically identifies and categorizes emotional triggers, employing Emotion Flip Reasoning (EFR) and Emotion Recognition in Conversation (ERC). Through experiments with the MELD and MaSaC datasets, we demonstrate the model{'}s precision in identifying emotional shift triggers and classifying emotions, evidenced by a significant improvement in the F1 score when VAD analysis is included. These results underscore the importance of incorporating complex emotional dimensions into conversation analysis, paving new pathways for understanding emotional dynamics in code-mixed texts. | [
"Garcia, Santiago",
"Martinez, Elizabeth",
"Cuadrado, Juan",
"Martinez-santos, Juan",
"Puertas, Edwin"
] | VerbaNexAI Lab at SemEval-2024 Task 10: Emotion recognition and reasoning in mixed-coded conversations based on an NRC VAD approach | semeval-1.192 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.193.bib | https://aclanthology.org/2024.semeval-1.193/ | @inproceedings{pacheco-etal-2024-verbanexai,
title = "{V}erba{N}ex{AI} Lab at {S}em{E}val-2024 Task 3: Deciphering emotional causality in conversations using multimodal analysis approach",
author = "Pacheco, Victor and
Martinez, Elizabeth and
Cuadrado, Juan and
Martinez Santos, Juan Carlos and
Puertas, Edwin",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.193",
doi = "10.18653/v1/2024.semeval-1.193",
pages = "1339--1343",
abstract = "This study delineates our participation in the SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations, focusing on developing and applying an innovative methodology for emotion detection and cause analysis in conversational contexts. Leveraging logistic regression, we analyzed conversational utterances to identify emotions per utterance. Subsequently, we employed a dependency analysis pipeline, utilizing SpaCy to extract significant chunk features, including object, subject, adjectival modifiers, and adverbial clause modifiers. These features were analyzed within a graph-like framework, conceptualizing the dependency relationships as edges connecting emotional causes (tails) to their corresponding emotions (heads). Despite the novelty of our approach, the preliminary results were unexpectedly humbling, with a consistent score of 0.0 across all evaluated metrics. This paper presents our methodology, the challenges encountered, and an analysis of the potential factors contributing to these outcomes, offering insights into the complexities of emotion-cause analysis in multimodal conversational data.",
}
| This study delineates our participation in the SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations, focusing on developing and applying an innovative methodology for emotion detection and cause analysis in conversational contexts. Leveraging logistic regression, we analyzed conversational utterances to identify emotions per utterance. Subsequently, we employed a dependency analysis pipeline, utilizing SpaCy to extract significant chunk features, including object, subject, adjectival modifiers, and adverbial clause modifiers. These features were analyzed within a graph-like framework, conceptualizing the dependency relationships as edges connecting emotional causes (tails) to their corresponding emotions (heads). Despite the novelty of our approach, the preliminary results were unexpectedly humbling, with a consistent score of 0.0 across all evaluated metrics. This paper presents our methodology, the challenges encountered, and an analysis of the potential factors contributing to these outcomes, offering insights into the complexities of emotion-cause analysis in multimodal conversational data. | [
"Pacheco, Victor",
"Martinez, Elizabeth",
"Cuadrado, Juan",
"Martinez Santos, Juan Carlos",
"Puertas, Edwin"
] | VerbaNexAI Lab at SemEval-2024 Task 3: Deciphering emotional causality in conversations using multimodal analysis approach | semeval-1.193 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.194.bib | https://aclanthology.org/2024.semeval-1.194/ | @inproceedings{morillo-etal-2024-verbanexai,
title = "{V}erba{N}ex{AI} Lab at {S}em{E}val-2024 Task 1: A Multilayer Artificial Intelligence Model for Semantic Relationship Detection",
author = "Morillo, Anderson and
Pe{\~n}a, Daniel and
Martinez Santos, Juan Carlos and
Puertas, Edwin",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.194",
doi = "10.18653/v1/2024.semeval-1.194",
pages = "1344--1350",
abstract = "This paper presents an artificial intelligence model designed to detect semantic relationships in natural language, addressing the challenges of SemEval 2024 Task 1. Our goal is to advance machine understanding of the subtleties of human language through semantic analysis. Using a novel combination of convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and an attention mechanism, our model is trained on the STR-2022 dataset. This approach enhances its ability to detect semantic nuances in different texts. The model achieved an 81.92{\%} effectiveness rate and ranked 24th in SemEval 2024 Task 1. These results demonstrate its robustness and adaptability in detecting semantic relationships and validate its performance in diverse linguistic contexts. Our work contributes to natural language processing by providing insights into semantic textual relatedness. It sets a benchmark for future research and promises to inspire innovations that could transform digital language processing and interaction.",
}
| This paper presents an artificial intelligence model designed to detect semantic relationships in natural language, addressing the challenges of SemEval 2024 Task 1. Our goal is to advance machine understanding of the subtleties of human language through semantic analysis. Using a novel combination of convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and an attention mechanism, our model is trained on the STR-2022 dataset. This approach enhances its ability to detect semantic nuances in different texts. The model achieved an 81.92{\%} effectiveness rate and ranked 24th in SemEval 2024 Task 1. These results demonstrate its robustness and adaptability in detecting semantic relationships and validate its performance in diverse linguistic contexts. Our work contributes to natural language processing by providing insights into semantic textual relatedness. It sets a benchmark for future research and promises to inspire innovations that could transform digital language processing and interaction. | [
"Morillo, Anderson",
"Pe{\\~n}a, Daniel",
"Martinez Santos, Juan Carlos",
"Puertas, Edwin"
] | VerbaNexAI Lab at SemEval-2024 Task 1: A Multilayer Artificial Intelligence Model for Semantic Relationship Detection | semeval-1.194 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.195.bib | https://aclanthology.org/2024.semeval-1.195/ | @inproceedings{roy-dipta-vallurupalli-2024-umbclu,
title = "{UMBCLU} at {S}em{E}val-2024 Task 1: Semantic Textual Relatedness with and without machine translation",
author = "Roy Dipta, Shubhashis and
Vallurupalli, Sai",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.195",
doi = "10.18653/v1/2024.semeval-1.195",
pages = "1351--1357",
abstract = "The aim of SemEval-2024 Task 1, {``}Semantic Textual Relatedness for African and Asian Languages{''} is to develop models for identifying semantic textual relatedness (STR) between two sentences using multiple languages (14 African and Asian languages) and settings (supervised, unsupervised, and cross-lingual). Large language models (LLMs) have shown impressive performance on several natural language understanding tasks such as multilingual machine translation (MMT), semantic similarity (STS), and encoding sentence embeddings. Using a combination of LLMs that perform well on these tasks, we developed two STR models, TranSem and FineSem, for the supervised and cross-lingual settings. We explore the effectiveness of several training methods and the usefulness of machine translation. We find that direct fine-tuning on the task is comparable to using sentence embeddings and translating to English leads to better performance for some languages. In the supervised setting, our model performance is better than the official baseline for 3 languages with the remaining 4 performing on par. In the cross-lingual setting, our model performance is better than the baseline for 3 languages (leading to 1st place for Africaans and 2nd place for Indonesian), is on par for 2 languages and performs poorly on the remaining 7 languages.",
}
| The aim of SemEval-2024 Task 1, {``}Semantic Textual Relatedness for African and Asian Languages{''}, is to develop models for identifying semantic textual relatedness (STR) between two sentences using multiple languages (14 African and Asian languages) and settings (supervised, unsupervised, and cross-lingual). Large language models (LLMs) have shown impressive performance on several natural language understanding tasks such as multilingual machine translation (MMT), semantic similarity (STS), and encoding sentence embeddings. Using a combination of LLMs that perform well on these tasks, we developed two STR models, TranSem and FineSem, for the supervised and cross-lingual settings. We explore the effectiveness of several training methods and the usefulness of machine translation. We find that direct fine-tuning on the task is comparable to using sentence embeddings, and that translating to English leads to better performance for some languages. In the supervised setting, our model performance is better than the official baseline for 3 languages, with the remaining 4 performing on par. In the cross-lingual setting, our model performance is better than the baseline for 3 languages (leading to 1st place for Afrikaans and 2nd place for Indonesian), is on par for 2 languages, and performs poorly on the remaining 7 languages. | [
"Roy Dipta, Shubhashis",
"Vallurupalli, Sai"
] | UMBCLU at SemEval-2024 Task 1: Semantic Textual Relatedness with and without machine translation | semeval-1.195 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.196.bib | https://aclanthology.org/2024.semeval-1.196/ | @inproceedings{raihan-etal-2024-masontigers,
title = "{M}ason{T}igers at {S}em{E}val-2024 Task 9: Solving Puzzles with an Ensemble of Chain-of-Thought Prompts",
author = "Raihan, Nishat and
Goswami, Dhiman and
Bin Emran, Al Nahian and
Puspo, Sadiya Sayara Chowdhury and
Ganguly, Amrita and
Zampieri, Marcos",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.196",
doi = "10.18653/v1/2024.semeval-1.196",
pages = "1358--1363",
abstract = "Our paper presents team MasonTigers submission to the SemEval-2024 Task 9 - which provides a dataset of puzzles for testing natural language understanding. We employ large language models (LLMs) to solve this task through several prompting techniques. Zero-shot and few-shot prompting generate reasonably good results when tested with proprietary LLMs, compared to the open-source models. We obtain further improved results with chain-of-thought prompting, an iterative prompting method that breaks down the reasoning process step-by-step. We obtain our best results by utilizing an ensemble of chain-of-thought prompts, placing 2nd in the word puzzle subtask and 13th in the sentence puzzle subtask. The strong performance of prompted LLMs demonstrates their capability for complex reasoning when provided with a decomposition of the thought process. Our work sheds light on how step-wise explanatory prompts can unlock more of the knowledge encoded in the parameters of large models.",
}
| Our paper presents team MasonTigers{'} submission to SemEval-2024 Task 9, which provides a dataset of puzzles for testing natural language understanding. We employ large language models (LLMs) to solve this task through several prompting techniques. Zero-shot and few-shot prompting generate reasonably good results when tested with proprietary LLMs, compared to the open-source models. We obtain further improved results with chain-of-thought prompting, an iterative prompting method that breaks down the reasoning process step-by-step. We obtain our best results by utilizing an ensemble of chain-of-thought prompts, placing 2nd in the word puzzle subtask and 13th in the sentence puzzle subtask. The strong performance of prompted LLMs demonstrates their capability for complex reasoning when provided with a decomposition of the thought process. Our work sheds light on how step-wise explanatory prompts can unlock more of the knowledge encoded in the parameters of large models. | [
"Raihan, Nishat",
"Goswami, Dhiman",
"Bin Emran, Al Nahian",
"Puspo, Sadiya Sayara Chowdhury",
"Ganguly, Amrita",
"Zampieri, Marcos"
] | MasonTigers at SemEval-2024 Task 9: Solving Puzzles with an Ensemble of Chain-of-Thought Prompts | semeval-1.196 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
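The MasonTigers Task 9 entry above centers on ensembling several chain-of-thought prompts and majority-voting their answers. A minimal illustrative sketch of that pattern follows; it is not the authors' code, and `ask_llm` is a hypothetical stand-in for whatever LLM client is actually used:

```python
# Sketch of a chain-of-thought prompt ensemble with majority voting.
# `ask_llm` is hypothetical: plug in a real LLM client to run this.
from collections import Counter

COT_PROMPTS = [
    "Let's think step by step, then answer. {puzzle}",
    "Break the puzzle into parts before answering. {puzzle}",
    "Explain your reasoning, then state the final answer. {puzzle}",
]

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical LLM call; replace with a real client")

def ensemble_answer(puzzle: str) -> str:
    # Each prompt elicits a separate reasoning chain; the most common
    # final answer across prompts wins the vote.
    answers = [ask_llm(p.format(puzzle=puzzle)) for p in COT_PROMPTS]
    return Counter(answers).most_common(1)[0][0]
```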
https://aclanthology.org/2024.semeval-1.197.bib | https://aclanthology.org/2024.semeval-1.197/ | @inproceedings{puspo-etal-2024-masontigers,
title = "{M}ason{T}igers at {S}em{E}val-2024 Task 8: Performance Analysis of Transformer-based Models on Machine-Generated Text Detection",
author = {Puspo, Sadiya Sayara Chowdhury and
Raihan, Nishat and
Goswami, Dhiman and
Bin Emran, Al Nahian and
Ganguly, Amrita and
Uzuner, {\"O}zlem},
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.197",
doi = "10.18653/v1/2024.semeval-1.197",
pages = "1364--1372",
abstract = "This paper presents the MasonTigers entryto the SemEval-2024 Task 8 - Multigenerator, Multidomain, and Multilingual BlackBox Machine-Generated Text Detection. Thetask encompasses Binary Human-Written vs.Machine-Generated Text Classification (TrackA), Multi-Way Machine-Generated Text Classification (Track B), and Human-Machine MixedText Detection (Track C). Our best performing approaches utilize mainly the ensemble ofdiscriminator transformer models along withsentence transformer and statistical machinelearning approaches in specific cases. Moreover, Zero shot prompting and fine-tuning ofFLAN-T5 are used for Track A and B.",
}
| This paper presents the MasonTigers entry to the SemEval-2024 Task 8 - Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection. The task encompasses Binary Human-Written vs. Machine-Generated Text Classification (Track A), Multi-Way Machine-Generated Text Classification (Track B), and Human-Machine Mixed Text Detection (Track C). Our best performing approaches utilize mainly the ensemble of discriminator transformer models along with sentence transformer and statistical machine learning approaches in specific cases. Moreover, zero-shot prompting and fine-tuning of FLAN-T5 are used for Tracks A and B. | [
"Puspo, Sadiya Sayara Chowdhury",
"Raihan, Nishat",
"Goswami, Dhiman",
"Bin Emran, Al Nahian",
"Ganguly, Amrita",
"Uzuner, {\\\"O}zlem"
] | MasonTigers at SemEval-2024 Task 8: Performance Analysis of Transformer-based Models on Machine-Generated Text Detection | semeval-1.197 | Poster | 2403.14989 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
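Several entries above, including the MasonTigers Task 8 system, rest on ensembling fine-tuned transformer classifiers. A minimal sketch of the generic soft-voting pattern follows, assuming placeholder checkpoints; in practice each model would be fine-tuned on the task first, since the classification heads loaded here are randomly initialized:

```python
# Sketch: soft-voting ensemble of two sequence classifiers by averaging
# their class probabilities. Checkpoints are placeholders, not the paper's.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINTS = ["bert-base-uncased", "roberta-base"]  # placeholder models

def ensemble_predict(text: str) -> int:
    probs = []
    for name in CHECKPOINTS:
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
        enc = tok(text, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            logits = model(**enc).logits
        probs.append(torch.softmax(logits, dim=-1))
    avg = torch.stack(probs).mean(dim=0)  # soft vote: mean class probability
    return int(avg.argmax(dim=-1).item())
```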
https://aclanthology.org/2024.semeval-1.198.bib | https://aclanthology.org/2024.semeval-1.198/ | @inproceedings{chandakacherla-etal-2024-uic,
title = "{UIC} {NLP} {GRADS} at {S}em{E}val-2024 Task 3: Two-Step Disjoint Modeling for Emotion-Cause Pair Extraction",
author = "Chandakacherla, Sharad and
Bhargava, Vaibhav and
Parde, Natalie",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.198",
doi = "10.18653/v1/2024.semeval-1.198",
pages = "1373--1379",
abstract = "Disentangling underlying factors contributing to the expression of emotion in multimodal data is challenging but may accelerate progress toward many real-world applications. In this paper we describe our approach for solving SemEval-2024 Task {\#}3, Sub-Task {\#}1, focused on identifying utterance-level emotions and their causes using the text available from the multimodal F.R.I.E.N.D.S. television series dataset. We propose to disjointly model emotion detection and causal span detection, borrowing a paradigm popular in question answering (QA) to train our model. Through our experiments we find that (a) contextual utterances before and after the target utterance play a crucial role in emotion classification; and (b) once the emotion is established, detecting the causal spans resulting in that emotion using our QA-based technique yields promising results.",
}
| Disentangling underlying factors contributing to the expression of emotion in multimodal data is challenging but may accelerate progress toward many real-world applications. In this paper we describe our approach for solving SemEval-2024 Task {\#}3, Sub-Task {\#}1, focused on identifying utterance-level emotions and their causes using the text available from the multimodal F.R.I.E.N.D.S. television series dataset. We propose to disjointly model emotion detection and causal span detection, borrowing a paradigm popular in question answering (QA) to train our model. Through our experiments we find that (a) contextual utterances before and after the target utterance play a crucial role in emotion classification; and (b) once the emotion is established, detecting the causal spans resulting in that emotion using our QA-based technique yields promising results. | [
"Ch",
"akacherla, Sharad",
"Bhargava, Vaibhav",
"Parde, Natalie"
] | UIC NLP GRADS at SemEval-2024 Task 3: Two-Step Disjoint Modeling for Emotion-Cause Pair Extraction | semeval-1.198 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.199.bib | https://aclanthology.org/2024.semeval-1.199/ | @inproceedings{goswami-etal-2024-masontigers-semeval,
title = "{M}ason{T}igers at {S}em{E}val-2024 Task 1: An Ensemble Approach for Semantic Textual Relatedness",
author = "Goswami, Dhiman and
Puspo, Sadiya Sayara Chowdhury and
Raihan, Nishat and
Bin Emran, Al Nahian and
Ganguly, Amrita and
Zampieri, Marcos",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.199",
doi = "10.18653/v1/2024.semeval-1.199",
pages = "1380--1390",
abstract = "This paper presents the MasonTigers{'} entry to the SemEval-2024 Task 1 - Semantic Textual Relatedness. The task encompasses supervised (Track A), unsupervised (Track B), and cross-lingual (Track C) approaches to semantic textual relatedness across 14 languages. MasonTigers stands out as one of the two teams who participated in all languages across the three tracks. Our approaches achieved rankings ranging from 11th to 21st in Track A, from 1st to 8th in Track B, and from 5th to 12th in Track C. Adhering to the task-specific constraints, our best performing approaches utilize an ensemble of statistical machine learning approaches combined with language-specific BERT based models and sentence transformers.",
}
| This paper presents the MasonTigers{'} entry to the SemEval-2024 Task 1 - Semantic Textual Relatedness. The task encompasses supervised (Track A), unsupervised (Track B), and cross-lingual (Track C) approaches to semantic textual relatedness across 14 languages. MasonTigers stands out as one of the two teams who participated in all languages across the three tracks. Our approaches achieved rankings ranging from 11th to 21st in Track A, from 1st to 8th in Track B, and from 5th to 12th in Track C. Adhering to the task-specific constraints, our best performing approaches utilize an ensemble of statistical machine learning approaches combined with language-specific BERT based models and sentence transformers. | [
"Goswami, Dhiman",
"Puspo, Sadiya Sayara Chowdhury",
"Raihan, Nishat",
"Bin Emran, Al Nahian",
"Ganguly, Amrita",
"Zampieri, Marcos"
] | MasonTigers at SemEval-2024 Task 1: An Ensemble Approach for Semantic Textual Relatedness | semeval-1.199 | Poster | 2403.14990 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.200.bib | https://aclanthology.org/2024.semeval-1.200/ | @inproceedings{take-tran-2024-riddlemasters,
title = "{R}iddle{M}asters at {S}em{E}val-2024 Task 9: Comparing Instruction Fine-tuning with Zero-Shot Approaches",
author = "Take, Kejsi and
Tran, Chau",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.200",
doi = "10.18653/v1/2024.semeval-1.200",
pages = "1391--1396",
abstract = "This paper describes our contribution to SemEval 2023 Task 8: Brainteaser. We compared multiple zero-shot approaches using GPT-4, the state of the art model with Mistral-7B, a much smaller open-source LLM. While GPT-4 remains a clear winner in all the zero-shot approaches, we show that finetuning Mistral-7B can achieve comparable, even though marginally lower results.",
}
| This paper describes our contribution to SemEval 2024 Task 9: Brainteaser. We compared multiple zero-shot approaches using GPT-4, the state-of-the-art model, with Mistral-7B, a much smaller open-source LLM. While GPT-4 remains a clear winner in all the zero-shot approaches, we show that finetuning Mistral-7B can achieve comparable, even though marginally lower, results. | [
"Take, Kejsi",
"Tran, Chau"
] | RiddleMasters at SemEval-2024 Task 9: Comparing Instruction Fine-tuning with Zero-Shot Approaches | semeval-1.200 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.201.bib | https://aclanthology.org/2024.semeval-1.201/ | @inproceedings{mandal-modi-2024-iitk,
title = "{IITK} at {S}em{E}val-2024 Task 2: Exploring the Capabilities of {LLM}s for Safe Biomedical Natural Language Inference for Clinical Trials",
author = "Mandal, Shreyasi and
Modi, Ashutosh",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.201",
doi = "10.18653/v1/2024.semeval-1.201",
pages = "1397--1404",
abstract = "Large Language models (LLMs) have demonstrated state-of-the-art performance in various natural language processing (NLP) tasks across multiple domains, yet they are prone to shortcut learning and factual inconsistencies. This research investigates LLMs{'} robustness, consistency, and faithful reasoning when performing Natural Language Inference (NLI) on breast cancer Clinical Trial Reports (CTRs) in the context of SemEval 2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials. We examine the reasoning capabilities of LLMs and their adeptness at logical problem-solving. A comparative analysis is conducted on pre-trained language models (PLMs), GPT-3.5, and Gemini Pro under zero-shot settings using Retrieval-Augmented Generation (RAG) framework, integrating various reasoning chains. The evaluation yields an F1 score of 0.69, consistency of 0.71, and a faithfulness score of 0.90 on the test dataset.",
}
| Large Language Models (LLMs) have demonstrated state-of-the-art performance in various natural language processing (NLP) tasks across multiple domains, yet they are prone to shortcut learning and factual inconsistencies. This research investigates LLMs{'} robustness, consistency, and faithful reasoning when performing Natural Language Inference (NLI) on breast cancer Clinical Trial Reports (CTRs) in the context of SemEval 2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials. We examine the reasoning capabilities of LLMs and their adeptness at logical problem-solving. A comparative analysis is conducted on pre-trained language models (PLMs), GPT-3.5, and Gemini Pro under zero-shot settings using the Retrieval-Augmented Generation (RAG) framework, integrating various reasoning chains. The evaluation yields an F1 score of 0.69, consistency of 0.71, and a faithfulness score of 0.90 on the test dataset. | [
"M",
"al, Shreyasi",
"Modi, Ashutosh"
] | IITK at SemEval-2024 Task 2: Exploring the Capabilities of LLMs for Safe Biomedical Natural Language Inference for Clinical Trials | semeval-1.201 | Poster | 2404.04510 | [
"https://github.com/exploration-lab/iitk-semeval-2024-task-2-clinical-nli"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.202.bib | https://aclanthology.org/2024.semeval-1.202/ | @inproceedings{jorgensen-2024-pear,
title = "{PEAR} at {S}em{E}val-2024 Task 1: Pair Encoding with Augmented Re-sampling for Semantic Textual Relatedness",
author = "J{\o}rgensen, Tollef",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.202",
doi = "10.18653/v1/2024.semeval-1.202",
pages = "1405--1411",
abstract = "This paper describes a system submitted to the supervised track (Track A) at SemEval-24: Semantic Textual Relatedness for African and Asian Languages. Challenged with datasets of varying sizes, some as small as 800 samples, we observe that the PEAR system, using smaller pre-trained masked language models to process sentence pairs (Pair Encoding), results in models that efficiently adapt to the task.In addition to the simplistic modeling approach, we experiment with hyperparameter optimization and data expansion from the provided training sets using multilingual bi-encoders, sampling a dynamic number of nearest neighbors (Augmented Re-sampling). The final models are lightweight, allowing fast experimentation and integration of new languages.",
}
| This paper describes a system submitted to the supervised track (Track A) at SemEval-24: Semantic Textual Relatedness for African and Asian Languages. Challenged with datasets of varying sizes, some as small as 800 samples, we observe that the PEAR system, using smaller pre-trained masked language models to process sentence pairs (Pair Encoding), results in models that efficiently adapt to the task. In addition to the simplistic modeling approach, we experiment with hyperparameter optimization and data expansion from the provided training sets using multilingual bi-encoders, sampling a dynamic number of nearest neighbors (Augmented Re-sampling). The final models are lightweight, allowing fast experimentation and integration of new languages. | [
"J{\\o}rgensen, Tollef"
] | PEAR at SemEval-2024 Task 1: Pair Encoding with Augmented Re-sampling for Semantic Textual Relatedness | semeval-1.202 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
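The PEAR entry above describes "Pair Encoding": a small masked language model reads both sentences jointly and regresses a single relatedness score. A minimal sketch of that cross-encoder setup follows, assuming a placeholder checkpoint; the regression head here is untrained and would be fine-tuned with an MSE loss on the task's scores before use:

```python
# Sketch of pair encoding for semantic relatedness: one encoder sees both
# sentences at once and emits a single scalar score (num_labels=1).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-multilingual-cased"  # placeholder, not the paper's model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=1)

# Passing two strings makes the tokenizer join them with separator tokens.
batch = tok("A man is cooking.", "Someone prepares food.",
            return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**batch).logits.squeeze()  # fine-tune with MSE before trusting this
print(float(score))
```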
https://aclanthology.org/2024.semeval-1.203.bib | https://aclanthology.org/2024.semeval-1.203/ | @inproceedings{abaskohi-etal-2024-bcamirs,
title = "{BCA}mirs at {S}em{E}val-2024 Task 4: Beyond Words: A Multimodal and Multilingual Exploration of Persuasion in Memes",
author = "Abaskohi, Amirhossein and
Dabiriaghdam, Amirhossein and
Wang, Lele and
Carenini, Giuseppe",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.203",
doi = "10.18653/v1/2024.semeval-1.203",
pages = "1412--1423",
abstract = "Memes, combining text and images, frequently use metaphors to convey persuasive messages, shaping public opinion. Motivated by this, our team engaged in SemEval-2024 Task 4, a hierarchical multi-label classification task designed to identify rhetorical and psychological persuasion techniques embedded within memes. To tackle this problem, we introduced a caption generation step to assess the modality gap and the impact of additional semantic information from images, which improved our result. Our best model utilizes GPT-4 generated captions alongside meme text to fine-tune RoBERTa as the text encoder and CLIP as the image encoder. It outperforms the baseline by a large margin in all 12 subtasks. In particular, it ranked in top-3 across all languages in Subtask 2a, and top-4 in Subtask 2b, demonstrating quantitatively strong performance. The improvement achieved by the introduced intermediate step is likely attributable to the metaphorical essence of images that challenges visual encoders. This highlights the potential for improving abstract visual semantics encoding.",
}
| Memes, combining text and images, frequently use metaphors to convey persuasive messages, shaping public opinion. Motivated by this, our team engaged in SemEval-2024 Task 4, a hierarchical multi-label classification task designed to identify rhetorical and psychological persuasion techniques embedded within memes. To tackle this problem, we introduced a caption generation step to assess the modality gap and the impact of additional semantic information from images, which improved our result. Our best model utilizes GPT-4 generated captions alongside meme text to fine-tune RoBERTa as the text encoder and CLIP as the image encoder. It outperforms the baseline by a large margin in all 12 subtasks. In particular, it ranked in top-3 across all languages in Subtask 2a, and top-4 in Subtask 2b, demonstrating quantitatively strong performance. The improvement achieved by the introduced intermediate step is likely attributable to the metaphorical essence of images that challenges visual encoders. This highlights the potential for improving abstract visual semantics encoding. | [
"Abaskohi, Amirhossein",
"Dabiriaghdam, Amirhossein",
"Wang, Lele",
"Carenini, Giuseppe"
] | BCAmirs at SemEval-2024 Task 4: Beyond Words: A Multimodal and Multilingual Exploration of Persuasion in Memes | semeval-1.203 | Poster | 2404.03022 | [
"https://github.com/amirabaskohi/beyond-words-a-multimodal-exploration-of-persuasion-in-memes"
] | https://huggingface.co/papers/2404.03022 | 0 | 0 | 0 | 4 | 1 | [
"AmirHossein1378/LLaVA-1.5-7b-meme-captioner"
] | [] | [] |
https://aclanthology.org/2024.semeval-1.204.bib | https://aclanthology.org/2024.semeval-1.204/ | @inproceedings{pauk-pacheco-2024-pauk,
title = "Pauk at {S}em{E}val-2024 Task 4: A Neuro-Symbolic Method for Consistent Classification of Propaganda Techniques in Memes",
author = "Pauk, Matt and
Pacheco, Maria Leonor",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.204",
doi = "10.18653/v1/2024.semeval-1.204",
pages = "1424--1434",
abstract = "Memes play a key role in most modern informa-tion campaigns, particularly propaganda cam-paigns. Identifying the persuasive techniquespresent in memes is an important step in de-veloping systems to recognize and curtail pro-paganda. This work presents a framework toidentify the persuasive techniques present inmemes for the SemEval 2024 Task 4, accordingto a hierarchical taxonomy of propaganda tech-niques. The framework involves a knowledgedistillation method, where the base model is acombination of DeBERTa and ResNET usedto classify the text and image, and the teachermodel consists of a group of weakly enforcedlogic rules that promote the hierarchy of per-suasion techniques. The addition of the logicrule layer for knowledge distillation shows im-provement in respecting the hierarchy of thetaxonomy with a slight boost in performance.",
}
| Memes play a key role in most modern information campaigns, particularly propaganda campaigns. Identifying the persuasive techniques present in memes is an important step in developing systems to recognize and curtail propaganda. This work presents a framework to identify the persuasive techniques present in memes for the SemEval 2024 Task 4, according to a hierarchical taxonomy of propaganda techniques. The framework involves a knowledge distillation method, where the base model is a combination of DeBERTa and ResNet used to classify the text and image, and the teacher model consists of a group of weakly enforced logic rules that promote the hierarchy of persuasion techniques. The addition of the logic rule layer for knowledge distillation shows improvement in respecting the hierarchy of the taxonomy with a slight boost in performance. | [
"Pauk, Matt",
"Pacheco, Maria Leonor"
] | Pauk at SemEval-2024 Task 4: A Neuro-Symbolic Method for Consistent Classification of Propaganda Techniques in Memes | semeval-1.204 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.205.bib | https://aclanthology.org/2024.semeval-1.205/ | @inproceedings{kim-etal-2024-saama,
title = "Saama Technologies at {S}em{E}val-2024 Task 2: Three-module System for {NLI}4{CT} Enhanced by {LLM}-generated Intermediate Labels",
author = "Kim, Hwanmun and
Kanakarajan, Kamal Raj and
Sankarasubbu, Malaikannan",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.205",
doi = "10.18653/v1/2024.semeval-1.205",
pages = "1435--1435",
abstract = "Participating in SemEval 2024 Task 2, we built a three-module system to predict entailment labels for NLI4CT, which consists of a sequence of the query generation module, the query answering module, and the aggregation module. We fine-tuned or prompted each module with the intermediate labels we generated with LLMs, and we optimized the combinations of different modules through experiments. Our system is ranked 19th {\textasciitilde} 24th in the SemEval 2024 Task 2 leaderboard in different metrics. We made several interesting observations regarding the correlation between different metrics and the sensitivity of our system on the aggregation module. We performed the error analysis on our system which can potentially help to improve our system further.",
}
| Participating in SemEval 2024 Task 2, we built a three-module system to predict entailment labels for NLI4CT, which consists of a sequence of the query generation module, the query answering module, and the aggregation module. We fine-tuned or prompted each module with the intermediate labels we generated with LLMs, and we optimized the combinations of different modules through experiments. Our system is ranked 19th {\textasciitilde} 24th in the SemEval 2024 Task 2 leaderboard in different metrics. We made several interesting observations regarding the correlation between different metrics and the sensitivity of our system to the aggregation module. We performed an error analysis on our system, which can potentially help to improve it further. | [
"Kim, Hwanmun",
"Kanakarajan, Kamal Raj",
"Sankarasubbu, Malaikannan"
] | Saama Technologies at SemEval-2024 Task 2: Three-module System for NLI4CT Enhanced by LLM-generated Intermediate Labels | semeval-1.205 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.206.bib | https://aclanthology.org/2024.semeval-1.206/ | @inproceedings{mishra-ghashami-2024-amazutah,
title = "{A}maz{U}tah{\_}{NLP} at {S}em{E}val-2024 Task 9: A {M}ulti{C}hoice Question Answering System for Commonsense Defying Reasoning",
author = "Mishra, Soumya and
Ghashami, Mina",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.206",
doi = "10.18653/v1/2024.semeval-1.206",
pages = "1436--1442",
abstract = "The SemEval 2024 BRAINTEASER task represents a pioneering venture in Natural Language Processing (NLP) by focusing on lateral thinking, a dimension of cognitive reasoning that is often overlooked in traditional linguistic analyses. This challenge comprises of Sentence Puzzle and Word Puzzle sub-tasks and aims to test language models{'} capacity for divergent thinking. In this paper, we present our approach to the BRAINTEASER task. We employ a holistic strategy by leveraging cutting-edge pre-trained models in multiple choice architecture, and diversify the training data with Sentence and Word Puzzle datasets. To gain further improvement, we fine-tuned the model with synthetic humor/jokes dataset and the RiddleSense dataset which helped augmenting the model{'}s lateral thinking abilities. Empirical results show that our approach achieve 92.5{\%} accuracy in Sentence Puzzle subtask and 80.2{\%} accuracy in Word Puzzle subtask.",
}
| The SemEval 2024 BRAINTEASER task represents a pioneering venture in Natural Language Processing (NLP) by focusing on lateral thinking, a dimension of cognitive reasoning that is often overlooked in traditional linguistic analyses. This challenge comprises Sentence Puzzle and Word Puzzle sub-tasks and aims to test language models{'} capacity for divergent thinking. In this paper, we present our approach to the BRAINTEASER task. We employ a holistic strategy by leveraging cutting-edge pre-trained models in a multiple-choice architecture, and diversify the training data with Sentence and Word Puzzle datasets. To gain further improvement, we fine-tuned the model with a synthetic humor/jokes dataset and the RiddleSense dataset, which helped augment the model{'}s lateral thinking abilities. Empirical results show that our approach achieves 92.5{\%} accuracy in the Sentence Puzzle subtask and 80.2{\%} accuracy in the Word Puzzle subtask. | [
"Mishra, Soumya",
"Ghashami, Mina"
] | AmazUtah_NLP at SemEval-2024 Task 9: A MultiChoice Question Answering System for Commonsense Defying Reasoning | semeval-1.206 | Poster | 2405.10385 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
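The AmazUtah entry above frames BRAINTEASER as multiple-choice question answering over pre-trained encoders. A minimal sketch of that architecture follows, assuming a placeholder checkpoint whose multiple-choice head would still need fine-tuning on the puzzle data:

```python
# Sketch: score each (question, option) pair with a multiple-choice head
# and pick the highest-scoring option.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

name = "roberta-base"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMultipleChoice.from_pretrained(name)

question = "What can you catch but not throw?"
options = ["A ball", "A cold", "A frisbee", "A net"]
enc = tok([question] * len(options), options,
          return_tensors="pt", padding=True, truncation=True)
# The model expects tensors of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_choices)
print(options[int(logits.argmax(dim=-1))])  # effectively random until fine-tuned
```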
https://aclanthology.org/2024.semeval-1.207.bib | https://aclanthology.org/2024.semeval-1.207/ | @inproceedings{basak-etal-2024-iitk,
title = "{IITK} at {S}em{E}val-2024 Task 1: Contrastive Learning and Autoencoders for Semantic Textual Relatedness in Multilingual Texts",
author = "Basak, Udvas and
Dutta, Rajarshi and
Pandey, Shivam and
Modi, Ashutosh",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.207",
doi = "10.18653/v1/2024.semeval-1.207",
pages = "1443--1448",
abstract = "This paper describes our system developed for the SemEval-2024 Task 1: Semantic Textual Relatedness. The challenge is focused on automatically detecting the degree of relatedness between pairs of sentences for 14 languages including both high and low-resource Asian and African languages. Our team participated in two subtasks consisting of Track A: supervised and Track B: unsupervised. This paper focuses on a BERT-based contrastive learning and similarity metric based approach primarily for the supervised track while exploring autoencoders for the unsupervised track. It also aims on the creation of a bigram relatedness corpus using negative sampling strategy, thereby producing refined word embeddings.",
}
| This paper describes our system developed for the SemEval-2024 Task 1: Semantic Textual Relatedness. The challenge is focused on automatically detecting the degree of relatedness between pairs of sentences for 14 languages, including both high- and low-resource Asian and African languages. Our team participated in two subtasks: Track A (supervised) and Track B (unsupervised). This paper focuses on a BERT-based contrastive learning and similarity metric based approach, primarily for the supervised track, while exploring autoencoders for the unsupervised track. It also aims at the creation of a bigram relatedness corpus using a negative sampling strategy, thereby producing refined word embeddings. | [
"Basak, Udvas",
"Dutta, Rajarshi",
"P",
"ey, Shivam",
"Modi, Ashutosh"
] | IITK at SemEval-2024 Task 1: Contrastive Learning and Autoencoders for Semantic Textual Relatedness in Multilingual Texts | semeval-1.207 | Poster | 2404.04513 | [
"https://github.com/exploration-lab/iitk-semeval-2024-task-1"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.208.bib | https://aclanthology.org/2024.semeval-1.208/ | @inproceedings{das-srihari-2024-compos,
title = "Compos Mentis at {S}em{E}val2024 Task6: A Multi-Faceted Role-based Large Language Model Ensemble to Detect Hallucination",
author = "Das, Souvik and
Srihari, Rohini",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.208",
doi = "10.18653/v1/2024.semeval-1.208",
pages = "1449--1454",
abstract = "Hallucinations in large language models (LLMs), where they generate fluent but factually incorrect outputs, pose challenges for applications requiring strict truthfulness. This work proposes a multi-faceted approach to detect such hallucinations across various language tasks. We leverage automatic data annotation using a proprietary LLM, fine-tuning of the Mistral-7B-instruct-v0.2 model on annotated and benchmark data, role-based and rationale-based prompting strategies, and an ensemble method combining different model outputs through majority voting. This comprehensive framework aims to improve the robustness and reliability of hallucination detection for LLM generations.",
}
| Hallucinations in large language models (LLMs), where they generate fluent but factually incorrect outputs, pose challenges for applications requiring strict truthfulness. This work proposes a multi-faceted approach to detect such hallucinations across various language tasks. We leverage automatic data annotation using a proprietary LLM, fine-tuning of the Mistral-7B-instruct-v0.2 model on annotated and benchmark data, role-based and rationale-based prompting strategies, and an ensemble method combining different model outputs through majority voting. This comprehensive framework aims to improve the robustness and reliability of hallucination detection for LLM generations. | [
"Das, Souvik",
"Srihari, Rohini"
] | Compos Mentis at SemEval2024 Task6: A Multi-Faceted Role-based Large Language Model Ensemble to Detect Hallucination | semeval-1.208 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.209.bib | https://aclanthology.org/2024.semeval-1.209/ | @inproceedings{lee-etal-2024-nycu,
title = "{NYCU}-{NLP} at {S}em{E}val-2024 Task 2: Aggregating Large Language Models in Biomedical Natural Language Inference for Clinical Trials",
author = "Lee, Lung-hao and
Chiou, Chen-ya and
Lin, Tzu-mi",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.209",
doi = "10.18653/v1/2024.semeval-1.209",
pages = "1455--1462",
abstract = "This study describes the model design of the NYCU-NLP system for the SemEval-2024 Task 2 that focuses on natural language inference for clinical trials. We aggregate several large language models to determine the inference relation (i.e., entailment or contradiction) between clinical trial reports and statements that may be manipulated with designed interventions to investigate the faithfulness and consistency of the developed models. First, we use ChatGPT v3.5 to augment original statements in training data and then fine-tune the SOLAR model with all augmented data. During the testing inference phase, we fine-tune the OpenChat model to reduce the influence of interventions and fed a cleaned statement into the fine-tuned SOLAR model for label prediction. Our submission produced a faithfulness score of 0.9236, ranking second of 32 participating teams, and ranked first for consistency with a score of 0.8092.",
}
| This study describes the model design of the NYCU-NLP system for SemEval-2024 Task 2, which focuses on natural language inference for clinical trials. We aggregate several large language models to determine the inference relation (i.e., entailment or contradiction) between clinical trial reports and statements that may be manipulated with designed interventions to investigate the faithfulness and consistency of the developed models. First, we use ChatGPT v3.5 to augment original statements in training data and then fine-tune the SOLAR model with all augmented data. During the testing inference phase, we fine-tune the OpenChat model to reduce the influence of interventions and feed a cleaned statement into the fine-tuned SOLAR model for label prediction. Our submission produced a faithfulness score of 0.9236, ranking second of 32 participating teams, and ranked first for consistency with a score of 0.8092. | [
"Lee, Lung-hao",
"Chiou, Chen-ya",
"Lin, Tzu-mi"
] | NYCU-NLP at SemEval-2024 Task 2: Aggregating Large Language Models in Biomedical Natural Language Inference for Clinical Trials | semeval-1.209 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.210.bib | https://aclanthology.org/2024.semeval-1.210/ | @inproceedings{li-etal-2024-team,
title = "Team {ML}ab at {S}em{E}val-2024 Task 8: Analyzing Encoder Embeddings for Detecting {LLM}-generated Text",
author = "Li, Kevin and
Hasanaliyev, Kenan and
Zhu, Sally and
Altshuler, George and
Eberts, Alden and
Chen, Eric and
Wang, Kate and
Xia, Emily and
Browne, Eli and
Chen, Ian",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.210",
doi = "10.18653/v1/2024.semeval-1.210",
pages = "1463--1467",
abstract = "This paper explores solutions to the challenges posed by the widespread use of LLMs, particularly in the context of identifying human-written versus machine-generated text. Focusing on Subtask B of SemEval 2024 Task 8, we compare the performance of RoBERTa and DeBERTa models. Subtask B involved identifying not only human or machine text but also the specific LLM responsible for generating text, where our DeBERTa model outperformed the RoBERTa baseline by over 10{\%} in leaderboard accuracy. The results highlight the rapidly growing capabilities of LLMs and importance of keeping up with the latest advancements. Additionally, our paper presents visualizations using PCA and t-SNE that showcase the DeBERTa model{'}s ability to cluster different LLM outputs effectively. These findings contribute to understanding and improving AI methods for detecting machine-generated text, allowing us to build more robust and traceable AI systems in the language ecosystem.",
}
| This paper explores solutions to the challenges posed by the widespread use of LLMs, particularly in the context of identifying human-written versus machine-generated text. Focusing on Subtask B of SemEval 2024 Task 8, we compare the performance of RoBERTa and DeBERTa models. Subtask B involved identifying not only human or machine text but also the specific LLM responsible for generating text, where our DeBERTa model outperformed the RoBERTa baseline by over 10{\%} in leaderboard accuracy. The results highlight the rapidly growing capabilities of LLMs and the importance of keeping up with the latest advancements. Additionally, our paper presents visualizations using PCA and t-SNE that showcase the DeBERTa model{'}s ability to cluster different LLM outputs effectively. These findings contribute to understanding and improving AI methods for detecting machine-generated text, allowing us to build more robust and traceable AI systems in the language ecosystem. | [
"Li, Kevin",
"Hasanaliyev, Kenan",
"Zhu, Sally",
"Altshuler, George",
"Eberts, Alden",
"Chen, Eric",
"Wang, Kate",
"Xia, Emily",
"Browne, Eli",
"Chen, Ian"
] | Team MLab at SemEval-2024 Task 8: Analyzing Encoder Embeddings for Detecting LLM-generated Text | semeval-1.210 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
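Team MLab's analysis above visualizes encoder embeddings with PCA and t-SNE. A minimal sketch of that pipeline follows, assuming a placeholder checkpoint and mean pooling over token states (one common pooling choice; the paper may pool differently):

```python
# Sketch: mean-pooled encoder embeddings projected to 2-D with PCA.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.decomposition import PCA

name = "roberta-base"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

texts = ["A human-written sample.", "Output from model A.", "Output from model B."]
batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state        # (batch, seq, dim)
mask = batch["attention_mask"].unsqueeze(-1)         # ignore padding tokens
emb = (hidden * mask).sum(1) / mask.sum(1)           # mean pooling
points = PCA(n_components=2).fit_transform(emb.numpy())
print(points)  # swap in sklearn.manifold.TSNE for a t-SNE view
```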
https://aclanthology.org/2024.semeval-1.211.bib | https://aclanthology.org/2024.semeval-1.211/ | @inproceedings{veerendranath-etal-2024-calc,
title = "Calc-{CMU} at {S}em{E}val-2024 Task 7: Pre-Calc - Learning to Use the Calculator Improves Numeracy in Language Models",
author = "Veerendranath, Vishruth and
Shah, Vishwa and
Ghate, Kshitish",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.211",
doi = "10.18653/v1/2024.semeval-1.211",
pages = "1468--1475",
abstract = "Quantitative and numerical comprehension in language is an important task in many fields like education and finance, but still remains a challenging task for language models. While tool and calculator usage has shown to be helpful to improve mathematical reasoning in large pretrained decoder-only language models, this remains unexplored for smaller language models with encoders. In this paper, we propose Pre-Calc, a simple pre-finetuning objective of learning to use the calculator for both encoder-only and encoder-decoder architectures, formulated as a discriminative and generative task respectively. We pre-train BERT and RoBERTa for discriminative calculator use and Flan-T5 for generative calculator use on the MAWPS, SVAMP, and AsDiv-A datasets, which improves performance on downstream tasks that require numerical understanding. Our code and data are available at https://github.com/calc-cmu/pre-calc.",
}
| Quantitative and numerical comprehension in language is an important task in many fields like education and finance, but remains challenging for language models. While tool and calculator usage has been shown to help improve mathematical reasoning in large pretrained decoder-only language models, this remains unexplored for smaller language models with encoders. In this paper, we propose Pre-Calc, a simple pre-finetuning objective of learning to use the calculator for both encoder-only and encoder-decoder architectures, formulated as a discriminative and generative task respectively. We pre-train BERT and RoBERTa for discriminative calculator use and Flan-T5 for generative calculator use on the MAWPS, SVAMP, and AsDiv-A datasets, which improves performance on downstream tasks that require numerical understanding. Our code and data are available at https://github.com/calc-cmu/pre-calc. | [
"Veerendranath, Vishruth",
"Shah, Vishwa",
"Ghate, Kshitish"
] | Calc-CMU at SemEval-2024 Task 7: Pre-Calc - Learning to Use the Calculator Improves Numeracy in Language Models | semeval-1.211 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
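The Pre-Calc abstract above frames calculator use for encoders as a discriminative task; the authors' actual formulation lives in the linked repository. Purely as a hedged sketch of the general idea, one can let an encoder classify the required operation and hand the arithmetic to ordinary code. The label set, the sequence-classification head, and the example problem below are assumptions, not the paper's exact objective.

```python
# Sketch: an encoder predicts the operation; a real calculator executes it.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

OPS = ["+", "-", "*", "/"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(OPS)
)  # would be fine-tuned on word problems in a full system

problem = "Sam has 3 apples and buys 4 more. How many apples does Sam have?"
with torch.no_grad():
    logits = model(**tokenizer(problem, return_tensors="pt")).logits
op = OPS[logits.argmax(-1).item()]

a, b = 3.0, 4.0  # operands, extracted by a separate step in a full system
result = {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]
```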
https://aclanthology.org/2024.semeval-1.212.bib | https://aclanthology.org/2024.semeval-1.212/ | @inproceedings{gu-meng-2024-aispace,
title = "{AISPACE} at {S}em{E}val-2024 task 8: A Class-balanced Soft-voting System for Detecting Multi-generator Machine-generated Text",
author = "Gu, Renhua and
Meng, Xiangfeng",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.212",
doi = "10.18653/v1/2024.semeval-1.212",
pages = "1476--1481",
abstract = "SemEval-2024 Task 8 provides a challenge to detect human-written and machine-generated text. There are 3 subtasks for different detection scenarios. This paper proposes a system that mainly deals with Subtask B. It aims to detect if given full text is written by human or is generated by a specific Large Language Model (LLM), which is actually a multi-class text classification task. Our team AISPACE conducted a systematic study of fine-tuning transformer-based models, including encoder-only, decoder-only and encoder-decoder models. We compared their performance on this task and identified that encoder-only models performed exceptionally well. We also applied a weighted Cross Entropy loss function to address the issue of data imbalance of different class samples. Additionally, we employed soft-voting strategy over multi-models ensemble to enhance the reliability of our predictions. Our system ranked top 1 in Subtask B, which sets a state-of-the-art benchmark for this new challenge.",
}
| SemEval-2024 Task 8 challenges participants to detect human-written and machine-generated text, with three subtasks covering different detection scenarios. This paper proposes a system that mainly addresses Subtask B, which aims to detect whether a given full text was written by a human or generated by a specific Large Language Model (LLM); this is in effect a multi-class text classification task. Our team AISPACE conducted a systematic study of fine-tuning transformer-based models, including encoder-only, decoder-only, and encoder-decoder models. We compared their performance on this task and found that encoder-only models performed exceptionally well. We also applied a weighted Cross Entropy loss function to address the class imbalance across samples. Additionally, we employed a soft-voting strategy over a multi-model ensemble to enhance the reliability of our predictions. Our system ranked first in Subtask B, setting a state-of-the-art benchmark for this new challenge. | [
"Gu, Renhua",
"Meng, Xiangfeng"
] | AISPACE at SemEval-2024 task 8: A Class-balanced Soft-voting System for Detecting Multi-generator Machine-generated Text | semeval-1.212 | Poster | 2404.00950 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
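Two ingredients named in the AISPACE abstract above, a class-weighted cross-entropy loss and soft voting over an ensemble, can be sketched compactly. The class counts, the number of ensemble members, and the random logits below are illustrative stand-ins, not the submitted system.

```python
import torch
import torch.nn as nn

# Inverse-frequency weights counteract class imbalance in the generator labels.
class_counts = torch.tensor([3000., 500., 800., 1200., 400., 600.])  # assumed
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(4, 6)            # one model's logits for a batch of 4
labels = torch.tensor([0, 2, 1, 5])
loss = criterion(logits, labels)      # used during fine-tuning

# Soft voting: average the probability distributions of several models.
member_logits = [torch.randn(4, 6) for _ in range(3)]  # stand-ins for 3 models
avg_probs = torch.stack([l.softmax(-1) for l in member_logits]).mean(0)
prediction = avg_probs.argmax(-1)
```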
https://aclanthology.org/2024.semeval-1.213.bib | https://aclanthology.org/2024.semeval-1.213/ | @inproceedings{chen-etal-2024-semeval,
title = "{S}em{E}val-2024 Task 7: Numeral-Aware Language Understanding and Generation",
author = "Chen, Chung-chi and
Huang, Jian-tao and
Huang, Hen-hsen and
Takamura, Hiroya and
Chen, Hsin-hsi",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.213",
doi = "10.18653/v1/2024.semeval-1.213",
pages = "1482--1491",
abstract = "Numbers are frequently utilized in both our daily narratives and professional documents, such as clinical notes, scientific papers, financial documents, and legal court orders. The ability to understand and generate numbers is thus one of the essential aspects of evaluating large language models. In this vein, we propose a collection of datasets in SemEval-2024 Task 7 - NumEval. This collection encompasses several tasks focused on numeral-aware instances, including number prediction, natural language inference, question answering, reading comprehension, reasoning, and headline generation. This paper offers an overview of the dataset and presents the results of all subtasks in NumEval. Additionally, we contribute by summarizing participants{'} methods and conducting an error analysis. To the best of our knowledge, NumEval represents one of the early tasks that perform peer evaluation in SemEval{'}s history. We will further share observations from this aspect and provide suggestions for future SemEval tasks.",
}
| Numbers are frequently utilized in both our daily narratives and professional documents, such as clinical notes, scientific papers, financial documents, and legal court orders. The ability to understand and generate numbers is thus one of the essential aspects of evaluating large language models. In this vein, we propose a collection of datasets in SemEval-2024 Task 7 - NumEval. This collection encompasses several tasks focused on numeral-aware instances, including number prediction, natural language inference, question answering, reading comprehension, reasoning, and headline generation. This paper offers an overview of the dataset and presents the results of all subtasks in NumEval. Additionally, we contribute by summarizing participants{'} methods and conducting an error analysis. To the best of our knowledge, NumEval represents one of the early tasks that perform peer evaluation in SemEval{'}s history. We will further share observations from this aspect and provide suggestions for future SemEval tasks. | [
"Chen, Chung-chi",
"Huang, Jian-tao",
"Huang, Hen-hsen",
"Takamura, Hiroya",
"Chen, Hsin-hsi"
] | SemEval-2024 Task 7: Numeral-Aware Language Understanding and Generation | semeval-1.213 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.214.bib | https://aclanthology.org/2024.semeval-1.214/ | @inproceedings{wan-etal-2024-ucsc,
title = "{UCSC} {NLP} at {S}em{E}val-2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversation ({ED}i{R}e{F})",
author = "Wan, Neng and
Au, Steven and
Ubale, Esha and
Krogh, Decker",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.214",
doi = "10.18653/v1/2024.semeval-1.214",
pages = "1492--1497",
abstract = "We describe SemEval-2024 Task 10: EDiReF consisting of three sub-tasks involving emotion in conversation across Hinglish code-mixed and English datasets. Subtasks include classification of speaker emotion in multiparty conversations (Emotion Recognition in Conversation) and reasoning around shifts in speaker emotion state (Emotion Flip Reasoning). We deployed a BERT model for emotion recognition and two GRU-based models for emotion flip. Our model achieved F1 scores of 0.45, 0.79, and 0.68 for subtasks 1, 2, and 3, respectively.",
}
| We describe SemEval-2024 Task 10: EDiReF, which consists of three sub-tasks involving emotion in conversation across Hinglish code-mixed and English datasets. Subtasks include classification of speaker emotion in multiparty conversations (Emotion Recognition in Conversation) and reasoning around shifts in speaker emotion state (Emotion Flip Reasoning). We deployed a BERT model for emotion recognition and two GRU-based models for emotion flip. Our model achieved F1 scores of 0.45, 0.79, and 0.68 for subtasks 1, 2, and 3, respectively. | [
"Wan, Neng",
"Au, Steven",
"Ubale, Esha",
"Krogh, Decker"
] | UCSC NLP at SemEval-2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversation (EDiReF) | semeval-1.214 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.215.bib | https://aclanthology.org/2024.semeval-1.215/ | @inproceedings{rezaei-etal-2024-clulab,
title = "{CLUL}ab-{U}of{A} at {S}em{E}val-2024 Task 8: Detecting Machine-Generated Text Using Triplet-Loss-Trained Text Similarity and Text Classification",
author = "Rezaei, Mohammadhossein and
Kwon, Yeaeun and
Sanayei, Reza and
Singh, Abhyuday and
Bethard, Steven",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.215",
doi = "10.18653/v1/2024.semeval-1.215",
pages = "1498--1504",
abstract = "Detecting machine-generated text is a critical task in the era of large language models. In this paper, we present our systems for SemEval-2024 Task 8, which focuses on multi-class classification to discern between human-written and maching-generated texts by five state-of-the-art large language models. We propose three different systems: unsupervised text similarity, triplet-loss-trained text similarity, and text classification. We show that the triplet-loss trained text similarity system outperforms the other systems, achieving 80{\%} accuracy on the test set and surpassing the baseline model for this subtask. Additionally, our text classification system, which takes into account sentence paraphrases generated by the candidate models, also outperforms the unsupervised text similarity system, achieving 74{\%} accuracy.",
}
| Detecting machine-generated text is a critical task in the era of large language models. In this paper, we present our systems for SemEval-2024 Task 8, which focuses on multi-class classification to discern between human-written texts and texts generated by five state-of-the-art large language models. We propose three different systems: unsupervised text similarity, triplet-loss-trained text similarity, and text classification. We show that the triplet-loss-trained text similarity system outperforms the other systems, achieving 80{\%} accuracy on the test set and surpassing the baseline model for this subtask. Additionally, our text classification system, which takes into account sentence paraphrases generated by the candidate models, also outperforms the unsupervised text similarity system, achieving 74{\%} accuracy. | [
"Rezaei, Mohammadhossein",
"Kwon, Yeaeun",
"Sanayei, Reza",
"Singh, Abhyuday",
"Bethard, Steven"
] | CLULab-UofA at SemEval-2024 Task 8: Detecting Machine-Generated Text Using Triplet-Loss-Trained Text Similarity and Text Classification | semeval-1.215 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
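The triplet-loss-trained text-similarity system in the CLULab-UofA abstract above maps naturally onto the Sentence Transformers training API. This is a minimal sketch assuming triplets of the form (anchor, same-generator text, different-generator text); the checkpoint and the triplet-mining rule are assumptions, not the authors' exact setup.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-MiniLM-L6-v2")

# Each triplet: an anchor, a positive from the same generator, and a
# negative from a different generator (assumed grouping rule).
train_examples = [
    InputExample(texts=["anchor text by LLM A",
                        "another text by LLM A",
                        "a text by LLM B"]),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=16)
model.fit(train_objectives=[(loader, losses.TripletLoss(model=model))],
          epochs=1, warmup_steps=10)

# At inference, embed an unseen text and assign the label of the
# nearest per-generator reference embedding (e.g., a class centroid).
emb = model.encode(["unseen text"], convert_to_tensor=True)
```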
https://aclanthology.org/2024.semeval-1.216.bib | https://aclanthology.org/2024.semeval-1.216/ | @inproceedings{gutierrez-megias-etal-2024-sinai,
title = "{SINAI} at {S}em{E}val-2024 Task 8: Fine-tuning on Words and Perplexity as Features for Detecting Machine Written Text",
author = "Guti{\'e}rrez Meg{\'\i}as, Alberto and
Ure{\~n}a-l{\'o}pez, L. Alfonso and
Mart{\'\i}nez C{\'a}mara, Eugenio",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.216",
doi = "10.18653/v1/2024.semeval-1.216",
pages = "1505--1510",
abstract = "This work presents the proposed systems of the SINAI team for the subtask A of the Task 8 in SemEval 2024. We present the evaluation of two disparate systems, and our final submitted system. We claim that the perplexity value of a text may be used as classification signal. Accordingly, we conduct a study on the utility of perplexity for discerning text authorship, and we perform a comparative analysis of the results obtained on the datasets of the task. This comparative evaluation includes results derived from the systems evaluated, such as fine-tuning using an XLM-RoBERTa-Large transformer or using perplexity as a classification criterion. In addition, we discuss the results reached on the test set, where we show that there is large differences among the language probability distribution of the training and test sets. These analysis allows us to open new research lines to improve the detection of machine-generated text.",
}
| This work presents the proposed systems of the SINAI team for Subtask A of Task 8 at SemEval 2024. We present the evaluation of two disparate systems and our final submitted system. We claim that the perplexity value of a text may be used as a classification signal. Accordingly, we conduct a study on the utility of perplexity for discerning text authorship, and we perform a comparative analysis of the results obtained on the datasets of the task. This comparative evaluation includes results derived from the systems evaluated, such as fine-tuning an XLM-RoBERTa-Large transformer or using perplexity as a classification criterion. In addition, we discuss the results reached on the test set, where we show that there are large differences between the language probability distributions of the training and test sets. These analyses allow us to open new research lines to improve the detection of machine-generated text. | [
"Guti{\\'e}rrez Meg{\\'\\i}as, Alberto",
"Ure{\\~n}a-l{\\'o}pez, L. Alfonso",
"Mart{\\'\\i}nez C{\\'a}mara, Eugenio"
] | SINAI at SemEval-2024 Task 8: Fine-tuning on Words and Perplexity as Features for Detecting Machine Written Text | semeval-1.216 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
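Using perplexity as a classification signal, as the SINAI abstract above proposes, can be sketched with any fixed causal LM. GPT-2 as the scoring model and the threshold value are assumptions made here for illustration; the team's submitted system also fine-tuned XLM-RoBERTa-Large.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        # With labels == input_ids, HF returns the mean token-level NLL.
        nll = model(ids, labels=ids).loss
    return torch.exp(nll).item()

THRESHOLD = 35.0  # illustrative; tuned on held-out labelled data in practice
label = "machine" if perplexity("Some input text.") < THRESHOLD else "human"
```

Low perplexity under a strong LM is the usual machine-text cue, since generated text tends to be more predictable than human writing.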
https://aclanthology.org/2024.semeval-1.217.bib | https://aclanthology.org/2024.semeval-1.217/ | @inproceedings{guo-etal-2024-ustc,
title = "{USTC}-{BUPT} at {S}em{E}val-2024 Task 8: Enhancing Machine-Generated Text Detection via Domain Adversarial Neural Networks and {LLM} Embeddings",
author = "Guo, Zikang and
Jiao, Kaijie and
Yao, Xingyu and
Wan, Yuning and
Li, Haoran and
Xu, Benfeng and
Zhang, Licheng and
Wang, Quan and
Zhang, Yongdong and
Mao, Zhendong",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.217",
doi = "10.18653/v1/2024.semeval-1.217",
pages = "1511--1522",
abstract = "This paper introduces the system developed by USTC-BUPT for SemEval-2024 Task 8. The shared task comprises three subtasks across four tracks, aiming to develop automatic systems to distinguish between human-written and machine-generated text across various domains, languages and generators. Our system comprises four components: DATeD, LLAM, TLE, and AuDM, which empower us to effectively tackle all subtasks posed by the challenge. In the monolingual track, DATeD improves machine-generated text detection by incorporating a gradient reversal layer and integrating additional domain labels through Domain Adversarial Neural Networks, enhancing adaptation to diverse text domains. In the multilingual track, LLAM employs different strategies based on language characteristics. For English text, the LLM Embeddings approach utilizes embeddings from a proxy LLM followed by a two-stage CNN for classification, leveraging the broad linguistic knowledge captured during pre-training to enhance performance. For text in other languages, the LLM Sentinel approach transforms the classification task into a next-token prediction task, which facilitates easier adaptation to texts in various languages, especially low-resource languages. TLE utilizes the LLM Embeddings method with a minor modification in the classification strategy for subtask B. AuDM employs data augmentation and fine-tunes the DeBERTa model specifically for subtask C. Our system wins the multilingual track and ranks second in the monolingual track. Additionally, it achieves third place in both subtask B and C.",
}
| This paper introduces the system developed by USTC-BUPT for SemEval-2024 Task 8. The shared task comprises three subtasks across four tracks, aiming to develop automatic systems to distinguish between human-written and machine-generated text across various domains, languages, and generators. Our system comprises four components: DATeD, LLAM, TLE, and AuDM, which empower us to effectively tackle all subtasks posed by the challenge. In the monolingual track, DATeD improves machine-generated text detection by incorporating a gradient reversal layer and integrating additional domain labels through Domain Adversarial Neural Networks, enhancing adaptation to diverse text domains. In the multilingual track, LLAM employs different strategies based on language characteristics. For English text, the LLM Embeddings approach utilizes embeddings from a proxy LLM followed by a two-stage CNN for classification, leveraging the broad linguistic knowledge captured during pre-training to enhance performance. For text in other languages, the LLM Sentinel approach transforms the classification task into a next-token prediction task, which facilitates easier adaptation to texts in various languages, especially low-resource languages. TLE utilizes the LLM Embeddings method with a minor modification in the classification strategy for subtask B. AuDM employs data augmentation and fine-tunes the DeBERTa model specifically for subtask C. Our system wins the multilingual track and ranks second in the monolingual track. Additionally, it achieves third place in both subtasks B and C. | [
"Guo, Zikang",
"Jiao, Kaijie",
"Yao, Xingyu",
"Wan, Yuning",
"Li, Haoran",
"Xu, Benfeng",
"Zhang, Licheng",
"Wang, Quan",
"Zhang, Yongdong",
"Mao, Zhendong"
] | USTC-BUPT at SemEval-2024 Task 8: Enhancing Machine-Generated Text Detection via Domain Adversarial Neural Networks and LLM Embeddings | semeval-1.217 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
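The gradient reversal layer mentioned in the USTC-BUPT abstract above is the standard Domain Adversarial Neural Network building block: identity in the forward pass, negated (and scaled) gradient in the backward pass, so the feature extractor learns domain-invariant representations. The sketch below shows only this building block, not the authors' full DATeD component; the feature width and domain count are placeholders.

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)          # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Flip and scale the gradient flowing into the feature extractor.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

features = torch.randn(8, 768, requires_grad=True)      # encoder output
domain_head = torch.nn.Linear(768, 4)                    # 4 assumed text domains
domain_logits = domain_head(grad_reverse(features))
```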
https://aclanthology.org/2024.semeval-1.218.bib | https://aclanthology.org/2024.semeval-1.218/ | @inproceedings{farokh-zeinali-2024-alf,
title = "{ALF} at {S}em{E}val-2024 Task 9: Exploring Lateral Thinking Capabilities of {LM}s through Multi-task Fine-tuning",
author = "Farokh, Seyed Ali and
Zeinali, Hossein",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.218",
doi = "10.18653/v1/2024.semeval-1.218",
pages = "1523--1528",
abstract = "Recent advancements in natural language processing (NLP) have prompted the development of sophisticated reasoning benchmarks. This paper presents our system for the SemEval 2024 Task 9 competition and also investigates the efficacy of fine-tuning language models (LMs) on BrainTeaser{---}a benchmark designed to evaluate NLP models{'} lateral thinking and creative reasoning abilities. Our experiments focus on two prominent families of pre-trained models, BERT and T5. Additionally, we explore the potential benefits of multi-task fine-tuning on commonsense reasoning datasets to enhance performance. Our top-performing model, DeBERTa-v3-large, achieves an impressive overall accuracy of 93.33{\%}, surpassing human performance.",
}
| Recent advancements in natural language processing (NLP) have prompted the development of sophisticated reasoning benchmarks. This paper presents our system for the SemEval 2024 Task 9 competition and also investigates the efficacy of fine-tuning language models (LMs) on BrainTeaser{---}a benchmark designed to evaluate NLP models{'} lateral thinking and creative reasoning abilities. Our experiments focus on two prominent families of pre-trained models, BERT and T5. Additionally, we explore the potential benefits of multi-task fine-tuning on commonsense reasoning datasets to enhance performance. Our top-performing model, DeBERTa-v3-large, achieves an impressive overall accuracy of 93.33{\%}, surpassing human performance. | [
"Farokh, Seyed Ali",
"Zeinali, Hossein"
] | ALF at SemEval-2024 Task 9: Exploring Lateral Thinking Capabilities of LMs through Multi-task Fine-tuning | semeval-1.218 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.219.bib | https://aclanthology.org/2024.semeval-1.219/ | @inproceedings{kobs-etal-2024-pollice,
title = "Pollice Verso at {S}em{E}val-2024 Task 6: The {R}oman Empire Strikes Back",
author = "Kobs, Konstantin and
Pfister, Jan and
Hotho, Andreas",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.219",
doi = "10.18653/v1/2024.semeval-1.219",
pages = "1529--1536",
abstract = "We present an intuitive approach for hallucination detection in LLM outputs that is modeled after how humans would go about this task. We engage several LLM {``}experts{''} to independently assess whether a response is hallucinated. For this we select recent and popular LLMs smaller than 7B parameters. By analyzing the log probabilities for tokens that signal a positive or negative judgment, we can determine the likelihood of hallucination. Additionally, we enhance the performance of our {``}experts{''} by automatically refining their prompts using the recently introduced OPRO framework. Furthermore, we ensemble the replies of the different experts in a uniform or weighted manner, which builds a quorum from the expert replies. Overall this leads to accuracy improvements of up to 10.6 p.p. compared to the challenge baseline. We show that a Zephyr 3B model is well suited for the task. Our approach can be applied in the model-agnostic and model-aware subtasks without modification and is flexible and easily extendable to related tasks.",
}
| We present an intuitive approach for hallucination detection in LLM outputs that is modeled after how humans would go about this task. We engage several LLM {``}experts{''} to independently assess whether a response is hallucinated. For this, we select recent and popular LLMs with fewer than 7B parameters. By analyzing the log probabilities for tokens that signal a positive or negative judgment, we can determine the likelihood of hallucination. Additionally, we enhance the performance of our {``}experts{''} by automatically refining their prompts using the recently introduced OPRO framework. Furthermore, we ensemble the replies of the different experts in a uniform or weighted manner, which builds a quorum from the expert replies. Overall, this leads to accuracy improvements of up to 10.6 p.p. compared to the challenge baseline. We show that a Zephyr 3B model is well suited for the task. Our approach can be applied in the model-agnostic and model-aware subtasks without modification and is flexible and easily extendable to related tasks. | [
"Kobs, Konstantin",
"Pfister, Jan",
"Hotho, Andreas"
] | Pollice Verso at SemEval-2024 Task 6: The Roman Empire Strikes Back | semeval-1.219 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
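Reading judgment-token log probabilities, as the Pollice Verso abstract above describes, can be sketched as follows. The prompt wording is an assumption, and TinyLlama stands in for the paper's sub-7B experts (they report Zephyr 3B working well); the full system additionally refines prompts with OPRO and ensembles several experts into a quorum.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # stand-in for a sub-7B "expert"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

prompt = ("Source: The cat sat on the mat.\n"
          "Output: The dog flew over the moon.\n"
          "Is the output hallucinated? Answer Yes or No: ")
ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    log_probs = model(ids).logits[0, -1].log_softmax(-1)  # next-token dist.

yes_id = tokenizer("Yes", add_special_tokens=False).input_ids[0]
no_id = tokenizer("No", add_special_tokens=False).input_ids[0]
# Probability mass on "Yes", renormalized over the two judgment tokens.
p_hallucinated = (log_probs[yes_id]
                  - torch.logaddexp(log_probs[yes_id], log_probs[no_id])).exp()
```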
https://aclanthology.org/2024.semeval-1.220.bib | https://aclanthology.org/2024.semeval-1.220/ | @inproceedings{chatterjee-etal-2024-whatdoyoumeme,
title = "whatdoyoumeme at {S}em{E}val-2024 Task 4: Hierarchical-Label-Aware Persuasion Detection using Translated Texts",
author = "Chatterjee, Nishan and
Pranjic, Marko and
Koloski, Boshko and
Pivovarova, Lidia and
Pollak, Senja",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.220",
doi = "10.18653/v1/2024.semeval-1.220",
pages = "1537--1543",
abstract = "In this paper, we detail the methodology of team whatdoyoumeme for the SemEval 2024 Task on Multilingual Persuasion Detection in Memes. We integrate hierarchical label information to refine detection capabilities, and employ a cross-lingual approach, utilizing translation to adapt the model to Macedonian, Arabic, and Bulgarian. Our methodology encompasses both the analysis of meme content and extending labels to include hierarchical structure. The effectiveness of the approach is demonstrated through improved model performance in multilingual contexts, highlighting the utility of translation-based methods and hierarchy-aware learning, over traditional baselines.",
}
| In this paper, we detail the methodology of team whatdoyoumeme for the SemEval 2024 Task on Multilingual Persuasion Detection in Memes. We integrate hierarchical label information to refine detection capabilities, and employ a cross-lingual approach, utilizing translation to adapt the model to Macedonian, Arabic, and Bulgarian. Our methodology encompasses both analyzing meme content and extending labels to include hierarchical structure. The effectiveness of the approach is demonstrated through improved model performance in multilingual contexts, highlighting the utility of translation-based methods and hierarchy-aware learning over traditional baselines. | [
"Chatterjee, Nishan",
"Pranjic, Marko",
"Koloski, Boshko",
"Pivovarova, Lidia",
"Pollak, Senja"
] | whatdoyoumeme at SemEval-2024 Task 4: Hierarchical-Label-Aware Persuasion Detection using Translated Texts | semeval-1.220 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.221.bib | https://aclanthology.org/2024.semeval-1.221/ | @inproceedings{skiba-etal-2024-lomonosovmsu,
title = "{L}omonosov{MSU} at {S}em{E}val-2024 Task 4: Comparing {LLM}s and embedder models to identifying propaganda techniques in the content of memes in {E}nglish for subtasks No1, No2a, and No2b",
author = "Skiba, Gleb and
Pukemo, Mikhail and
Melikhov, Dmitry and
Vorontsov, Konstantin",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.221",
doi = "10.18653/v1/2024.semeval-1.221",
pages = "1544--1548",
abstract = "This paper presents the solution of the LomonosovMSU team for the SemEval-2024 Task 4 {``}Multilingual Detection of Persuasion Techniques in Memes{''} competition for the English language task. During the task solving process, generative and BERT-like (training classifiers on top of embedder models) approaches were tested for subtask No1, as well as an BERT-like approach on top of multimodal embedder models for subtasks No2a/No2b. The models were trained using datasets provided by the competition organizers, enriched with filtered datasets from previous SemEval competitions. The following results were achieved: 18th place for subtask No1, 9th place for subtask No2a, and 11th place for subtask No2b.",
}
| This paper presents the solution of the LomonosovMSU team for the SemEval-2024 Task 4 {``}Multilingual Detection of Persuasion Techniques in Memes{''} competition for the English language task. During the task solving process, generative and BERT-like (training classifiers on top of embedder models) approaches were tested for subtask No1, as well as a BERT-like approach on top of multimodal embedder models for subtasks No2a/No2b. The models were trained using datasets provided by the competition organizers, enriched with filtered datasets from previous SemEval competitions. The following results were achieved: 18th place for subtask No1, 9th place for subtask No2a, and 11th place for subtask No2b. | [
"Skiba, Gleb",
"Pukemo, Mikhail",
"Melikhov, Dmitry",
"Vorontsov, Konstantin"
] | LomonosovMSU at SemEval-2024 Task 4: Comparing LLMs and embedder models to identifying propaganda techniques in the content of memes in English for subtasks No1, No2a, and No2b | semeval-1.221 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.222.bib | https://aclanthology.org/2024.semeval-1.222/ | @inproceedings{grigoriadou-etal-2024-ails,
title = "{AILS}-{NTUA} at {S}em{E}val-2024 Task 6: Efficient model tuning for hallucination detection and analysis",
author = "Grigoriadou, Natalia and
Lymperaiou, Maria and
Filandrianos, George and
Stamou, Giorgos",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.222",
doi = "10.18653/v1/2024.semeval-1.222",
pages = "1549--1560",
abstract = "In this paper, we present our team{'}s submissions for SemEval-2024 Task-6 - SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes. The participants were asked to perform binary classification to identify cases of fluent overgeneration hallucinations. Our experimentation included fine-tuning a pre-trained model on hallucination detection and a Natural Language Inference (NLI) model. The most successful strategy involved creating an ensemble of these models, resulting in accuracy rates of 77.8{\%} and 79.9{\%} on model-agnostic and model-aware datasets respectively, outperforming the organizers{'} baseline and achieving notable results when contrasted with the top-performing results in the competition, which reported accuracies of 84.7{\%} and 81.3{\%} correspondingly.",
}
| In this paper, we present our team{'}s submissions for SemEval-2024 Task 6: SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes. The participants were asked to perform binary classification to identify cases of fluent overgeneration hallucinations. Our experimentation included fine-tuning a pre-trained model on hallucination detection and a Natural Language Inference (NLI) model. The most successful strategy involved creating an ensemble of these models, resulting in accuracy rates of 77.8{\%} and 79.9{\%} on model-agnostic and model-aware datasets respectively, outperforming the organizers{'} baseline and achieving notable results when contrasted with the top-performing results in the competition, which reported accuracies of 84.7{\%} and 81.3{\%} correspondingly. | [
"Grigoriadou, Natalia",
"Lymperaiou, Maria",
"Fil",
"rianos, George",
"Stamou, Giorgos"
] | AILS-NTUA at SemEval-2024 Task 6: Efficient model tuning for hallucination detection and analysis | semeval-1.222 | Poster | 2404.01210 | [
"https://github.com/ngregoriade/semeval2024-shroom"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
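The NLI component in the AILS-NTUA abstract above can be approximated with an off-the-shelf MNLI model scoring whether the source entails the hypothesis. The checkpoint and the 0.5 decision rule are assumptions made for illustration; the submitted system ensembles this signal with a fine-tuned hallucination detector.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

src = "Translate: Le chat dort."
hyp = "The cat is sleeping."
inputs = tokenizer(src, hyp, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]

# roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment.
# Low entailment probability is treated as a hallucination signal (assumed rule).
is_hallucination = probs[2].item() < 0.5
```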
https://aclanthology.org/2024.semeval-1.223.bib | https://aclanthology.org/2024.semeval-1.223/ | @inproceedings{-etal-2024-jmi,
title = "{JMI} at {S}em{E}val 2024 Task 3: Two-step approach for multimodal {ECAC} using in-context learning with {GPT} and instruction-tuned Llama models",
author = "., Arefa and
Ansari, Mohammed Abbas and
Saxena, Chandni and
Ahmad, Tanvir",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.223",
doi = "10.18653/v1/2024.semeval-1.223",
pages = "1561--1576",
abstract = "This paper presents our system development for SemEval-2024 Task 3: {``}The Competition of Multimodal Emotion Cause Analysis in Conversations{''}. Effectively capturing emotions in human conversations requires integrating multiple modalities such as text, audio, and video. However, the complexities of these diverse modalities pose challenges for developing an efficient multimodal emotion cause analysis (ECA) system. Our proposed approach addresses these challenges by a two-step framework. We adopt two different approaches in our implementation. In Approach 1, we employ instruction-tuning with two separate Llama 2 models for emotion and cause prediction. In Approach 2, we use GPT-4V for conversation-level video description and employ in-context learning with annotated conversation using GPT 3.5. Our system wins rank 4, and system ablation experiments demonstrate that our proposed solutions achieve significant performance gains.",
}
| This paper presents our system development for SemEval-2024 Task 3: {``}The Competition of Multimodal Emotion Cause Analysis in Conversations{''}. Effectively capturing emotions in human conversations requires integrating multiple modalities such as text, audio, and video. However, the complexities of these diverse modalities pose challenges for developing an efficient multimodal emotion cause analysis (ECA) system. Our proposed approach addresses these challenges through a two-step framework. We adopt two different approaches in our implementation. In Approach 1, we employ instruction-tuning with two separate Llama 2 models for emotion and cause prediction. In Approach 2, we use GPT-4V for conversation-level video description and employ in-context learning with annotated conversations using GPT-3.5. Our system achieved rank 4, and system ablation experiments demonstrate that our proposed solutions achieve significant performance gains. | [
"., Arefa",
"Ansari, Mohammed Abbas",
"Saxena, Ch",
"ni",
"Ahmad, Tanvir"
] | JMI at SemEval 2024 Task 3: Two-step approach for multimodal ECAC using in-context learning with GPT and instruction-tuned Llama models | semeval-1.223 | Poster | 2403.04798 | [
"https://github.com/cmooncs/semeval-2024_multimodal_ecpe"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.semeval-1.224.bib | https://aclanthology.org/2024.semeval-1.224/ | @inproceedings{sun-etal-2024-lmu,
title = "{LMU}-{B}io{NLP} at {S}em{E}val-2024 Task 2: Large Diverse Ensembles for Robust Clinical {NLI}",
author = "Sun, Zihang and
Yan, Danqi and
Wang, Anyi and
Agustoslu, Tanalp and
Feng, Qi and
Hu, Chengzhi and
Zuo, Longfei and
Zhou, Shijia and
Kleiner, Hermine and
Hong, Pingjun",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.224",
doi = "10.18653/v1/2024.semeval-1.224",
pages = "1577--1583",
abstract = "In this paper, we describe our submission for the NLI4CT 2024 shared task on robust Natural Language Inference over clinical trial reports. Our system is an ensemble of nine diverse models which we aggregate via majority voting. The models use a large spectrum of different approaches ranging from a straightforward Convolutional Neural Network over fine-tuned Large Language Models to few-shot-prompted language models using chain-of-thought reasoning.Surprisingly, we find that some individual ensemble members are not only more accurate than the final ensemble model but also more robust.",
}
| In this paper, we describe our submission for the NLI4CT 2024 shared task on robust Natural Language Inference over clinical trial reports. Our system is an ensemble of nine diverse models, which we aggregate via majority voting. The models span a large spectrum of approaches, ranging from a straightforward Convolutional Neural Network, through fine-tuned Large Language Models, to few-shot-prompted language models using chain-of-thought reasoning. Surprisingly, we find that some individual ensemble members are not only more accurate than the final ensemble model but also more robust. | [
"Sun, Zihang",
"Yan, Danqi",
"Wang, Anyi",
"Agustoslu, Tanalp",
"Feng, Qi",
"Hu, Chengzhi",
"Zuo, Longfei",
"Zhou, Shijia",
"Kleiner, Hermine",
"Hong, Pingjun"
] | LMU-BioNLP at SemEval-2024 Task 2: Large Diverse Ensembles for Robust Clinical NLI | semeval-1.224 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
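Only the aggregation step of the LMU-BioNLP system above is sketched here: majority voting over nine member predictions. The tie-breaking rule is an assumption (with nine voters over the task's two labels, ties cannot actually occur, but the helper stays safe for other label sets).

```python
from collections import Counter

member_votes = ["Entailment", "Contradiction", "Entailment",
                "Entailment", "Contradiction", "Entailment",
                "Contradiction", "Entailment", "Contradiction"]  # 9 members

def majority(votes, tie_break="Contradiction"):
    counts = Counter(votes).most_common()
    # Fall back to a fixed label if the top two counts tie (assumed rule).
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return tie_break
    return counts[0][0]

print(majority(member_votes))  # -> Entailment (5 of 9 votes)
```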
https://aclanthology.org/2024.semeval-1.225.bib | https://aclanthology.org/2024.semeval-1.225/ | @inproceedings{sanayei-etal-2024-maria,
title = "{MAR}i{A} at {S}em{E}val 2024 Task-6: Hallucination Detection Through {LLM}s, {MNLI}, and Cosine similarity",
author = "Sanayei, Reza and
Singh, Abhyuday and
Rezaei, Mohammadhossein and
Bethard, Steven",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.225",
doi = "10.18653/v1/2024.semeval-1.225",
pages = "1584--1588",
abstract = "The advent of large language models (LLMs) has revolutionized Natural Language Generation (NLG), offering unmatched text generation capabilities. However, this progress introduces significant challenges, notably hallucinations{---}semantically incorrect yet fluent outputs. This phenomenon undermines content reliability, as traditional detection systems focus more on fluency than accuracy, posing a risk of misinformation spread.Our study addresses these issues by proposing a unified strategy for detecting hallucinations in neural model-generated text, focusing on the SHROOM task in SemEval 2024. We employ diverse methodologies to identify output divergence from the source content. We utilized Sentence Transformers to measure cosine similarity between source-hypothesis and source-target embeddings, experimented with omitting source content in the cosine similarity computations, and Leveragied LLMs{'} In-Context Learning with detailed task prompts as our methodologies. The varying performance of our different approaches across the subtasks underscores the complexity of Natural Language Understanding tasks, highlighting the importance of addressing the nuances of semantic correctness in the era of advanced language models.",
}
| The advent of large language models (LLMs) has revolutionized Natural Language Generation (NLG), offering unmatched text generation capabilities. However, this progress introduces significant challenges, notably hallucinations{---}semantically incorrect yet fluent outputs. This phenomenon undermines content reliability, as traditional detection systems focus more on fluency than accuracy, posing a risk of misinformation spread. Our study addresses these issues by proposing a unified strategy for detecting hallucinations in neural model-generated text, focusing on the SHROOM task in SemEval 2024. We employ diverse methodologies to identify output divergence from the source content: we utilized Sentence Transformers to measure cosine similarity between source-hypothesis and source-target embeddings, experimented with omitting source content in the cosine similarity computations, and leveraged LLMs{'} in-context learning with detailed task prompts. The varying performance of our different approaches across the subtasks underscores the complexity of Natural Language Understanding tasks, highlighting the importance of addressing the nuances of semantic correctness in the era of advanced language models. | [
"Sanayei, Reza",
"Singh, Abhyuday",
"Rezaei, Mohammadhossein",
"Bethard, Steven"
] | MARiA at SemEval 2024 Task-6: Hallucination Detection Through LLMs, MNLI, and Cosine similarity | semeval-1.225 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
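The cosine-similarity methodology in the MARiA abstract above reduces to a few lines with the Sentence Transformers library. The checkpoint and threshold below are illustrative assumptions; in practice the threshold would be tuned on labelled trial data.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")
source = "The committee approved the budget on Monday."
hypothesis = "The budget was approved by the committee."

# Embed both texts and compare them in embedding space.
emb = model.encode([source, hypothesis], convert_to_tensor=True)
similarity = util.cos_sim(emb[0], emb[1]).item()

THRESHOLD = 0.6  # illustrative; tuned on the labelled trial split in practice
is_hallucination = similarity < THRESHOLD
```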
https://aclanthology.org/2024.semeval-1.226.bib | https://aclanthology.org/2024.semeval-1.226/ | @inproceedings{luo-etal-2024-nus,
title = "{NUS}-Emo at {S}em{E}val-2024 Task 3: Instruction-Tuning {LLM} for Multimodal Emotion-Cause Analysis in Conversations",
author = "Luo, Meng and
Zhang, Han and
Wu, Shengqiong and
Li, Bobo and
Han, Hong and
Fei, Hao",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.226",
doi = "10.18653/v1/2024.semeval-1.226",
pages = "1589--1596",
abstract = "This paper describes the architecture of our system developed for participation in Task 3 of SemEval-2024: Multimodal Emotion-Cause Analysis in Conversations. Our project targets the challenges of subtask 2, dedicated to Multimodal Emotion-Cause Pair Extraction with Emotion Category (MECPE-Cat), and constructs a dual-component system tailored to the unique challenges of this task. We divide the task into two subtasks: emotion recognition in conversation (ERC) and emotion-cause pair extraction (ECPE). To address these subtasks, we capitalize on the abilities of Large Language Models (LLMs), which have consistently demonstrated state-of-the-art performance across various natural language processing tasks and domains. Most importantly, we design an approach of emotion-cause-aware instruction-tuning for LLMs, to enhance the perception of the emotions with their corresponding causal rationales. Our method enables us to adeptly navigate the complexities of MECPE-Cat, achieving an average 34.71{\%} F1 score of the task, and securing the 2nd rank on the leaderboard. The code and metadata to reproduce our experiments are all made publicly available.",
}
| This paper describes the architecture of our system developed for participation in Task 3 of SemEval-2024: Multimodal Emotion-Cause Analysis in Conversations. Our project targets the challenges of subtask 2, dedicated to Multimodal Emotion-Cause Pair Extraction with Emotion Category (MECPE-Cat), and constructs a dual-component system tailored to the unique challenges of this task. We divide the task into two subtasks: emotion recognition in conversation (ERC) and emotion-cause pair extraction (ECPE). To address these subtasks, we capitalize on the abilities of Large Language Models (LLMs), which have consistently demonstrated state-of-the-art performance across various natural language processing tasks and domains. Most importantly, we design an approach of emotion-cause-aware instruction-tuning for LLMs, to enhance the perception of the emotions with their corresponding causal rationales. Our method enables us to adeptly navigate the complexities of MECPE-Cat, achieving an average F1 score of 34.71{\%} on the task, and securing the 2nd rank on the leaderboard. The code and metadata to reproduce our experiments are all made publicly available. | [
"Luo, Meng",
"Zhang, Han",
"Wu, Shengqiong",
"Li, Bobo",
"Han, Hong",
"Fei, Hao"
] | NUS-Emo at SemEval-2024 Task 3: Instruction-Tuning LLM for Multimodal Emotion-Cause Analysis in Conversations | semeval-1.226 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.227.bib | https://aclanthology.org/2024.semeval-1.227/ | @inproceedings{stuhlinger-winkler-2024-tuecicl,
title = "{T}ue{CICL} at {S}em{E}val-2024 Task 8: Resource-efficient approaches for machine-generated text detection",
author = "Stuhlinger, Daniel and
Winkler, Aron",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.227",
doi = "10.18653/v1/2024.semeval-1.227",
pages = "1597--1601",
abstract = "Recent developments in the field of NLP have brought large language models (LLMs) to the forefront of both public and research attention. As the use of language generation technologies becomes more widespread, the problem arises of determining whether a given text is machine generated or not. Task 8 at SemEval 2024 consists of a shared task with this exact objective. Our approach aims at developing models and strategies that strike a good balance between performance and model size. We show that it is possible to compete with large transformer-based solutions with smaller systems.",
}
| Recent developments in the field of NLP have brought large language models (LLMs) to the forefront of both public and research attention. As the use of language generation technologies becomes more widespread, the problem arises of determining whether a given text is machine generated or not. Task 8 at SemEval 2024 consists of a shared task with this exact objective. Our approach aims at developing models and strategies that strike a good balance between performance and model size. We show that it is possible to compete with large transformer-based solutions with smaller systems. | [
"Stuhlinger, Daniel",
"Winkler, Aron"
] | TueCICL at SemEval-2024 Task 8: Resource-efficient approaches for machine-generated text detection | semeval-1.227 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.228.bib | https://aclanthology.org/2024.semeval-1.228/ | @inproceedings{choi-na-2024-geminipro,
title = "{G}emini{P}ro at {S}em{E}val-2024 Task 9: {B}rain{T}easer on Gemini",
author = "Choi, Kyu Hyun and
Na, Seung-hoon",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.228",
doi = "10.18653/v1/2024.semeval-1.228",
pages = "1602--1606",
abstract = "It is known that human thought can be distinguished into lateral and vertical thinking. The development of language models has thus far been focused on evaluating and advancing vertical thinking, while lateral thinking has been somewhat neglected. To foster progress in this area, SemEval has created and distributed a brainteaser dataset based on lateral thinking consist of sentence puzzles and word puzzle QA. In this paper, we test and discuss the performance of the currently known best model, Gemini, on this dataset.",
}
| It is known that human thought can be distinguished into lateral and vertical thinking. The development of language models has thus far focused on evaluating and advancing vertical thinking, while lateral thinking has been somewhat neglected. To foster progress in this area, SemEval has created and distributed a brainteaser dataset based on lateral thinking, consisting of sentence-puzzle and word-puzzle QA. In this paper, we test and discuss the performance of the currently known best model, Gemini, on this dataset. | [
"Choi, Kyu Hyun",
"Na, Seung-hoon"
] | GeminiPro at SemEval-2024 Task 9: BrainTeaser on Gemini | semeval-1.228 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.semeval-1.229.bib | https://aclanthology.org/2024.semeval-1.229/ | @inproceedings{chlapanis-etal-2024-archimedes,
title = "Archimedes-{AUEB} at {S}em{E}val-2024 Task 5: {LLM} explains Civil Procedure",
author = "Chlapanis, Odysseas and
Androutsopoulos, Ion and
Galanis, Dimitrios",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Tayyar Madabushi, Harish and
Da San Martino, Giovanni and
Rosenthal, Sara and
Ros{\'a}, Aiala},
booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.semeval-1.229",
doi = "10.18653/v1/2024.semeval-1.229",
pages = "1607--1622",
abstract = "The SemEval task on Argument Reasoning in Civil Procedure is challenging in that it requires understanding legal concepts and inferring complex arguments. Currently, most Large Language Models (LLM) excelling in the legal realm are principally purposed for classification tasks, hence their reasoning rationale is subject to contention. The approach we advocate involves using a powerful teacher-LLM (ChatGPT) to extend the training dataset with explanations and generate synthetic data. The resulting data are then leveraged to fine-tune a small student-LLM. Contrary to previous work, our explanations are not directly derived from the teacher{'}s internal knowledge. Instead they are grounded in authentic human analyses, therefore delivering a superior reasoning signal. Additionally, a new {`}mutation{'} method generates artificial data instances inspired from existing ones. We are publicly releasing the explanations as an extension to the original dataset, along with the synthetic dataset and the prompts that were used to generate both. Our system ranked 15th in the SemEval competition. It outperforms its own teacher and can produce explanations aligned with the original human analyses, as verified by legal experts.",
}
| The SemEval task on Argument Reasoning in Civil Procedure is challenging in that it requires understanding legal concepts and inferring complex arguments. Currently, most Large Language Models (LLMs) excelling in the legal realm are principally purposed for classification tasks, hence their reasoning rationale is subject to contention. The approach we advocate involves using a powerful teacher-LLM (ChatGPT) to extend the training dataset with explanations and generate synthetic data. The resulting data are then leveraged to fine-tune a small student-LLM. Contrary to previous work, our explanations are not directly derived from the teacher{'}s internal knowledge. Instead, they are grounded in authentic human analyses, therefore delivering a superior reasoning signal. Additionally, a new {`}mutation{'} method generates artificial data instances inspired by existing ones. We are publicly releasing the explanations as an extension to the original dataset, along with the synthetic dataset and the prompts that were used to generate both. Our system ranked 15th in the SemEval competition. It outperforms its own teacher and can produce explanations aligned with the original human analyses, as verified by legal experts. | [
"Chlapanis, Odysseas",
"Androutsopoulos, Ion",
"Galanis, Dimitrios"
] | Archimedes-AUEB at SemEval-2024 Task 5: LLM explains Civil Procedure | semeval-1.229 | Poster | 2405.08502 | [
"https://github.com/nlpaueb/multiple-choice-mutation"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |