Datasets:

Schema (field: type, observed range):

bibtex_url: string, length 41–52
proceedings: string, length 38–49
bibtext: string, length 788–3.49k
abstract: string, length 0–2.12k
authors: sequence, length 1–58
title: string, length 16–181
id: string, length 7–18
type: string, 2 classes
arxiv_id: string, length 0–10
GitHub: sequence, length 1–1
paper_page: string, 170 classes
n_linked_authors: int64, -1 to 9
upvotes: int64, -1 to 56
num_comments: int64, -1 to 9
n_authors: int64, -1 to 57
paper_page_exists_pre_conf: int64, 0 to 1
Models: sequence, length 0–99
Datasets: sequence, length 0–5
Spaces: sequence, length 0–57
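The records that follow use this schema, one entry per Findings of NAACL 2024 paper. Below is a minimal sketch of how such a table could be loaded and queried with the Hugging Face `datasets` library; the repository ID is a hypothetical placeholder, and treating an absent paper page as an empty string (with -1 sentinels in the integer statistics) is an assumption inferred from the records shown here.

```python
# Minimal sketch: load the metadata and separate papers that have a Hugging Face
# paper page from those that do not. The repo ID is a hypothetical placeholder.
from datasets import load_dataset

ds = load_dataset("example-org/naacl-2024-findings-metadata", split="train")  # hypothetical ID
print(ds.column_names)  # should match the schema listed above

# Assumption: records without a paper page store an empty string in `paper_page`
# and -1 in the integer statistics (upvotes, n_linked_authors, num_comments, n_authors).
with_page = ds.filter(lambda row: row["paper_page"] != "" and row["upvotes"] >= 0)
print(f"{len(with_page)} of {len(ds)} papers have a linked paper page")
```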
https://aclanthology.org/2024.findings-naacl.239.bib
https://aclanthology.org/2024.findings-naacl.239/
@inproceedings{nebbia-kovashka-2024-synonym, title = "Synonym relations affect object detection learned on vision-language data", author = "Nebbia, Giacomo and Kovashka, Adriana", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.239", doi = "10.18653/v1/2024.findings-naacl.239", pages = "3770--3776", abstract = "We analyze whether object detectors trained on vision-language data learn effective visual representations for synonyms. Since many current vision-language models accept user-provided textual input, we highlight the need for such models to learn feature representations that are robust to changes in how such input is provided. Specifically, we analyze changes in synonyms used to refer to objects. Here, we study object detectors trained on vision-language data and investigate how to make their performance less dependent on whether synonyms are used to refer to an object. We propose two approaches to achieve this goal: data augmentation by back-translation and class embedding enrichment. We show the promise of such approaches, reporting improved performance on synonyms from mAP@0.5=33.87{\%} to 37.93{\%}.", }
We analyze whether object detectors trained on vision-language data learn effective visual representations for synonyms. Since many current vision-language models accept user-provided textual input, we highlight the need for such models to learn feature representations that are robust to changes in how such input is provided. Specifically, we analyze changes in synonyms used to refer to objects. Here, we study object detectors trained on vision-language data and investigate how to make their performance less dependent on whether synonyms are used to refer to an object. We propose two approaches to achieve this goal: data augmentation by back-translation and class embedding enrichment. We show the promise of such approaches, reporting improved performance on synonyms from mAP@0.5=33.87% to 37.93%.
[ "Nebbia, Giacomo", "Kovashka, Adriana" ]
Synonym relations affect object detection learned on vision-language data
findings-naacl.239
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.240.bib
https://aclanthology.org/2024.findings-naacl.240/
@inproceedings{li-etal-2024-cm, title = "{CM}-{TTS}: Enhancing Real Time Text-to-Speech Synthesis Efficiency through Weighted Samplers and Consistency Models", author = "Li, Xiang and FanBu, FanBu and Mehrish, Ambuj and Li, Yingting and Han, Jiale and Cheng, Bo and Poria, Soujanya", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.240", doi = "10.18653/v1/2024.findings-naacl.240", pages = "3777--3794", abstract = "Neural Text-to-Speech (TTS) systems find broad applications in voice assistants, e-learning, and audiobook creation. The pursuit of modern models, like Diffusion Models (DMs), holds promise for achieving high-fidelity, real-time speech synthesis. Yet, the efficiency of multi-step sampling in Diffusion Models presents challenges. Efforts have been made to integrate GANs with DMs, speeding up inference by approximating denoising distributions, but this introduces issues with model convergence due to adversarial training. To overcome this, we introduce CM-TTS, a novel architecture grounded in consistency models (CMs). Drawing inspiration from continuous-time diffusion models, CM-TTS achieves top-quality speech synthesis in fewer steps without adversarial training or pre-trained model dependencies. We further design weighted samplers to incorporate different sampling positions into model training with dynamic probabilities, ensuring unbiased learning throughout the entire training process. We present a real-time mel-spectrogram generation consistency model, validated through comprehensive evaluations. Experimental results underscore CM-TTS{'}s superiority over existing single-step speech synthesis systems, representing a significant advancement in the field.", }
Neural Text-to-Speech (TTS) systems find broad applications in voice assistants, e-learning, and audiobook creation. The pursuit of modern models, like Diffusion Models (DMs), holds promise for achieving high-fidelity, real-time speech synthesis. Yet, the efficiency of multi-step sampling in Diffusion Models presents challenges. Efforts have been made to integrate GANs with DMs, speeding up inference by approximating denoising distributions, but this introduces issues with model convergence due to adversarial training. To overcome this, we introduce CM-TTS, a novel architecture grounded in consistency models (CMs). Drawing inspiration from continuous-time diffusion models, CM-TTS achieves top-quality speech synthesis in fewer steps without adversarial training or pre-trained model dependencies. We further design weighted samplers to incorporate different sampling positions into model training with dynamic probabilities, ensuring unbiased learning throughout the entire training process. We present a real-time mel-spectrogram generation consistency model, validated through comprehensive evaluations. Experimental results underscore CM-TTS's superiority over existing single-step speech synthesis systems, representing a significant advancement in the field.
[ "Li, Xiang", "FanBu, FanBu", "Mehrish, Ambuj", "Li, Yingting", "Han, Jiale", "Cheng, Bo", "Poria, Soujanya" ]
CM-TTS: Enhancing Real Time Text-to-Speech Synthesis Efficiency through Weighted Samplers and Consistency Models
findings-naacl.240
Poster
2404.00569
[ "https://github.com/xiangli2022/cm-tts" ]
https://huggingface.co/papers/2404.00569
0
0
0
7
1
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.241.bib
https://aclanthology.org/2024.findings-naacl.241/
@inproceedings{rafiei-asl-etal-2024-robustsentembed, title = "{R}obust{S}ent{E}mbed: Robust Sentence Embeddings Using Adversarial Self-Supervised Contrastive Learning", author = "Rafiei Asl, Javad and Panzade, Prajwal and Blanco, Eduardo and Takabi, Daniel and Cai, Zhipeng", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.241", doi = "10.18653/v1/2024.findings-naacl.241", pages = "3795--3809", abstract = "Pre-trained language models (PLMs) have consistently demonstrated outstanding performance across a diverse spectrum of natural language processing tasks. Nevertheless, despite their success with unseen data, current PLM-based representations often exhibit poor robustness in adversarial settings. In this paper, we introduce RobustSentEmbed, a self-supervised sentence embedding framework designed to improve both generalization and robustness in diverse text representation tasks and against a diverse set of adversarial attacks. Through the generation of high-risk adversarial perturbations and their utilization in a novel objective function, RobustSentEmbed adeptly learns high-quality and robust sentence embeddings. Our experiments confirm the superiority of RobustSentEmbed over state-of-the-art representations. Specifically, Our framework achieves a significant reduction in the success rate of various adversarial attacks, notably reducing the BERTAttack success rate by almost half (from 75.51{\%} to 38.81{\%}). The framework also yields improvements of 1.59{\%} and 0.23{\%} in semantic textual similarity tasks and various transfer tasks, respectively.", }
Pre-trained language models (PLMs) have consistently demonstrated outstanding performance across a diverse spectrum of natural language processing tasks. Nevertheless, despite their success with unseen data, current PLM-based representations often exhibit poor robustness in adversarial settings. In this paper, we introduce RobustSentEmbed, a self-supervised sentence embedding framework designed to improve both generalization and robustness in diverse text representation tasks and against a diverse set of adversarial attacks. Through the generation of high-risk adversarial perturbations and their utilization in a novel objective function, RobustSentEmbed adeptly learns high-quality and robust sentence embeddings. Our experiments confirm the superiority of RobustSentEmbed over state-of-the-art representations. Specifically, our framework achieves a significant reduction in the success rate of various adversarial attacks, notably reducing the BERTAttack success rate by almost half (from 75.51% to 38.81%). The framework also yields improvements of 1.59% and 0.23% in semantic textual similarity tasks and various transfer tasks, respectively.
[ "Rafiei Asl, Javad", "Panzade, Prajwal", "Blanco, Eduardo", "Takabi, Daniel", "Cai, Zhipeng" ]
RobustSentEmbed: Robust Sentence Embeddings Using Adversarial Self-Supervised Contrastive Learning
findings-naacl.241
Poster
2403.11082
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.242.bib
https://aclanthology.org/2024.findings-naacl.242/
@inproceedings{mcknight-fyshe-2024-characterizing, title = "Characterizing Human and Zero-Shot {GPT}-3.5 Object-Similarity Judgments", author = "McKnight, D and Fyshe, Alona", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.242", doi = "10.18653/v1/2024.findings-naacl.242", pages = "3810--3828", abstract = "Recent advancements in large language models{'} (LLMs) capabilities have yielded few-shot, human-comparable performance on a range of tasks. At the same time, researchers expend significant effort and resources gathering human annotations. At some point, LLMs may be able to perform some simple annotation tasks, but studies of LLM annotation accuracy and behavior are sparse. In this paper, we characterize OpenAI{'}s GPT-3.5{'}s judgment on a behavioral task for implicit object categorization. We characterize the embedding spaces of models trained on human vs. GPT responses and give similarities and differences between them, finding many similar dimensions. We also find that despite these similar dimensions, augmenting humans{'} responses with GPT ones drives model divergence across the sizes of datasets tested.", }
Recent advancements in large language models' (LLMs) capabilities have yielded few-shot, human-comparable performance on a range of tasks. At the same time, researchers expend significant effort and resources gathering human annotations. At some point, LLMs may be able to perform some simple annotation tasks, but studies of LLM annotation accuracy and behavior are sparse. In this paper, we characterize OpenAI's GPT-3.5's judgment on a behavioral task for implicit object categorization. We characterize the embedding spaces of models trained on human vs. GPT responses and give similarities and differences between them, finding many similar dimensions. We also find that despite these similar dimensions, augmenting humans' responses with GPT ones drives model divergence across the sizes of datasets tested.
[ "McKnight, D", "Fyshe, Alona" ]
Characterizing Human and Zero-Shot GPT-3.5 Object-Similarity Judgments
findings-naacl.242
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.243.bib
https://aclanthology.org/2024.findings-naacl.243/
@inproceedings{he-etal-2024-self, title = "Self-Demos: Eliciting Out-of-Demonstration Generalizability in Large Language Models", author = "He, Wei and Liu, Shichun and Zhao, Jun and Ding, Yiwen and Lu, Yi and Xi, Zhiheng and Gui, Tao and Zhang, Qi and Huang, Xuanjing", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.243", doi = "10.18653/v1/2024.findings-naacl.243", pages = "3829--3845", abstract = "Large language models (LLMs) have shown promising abilities of in-context learning (ICL), adapting swiftly to new tasks with only few-shot demonstrations. However, current few-shot methods heavily depend on high-quality, query-specific demos, which are often lacking. When faced with out-of-demonstration (OOD) queries, methods that rely on hand-crafted demos or external retrievers might fail. To bridge the gap between limited demos and OOD queries, we propose Self-Demos, a novel prompting method that elicits the inherent generalizability in LLMs by query-aware demo generation. The generated demos strategically interpolate between existing demos and the given query, transforming the query from OOD to ID. To evaluate the effectiveness of our approach, we manually constructed OOD-Toolset, a dataset in the tool-using scenario with over 300 real-world APIs and 1000 instances, each consisting of three tool-use cases as demos and an OOD query. Thorough experiments on our dataset and two public math benchmarks have shown that our method can outperform state-of-the-art baselines in the OOD setting. Moreover, we conduct a range of analyses to validate Self-Demos{'}s generalization and provide more insights.", }
Large language models (LLMs) have shown promising abilities of in-context learning (ICL), adapting swiftly to new tasks with only few-shot demonstrations. However, current few-shot methods heavily depend on high-quality, query-specific demos, which are often lacking. When faced with out-of-demonstration (OOD) queries, methods that rely on hand-crafted demos or external retrievers might fail. To bridge the gap between limited demos and OOD queries, we propose Self-Demos, a novel prompting method that elicits the inherent generalizability in LLMs by query-aware demo generation. The generated demos strategically interpolate between existing demos and the given query, transforming the query from OOD to ID. To evaluate the effectiveness of our approach, we manually constructed OOD-Toolset, a dataset in the tool-using scenario with over 300 real-world APIs and 1000 instances, each consisting of three tool-use cases as demos and an OOD query. Thorough experiments on our dataset and two public math benchmarks have shown that our method can outperform state-of-the-art baselines in the OOD setting. Moreover, we conduct a range of analyses to validate Self-Demos's generalization and provide more insights.
[ "He, Wei", "Liu, Shichun", "Zhao, Jun", "Ding, Yiwen", "Lu, Yi", "Xi, Zhiheng", "Gui, Tao", "Zhang, Qi", "Huang, Xuanjing" ]
Self-Demos: Eliciting Out-of-Demonstration Generalizability in Large Language Models
findings-naacl.243
Poster
2404.00884
[ "https://github.com/hewei2001/Self-Demos" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.244.bib
https://aclanthology.org/2024.findings-naacl.244/
@inproceedings{fang-etal-2024-getting, title = "Getting Sick After Seeing a Doctor? Diagnosing and Mitigating Knowledge Conflicts in Event Temporal Reasoning", author = "Fang, Tianqing and Wang, Zhaowei and Zhou, Wenxuan and Zhang, Hongming and Song, Yangqiu and Chen, Muhao", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.244", doi = "10.18653/v1/2024.findings-naacl.244", pages = "3846--3868", abstract = "Event temporal reasoning aims at identifying the temporal relations between two or more events from narratives. However, knowledge conflicts arise when there is a mismatch between the actual temporal relations of events in the context and the prior knowledge or biases learned by the model. In this paper, we propose to detect knowledge-conflict examples in event temporal reasoning using bias indicators, which include event relation prior bias, tense bias, narrative bias, and dependency bias. We define conflict examples as those where event relations are opposite to biased or prior relations. To mitigate event-related knowledge conflicts, we introduce a Counterfactual Data Augmentation (CDA) based method that can be applied to both Pre-trained Language Models (PLMs) and Large Language Models (LLMs) either as additional training data or demonstrations for In- Context Learning. Experiments suggest both PLMs and LLMs suffer from knowledge conflicts in event temporal reasoning, and CDA has the potential for reducing hallucination and improving model performance.", }
Event temporal reasoning aims at identifying the temporal relations between two or more events from narratives. However, knowledge conflicts arise when there is a mismatch between the actual temporal relations of events in the context and the prior knowledge or biases learned by the model. In this paper, we propose to detect knowledge-conflict examples in event temporal reasoning using bias indicators, which include event relation prior bias, tense bias, narrative bias, and dependency bias. We define conflict examples as those where event relations are opposite to biased or prior relations. To mitigate event-related knowledge conflicts, we introduce a Counterfactual Data Augmentation (CDA)-based method that can be applied to both Pre-trained Language Models (PLMs) and Large Language Models (LLMs) either as additional training data or demonstrations for In-Context Learning. Experiments suggest both PLMs and LLMs suffer from knowledge conflicts in event temporal reasoning, and CDA has the potential for reducing hallucination and improving model performance.
[ "Fang, Tianqing", "Wang, Zhaowei", "Zhou, Wenxuan", "Zhang, Hongming", "Song, Yangqiu", "Chen, Muhao" ]
Getting Sick After Seeing a Doctor? Diagnosing and Mitigating Knowledge Conflicts in Event Temporal Reasoning
findings-naacl.244
Poster
2305.14970
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.245.bib
https://aclanthology.org/2024.findings-naacl.245/
@inproceedings{pouran-ben-veyseh-etal-2024-mcecr, title = "{MCECR}: A Novel Dataset for Multilingual Cross-Document Event Coreference Resolution", author = "Pouran Ben Veyseh, Amir and Lai, Viet and Nguyen, Chien and Dernoncourt, Franck and Nguyen, Thien", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.245", doi = "10.18653/v1/2024.findings-naacl.245", pages = "3869--3880", abstract = "Event coreference resolution (ECR) is a critical task in information extraction of natural language processing, aiming to identify and link event mentions across multiple documents. Despite recent progress, existing datasets for ECR primarily focus on within-document event coreference and English text, lacking cross-document ECR datasets for multiple languages beyond English. To address this issue, this work presents the first multiligual dataset for cross-document ECR, called MCECR (Multilingual Cross-Document Event Coreference Resolution), that manually annotates a diverse collection of documents for event mentions and coreference in five languages, i.e., English, Spanish, Hindi, Turkish, and Ukrainian. Using sampled articles from Wikinews over various topics as the seeds, our dataset fetches related news articles from the Google search engine to increase the number of non-singleton event clusters. In total, we annotate 5,802 news articles, providing a substantial and varied dataset for multilingual ECR in both within-document and cross-document scenarios. Extensive analysis of the proposed dataset reveals the challenging nature of multilingual event coreference resolution tasks, promoting MCECR as a strong benchmark dataset for future research in this area.", }
Event coreference resolution (ECR) is a critical information extraction task in natural language processing, aiming to identify and link event mentions across multiple documents. Despite recent progress, existing datasets for ECR primarily focus on within-document event coreference and English text, lacking cross-document ECR datasets for multiple languages beyond English. To address this issue, this work presents the first multilingual dataset for cross-document ECR, called MCECR (Multilingual Cross-Document Event Coreference Resolution), that manually annotates a diverse collection of documents for event mentions and coreference in five languages, i.e., English, Spanish, Hindi, Turkish, and Ukrainian. Using sampled articles from Wikinews over various topics as the seeds, our dataset fetches related news articles from the Google search engine to increase the number of non-singleton event clusters. In total, we annotate 5,802 news articles, providing a substantial and varied dataset for multilingual ECR in both within-document and cross-document scenarios. Extensive analysis of the proposed dataset reveals the challenging nature of multilingual event coreference resolution tasks, promoting MCECR as a strong benchmark dataset for future research in this area.
[ "Pouran Ben Veyseh, Amir", "Lai, Viet", "Nguyen, Chien", "Dernoncourt, Franck", "Nguyen, Thien" ]
MCECR: A Novel Dataset for Multilingual Cross-Document Event Coreference Resolution
findings-naacl.245
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.246.bib
https://aclanthology.org/2024.findings-naacl.246/
@inproceedings{zhang-etal-2024-sentiment, title = "Sentiment Analysis in the Era of Large Language Models: A Reality Check", author = "Zhang, Wenxuan and Deng, Yue and Liu, Bing and Pan, Sinno and Bing, Lidong", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.246", doi = "10.18653/v1/2024.findings-naacl.246", pages = "3881--3906", abstract = "Sentiment analysis (SA) has been a long-standing research area in natural language processing. With the recent advent of large language models (LLMs), there is great potential for their employment on SA problems. However, the extent to which current LLMs can be leveraged for different sentiment analysis tasks remains unclear. This paper aims to provide a comprehensive investigation into the capabilities of LLMs in performing various sentiment analysis tasks, from conventional sentiment classification to aspect-based sentiment analysis and multifaceted analysis of subjective texts. We evaluate performance across 13 tasks on 26 datasets and compare the results against small language models (SLMs) trained on domain-specific datasets. Our study reveals that while LLMs demonstrate satisfactory performance in simpler tasks, they lag behind in more complex tasks requiring a deeper understanding of specific sentiment phenomena or structured sentiment information. However, LLMs significantly outperform SLMs in few-shot learning settings, suggesting their potential when annotation resources are limited. We also highlight the limitations of current evaluation practices in assessing LLMs{'} SA abilities and propose a novel benchmark, SentiEval, for a more comprehensive and realistic evaluation. Data and code are available at \url{https://github.com/DAMO-NLP-SG/LLM-Sentiment}.", }
Sentiment analysis (SA) has been a long-standing research area in natural language processing. With the recent advent of large language models (LLMs), there is great potential for their employment on SA problems. However, the extent to which current LLMs can be leveraged for different sentiment analysis tasks remains unclear. This paper aims to provide a comprehensive investigation into the capabilities of LLMs in performing various sentiment analysis tasks, from conventional sentiment classification to aspect-based sentiment analysis and multifaceted analysis of subjective texts. We evaluate performance across 13 tasks on 26 datasets and compare the results against small language models (SLMs) trained on domain-specific datasets. Our study reveals that while LLMs demonstrate satisfactory performance in simpler tasks, they lag behind in more complex tasks requiring a deeper understanding of specific sentiment phenomena or structured sentiment information. However, LLMs significantly outperform SLMs in few-shot learning settings, suggesting their potential when annotation resources are limited. We also highlight the limitations of current evaluation practices in assessing LLMs' SA abilities and propose a novel benchmark, SentiEval, for a more comprehensive and realistic evaluation. Data and code are available at https://github.com/DAMO-NLP-SG/LLM-Sentiment.
[ "Zhang, Wenxuan", "Deng, Yue", "Liu, Bing", "Pan, Sinno", "Bing, Lidong" ]
Sentiment Analysis in the Era of Large Language Models: A Reality Check
findings-naacl.246
Poster
2305.15005
[ "https://github.com/damo-nlp-sg/llm-sentiment" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.247.bib
https://aclanthology.org/2024.findings-naacl.247/
@inproceedings{ali-etal-2024-tokenizer, title = "Tokenizer Choice For {LLM} Training: Negligible or Crucial?", author = {Ali, Mehdi and Fromm, Michael and Thellmann, Klaudia and Rutmann, Richard and L{\"u}bbering, Max and Leveling, Johannes and Klug, Katrin and Ebert, Jan and Doll, Niclas and Buschhoff, Jasper and Jain, Charvi and Weber, Alexander and Jurkschat, Lena and Abdelwahab, Hammam and John, Chelsea and Ortiz Suarez, Pedro and Ostendorff, Malte and Weinbach, Samuel and Sifa, Rafet and Kesselheim, Stefan and Flores-Herr, Nicolas}, editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.247", doi = "10.18653/v1/2024.findings-naacl.247", pages = "3907--3924", abstract = "The recent success of large language models (LLMs) has been predominantly driven by curating the training dataset composition, scaling of model architectures and dataset sizes and advancements in pretraining objectives, leaving tokenizer influence as a blind spot.Shedding light on this underexplored area, we conduct a comprehensive study on the influence of tokenizer choice on LLM downstream performance by training 24 mono- and multilingual LLMs at a 2.6B parameter scale, ablating different tokenizer algorithms and parameterizations. Our studies highlight that the tokenizer choice can significantly impact the model{'}s downstream performance and training costs. In particular, we find that the common tokenizer evaluation metrics fertility and parity are not always predictive of model downstream performance, rendering these metrics a questionable proxy for the model{'}s downstream performance. Furthermore, we show that multilingual tokenizers trained on the five most frequent European languages require vocabulary size increases of factor three in comparison to English. While English-centric tokenizers have been applied to the training of multi-lingual LLMs in the past, we find that this approach results in a severe downstream performance degradation and additional training costs of up to 68{\%}, due to an inefficient tokenization vocabulary.", }
The recent success of large language models (LLMs) has been predominantly driven by curating the training dataset composition, scaling of model architectures and dataset sizes, and advancements in pretraining objectives, leaving tokenizer influence as a blind spot. Shedding light on this underexplored area, we conduct a comprehensive study on the influence of tokenizer choice on LLM downstream performance by training 24 mono- and multilingual LLMs at a 2.6B parameter scale, ablating different tokenizer algorithms and parameterizations. Our studies highlight that the tokenizer choice can significantly impact the model's downstream performance and training costs. In particular, we find that the common tokenizer evaluation metrics fertility and parity are not always predictive of model downstream performance, rendering these metrics a questionable proxy for the model's downstream performance. Furthermore, we show that multilingual tokenizers trained on the five most frequent European languages require a vocabulary size increase by a factor of three in comparison to English. While English-centric tokenizers have been applied to the training of multilingual LLMs in the past, we find that this approach results in a severe downstream performance degradation and additional training costs of up to 68%, due to an inefficient tokenization vocabulary.
[ "Ali, Mehdi", "Fromm, Michael", "Thellmann, Klaudia", "Rutmann, Richard", "L{\\\"u}bbering, Max", "Leveling, Johannes", "Klug, Katrin", "Ebert, Jan", "Doll, Niclas", "Buschhoff, Jasper", "Jain, Charvi", "Weber, Alex", "er", "Jurkschat, Lena", "Abdelwahab, Hammam", "John, Chelsea", "Ortiz Suarez, Pedro", "Ostendorff, Malte", "Weinbach, Samuel", "Sifa, Rafet", "Kesselheim, Stefan", "Flores-Herr, Nicolas" ]
Tokenizer Choice For LLM Training: Negligible or Crucial?
findings-naacl.247
Poster
2310.08754
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.248.bib
https://aclanthology.org/2024.findings-naacl.248/
@inproceedings{zhou-etal-2024-think, title = "Think Before You Speak: Cultivating Communication Skills of Large Language Models via Inner Monologue", author = "Zhou, Junkai and Pang, Liang and Shen, Huawei and Cheng, Xueqi", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.248", doi = "10.18653/v1/2024.findings-naacl.248", pages = "3925--3951", abstract = "The emergence of large language models (LLMs) further improves the capabilities of open-domain dialogue systems and can generate fluent, coherent, and diverse responses. However, LLMs still lack a crucial ability: communication skills. This limitation renders them more like information seeking tools rather than anthropomorphic chatbots. Communication skills, such as topic transition, proactively asking questions, concept guidance, empathy, and summarising often should be taken into consideration, to make LLMs more anthropomorphic and proactive during the conversation, thereby increasing the interest of users and attracting them to chat for longer. However, enabling these communication skills in black-box LLMs remains a key challenge because they do not have the same utterance formation mode as real people: think before speaking. Inspired by linguistics and cognitive science, we empower LLMs with communication skills through inner monologues. To evaluate various communication skills, we construct a benchmark named Cskills, which can also more comprehensively evaluate the dialogue generation ability of the model. Experimental results show that the proposed CSIM strategy improves the backbone models and outperforms the baselines.", }
The emergence of large language models (LLMs) further improves the capabilities of open-domain dialogue systems and can generate fluent, coherent, and diverse responses. However, LLMs still lack a crucial ability: communication skills. This limitation renders them more like information-seeking tools than anthropomorphic chatbots. Communication skills, such as topic transition, proactively asking questions, concept guidance, empathy, and summarising, should often be taken into consideration to make LLMs more anthropomorphic and proactive during the conversation, thereby increasing the interest of users and attracting them to chat for longer. However, enabling these communication skills in black-box LLMs remains a key challenge because they do not have the same utterance formation mode as real people: think before speaking. Inspired by linguistics and cognitive science, we empower LLMs with communication skills through inner monologues. To evaluate various communication skills, we construct a benchmark named Cskills, which can also more comprehensively evaluate the dialogue generation ability of the model. Experimental results show that the proposed CSIM strategy improves the backbone models and outperforms the baselines.
[ "Zhou, Junkai", "Pang, Liang", "Shen, Huawei", "Cheng, Xueqi" ]
Think Before You Speak: Cultivating Communication Skills of Large Language Models via Inner Monologue
findings-naacl.248
Poster
2311.07445
[ "https://github.com/934865517zjk/csim" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.249.bib
https://aclanthology.org/2024.findings-naacl.249/
@inproceedings{hansen-etal-2024-impact, title = "The Impact of Differential Privacy on Group Disparity Mitigation", author = "Hansen, Victor and Neerkaje, Atula and Sawhney, Ramit and Flek, Lucie and S{\o}gaard, Anders", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.249", doi = "10.18653/v1/2024.findings-naacl.249", pages = "3952--3965", abstract = "The performance cost of differential privacy has, for some applications, been shown to be higher for minority groups; fairness, conversely, has been shown to disproportionally compromise the privacy of members of such groups. Most work in this area has been restricted to computer vision and risk assessment. In response, we evaluate the impact of differential privacy on fairness across four diverse tasks, focusing on how attempts to mitigate privacy violations and between-group performance differences interact: Does privacy inhibit attempts to ensure fairness? To this end, we train $(\varepsilon,\delta)$-differentially private models with empirical risk minimization and group distributionally robust training objectives. Consistent with previous findings, we find that differential privacy increases between-group performance differences in the baseline setting; more interestingly, differential privacy \textit{reduces} between-group performance differences in the robust setting. We explain this by interpreting differential privacy as regularization.", }
The performance cost of differential privacy has, for some applications, been shown to be higher for minority groups; fairness, conversely, has been shown to disproportionally compromise the privacy of members of such groups. Most work in this area has been restricted to computer vision and risk assessment. In response, we evaluate the impact of differential privacy on fairness across four diverse tasks, focusing on how attempts to mitigate privacy violations and between-group performance differences interact: Does privacy inhibit attempts to ensure fairness? To this end, we train (ε, δ)-differentially private models with empirical risk minimization and group distributionally robust training objectives. Consistent with previous findings, we find that differential privacy increases between-group performance differences in the baseline setting; more interestingly, differential privacy reduces between-group performance differences in the robust setting. We explain this by interpreting differential privacy as regularization.
[ "Hansen, Victor", "Neerkaje, Atula", "Sawhney, Ramit", "Flek, Lucie", "S{\\o}gaard, Anders" ]
The Impact of Differential Privacy on Group Disparity Mitigation
findings-naacl.249
Poster
2203.02745
[ "https://github.com/vpetren/fair_dp" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.250.bib
https://aclanthology.org/2024.findings-naacl.250/
@inproceedings{mhaskar-etal-2024-isometric, title = "Isometric Neural Machine Translation using Phoneme Count Ratio Reward-based Reinforcement Learning", author = "Mhaskar, Shivam and Shah, Nirmesh and Zaki, Mohammadi and Gudmalwar, Ashishkumar and Wasnik, Pankaj and Shah, Rajiv", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.250", doi = "10.18653/v1/2024.findings-naacl.250", pages = "3966--3976", abstract = "Traditional Automatic Video Dubbing (AVD) pipeline consists of three key modules, namely, Automatic Speech Recognition (ASR), Neural Machine Translation (NMT), and Text-to-Speech (TTS). Within AVD pipelines, isometric-NMT algorithms are employed to regulate the length of the synthesized output text. This is done to guarantee synchronization with respect to the alignment of video and audio subsequent to the dubbing process. Previous approaches have focused on aligning the number of characters and words in the source and target language texts of Machine Translation models. However, our approach aims to align the number of phonemes instead, as they are closely associated with speech duration. In this paper, we present the development of an isometric NMT system using Reinforcement Learning (RL), with a focus on optimizing the alignment of phoneme counts in the source and target language sentence pairs. To evaluate our models, we propose the Phoneme Count Compliance (PCC) score, which is a measure of length compliance. Our approach demonstrates a substantial improvement of approximately 36{\%} in the PCC score compared to the state-of-the-art models when applied to English-Hindi language pairs. Moreover, we propose a student-teacher architecture within the framework of our RL approach to maintain a trade-off between the phoneme count and translation quality.", }
The traditional Automatic Video Dubbing (AVD) pipeline consists of three key modules, namely, Automatic Speech Recognition (ASR), Neural Machine Translation (NMT), and Text-to-Speech (TTS). Within AVD pipelines, isometric-NMT algorithms are employed to regulate the length of the synthesized output text. This is done to guarantee synchronization with respect to the alignment of video and audio subsequent to the dubbing process. Previous approaches have focused on aligning the number of characters and words in the source and target language texts of Machine Translation models. However, our approach aims to align the number of phonemes instead, as they are closely associated with speech duration. In this paper, we present the development of an isometric NMT system using Reinforcement Learning (RL), with a focus on optimizing the alignment of phoneme counts in the source and target language sentence pairs. To evaluate our models, we propose the Phoneme Count Compliance (PCC) score, which is a measure of length compliance. Our approach demonstrates a substantial improvement of approximately 36% in the PCC score compared to the state-of-the-art models when applied to English-Hindi language pairs. Moreover, we propose a student-teacher architecture within the framework of our RL approach to maintain a trade-off between the phoneme count and translation quality.
[ "Mhaskar, Shivam", "Shah, Nirmesh", "Zaki, Mohammadi", "Gudmalwar, Ashishkumar", "Wasnik, Pankaj", "Shah, Rajiv" ]
Isometric Neural Machine Translation using Phoneme Count Ratio Reward-based Reinforcement Learning
findings-naacl.250
Poster
2403.15469
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.251.bib
https://aclanthology.org/2024.findings-naacl.251/
@inproceedings{kumar-etal-2024-read, title = "Read between the lines - Functionality Extraction From {README}s", author = "Kumar, Prince and Tamilselvam, Srikanth and Garg, Dinesh", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.251", doi = "10.18653/v1/2024.findings-naacl.251", pages = "3977--3990", abstract = "While text summarization is a well-known NLP task, in this paper, we introduce a novel and useful variant of it called functionality extraction from Git README files. Though this task is a text2text generation at an abstract level, it involves its own peculiarities and challenges making existing text2text generation systems not very useful. The motivation behind this task stems from a recent surge in research and development activities around the use of large language models for code-related tasks, such as code refactoring, code summarization, etc. We also release a human-annotated dataset called FuncRead, and develop a battery of models for the task. Our exhaustive experimentation shows that small size fine-tuned models beat any baseline models that can be designed using popular black-box or white-box large language models (LLMs) such as ChatGPT and Bard. Our best fine-tuned 7 Billion CodeLlama model exhibit 70{\%} and 20{\%} gain on the F1 score against ChatGPT and Bard respectively.", }
While text summarization is a well-known NLP task, in this paper, we introduce a novel and useful variant of it called functionality extraction from Git README files. Though this task is text2text generation at an abstract level, it involves its own peculiarities and challenges, making existing text2text generation systems not very useful. The motivation behind this task stems from a recent surge in research and development activities around the use of large language models for code-related tasks, such as code refactoring, code summarization, etc. We also release a human-annotated dataset called FuncRead, and develop a battery of models for the task. Our exhaustive experimentation shows that small fine-tuned models beat any baseline models that can be designed using popular black-box or white-box large language models (LLMs) such as ChatGPT and Bard. Our best fine-tuned 7-billion-parameter CodeLlama model exhibits 70% and 20% gains in F1 score over ChatGPT and Bard, respectively.
[ "Kumar, Prince", "Tamilselvam, Srikanth", "Garg, Dinesh" ]
Read between the lines - Functionality Extraction From READMEs
findings-naacl.251
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.252.bib
https://aclanthology.org/2024.findings-naacl.252/
@inproceedings{wang-etal-2024-abspyramid, title = "{A}bs{P}yramid: Benchmarking the Abstraction Ability of Language Models with a Unified Entailment Graph", author = "Wang, Zhaowei and Shi, Haochen and Wang, Weiqi and Fang, Tianqing and Zhang, Hongming and Choi, Sehyun and Liu, Xin and Song, Yangqiu", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.252", doi = "10.18653/v1/2024.findings-naacl.252", pages = "3991--4010", abstract = "Cognitive research indicates that abstraction ability is essential in human intelligence, which remains under-explored in language models. In this paper, we present AbsPyramid, a unified entailment graph of 221K textual descriptions of abstraction knowledge. While existing resources only touch nouns or verbs within simplified events or specific domains, AbsPyramid collects abstract knowledge for three components of diverse events to comprehensively evaluate the abstraction ability of language models in the open domain. Experimental results demonstrate that current LLMs face challenges comprehending abstraction knowledge in zero-shot and few-shot settings. By training on our rich abstraction knowledge, we find LLMs can acquire basic abstraction abilities and generalize to unseen events. In the meantime, we empirically show that our benchmark is comprehensive to enhance LLMs across two previous abstraction tasks.", }
Cognitive research indicates that abstraction ability is essential in human intelligence, yet it remains under-explored in language models. In this paper, we present AbsPyramid, a unified entailment graph of 221K textual descriptions of abstraction knowledge. While existing resources only touch nouns or verbs within simplified events or specific domains, AbsPyramid collects abstract knowledge for three components of diverse events to comprehensively evaluate the abstraction ability of language models in the open domain. Experimental results demonstrate that current LLMs face challenges comprehending abstraction knowledge in zero-shot and few-shot settings. By training on our rich abstraction knowledge, we find LLMs can acquire basic abstraction abilities and generalize to unseen events. In the meantime, we empirically show that our benchmark is comprehensive enough to enhance LLMs across two previous abstraction tasks.
[ "Wang, Zhaowei", "Shi, Haochen", "Wang, Weiqi", "Fang, Tianqing", "Zhang, Hongming", "Choi, Sehyun", "Liu, Xin", "Song, Yangqiu" ]
AbsPyramid: Benchmarking the Abstraction Ability of Language Models with a Unified Entailment Graph
findings-naacl.252
Poster
2311.09174
[ "https://github.com/hkust-knowcomp/abspyramid" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.253.bib
https://aclanthology.org/2024.findings-naacl.253/
@inproceedings{lahiri-etal-2024-tk, title = "Few-{TK}: A Dataset for Few-shot Scientific Typed Keyphrase Recognition", author = "Lahiri, Avishek and Sarkar, Pratyay and Sen, Medha and Sanyal, Debarshi Kumar and Mukherjee, Imon", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.253", doi = "10.18653/v1/2024.findings-naacl.253", pages = "4011--4025", abstract = "Scientific texts are distinctive from ordinary texts in quite a few aspects like their vocabulary and discourse structure. Consequently, Information Extraction (IE) tasks for scientific texts come with their own set of challenges. The classical definition of Named Entities restricts the inclusion of all scientific terms under its hood, which is why previous works have used the terms Named Entities and Keyphrases interchangeably. We suggest the rechristening of Named Entities for the scientific domain as Typed Keyphrases (TK), broadening their scope. We advocate for exploring this task in the few-shot domain due to the scarcity of labeled scientific IE data. Currently, no dataset exists for few-shot scientific Typed Keyphrase Recognition. To address this gap, we develop an annotation schema and present Few-TK, a dataset in the AI/ML field that includes scientific Typed Keyphrase annotations on abstracts of 500 research papers. To the best of our knowledge, this is the introductory few-shot Typed Keyphrase recognition dataset and only the second dataset structured specifically for few-shot NER, after Few-NERD. We report the results of several few-shot sequence-labelling models applied to our dataset. The data and code are available at https://github.com/AvishekLahiri/Few{\_}TK.git", }
Scientific texts are distinctive from ordinary texts in quite a few aspects like their vocabulary and discourse structure. Consequently, Information Extraction (IE) tasks for scientific texts come with their own set of challenges. The classical definition of Named Entities restricts the inclusion of all scientific terms under its hood, which is why previous works have used the terms Named Entities and Keyphrases interchangeably. We suggest the rechristening of Named Entities for the scientific domain as Typed Keyphrases (TK), broadening their scope. We advocate for exploring this task in the few-shot domain due to the scarcity of labeled scientific IE data. Currently, no dataset exists for few-shot scientific Typed Keyphrase Recognition. To address this gap, we develop an annotation schema and present Few-TK, a dataset in the AI/ML field that includes scientific Typed Keyphrase annotations on abstracts of 500 research papers. To the best of our knowledge, this is the introductory few-shot Typed Keyphrase recognition dataset and only the second dataset structured specifically for few-shot NER, after Few-NERD. We report the results of several few-shot sequence-labelling models applied to our dataset. The data and code are available at https://github.com/AvishekLahiri/Few_TK.git
[ "Lahiri, Avishek", "Sarkar, Pratyay", "Sen, Medha", "Sanyal, Debarshi Kumar", "Mukherjee, Imon" ]
Few-TK: A Dataset for Few-shot Scientific Typed Keyphrase Recognition
findings-naacl.253
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.254.bib
https://aclanthology.org/2024.findings-naacl.254/
@inproceedings{feng-etal-2024-language, title = "Language Models can be Deductive Solvers", author = "Feng, Jiazhan and Xu, Ruochen and Hao, Junheng and Sharma, Hiteshi and Shen, Yelong and Zhao, Dongyan and Chen, Weizhu", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.254", doi = "10.18653/v1/2024.findings-naacl.254", pages = "4026--4042", abstract = "Logical reasoning is a fundamental aspect of human intelligence and a key component of tasks like problem-solving and decision-making. Recent advancements have enabled Large Language Models (LLMs) to potentially exhibit reasoning capabilities, but complex logical reasoning remains a challenge. The state-of-the-art, solver-augmented language models, use LLMs to parse natural language logical questions into symbolic representations first and then adopt external logical solvers to take in the symbolic representations and output the answers. Despite their impressive performance, any parsing errors will inevitably result in the failure of the execution of external logical solvers and no answer to the logical questions. In this paper, we introduce LoGiPT, a novel language model that directly internalizes and emulates the reasoning processes of logical solvers and avoids parsing errors by learning strict adherence to solver syntax and grammar. LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers. Experimental results on two public deductive reasoning benchmarks show that LoGiPT outperforms state-of-the-art solver-augmented LMs and few-shot prompting methods on competitive LLMs like GPT-4. This project is available in https://github.com/Cyril-JZ/LoGiPT.", }
Logical reasoning is a fundamental aspect of human intelligence and a key component of tasks like problem-solving and decision-making. Recent advancements have enabled Large Language Models (LLMs) to potentially exhibit reasoning capabilities, but complex logical reasoning remains a challenge. The state-of-the-art solver-augmented language models use LLMs to first parse natural language logical questions into symbolic representations and then adopt external logical solvers to take in the symbolic representations and output the answers. Despite their impressive performance, any parsing error will inevitably cause the execution of the external logical solver to fail, yielding no answer to the logical question. In this paper, we introduce LoGiPT, a novel language model that directly internalizes and emulates the reasoning processes of logical solvers and avoids parsing errors by learning strict adherence to solver syntax and grammar. LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers. Experimental results on two public deductive reasoning benchmarks show that LoGiPT outperforms state-of-the-art solver-augmented LMs and few-shot prompting methods on competitive LLMs like GPT-4. This project is available at https://github.com/Cyril-JZ/LoGiPT.
[ "Feng, Jiazhan", "Xu, Ruochen", "Hao, Junheng", "Sharma, Hiteshi", "Shen, Yelong", "Zhao, Dongyan", "Chen, Weizhu" ]
Language Models can be Deductive Solvers
findings-naacl.254
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.255.bib
https://aclanthology.org/2024.findings-naacl.255/
@inproceedings{moghe-etal-2024-interpreting, title = "Interpreting User Requests in the Context of Natural Language Standing Instructions", author = "Moghe, Nikita and Xia, Patrick and Andreas, Jacob and Eisner, Jason and Van Durme, Benjamin and Jhamtani, Harsh", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.255", doi = "10.18653/v1/2024.findings-naacl.255", pages = "4043--4060", abstract = "Users of natural language interfaces, frequently powered by Large Language Models (LLMs), must often repeat their full set of preferences each time they make a similar request. We describe an approach to LLM-based dialogue modeling in which persistent user constraints and preferences {--} collectively termed standing instructions {--} are provided as additional context for such interfaces. For example, when a user states {``}I{'}m hungry{''}, a previously expressed preference for Persian food can be automatically added to the LLM prompt, influencing the search for relevant restaurants.We develop NLSI, a language-to-program dataset consisting of over 2.4K English dialogues spanning 17 domains, in which each dialogue is paired with a user profile (a set of user-specific standing instructions) and corresponding structured representations (a sequence of API calls). A key challenge in NLSI is to identify which subset of the standing instructions is applicable to a given dialogue. NLSI contains diverse phenomena, from simple preferences to interdependent instructions such as triggering a hotel search whenever the user is booking tickets to an event. We conduct experiments on NLSI using prompting with large language models and various retrieval approaches, achieving a maximum of 46{\%} exact match on API prediction. Our results demonstrate the challenges in identifying the relevant standing instructions and their interpretation into API calls", }
Users of natural language interfaces, frequently powered by Large Language Models (LLMs), must often repeat their full set of preferences each time they make a similar request. We describe an approach to LLM-based dialogue modeling in which persistent user constraints and preferences, collectively termed standing instructions, are provided as additional context for such interfaces. For example, when a user states "I'm hungry", a previously expressed preference for Persian food can be automatically added to the LLM prompt, influencing the search for relevant restaurants. We develop NLSI, a language-to-program dataset consisting of over 2.4K English dialogues spanning 17 domains, in which each dialogue is paired with a user profile (a set of user-specific standing instructions) and corresponding structured representations (a sequence of API calls). A key challenge in NLSI is to identify which subset of the standing instructions is applicable to a given dialogue. NLSI contains diverse phenomena, from simple preferences to interdependent instructions such as triggering a hotel search whenever the user is booking tickets to an event. We conduct experiments on NLSI using prompting with large language models and various retrieval approaches, achieving a maximum of 46% exact match on API prediction. Our results demonstrate the challenges in identifying the relevant standing instructions and their interpretation into API calls.
[ "Moghe, Nikita", "Xia, Patrick", "Andreas, Jacob", "Eisner, Jason", "Van Durme, Benjamin", "Jhamtani, Harsh" ]
Interpreting User Requests in the Context of Natural Language Standing Instructions
findings-naacl.255
Poster
2311.09796
[ "https://github.com/nikitacs16/nlsi" ]
https://huggingface.co/papers/2311.09796
1
0
0
6
1
[]
[ "nikitam/nlsi" ]
[]
https://aclanthology.org/2024.findings-naacl.256.bib
https://aclanthology.org/2024.findings-naacl.256/
@inproceedings{tang-etal-2024-secure, title = "Secure Your Model: An Effective Key Prompt Protection Mechanism for Large Language Models", author = "Tang, Ruixiang and Chuang, Yu-Neng and Cai, Xuanting and Du, Mengnan and Hu, Xia", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.256", doi = "10.18653/v1/2024.findings-naacl.256", pages = "4061--4073", abstract = "Large language models (LLMs) have notably revolutionized many domains within natural language processing due to their exceptional performance. Their security has become increasingly vital. This study is centered on protecting LLMs against unauthorized access and potential theft. We propose a simple yet effective protective measure wherein a unique key prompt is embedded within the LLM. This mechanism enables the model to respond only when presented with the correct key prompt; otherwise, LLMs will refuse to react to any input instructions. This key prompt protection offers a robust solution to prevent the unauthorized use of LLMs, as the model becomes unusable without the correct key. We evaluated the proposed protection on multiple LLMs and NLP tasks. Results demonstrate that our method can successfully protect the LLM without significantly impacting the model{'}s original function. Moreover, we demonstrate potential attacks that attempt to bypass the protection mechanism will adversely affect the model{'}s performance, further emphasizing the effectiveness of the proposed protection method.", }
Large language models (LLMs) have notably revolutionized many domains within natural language processing due to their exceptional performance. Their security has become increasingly vital. This study is centered on protecting LLMs against unauthorized access and potential theft. We propose a simple yet effective protective measure wherein a unique key prompt is embedded within the LLM. This mechanism enables the model to respond only when presented with the correct key prompt; otherwise, LLMs will refuse to react to any input instructions. This key prompt protection offers a robust solution to prevent the unauthorized use of LLMs, as the model becomes unusable without the correct key. We evaluated the proposed protection on multiple LLMs and NLP tasks. Results demonstrate that our method can successfully protect the LLM without significantly impacting the model{'}s original function. Moreover, we demonstrate potential attacks that attempt to bypass the protection mechanism will adversely affect the model{'}s performance, further emphasizing the effectiveness of the proposed protection method.
[ "Tang, Ruixiang", "Chuang, Yu-Neng", "Cai, Xuanting", "Du, Mengnan", "Hu, Xia" ]
Secure Your Model: An Effective Key Prompt Protection Mechanism for Large Language Models
findings-naacl.256
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.257.bib
https://aclanthology.org/2024.findings-naacl.257/
@inproceedings{sun-etal-2024-enhancing, title = "Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models", author = "Sun, Jiashuo and Luo, Yi and Gong, Yeyun and Lin, Chen and Shen, Yelong and Guo, Jian and Duan, Nan", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.257", doi = "10.18653/v1/2024.findings-naacl.257", pages = "4074--4101", abstract = "Large language models (LLMs) can achieve impressive performance on various reasoning tasks by incorporating chain-of-thought (CoT) prompting, where step-by-step reasoning is provided to guide LLMs to generate answers to questions, and the question-rationale-answer triplets are utilized as demonstration exemplars. However, the reasoning chains of demonstrations generated by LLMs are observed to be prone to errors, which can subsequently lead to incorrect reasoning during inference. Furthermore, inappropriate exemplars, e.g., overly simplistic or complex exemplars depending on the question{'}s difficulty level, can affect the LLM{'}s performance. To address these issues, we introduce Iter-CoT (Iterative bootstrapping in Chain-of-Thoughts prompting). Iter-CoT has two advantages: (1) it adopts iterative bootstrapping that enables LLMs to rectify errors autonomously, resulting in more precise and comprehensive reasoning chains. (2) it selects exemplars of challenging yet answerable (i.e., the LLM has the potential to answer correctly) questions, enhancing the LLMs{'} generalizability to answer questions with varying difficulty levels. Experimental results exhibit Iter-CoT superior performance on three distinct reasoning tasks on ten datasets.", }
Large language models (LLMs) can achieve impressive performance on various reasoning tasks by incorporating chain-of-thought (CoT) prompting, where step-by-step reasoning is provided to guide LLMs to generate answers to questions, and the question-rationale-answer triplets are utilized as demonstration exemplars. However, the reasoning chains of demonstrations generated by LLMs are observed to be prone to errors, which can subsequently lead to incorrect reasoning during inference. Furthermore, inappropriate exemplars, e.g., overly simplistic or complex exemplars depending on the question{'}s difficulty level, can affect the LLM{'}s performance. To address these issues, we introduce Iter-CoT (Iterative bootstrapping in Chain-of-Thoughts prompting). Iter-CoT has two advantages: (1) it adopts iterative bootstrapping that enables LLMs to rectify errors autonomously, resulting in more precise and comprehensive reasoning chains. (2) it selects exemplars of challenging yet answerable (i.e., the LLM has the potential to answer correctly) questions, enhancing the LLMs{'} generalizability to answer questions with varying difficulty levels. Experimental results exhibit Iter-CoT superior performance on three distinct reasoning tasks on ten datasets.
[ "Sun, Jiashuo", "Luo, Yi", "Gong, Yeyun", "Lin, Chen", "Shen, Yelong", "Guo, Jian", "Duan, Nan" ]
Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models
findings-naacl.257
Poster
2304.11657
[ "https://github.com/gasolsun36/iter-cot" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.258.bib
https://aclanthology.org/2024.findings-naacl.258/
@inproceedings{mao-etal-2024-prompt, title = "Do Prompt Positions Really Matter?", author = "Mao, Junyu and Middleton, Stuart E. and Niranjan, Mahesan", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.258", doi = "10.18653/v1/2024.findings-naacl.258", pages = "4102--4130", abstract = "Prompt-based models have gathered a lot of attention from researchers due to their remarkable advancements in the fields of zero-shot and few-shot learning. Developing an effective prompt template plays a critical role. However, prior studies have mainly focused on prompt vocabulary searching or embedding initialization within a predefined template with the prompt position fixed. In this empirical study, we conduct the most comprehensive analysis to date of prompt position for diverse Natural Language Processing (NLP) tasks. Our findings quantify the substantial impact prompt position has on model performance. We observe that the prompt positions used in prior studies are often sub-optimal, and this observation is consistent even in widely used instruction-tuned models. These findings suggest prompt position optimisation as a valuable research direction to augment prompt engineering methodologies and prompt position-aware instruction tuning as a potential way to build more robust models in the future.", }
Prompt-based models have gathered a lot of attention from researchers due to their remarkable advancements in the fields of zero-shot and few-shot learning. Developing an effective prompt template plays a critical role. However, prior studies have mainly focused on prompt vocabulary searching or embedding initialization within a predefined template with the prompt position fixed. In this empirical study, we conduct the most comprehensive analysis to date of prompt position for diverse Natural Language Processing (NLP) tasks. Our findings quantify the substantial impact prompt position has on model performance. We observe that the prompt positions used in prior studies are often sub-optimal, and this observation is consistent even in widely used instruction-tuned models. These findings suggest prompt position optimisation as a valuable research direction to augment prompt engineering methodologies and prompt position-aware instruction tuning as a potential way to build more robust models in the future.
[ "Mao, Junyu", "Middleton, Stuart E.", "Niranjan, Mahesan" ]
Do Prompt Positions Really Matter?
findings-naacl.258
Poster
2305.14493
[ "https://github.com/milliemaoo/prompt-position" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.259.bib
https://aclanthology.org/2024.findings-naacl.259/
@inproceedings{zhang-etal-2024-natural, title = "Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning", author = "Zhang, Tianhua and Ge, Jiaxin and Luo, Hongyin and Chuang, Yung-Sung and Gao, Mingye and Gong, Yuan and Kim, Yoon and Wu, Xixin and Meng, Helen and Glass, James", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.259", doi = "10.18653/v1/2024.findings-naacl.259", pages = "4131--4155", abstract = "How can we perform computations over natural language representations to solve tasks that require symbolic and numeric reasoning? We propose natural language embedded programs (NLEP) as a unifying framework for addressing math/symbolic reasoning, natural language understanding, and instruction following tasks. Our approach prompts a language model to generate full Python programs that define functions over data structures which contain natural language representations of structured knowledge. A Python interpreter then executes the generated code and prints the output. Despite using a task-general prompt, we find that this approach can improve upon strong baselines across a range of different tasks including math and symbolic reasoning, text classification, question answering, and instruction following. We found that the generated programs are interpretable since they outline the exact reasoning process followed by the program interpreter.", }
How can we perform computations over natural language representations to solve tasks that require symbolic and numeric reasoning? We propose natural language embedded programs (NLEP) as a unifying framework for addressing math/symbolic reasoning, natural language understanding, and instruction following tasks. Our approach prompts a language model to generate full Python programs that define functions over data structures which contain natural language representations of structured knowledge. A Python interpreter then executes the generated code and prints the output. Despite using a task-general prompt, we find that this approach can improve upon strong baselines across a range of different tasks including math and symbolic reasoning, text classification, question answering, and instruction following. We found that the generated programs are interpretable since they outline the exact reasoning process followed by the program interpreter.
[ "Zhang, Tianhua", "Ge, Jiaxin", "Luo, Hongyin", "Chuang, Yung-Sung", "Gao, Mingye", "Gong, Yuan", "Kim, Yoon", "Wu, Xixin", "Meng, Helen", "Glass, James" ]
Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning
findings-naacl.259
Poster
2309.10814
[ "https://github.com/luohongyin/langcode" ]
https://huggingface.co/papers/2309.10814
1
3
0
10
1
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.260.bib
https://aclanthology.org/2024.findings-naacl.260/
@inproceedings{akter-anastasopoulos-2024-study, title = "A Study on Scaling Up Multilingual News Framing Analysis", author = "Akter, Syeda Sabrina and Anastasopoulos, Antonios", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.260", doi = "10.18653/v1/2024.findings-naacl.260", pages = "4156--4173", abstract = "Media framing is the study of strategically selecting and presenting specific aspects of political issues to shape public opinion. Despite its relevance to almost all societies around the world, research has been limited due to the lack of available datasets and other resources. This study explores the possibility of dataset creation through crowdsourcing, utilizing non-expert annotators to develop training corpora. We first extend framing analysis beyond English news to a multilingual context (12 typologically diverse languages) through automatic translation. We also present a novel benchmark in Bengali and Portuguese on the immigration and same-sex marriage domains.Additionally, we show that a system trained on our crowd-sourced dataset, combined with other existing ones, leads to a 5.32 percentage point increase from the baseline, showing that crowdsourcing is a viable option. Last, we study the performance of large language models (LLMs) for this task, finding that task-specific fine-tuning is a better approach than employing bigger non-specialized models.", }
Media framing is the study of strategically selecting and presenting specific aspects of political issues to shape public opinion. Despite its relevance to almost all societies around the world, research has been limited due to the lack of available datasets and other resources. This study explores the possibility of dataset creation through crowdsourcing, utilizing non-expert annotators to develop training corpora. We first extend framing analysis beyond English news to a multilingual context (12 typologically diverse languages) through automatic translation. We also present a novel benchmark in Bengali and Portuguese on the immigration and same-sex marriage domains. Additionally, we show that a system trained on our crowd-sourced dataset, combined with other existing ones, leads to a 5.32 percentage point increase from the baseline, showing that crowdsourcing is a viable option. Last, we study the performance of large language models (LLMs) for this task, finding that task-specific fine-tuning is a better approach than employing bigger non-specialized models.
[ "Akter, Syeda Sabrina", "Anastasopoulos, Antonios" ]
A Study on Scaling Up Multilingual News Framing Analysis
findings-naacl.260
Poster
2404.01481
[ "https://github.com/syedasabrina/scaling-up-multilingual-framing-analysis" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.261.bib
https://aclanthology.org/2024.findings-naacl.261/
@inproceedings{tran-etal-2024-viglue, title = "{V}i{GLUE}: A {V}ietnamese General Language Understanding Benchmark and Analysis of {V}ietnamese Language Models", author = "Tran, Minh-Nam and Nguyen, Phu-Vinh and Nguyen, Long and Dinh, Dien", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.261", doi = "10.18653/v1/2024.findings-naacl.261", pages = "4174--4189", abstract = "As the number of language models has increased, various benchmarks have been suggested to assess the proficiency of the models in natural language understanding. However, there is a lack of such a benchmark in Vietnamese due to the difficulty in accessing natural language processing datasets or the scarcity of task-specific datasets. **ViGLUE**, the proposed dataset collection, is a **Vi**etnamese **G**eneral **L**anguage **U**nderstanding **E**valuation benchmark developed using three methods: translating an existing benchmark, generating new corpora, and collecting available datasets. ViGLUE contains twelve tasks and encompasses over ten areas and subjects, enabling it to evaluate models comprehensively over a broad spectrum of aspects. Baseline models utilizing multilingual language models are also provided for all tasks in the proposed benchmarks. In addition, the study of the available Vietnamese large language models is conducted to explore the language models{'} ability in the few-shot learning framework, leading to the exploration of the relationship between specific tasks and the number of shots.", }
As the number of language models has increased, various benchmarks have been suggested to assess the proficiency of the models in natural language understanding. However, there is a lack of such a benchmark in Vietnamese due to the difficulty in accessing natural language processing datasets or the scarcity of task-specific datasets. **ViGLUE**, the proposed dataset collection, is a **Vi**etnamese **G**eneral **L**anguage **U**nderstanding **E**valuation benchmark developed using three methods: translating an existing benchmark, generating new corpora, and collecting available datasets. ViGLUE contains twelve tasks and encompasses over ten areas and subjects, enabling it to evaluate models comprehensively over a broad spectrum of aspects. Baseline models utilizing multilingual language models are also provided for all tasks in the proposed benchmarks. In addition, the study of the available Vietnamese large language models is conducted to explore the language models{'} ability in the few-shot learning framework, leading to the exploration of the relationship between specific tasks and the number of shots.
[ "Tran, Minh-Nam", "Nguyen, Phu-Vinh", "Nguyen, Long", "Dinh, Dien" ]
ViGLUE: A Vietnamese General Language Understanding Benchmark and Analysis of Vietnamese Language Models
findings-naacl.261
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.262.bib
https://aclanthology.org/2024.findings-naacl.262/
@inproceedings{resck-etal-2024-exploring, title = "Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales", author = "Resck, Lucas and M. Raimundo, Marcos and Poco, Jorge", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.262", doi = "10.18653/v1/2024.findings-naacl.262", pages = "4190--4216", abstract = "Saliency post-hoc explainability methods are important tools for understanding increasingly complex NLP models. While these methods can reflect the model{'}s reasoning, they may not align with human intuition, making the explanations not plausible. In this work, we present a methodology for incorporating rationales, which are text annotations explaining human decisions, into text classification models. This incorporation enhances the plausibility of post-hoc explanations while preserving their faithfulness. Our approach is agnostic to model architectures and explainability methods. We introduce the rationales during model training by augmenting the standard cross-entropy loss with a novel loss function inspired by contrastive learning. By leveraging a multi-objective optimization algorithm, we explore the trade-off between the two loss functions and generate a Pareto-optimal frontier of models that balance performance and plausibility. Through extensive experiments involving diverse models, datasets, and explainability methods, we demonstrate that our approach significantly enhances the quality of model explanations without causing substantial (sometimes negligible) degradation in the original model{'}s performance.", }
Saliency post-hoc explainability methods are important tools for understanding increasingly complex NLP models. While these methods can reflect the model{'}s reasoning, they may not align with human intuition, making the explanations not plausible. In this work, we present a methodology for incorporating rationales, which are text annotations explaining human decisions, into text classification models. This incorporation enhances the plausibility of post-hoc explanations while preserving their faithfulness. Our approach is agnostic to model architectures and explainability methods. We introduce the rationales during model training by augmenting the standard cross-entropy loss with a novel loss function inspired by contrastive learning. By leveraging a multi-objective optimization algorithm, we explore the trade-off between the two loss functions and generate a Pareto-optimal frontier of models that balance performance and plausibility. Through extensive experiments involving diverse models, datasets, and explainability methods, we demonstrate that our approach significantly enhances the quality of model explanations without causing substantial (sometimes negligible) degradation in the original model{'}s performance.
[ "Resck, Lucas", "M. Raimundo, Marcos", "Poco, Jorge" ]
Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales
findings-naacl.262
Poster
2404.03098
[ "https://github.com/visual-ds/plausible-nlp-explanations" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.263.bib
https://aclanthology.org/2024.findings-naacl.263/
@inproceedings{su-etal-2024-unlocking, title = "Unlocking Parameter-Efficient Fine-Tuning for Low-Resource Language Translation", author = "Su, Tong and Peng, Xin and Thillainathan, Sarubi and Guzm{\'a}n, David and Ranathunga, Surangika and Lee, En-Shiun", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.263", doi = "10.18653/v1/2024.findings-naacl.263", pages = "4217--4225", abstract = "Parameter-efficient fine-tuning (PEFT) methods are increasingly vital in adapting large-scale pre-trained language models for diverse tasks, offering a balance between adaptability and computational efficiency. They are important in Low-Resource Language (LRL) Neural Machine Translation (NMT) to enhance translation accuracy with minimal resources. However, their practical effectiveness varies significantly across different languages. We conducted comprehensive empirical experiments with varying LRL domains and sizes to evaluate the performance of 8 PEFT methods with in total of 15 architectures using the SacreBLEU score. We showed that 6 PEFT architectures outperform the baseline for both in-domain and out-domain tests and the Houlsby+Inversion adapter has the best performance overall, proving the effectiveness of PEFT methods.", }
Parameter-efficient fine-tuning (PEFT) methods are increasingly vital in adapting large-scale pre-trained language models for diverse tasks, offering a balance between adaptability and computational efficiency. They are important in Low-Resource Language (LRL) Neural Machine Translation (NMT) to enhance translation accuracy with minimal resources. However, their practical effectiveness varies significantly across different languages. We conducted comprehensive empirical experiments with varying LRL domains and sizes to evaluate the performance of 8 PEFT methods with a total of 15 architectures using the SacreBLEU score. We showed that 6 PEFT architectures outperform the baseline for both in-domain and out-domain tests and the Houlsby+Inversion adapter has the best performance overall, proving the effectiveness of PEFT methods.
[ "Su, Tong", "Peng, Xin", "Thillainathan, Sarubi", "Guzm{\\'a}n, David", "Ranathunga, Surangika", "Lee, En-Shiun" ]
Unlocking Parameter-Efficient Fine-Tuning for Low-Resource Language Translation
findings-naacl.263
Poster
2404.04212
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.264.bib
https://aclanthology.org/2024.findings-naacl.264/
@inproceedings{prasad-etal-2024-adapt, title = "{AD}a{PT}: As-Needed Decomposition and Planning with Language Models", author = "Prasad, Archiki and Koller, Alexander and Hartmann, Mareike and Clark, Peter and Sabharwal, Ashish and Bansal, Mohit and Khot, Tushar", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.264", doi = "10.18653/v1/2024.findings-naacl.264", pages = "4226--4252", abstract = "Large Language Models (LLMs) are increasingly being used for interactive decision-making tasks requiring planning and adapting to the environment. Recent works employ LLMs-as-agents in broadly two ways: iteratively determining the next action (iterative executors) or generating plans and executing sub-tasks using LLMs (plan-and-execute). However, these methods struggle with task complexity, as the inability to execute any sub-task may lead to task failure. To address these shortcomings, we introduce As-Needed Decomposition and Planning for complex Tasks (ADaPT), an approach that explicitly plans and decomposes complex sub-tasks as-needed, i.e., when the LLM is unable to execute them. ADaPT recursively decomposes sub-tasks to adapt to both task complexity and LLM capability. Our results demonstrate that ADaPT substantially outperforms established strong baselines, achieving success rates up to 28.3{\%} higher in ALFWorld, 27{\%} in WebShop, and 33{\%} in TextCraft {--} a novel compositional dataset that we introduce. Through extensive analysis, we illustrate the importance of multilevel decomposition and establish that ADaPT dynamically adjusts to the capabilities of the executor LLM as well as to task complexity.", }
Large Language Models (LLMs) are increasingly being used for interactive decision-making tasks requiring planning and adapting to the environment. Recent works employ LLMs-as-agents in broadly two ways: iteratively determining the next action (iterative executors) or generating plans and executing sub-tasks using LLMs (plan-and-execute). However, these methods struggle with task complexity, as the inability to execute any sub-task may lead to task failure. To address these shortcomings, we introduce As-Needed Decomposition and Planning for complex Tasks (ADaPT), an approach that explicitly plans and decomposes complex sub-tasks as-needed, i.e., when the LLM is unable to execute them. ADaPT recursively decomposes sub-tasks to adapt to both task complexity and LLM capability. Our results demonstrate that ADaPT substantially outperforms established strong baselines, achieving success rates up to 28.3{\%} higher in ALFWorld, 27{\%} in WebShop, and 33{\%} in TextCraft {--} a novel compositional dataset that we introduce. Through extensive analysis, we illustrate the importance of multilevel decomposition and establish that ADaPT dynamically adjusts to the capabilities of the executor LLM as well as to task complexity.
[ "Prasad, Archiki", "Koller, Alex", "er", "Hartmann, Mareike", "Clark, Peter", "Sabharwal, Ashish", "Bansal, Mohit", "Khot, Tushar" ]
ADaPT: As-Needed Decomposition and Planning with Language Models
findings-naacl.264
Poster
2311.05772
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.265.bib
https://aclanthology.org/2024.findings-naacl.265/
@inproceedings{ki-carpuat-2024-guiding, title = "Guiding Large Language Models to Post-Edit Machine Translation with Error Annotations", author = "Ki, Dayeon and Carpuat, Marine", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.265", doi = "10.18653/v1/2024.findings-naacl.265", pages = "4253--4273", abstract = "Machine Translation (MT) remains one of the last NLP tasks where large language models (LLMs) have not yet replaced dedicated supervised systems. This work exploits the complementary strengths of LLMs and supervised MT by guiding LLMs to automatically post-edit MT with external feedback on its quality, derived from Multidimensional Quality Metric (MQM) annotations. Working with LLaMA-2 models, we consider prompting strategies varying the nature of feedback provided and then fine-tune the LLM to improve its ability to exploit the provided guidance. Through experiments on Chinese-English, English-German, and English-Russian MQM data, we demonstrate that prompting LLMs to post-edit MT improves TER, BLEU and COMET scores, although the benefits of fine-grained feedback are not clear. Fine-tuning helps integrate fine-grained feedback more effectively and further improves translation quality based on both automatic and human evaluation.", }
Machine Translation (MT) remains one of the last NLP tasks where large language models (LLMs) have not yet replaced dedicated supervised systems. This work exploits the complementary strengths of LLMs and supervised MT by guiding LLMs to automatically post-edit MT with external feedback on its quality, derived from Multidimensional Quality Metric (MQM) annotations. Working with LLaMA-2 models, we consider prompting strategies varying the nature of feedback provided and then fine-tune the LLM to improve its ability to exploit the provided guidance. Through experiments on Chinese-English, English-German, and English-Russian MQM data, we demonstrate that prompting LLMs to post-edit MT improves TER, BLEU and COMET scores, although the benefits of fine-grained feedback are not clear. Fine-tuning helps integrate fine-grained feedback more effectively and further improves translation quality based on both automatic and human evaluation.
[ "Ki, Dayeon", "Carpuat, Marine" ]
Guiding Large Language Models to Post-Edit Machine Translation with Error Annotations
findings-naacl.265
Poster
2404.07851
[ "https://github.com/dayeonki/mt_feedback" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.266.bib
https://aclanthology.org/2024.findings-naacl.266/
@inproceedings{pappadopulo-farina-2024-non, title = "Non-contrastive sentence representations via self-supervision", author = "Pappadopulo, Duccio and Farina, Marco", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.266", doi = "10.18653/v1/2024.findings-naacl.266", pages = "4274--4284", abstract = "Sample contrastive methods, typically referred to simply as contrastive are the foundation of most unsupervised methods to learn text and sentence embeddings. On the other hand, a different class of self-supervised non-contrastive loss functions and methods have been considered in the computer vision community and referred to as dimension contrastive. In this paper, we thoroughly compare this class of methods with the standard baseline for contrastive sentence embeddings, SimCSE. We find that self-supervised embeddings trained using dimension contrastive objectives can outperform SimCSE on downstream tasks without needing auxiliary loss functions.", }
Sample contrastive methods, typically referred to simply as contrastive, are the foundation of most unsupervised methods to learn text and sentence embeddings. On the other hand, a different class of self-supervised non-contrastive loss functions and methods have been considered in the computer vision community and referred to as dimension contrastive. In this paper, we thoroughly compare this class of methods with the standard baseline for contrastive sentence embeddings, SimCSE. We find that self-supervised embeddings trained using dimension contrastive objectives can outperform SimCSE on downstream tasks without needing auxiliary loss functions.
[ "Pappadopulo, Duccio", "Farina, Marco" ]
Non-contrastive sentence representations via self-supervision
findings-naacl.266
Poster
2310.17690
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.267.bib
https://aclanthology.org/2024.findings-naacl.267/
@inproceedings{ogezi-etal-2024-semantically, title = "Semantically-Prompted Language Models Improve Visual Descriptions", author = "Ogezi, Michael and Hauer, Bradley and Kondrak, Grzegorz", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.267", doi = "10.18653/v1/2024.findings-naacl.267", pages = "4285--4302", abstract = "Language-vision models like CLIP have made significant strides in vision tasks, such as zero-shot image classification (ZSIC). However, generating specific and expressive visual descriptions remains challenging; descriptions produced by current methods are often ambiguous and lacking in granularity. To tackle these issues, we propose V-GLOSS: Visual Glosses, a novel method built upon two key ideas. The first is Semantic Prompting, which conditions a language model on structured semantic knowledge. The second is a new contrastive algorithm that elicits fine-grained distinctions between similar concepts. With both ideas, we demonstrate that V-GLOSS improves visual descriptions and achieves strong results in the zero-shot setting on general and fine-grained image-classification datasets, including ImageNet, STL-10, FGVC Aircraft, and Flowers 102. Moreover, these descriptive capabilities contribute to enhancing image-generation performance. Finally, we introduce a quality-tested silver dataset with descriptions generated with V-GLOSS for all ImageNet classes.", }
Language-vision models like CLIP have made significant strides in vision tasks, such as zero-shot image classification (ZSIC). However, generating specific and expressive visual descriptions remains challenging; descriptions produced by current methods are often ambiguous and lacking in granularity. To tackle these issues, we propose V-GLOSS: Visual Glosses, a novel method built upon two key ideas. The first is Semantic Prompting, which conditions a language model on structured semantic knowledge. The second is a new contrastive algorithm that elicits fine-grained distinctions between similar concepts. With both ideas, we demonstrate that V-GLOSS improves visual descriptions and achieves strong results in the zero-shot setting on general and fine-grained image-classification datasets, including ImageNet, STL-10, FGVC Aircraft, and Flowers 102. Moreover, these descriptive capabilities contribute to enhancing image-generation performance. Finally, we introduce a quality-tested silver dataset with descriptions generated with V-GLOSS for all ImageNet classes.
[ "Ogezi, Michael", "Hauer, Bradley", "Kondrak, Grzegorz" ]
Semantically-Prompted Language Models Improve Visual Descriptions
findings-naacl.267
Poster
2306.06077
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.268.bib
https://aclanthology.org/2024.findings-naacl.268/
@inproceedings{liao-etal-2024-gentkg, title = "{G}en{TKG}: Generative Forecasting on Temporal Knowledge Graph with Large Language Models", author = "Liao, Ruotong and Jia, Xu and Li, Yangzhe and Ma, Yunpu and Tresp, Volker", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.268", doi = "10.18653/v1/2024.findings-naacl.268", pages = "4303--4317", abstract = "The rapid advancements in large language models (LLMs) have ignited interest in the temporal knowledge graph (tKG) domain, where conventional embedding-based and rule-based methods dominate. The question remains open of whether pre-trained LLMs can understand structured temporal relational data and replace them as the foundation model for temporal relational forecasting. Therefore, we bring temporal knowledge forecasting into the generative setting. However, challenges occur in the huge chasms between complex temporal graph data structure and sequential natural expressions LLMs can handle, and between the enormous data sizes of tKGs and heavy computation costs of finetuning LLMs. To address these challenges, we propose a novel retrieval-augmented generation framework named GenTKG combining a temporal logical rule-based retrieval strategy and few-shot parameter-efficient instruction tuning to solve the above challenges, respectively. Extensive experiments have shown that GenTKG outperforms conventional methods of temporal relational forecasting with low computation resources using extremely limited training data as few as 16 samples. GenTKG also highlights remarkable cross-domain generalizability with outperforming performance on unseen datasets without re-training, and in-domain generalizability regardless of time split in the same dataset. Our work reveals the huge potential of LLMs in the tKG domain and opens a new frontier for generative forecasting on tKGs. The code and data are released here: \url{https://github.com/mayhugotong/GenTKG}.", }
The rapid advancements in large language models (LLMs) have ignited interest in the temporal knowledge graph (tKG) domain, where conventional embedding-based and rule-based methods dominate. The question remains open of whether pre-trained LLMs can understand structured temporal relational data and replace them as the foundation model for temporal relational forecasting. Therefore, we bring temporal knowledge forecasting into the generative setting. However, challenges occur in the huge chasms between complex temporal graph data structure and sequential natural expressions LLMs can handle, and between the enormous data sizes of tKGs and heavy computation costs of finetuning LLMs. To address these challenges, we propose a novel retrieval-augmented generation framework named GenTKG combining a temporal logical rule-based retrieval strategy and few-shot parameter-efficient instruction tuning to solve the above challenges, respectively. Extensive experiments have shown that GenTKG outperforms conventional methods of temporal relational forecasting with low computation resources using extremely limited training data as few as 16 samples. GenTKG also highlights remarkable cross-domain generalizability with outperforming performance on unseen datasets without re-training, and in-domain generalizability regardless of time split in the same dataset. Our work reveals the huge potential of LLMs in the tKG domain and opens a new frontier for generative forecasting on tKGs. The code and data are released here: \url{https://github.com/mayhugotong/GenTKG}.
[ "Liao, Ruotong", "Jia, Xu", "Li, Yangzhe", "Ma, Yunpu", "Tresp, Volker" ]
GenTKG: Generative Forecasting on Temporal Knowledge Graph with Large Language Models
findings-naacl.268
Poster
2310.07793
[ "https://github.com/mayhugotong/gentkg" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.269.bib
https://aclanthology.org/2024.findings-naacl.269/
@inproceedings{li-etal-2024-transformer, title = "A Transformer with Stack Attention", author = "Li, Jiaoda and White, Jennifer and Sachan, Mrinmaya and Cotterell, Ryan", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.269", doi = "10.18653/v1/2024.findings-naacl.269", pages = "4318--4335", abstract = "Natural languages are believed to be (mildly) context-sensitive. Despite underpinning remarkably capable large language models, transformers are unable to model many context-free language tasks. In an attempt to address this limitation in the modeling power of transformer-based language models, we propose augmenting them with a differentiable, stack-based attention mechanism. Our stack-basedattention mechanism can be incorporated into any transformer-based language model and adds a level of interpretability to the model. We show that the addition of our stack-based attention mechanism enables the transformer to model some, but not all, deterministic context-freelanguages.", }
Natural languages are believed to be (mildly) context-sensitive. Despite underpinning remarkably capable large language models, transformers are unable to model many context-free language tasks. In an attempt to address this limitation in the modeling power of transformer-based language models, we propose augmenting them with a differentiable, stack-based attention mechanism. Our stack-based attention mechanism can be incorporated into any transformer-based language model and adds a level of interpretability to the model. We show that the addition of our stack-based attention mechanism enables the transformer to model some, but not all, deterministic context-free languages.
[ "Li, Jiaoda", "White, Jennifer", "Sachan, Mrinmaya", "Cotterell, Ryan" ]
A Transformer with Stack Attention
findings-naacl.269
Poster
2405.04515
[ "https://github.com/rycolab/stack-transformer" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.270.bib
https://aclanthology.org/2024.findings-naacl.270/
@inproceedings{ajith-etal-2024-instructeval, title = "{I}nstruct{E}val: Systematic Evaluation of Instruction Selection Methods", author = "Ajith, Anirudh and Pan, Chris and Xia, Mengzhou and Deshpande, Ameet and Narasimhan, Karthik", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.270", doi = "10.18653/v1/2024.findings-naacl.270", pages = "4336--4350", abstract = "In-context learning (ICL) performs tasks by prompting a large language model (LLM) using an instruction and a small set of annotated examples called demonstrations. Recent work has shown that precise details of the inputs used in the ICL prompt significantly impact performance, which has incentivized instruction selection algorithms. The effect of instruction-choice however is severely underexplored, with existing analyses restricted to shallow subsets of models and tasks, limiting the generalizability of their insights. We develop InstructEval, an ICL evaluation suite to conduct a thorough assessment of these techniques. The suite includes 13 open-sourced LLMs of varying scales from four model families, and covers nine tasks across three categories. Using the suite, we evaluate the relative performance of seven popular instruction selection methods over five metrics relevant to ICL. Our experiments reveal that using curated manually-written instructions or simple instructions without any task-specific descriptions often elicits superior ICL performance overall than that of automatic instruction-induction methods, pointing to a lack of generalizability among the latter. We release our evaluation suite (at https://github.com/princeton-nlp/InstructEval) for benchmarking instruction selection approaches and enabling more generalizable methods in this space.", }
In-context learning (ICL) performs tasks by prompting a large language model (LLM) using an instruction and a small set of annotated examples called demonstrations. Recent work has shown that precise details of the inputs used in the ICL prompt significantly impact performance, which has incentivized instruction selection algorithms. The effect of instruction-choice however is severely underexplored, with existing analyses restricted to shallow subsets of models and tasks, limiting the generalizability of their insights. We develop InstructEval, an ICL evaluation suite to conduct a thorough assessment of these techniques. The suite includes 13 open-sourced LLMs of varying scales from four model families, and covers nine tasks across three categories. Using the suite, we evaluate the relative performance of seven popular instruction selection methods over five metrics relevant to ICL. Our experiments reveal that using curated manually-written instructions or simple instructions without any task-specific descriptions often elicits superior ICL performance overall than that of automatic instruction-induction methods, pointing to a lack of generalizability among the latter. We release our evaluation suite (at https://github.com/princeton-nlp/InstructEval) for benchmarking instruction selection approaches and enabling more generalizable methods in this space.
[ "Ajith, Anirudh", "Pan, Chris", "Xia, Mengzhou", "Deshp", "e, Ameet", "Narasimhan, Karthik" ]
InstructEval: Systematic Evaluation of Instruction Selection Methods
findings-naacl.270
Poster
2307.00259
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.271.bib
https://aclanthology.org/2024.findings-naacl.271/
@inproceedings{wang-etal-2024-recmind, title = "{R}ec{M}ind: Large Language Model Powered Agent For Recommendation", author = "Wang, Yancheng and Jiang, Ziyan and Chen, Zheng and Yang, Fan and Zhou, Yingxue and Cho, Eunah and Fan, Xing and Lu, Yanbin and Huang, Xiaojiang and Yang, Yingzhen", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.271", doi = "10.18653/v1/2024.findings-naacl.271", pages = "4351--4364", abstract = "While the recommendation system (RS) has advanced significantly through deep learning, current RS approaches usually train and fine-tune models on task-specific datasets, limiting their generalizability to new recommendation tasks and their ability to leverage external knowledge due to model scale and data size constraints. Thus, we designed an LLM-powered autonomous recommender agent, RecMind, which is capable of leveraging external knowledge, utilizing tools with careful planning to provide zero-shot personalized recommendations. We propose a Self-Inspiring algorithm to improve the planning ability. At each intermediate step, the LLM {``}self-inspires{''} to consider all previously explored states to plan for the next step. This mechanism greatly improves the model{'}s ability to comprehend and utilize historical information in planning for recommendation. We evaluate RecMind{'}s performance in various recommendation scenarios. Our experiment shows that RecMind outperforms existing zero/few-shot LLM-based recommendation baseline methods in various tasks and achieves comparable performance to a fully trained recommendation model P5.", }
While the recommendation system (RS) has advanced significantly through deep learning, current RS approaches usually train and fine-tune models on task-specific datasets, limiting their generalizability to new recommendation tasks and their ability to leverage external knowledge due to model scale and data size constraints. Thus, we designed an LLM-powered autonomous recommender agent, RecMind, which is capable of leveraging external knowledge, utilizing tools with careful planning to provide zero-shot personalized recommendations. We propose a Self-Inspiring algorithm to improve the planning ability. At each intermediate step, the LLM {``}self-inspires{''} to consider all previously explored states to plan for the next step. This mechanism greatly improves the model{'}s ability to comprehend and utilize historical information in planning for recommendation. We evaluate RecMind{'}s performance in various recommendation scenarios. Our experiment shows that RecMind outperforms existing zero/few-shot LLM-based recommendation baseline methods in various tasks and achieves comparable performance to a fully trained recommendation model P5.
[ "Wang, Yancheng", "Jiang, Ziyan", "Chen, Zheng", "Yang, Fan", "Zhou, Yingxue", "Cho, Eunah", "Fan, Xing", "Lu, Yanbin", "Huang, Xiaojiang", "Yang, Yingzhen" ]
RecMind: Large Language Model Powered Agent For Recommendation
findings-naacl.271
Poster
2308.14296
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.272.bib
https://aclanthology.org/2024.findings-naacl.272/
@inproceedings{gholami-etal-2024-gold, title = "{GOLD}: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation", author = "Gholami, Mohsen and Akbari, Mohammad and Hu, Tianxi and Masrani, Vaden and Wang, Z. and Zhang, Yong", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.272", doi = "10.18653/v1/2024.findings-naacl.272", pages = "4365--4380", abstract = "Knowledge distillation from LLMs is essential for the efficient deployment of language models. Prior works have proposed data generation using LLMs for preparing distilled models. We argue that generating data with LLMs is prone to sampling mainly from the center of original content distribution. This limitation hinders the distilled model from learning the true underlying data distribution and to forget the tails of the distributions (samples with lower probability). To this end, we propose GOLD, a task-agnostic data generation and knowledge distillation framework, which employs an iterative out-of-distribution-guided feedback mechanism for the LLM. As a result, the generated data improves the generalizability of distilled models. An energy-based OOD evaluation approach is also introduced to deal with noisy generated data. Our extensive experiments on 10 different classification and sequence-to-sequence tasks in NLP show that GOLD respectively outperforms prior arts and the LLM with an average improvement of 5{\%} and 14{\%}. We will also show that the proposed method is applicable to less explored and novel tasks. Code is available in the Appendix.", }
Knowledge distillation from LLMs is essential for the efficient deployment of language models. Prior works have proposed data generation using LLMs for preparing distilled models. We argue that generating data with LLMs is prone to sampling mainly from the center of original content distribution. This limitation hinders the distilled model from learning the true underlying data distribution and to forget the tails of the distributions (samples with lower probability). To this end, we propose GOLD, a task-agnostic data generation and knowledge distillation framework, which employs an iterative out-of-distribution-guided feedback mechanism for the LLM. As a result, the generated data improves the generalizability of distilled models. An energy-based OOD evaluation approach is also introduced to deal with noisy generated data. Our extensive experiments on 10 different classification and sequence-to-sequence tasks in NLP show that GOLD respectively outperforms prior arts and the LLM with an average improvement of 5{\%} and 14{\%}. We will also show that the proposed method is applicable to less explored and novel tasks. Code is available in the Appendix.
[ "Gholami, Mohsen", "Akbari, Mohammad", "Hu, Tianxi", "Masrani, Vaden", "Wang, Z.", "Zhang, Yong" ]
GOLD: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation
findings-naacl.272
Poster
2403.19754
[ "https://github.com/mgholamikn/GOLD" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.273.bib
https://aclanthology.org/2024.findings-naacl.273/
@inproceedings{kohli-etal-2024-lexical, title = "How Lexical is Bilingual Lexicon Induction?", author = "Kohli, Harsh and Feng, Helian and Dronen, Nicholas and McCarter, Calvin and Moeini, Sina and Kebarighotbi, Ali", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.273", doi = "10.18653/v1/2024.findings-naacl.273", pages = "4381--4386", abstract = "In contemporary machine learning approaches to bilingual lexicon induction (BLI), a model learns a mapping between the embedding spaces of a language pair. Recently, retrieve-and-rank approach to BLI has achieved state of the art results on the task. However, the problem remains challenging in low-resource settings, due to the paucity of data. The task is complicated by factors such as lexical variation across languages. We argue that the incorporation of additional lexical information into the recent retrieve-and-rank approach should improve lexicon induction. We demonstrate the efficacy of our proposed approach on XLING, improving over the previous state of the art by an average of 2{\%} across all language pairs.", }
In contemporary machine learning approaches to bilingual lexicon induction (BLI), a model learns a mapping between the embedding spaces of a language pair. Recently, retrieve-and-rank approach to BLI has achieved state of the art results on the task. However, the problem remains challenging in low-resource settings, due to the paucity of data. The task is complicated by factors such as lexical variation across languages. We argue that the incorporation of additional lexical information into the recent retrieve-and-rank approach should improve lexicon induction. We demonstrate the efficacy of our proposed approach on XLING, improving over the previous state of the art by an average of 2{\%} across all language pairs.
[ "Kohli, Harsh", "Feng, Helian", "Dronen, Nicholas", "McCarter, Calvin", "Moeini, Sina", "Kebarighotbi, Ali" ]
How Lexical is Bilingual Lexicon Induction?
findings-naacl.273
Poster
2404.04221
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.274.bib
https://aclanthology.org/2024.findings-naacl.274/
@inproceedings{chen-etal-2024-fumbling, title = "Fumbling in Babel: An Investigation into {C}hat{GPT}{'}s Language Identification Ability", author = "Chen, Wei-Rui and Adebara, Ife and Doan, Khai and Liao, Qisheng and Abdul-Mageed, Muhammad", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.274", doi = "10.18653/v1/2024.findings-naacl.274", pages = "4387--4413", abstract = "ChatGPT has recently emerged as a powerful NLP tool that can carry out a variety of tasks. However, the range of languages ChatGPT can handle remains largely a mystery. To uncover which languages ChatGPT {`}knows{'}, we investigate its language identification (LID) abilities. For this purpose, we compile Babel-670, a benchmark comprising 670 languages representing 23 language families spoken in five continents. Languages in Babel-670 run the gamut from the very high-resource to the very low-resource. We then study ChatGPT{'}s (both GPT-3.5 and GPT-4) ability to (i) identify language names and language codes (ii) under zero- and few-shot conditions (iii) with and without provision of a label set. When compared to smaller finetuned LID tools, we find that ChatGPT lags behind. For example, it has poor performance on African languages. We conclude that current large language models would benefit from further development before they can sufficiently serve diverse communities.", }
ChatGPT has recently emerged as a powerful NLP tool that can carry out a variety of tasks. However, the range of languages ChatGPT can handle remains largely a mystery. To uncover which languages ChatGPT {`}knows{'}, we investigate its language identification (LID) abilities. For this purpose, we compile Babel-670, a benchmark comprising 670 languages representing 23 language families spoken in five continents. Languages in Babel-670 run the gamut from the very high-resource to the very low-resource. We then study ChatGPT{'}s (both GPT-3.5 and GPT-4) ability to (i) identify language names and language codes (ii) under zero- and few-shot conditions (iii) with and without provision of a label set. When compared to smaller finetuned LID tools, we find that ChatGPT lags behind. For example, it has poor performance on African languages. We conclude that current large language models would benefit from further development before they can sufficiently serve diverse communities.
[ "Chen, Wei-Rui", "Adebara, Ife", "Doan, Khai", "Liao, Qisheng", "Abdul-Mageed, Muhammad" ]
Fumbling in Babel: An Investigation into ChatGPT's Language Identification Ability
findings-naacl.274
Poster
2311.09696
[ "" ]
https://huggingface.co/papers/2311.09696
0
0
0
5
1
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.275.bib
https://aclanthology.org/2024.findings-naacl.275/
@inproceedings{wang-huang-2024-targeted, title = "Targeted Augmentation for Low-Resource Event Extraction", author = "Wang, Sijia and Huang, Lifu", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.275", doi = "10.18653/v1/2024.findings-naacl.275", pages = "4414--4428", abstract = "Addressing the challenge of low-resource information extraction remains an ongoing issue due to the inherent information scarcity within limited training examples. Existing data augmentation methods, considered potential solutions, struggle to strike a balance between weak augmentation (e.g., synonym augmentation) and drastic augmentation (e.g., conditional generation without proper guidance). This paper introduces a novel paradigm that employs targeted augmentation and back validation to produce augmented examples with enhanced diversity, polarity, accuracy, and coherence. Extensive experimental results demonstrate the effectiveness of the proposed paradigm. Furthermore, identified limitations are discussed, shedding light on areas for future improvement.", }
Addressing the challenge of low-resource information extraction remains an ongoing issue due to the inherent information scarcity within limited training examples. Existing data augmentation methods, considered potential solutions, struggle to strike a balance between weak augmentation (e.g., synonym augmentation) and drastic augmentation (e.g., conditional generation without proper guidance). This paper introduces a novel paradigm that employs targeted augmentation and back validation to produce augmented examples with enhanced diversity, polarity, accuracy, and coherence. Extensive experimental results demonstrate the effectiveness of the proposed paradigm. Furthermore, identified limitations are discussed, shedding light on areas for future improvement.
[ "Wang, Sijia", "Huang, Lifu" ]
Targeted Augmentation for Low-Resource Event Extraction
findings-naacl.275
Poster
2405.08729
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.276.bib
https://aclanthology.org/2024.findings-naacl.276/
@inproceedings{keh-etal-2024-asking, title = "Asking More Informative Questions for Grounded Retrieval", author = "Keh, Sedrick and Chiu, Justin and Fried, Daniel", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.276", doi = "10.18653/v1/2024.findings-naacl.276", pages = "4429--4442", abstract = "When a model is trying to gather information in an interactive setting, it benefits from asking informative questions. However, in the case of a grounded multi-turn image identification task, previous studies have been constrained to polar yes/no questions (White et al., 2021), limiting how much information the model can gain in a single turn. We present an approach that formulates more informative, open-ended questions. In doing so, we discover that off-the-shelf visual question answering (VQA) models often make presupposition errors, which standard information gain question selection methods fail to account for. To address this issue, we propose a method that can incorporate presupposition handling into both question selection and belief updates. Specifically, we use a two-stage process, where the model first filters out images which are irrelevant to a given question, then updates its beliefs about which image the user intends. Through self-play and human evaluations, we show that our method is successful in asking informative open-ended questions, increasing accuracy over the past state-of-the-art by 14{\%}, while resulting in 48{\%} more efficient games in human evaluations.", }
When a model is trying to gather information in an interactive setting, it benefits from asking informative questions. However, in the case of a grounded multi-turn image identification task, previous studies have been constrained to polar yes/no questions (White et al., 2021), limiting how much information the model can gain in a single turn. We present an approach that formulates more informative, open-ended questions. In doing so, we discover that off-the-shelf visual question answering (VQA) models often make presupposition errors, which standard information gain question selection methods fail to account for. To address this issue, we propose a method that can incorporate presupposition handling into both question selection and belief updates. Specifically, we use a two-stage process, where the model first filters out images which are irrelevant to a given question, then updates its beliefs about which image the user intends. Through self-play and human evaluations, we show that our method is successful in asking informative open-ended questions, increasing accuracy over the past state-of-the-art by 14{\%}, while resulting in 48{\%} more efficient games in human evaluations.
[ "Keh, Sedrick", "Chiu, Justin", "Fried, Daniel" ]
Asking More Informative Questions for Grounded Retrieval
findings-naacl.276
Poster
2311.08584
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.277.bib
https://aclanthology.org/2024.findings-naacl.277/
@inproceedings{tahaei-etal-2024-efficient, title = "Efficient Citer: Tuning Large Language Models for Enhanced Answer Quality and Verification", author = "Tahaei, Marzieh and Jafari, Aref and Rashid, Ahmad and Alfonso-Hermelo, David and Bibi, Khalil and Wu, Yimeng and Ghodsi, Ali and Chen, Boxing and Rezagholizadeh, Mehdi", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.277", doi = "10.18653/v1/2024.findings-naacl.277", pages = "4443--4450", abstract = "In recent years, there has been a growing interest in utilizing external knowledge to reduce hallucinations in large language models (LLMs) and provide them with updated information. Despite this improvement, a major challenge lies in the lack of explicit citations, which hampers the ability to verify the information generated by these models.This paper focuses on providing models with citation capabilities efficiently. By constructing a dataset of citations, we train two model architectures: an FID-style FLAN-T5 model for efficient answer composition and a 13B model known for its success in instruction following after tuning. Evaluation on fluency, correctness, and citation quality is conducted through human assessment and the newly introduced Automatic LLMs{'} Citation Evaluation (ALCE) benchmark.Results demonstrate significant improvements in answer quality and efficiency, surpassing the performance of the popular ChatGPT on some of the metrics. The models exhibit exceptional out-of-domain generalization in both human and automatic evaluation. Notably, the FID-style FLAN-T5 model with only 3B parameters performs impressively compared to the 13B model.", }
In recent years, there has been a growing interest in utilizing external knowledge to reduce hallucinations in large language models (LLMs) and provide them with updated information. Despite this improvement, a major challenge lies in the lack of explicit citations, which hampers the ability to verify the information generated by these models. This paper focuses on providing models with citation capabilities efficiently. By constructing a dataset of citations, we train two model architectures: an FID-style FLAN-T5 model for efficient answer composition and a 13B model known for its success in instruction following after tuning. Evaluation on fluency, correctness, and citation quality is conducted through human assessment and the newly introduced Automatic LLMs{'} Citation Evaluation (ALCE) benchmark. Results demonstrate significant improvements in answer quality and efficiency, surpassing the performance of the popular ChatGPT on some of the metrics. The models exhibit exceptional out-of-domain generalization in both human and automatic evaluation. Notably, the FID-style FLAN-T5 model with only 3B parameters performs impressively compared to the 13B model.
[ "Tahaei, Marzieh", "Jafari, Aref", "Rashid, Ahmad", "Alfonso-Hermelo, David", "Bibi, Khalil", "Wu, Yimeng", "Ghodsi, Ali", "Chen, Boxing", "Rezagholizadeh, Mehdi" ]
Efficient Citer: Tuning Large Language Models for Enhanced Answer Quality and Verification
findings-naacl.277
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.278.bib
https://aclanthology.org/2024.findings-naacl.278/
@inproceedings{xie-etal-2024-addressing, title = "Addressing Healthcare-related Racial and {LGBTQ}+ Biases in Pretrained Language Models", author = "Xie, Sean and Hassanpour, Saeed and Vosoughi, Soroush", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.278", doi = "10.18653/v1/2024.findings-naacl.278", pages = "4451--4464", abstract = "Recent studies have highlighted the issue of Pretrained Language Models (PLMs) inadvertently propagating social stigmas and stereotypes, a critical concern given their widespread use. This is particularly problematic in sensitive areas like healthcare, where such biases could lead to detrimental outcomes. Our research addresses this by adapting two intrinsic bias benchmarks to quantify racial and LGBTQ+ biases in prevalent PLMs. We also empirically evaluate the effectiveness of various debiasing methods in mitigating these biases. Furthermore, we assess the impact of debiasing on both Natural Language Understanding and specific biomedical applications. Our findings reveal that while PLMs commonly exhibit healthcare-related racial and LGBTQ+ biases, the applied debiasing techniques successfully reduce these biases without compromising the models{'} performance in downstream tasks.", }
Recent studies have highlighted the issue of Pretrained Language Models (PLMs) inadvertently propagating social stigmas and stereotypes, a critical concern given their widespread use. This is particularly problematic in sensitive areas like healthcare, where such biases could lead to detrimental outcomes. Our research addresses this by adapting two intrinsic bias benchmarks to quantify racial and LGBTQ+ biases in prevalent PLMs. We also empirically evaluate the effectiveness of various debiasing methods in mitigating these biases. Furthermore, we assess the impact of debiasing on both Natural Language Understanding and specific biomedical applications. Our findings reveal that while PLMs commonly exhibit healthcare-related racial and LGBTQ+ biases, the applied debiasing techniques successfully reduce these biases without compromising the models{'} performance in downstream tasks.
[ "Xie, Sean", "Hassanpour, Saeed", "Vosoughi, Soroush" ]
Addressing Healthcare-related Racial and LGBTQ+ Biases in Pretrained Language Models
findings-naacl.278
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.279.bib
https://aclanthology.org/2024.findings-naacl.279/
@inproceedings{lin-etal-2024-atg, title = "{ATG}: Benchmarking Automated Theorem Generation for Generative Language Models", author = "Lin, Xiaohan and Cao, Qingxing and Huang, Yinya and Yang, Zhicheng and Liu, Zhengying and Li, Zhenguo and Liang, Xiaodan", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.279", doi = "10.18653/v1/2024.findings-naacl.279", pages = "4465--4480", abstract = "Humans can develop new theorems to explore broader and more complex mathematical results.While current generative language models (LMs) have achieved significant improvement in automatically proving theorems, their ability to generate new or reusable theorems is still under-explored. Without the new theorems, current LMs struggle to prove harder theorems that are distant from the given hypotheses with the exponentially growing search space.More advanced theorem proving is if an agent (for instance, a generative LM) can leverage its creativity to generate new but also reasonable theorems that properly substitute part of a proof and also be saved as reusable knowledge for future theorem proving.Therefore, this paper proposes an Automated Theorem Generation (ATG) benchmark that evaluates whether an agent can automatically generate valuable (and possibly brand new) theorems that are applicable for downstream theorem proving as reusable knowledge. Specifically, we construct the ATG benchmark by splitting the Metamath library into three sets: axioms, library, and problem based on their proving depth.We conduct extensive experiments to investigate whether current LMs can generate theorems in the library and benefit the problem theorems proving. The results demonstrate that high-quality ATG data facilitates models{'} performances on downstream ATP. However, there is still room for current LMs to develop better ATG and generate more advanced and human-like theorems. We hope the new ATG challenge can shed some light on advanced complex theorem proving.", }
Humans can develop new theorems to explore broader and more complex mathematical results. While current generative language models (LMs) have achieved significant improvement in automatically proving theorems, their ability to generate new or reusable theorems is still under-explored. Without the new theorems, current LMs struggle to prove harder theorems that are distant from the given hypotheses with the exponentially growing search space. Theorem proving becomes more advanced when an agent (for instance, a generative LM) can leverage its creativity to generate new yet reasonable theorems that properly substitute part of a proof and can also be saved as reusable knowledge for future theorem proving. Therefore, this paper proposes an Automated Theorem Generation (ATG) benchmark that evaluates whether an agent can automatically generate valuable (and possibly brand new) theorems that are applicable for downstream theorem proving as reusable knowledge. Specifically, we construct the ATG benchmark by splitting the Metamath library into three sets: axioms, library, and problem based on their proving depth. We conduct extensive experiments to investigate whether current LMs can generate theorems in the library and benefit the proving of the problem theorems. The results demonstrate that high-quality ATG data facilitates models{'} performances on downstream ATP. However, there is still room for current LMs to develop better ATG and generate more advanced and human-like theorems. We hope the new ATG challenge can shed some light on advanced complex theorem proving.
[ "Lin, Xiaohan", "Cao, Qingxing", "Huang, Yinya", "Yang, Zhicheng", "Liu, Zhengying", "Li, Zhenguo", "Liang, Xiaodan" ]
ATG: Benchmarking Automated Theorem Generation for Generative Language Models
findings-naacl.279
Poster
2405.06677
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.280.bib
https://aclanthology.org/2024.findings-naacl.280/
@inproceedings{liu-etal-2024-benchmarking, title = "Benchmarking Generation and Evaluation Capabilities of Large Language Models for Instruction Controllable Summarization", author = "Liu, Yixin and Fabbri, Alexander and Chen, Jiawen and Zhao, Yilun and Han, Simeng and Joty, Shafiq and Liu, Pengfei and Radev, Dragomir and Wu, Chien-Sheng and Cohan, Arman", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.280", doi = "10.18653/v1/2024.findings-naacl.280", pages = "4481--4501", abstract = "While large language models (LLMs) can already achieve strong performance on standard generic summarization benchmarks, their performance on more complex summarization task settings is less studied. Therefore, we benchmark LLMs on instruction controllable text summarization, where the model input consists of both a source article and a natural language requirement for desired summary characteristics. To this end, we curate an evaluation-only dataset for this task setting and conduct human evaluations of five LLM-based systems to assess their instruction-following capabilities in controllable summarization. We then benchmark LLM-based automatic evaluation for this task with 4 different evaluation protocols and 11 LLMs, resulting in 40 evaluation methods. Our study reveals that instruction controllable text summarization remains a challenging task for LLMs, since (1) all LLMs evaluated still make factual and other types of errors in their summaries; (2) no LLM-based evaluation methods can achieve a strong alignment with human annotators when judging the quality of candidate summaries; (3) different LLMs show large performance gaps in summary generation and evaluation capabilities. We make our collected benchmark InstruSum publicly available to facilitate future research in this direction.", }
While large language models (LLMs) can already achieve strong performance on standard generic summarization benchmarks, their performance on more complex summarization task settings is less studied. Therefore, we benchmark LLMs on instruction controllable text summarization, where the model input consists of both a source article and a natural language requirement for desired summary characteristics. To this end, we curate an evaluation-only dataset for this task setting and conduct human evaluations of five LLM-based systems to assess their instruction-following capabilities in controllable summarization. We then benchmark LLM-based automatic evaluation for this task with 4 different evaluation protocols and 11 LLMs, resulting in 40 evaluation methods. Our study reveals that instruction controllable text summarization remains a challenging task for LLMs, since (1) all LLMs evaluated still make factual and other types of errors in their summaries; (2) no LLM-based evaluation methods can achieve a strong alignment with human annotators when judging the quality of candidate summaries; (3) different LLMs show large performance gaps in summary generation and evaluation capabilities. We make our collected benchmark InstruSum publicly available to facilitate future research in this direction.
[ "Liu, Yixin", "Fabbri, Alex", "er", "Chen, Jiawen", "Zhao, Yilun", "Han, Simeng", "Joty, Shafiq", "Liu, Pengfei", "Radev, Dragomir", "Wu, Chien-Sheng", "Cohan, Arman" ]
Benchmarking Generation and Evaluation Capabilities of Large Language Models for Instruction Controllable Summarization
findings-naacl.280
Poster
2311.09184
[ "https://github.com/yale-nlp/instrusum" ]
https://huggingface.co/papers/2311.09184
4
1
0
10
1
[]
[ "Salesforce/InstruSum" ]
[]
https://aclanthology.org/2024.findings-naacl.281.bib
https://aclanthology.org/2024.findings-naacl.281/
@inproceedings{howard-etal-2024-neurocomparatives, title = "{N}euro{C}omparatives: Neuro-Symbolic Distillation of Comparative Knowledge", author = "Howard, Phillip and Wang, Junlin and Lal, Vasudev and Singer, Gadi and Choi, Yejin and Swayamdipta, Swabha", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.281", doi = "10.18653/v1/2024.findings-naacl.281", pages = "4502--4520", abstract = "Comparative knowledge (e.g., steel is stronger and heavier than styrofoam) is an essential component of our world knowledge, yet understudied in prior literature. In this paper, we harvest the dramatic improvements in knowledge capabilities of language models into a large-scale comparative knowledge base. While the ease of acquisition of such comparative knowledge is much higher from extreme-scale models like GPT-4, compared to their considerably smaller and weaker counterparts such as GPT-2, not even the most powerful models are exempt from making errors. We thus ask: to what extent are models at different scales able to generate valid and diverse comparative knowledge?We introduce NeuroComparatives, a novel framework for comparative knowledge distillation overgenerated from language models such as GPT-variants and LLaMA, followed by stringent filtering of the generated knowledge. Our framework acquires comparative knowledge between everyday objects, producing a corpus of up to 8.8M comparisons over 1.74M entity pairs - 10X larger and 30{\%} more diverse than existing resources. Moreover, human evaluations show that NeuroComparatives outperform existing resources in terms of validity (up to 32{\%} absolute improvement). Our acquired NeuroComparatives leads to performance improvements on five downstream tasks.We find that neuro-symbolic manipulation of smaller models offers complementary benefits to the currently dominant practice of prompting extreme-scale language models for knowledge distillation.", }
Comparative knowledge (e.g., steel is stronger and heavier than styrofoam) is an essential component of our world knowledge, yet understudied in prior literature. In this paper, we harvest the dramatic improvements in knowledge capabilities of language models into a large-scale comparative knowledge base. While the ease of acquisition of such comparative knowledge is much higher from extreme-scale models like GPT-4, compared to their considerably smaller and weaker counterparts such as GPT-2, not even the most powerful models are exempt from making errors. We thus ask: to what extent are models at different scales able to generate valid and diverse comparative knowledge? We introduce NeuroComparatives, a novel framework for comparative knowledge distillation overgenerated from language models such as GPT-variants and LLaMA, followed by stringent filtering of the generated knowledge. Our framework acquires comparative knowledge between everyday objects, producing a corpus of up to 8.8M comparisons over 1.74M entity pairs - 10X larger and 30{\%} more diverse than existing resources. Moreover, human evaluations show that NeuroComparatives outperform existing resources in terms of validity (up to 32{\%} absolute improvement). Our acquired NeuroComparatives lead to performance improvements on five downstream tasks. We find that neuro-symbolic manipulation of smaller models offers complementary benefits to the currently dominant practice of prompting extreme-scale language models for knowledge distillation.
[ "Howard, Phillip", "Wang, Junlin", "Lal, Vasudev", "Singer, Gadi", "Choi, Yejin", "Swayamdipta, Swabha" ]
NeuroComparatives: Neuro-Symbolic Distillation of Comparative Knowledge
findings-naacl.281
Poster
2305.04978
[ "https://github.com/intellabs/multimodal_cognitive_ai" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.282.bib
https://aclanthology.org/2024.findings-naacl.282/
@inproceedings{yu-etal-2024-emotion, title = "Emotion-Anchored Contrastive Learning Framework for Emotion Recognition in Conversation", author = "Yu, Fangxu and Guo, Junjie and Wu, Zhen and Dai, Xinyu", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.282", doi = "10.18653/v1/2024.findings-naacl.282", pages = "4521--4534", abstract = "Emotion Recognition in Conversation (ERC) involves detecting the underlying emotion behind each utterance within a conversation. Effectively generating representations for utterances remains a significant challenge in this task. Recent works propose various models to address this issue, but they still struggle with differentiating similar emotions such as excitement and happiness. To alleviate this problem, We propose an Emotion-Anchored Contrastive Learning (EACL) framework that can generate more distinguishable utterance representations for similar emotions. To achieve this, we utilize label encodings as anchors to guide the learning of utterance representations and design an auxiliary loss to ensure the effective separation of anchors for similar emotions. Moreover, an additional adaptation process is proposed to adapt anchors to serve as effective classifiers to improve classification performance. Across extensive experiments, our proposed EACL achieves state-of-the-art emotion recognition performance and exhibits superior performance on similar emotions. Our code is available at https://github.com/Yu-Fangxu/EACL.", }
Emotion Recognition in Conversation (ERC) involves detecting the underlying emotion behind each utterance within a conversation. Effectively generating representations for utterances remains a significant challenge in this task. Recent works propose various models to address this issue, but they still struggle with differentiating similar emotions such as excitement and happiness. To alleviate this problem, we propose an Emotion-Anchored Contrastive Learning (EACL) framework that can generate more distinguishable utterance representations for similar emotions. To achieve this, we utilize label encodings as anchors to guide the learning of utterance representations and design an auxiliary loss to ensure the effective separation of anchors for similar emotions. Moreover, an additional adaptation process is proposed to adapt anchors to serve as effective classifiers to improve classification performance. Across extensive experiments, our proposed EACL achieves state-of-the-art emotion recognition performance and exhibits superior performance on similar emotions. Our code is available at https://github.com/Yu-Fangxu/EACL.
[ "Yu, Fangxu", "Guo, Junjie", "Wu, Zhen", "Dai, Xinyu" ]
Emotion-Anchored Contrastive Learning Framework for Emotion Recognition in Conversation
findings-naacl.282
Poster
2403.20289
[ "https://github.com/yu-fangxu/eacl" ]
https://huggingface.co/papers/2403.20289
1
1
0
4
1
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.283.bib
https://aclanthology.org/2024.findings-naacl.283/
@inproceedings{liu-etal-2024-suql, title = "{SUQL}: Conversational Search over Structured and Unstructured Data with Large Language Models", author = "Liu, Shicheng and Xu, Jialiang and Tjangnaka, Wesley and Semnani, Sina and Yu, Chen and Lam, Monica", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.283", doi = "10.18653/v1/2024.findings-naacl.283", pages = "4535--4555", abstract = "While most conversational agents are grounded on either free-text or structured knowledge, many knowledge corpora consist of hybrid sources.This paper presents the first conversational agent that supports the full generality of hybrid data access for large knowledge corpora, through a language we developed called SUQL ($\textbf{S}$tructured and $\textbf{U}$nstructured $\textbf{Q}$uery $\textbf{L}$anguage). Specifically, SUQL extends SQL with free-text primitives (${\small \text{SUMMARY}}$ and ${\small \text{ANSWER}}$), so information retrieval can be composed with structured data accesses arbitrarily in a formal, succinct, precise, and interpretable notation. With SUQL, we propose the first semantic parser, an LLM with in-context learning, that can handle hybrid data sources.Our in-context learning-based approach, when applied to the HybridQA dataset, comes within 8.9{\%} Exact Match and 7.1{\%} F1 of the SOTA, which was trained on 62K data samples. More significantly, unlike previous approaches, our technique is applicable to large databases and free-text corpora. We introduce a dataset consisting of crowdsourced questions and conversations on Yelp, a large, real restaurant knowledge base with structured and unstructured data. We show that our few-shot conversational agent based on SUQL finds an entity satisfying all user requirements 90.3{\%} of the time, compared to 63.4{\%} for a baseline based on linearization.", }
While most conversational agents are grounded on either free-text or structured knowledge, many knowledge corpora consist of hybrid sources. This paper presents the first conversational agent that supports the full generality of hybrid data access for large knowledge corpora, through a language we developed called SUQL ($\textbf{S}$tructured and $\textbf{U}$nstructured $\textbf{Q}$uery $\textbf{L}$anguage). Specifically, SUQL extends SQL with free-text primitives (${\small \text{SUMMARY}}$ and ${\small \text{ANSWER}}$), so information retrieval can be composed with structured data accesses arbitrarily in a formal, succinct, precise, and interpretable notation. With SUQL, we propose the first semantic parser, an LLM with in-context learning, that can handle hybrid data sources. Our in-context learning-based approach, when applied to the HybridQA dataset, comes within 8.9{\%} Exact Match and 7.1{\%} F1 of the SOTA, which was trained on 62K data samples. More significantly, unlike previous approaches, our technique is applicable to large databases and free-text corpora. We introduce a dataset consisting of crowdsourced questions and conversations on Yelp, a large, real restaurant knowledge base with structured and unstructured data. We show that our few-shot conversational agent based on SUQL finds an entity satisfying all user requirements 90.3{\%} of the time, compared to 63.4{\%} for a baseline based on linearization.
[ "Liu, Shicheng", "Xu, Jialiang", "Tjangnaka, Wesley", "Semnani, Sina", "Yu, Chen", "Lam, Monica" ]
SUQL: Conversational Search over Structured and Unstructured Data with Large Language Models
findings-naacl.283
Poster
2311.09818
[ "https://github.com/stanford-oval/suql" ]
https://huggingface.co/papers/2311.09818
1
0
0
6
1
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.284.bib
https://aclanthology.org/2024.findings-naacl.284/
@inproceedings{nan-etal-2024-evaluating, title = "On Evaluating the Integration of Reasoning and Action in {LLM} Agents with Database Question Answering", author = "Nan, Linyong and Zhang, Ellen and Zou, Weijin and Zhao, Yilun and Zhou, Wenfei and Cohan, Arman", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.284", doi = "10.18653/v1/2024.findings-naacl.284", pages = "4556--4579", abstract = "This study introduces a new long-form database question answering dataset designed to evaluate how Large Language Models (LLMs) interact with a SQL interpreter. The task necessitates LLMs to strategically generate multiple SQL queries to retrieve sufficient data from a database, to reason with the acquired context, and to synthesize them into a comprehensive analytical narrative. Our findings highlight that this task poses great challenges even for the state-of-the-art **GPT-4** model. We propose and evaluate two interaction strategies, and provide a fine-grained analysis of the individual stages within the interaction. A key discovery is the identification of two primary bottlenecks hindering effective interaction: the capacity for planning and the ability to generate multiple SQL queries. To address the challenge of accurately assessing answer quality, we introduce a multi-agent evaluation framework that simulates the academic peer-review process, enhancing the precision and reliability of our evaluations. This framework allows for a more nuanced understanding of the strengths and limitations of current LLMs in complex retrieval and reasoning tasks.", }
This study introduces a new long-form database question answering dataset designed to evaluate how Large Language Models (LLMs) interact with a SQL interpreter. The task requires LLMs to strategically generate multiple SQL queries to retrieve sufficient data from a database, to reason with the acquired context, and to synthesize them into a comprehensive analytical narrative. Our findings highlight that this task poses great challenges even for the state-of-the-art **GPT-4** model. We propose and evaluate two interaction strategies, and provide a fine-grained analysis of the individual stages within the interaction. A key discovery is the identification of two primary bottlenecks hindering effective interaction: the capacity for planning and the ability to generate multiple SQL queries. To address the challenge of accurately assessing answer quality, we introduce a multi-agent evaluation framework that simulates the academic peer-review process, enhancing the precision and reliability of our evaluations. This framework allows for a more nuanced understanding of the strengths and limitations of current LLMs in complex retrieval and reasoning tasks.
[ "Nan, Linyong", "Zhang, Ellen", "Zou, Weijin", "Zhao, Yilun", "Zhou, Wenfei", "Cohan, Arman" ]
On Evaluating the Integration of Reasoning and Action in LLM Agents with Database Question Answering
findings-naacl.284
Poster
2311.09721
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.285.bib
https://aclanthology.org/2024.findings-naacl.285/
@inproceedings{naik-etal-2024-care, title = "{CARE}: Extracting Experimental Findings From Clinical Literature", author = "Naik, Aakanksha and Kuehl, Bailey and Bransom, Erin and Downey, Doug and Hope, Tom", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.285", doi = "10.18653/v1/2024.findings-naacl.285", pages = "4580--4596", abstract = "Extracting fine-grained experimental findings from literature can provide dramatic utility for scientific applications. Prior work has developed annotation schemas and datasets for limited aspects of this problem, failing to capture the real-world complexity and nuance required. Focusing on biomedicine, this work presents CARE{---}a new IE dataset for the task of extracting clinical findings. We develop a new annotation schema capturing fine-grained findings as n-ary relations between entities and attributes, which unifies phenomena challenging for current IE systems such as discontinuous entity spans, nested relations, variable arity n-ary relations and numeric results in a single schema. We collect extensive annotations for 700 abstracts from two sources: clinical trials and case reports. We also demonstrate the generalizability of our schema to the computer science and materials science domains. We benchmark state-of-the-art IE systems on CARE, showing that even models such as GPT4 struggle. We release our resources to advance research on extracting and aggregating literature findings.", }
Extracting fine-grained experimental findings from literature can provide dramatic utility for scientific applications. Prior work has developed annotation schemas and datasets for limited aspects of this problem, failing to capture the real-world complexity and nuance required. Focusing on biomedicine, this work presents CARE{---}a new IE dataset for the task of extracting clinical findings. We develop a new annotation schema capturing fine-grained findings as n-ary relations between entities and attributes, which unifies phenomena challenging for current IE systems such as discontinuous entity spans, nested relations, variable arity n-ary relations and numeric results in a single schema. We collect extensive annotations for 700 abstracts from two sources: clinical trials and case reports. We also demonstrate the generalizability of our schema to the computer science and materials science domains. We benchmark state-of-the-art IE systems on CARE, showing that even models such as GPT4 struggle. We release our resources to advance research on extracting and aggregating literature findings.
[ "Naik, Aakanksha", "Kuehl, Bailey", "Bransom, Erin", "Downey, Doug", "Hope, Tom" ]
CARE: Extracting Experimental Findings From Clinical Literature
findings-naacl.285
Poster
2311.09736
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.286.bib
https://aclanthology.org/2024.findings-naacl.286/
@inproceedings{wang-etal-2024-personalized, title = "Personalized Federated Learning for Text Classification with Gradient-Free Prompt Tuning", author = "Wang, Rui and Yu, Tong and Zhang, Ruiyi and Kim, Sungchul and Rossi, Ryan and Zhao, Handong and Wu, Junda and Mitra, Subrata and Yao, Lina and Henao, Ricardo", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.286", doi = "10.18653/v1/2024.findings-naacl.286", pages = "4597--4612", abstract = "In this paper, we study personalized federated learning for text classification with Pretrained Language Models (PLMs). We identify two challenges in efficiently leveraging PLMs for personalized federated learning: 1) Communication. PLMs are usually large in size, e.g., with hundreds of millions of parameters, inducing huge communication cost in a federated setting. 2) Local Training. Training with PLMs generally requires back-propagation, during which memory consumption can be several times that of the forward-propagation. This may not be affordable when the PLMs are trained locally on the clients that are resource constrained, e.g., mobile devices with limited access to memory resources. Additionally, the proprietary PLMs can be provided as concealed APIs, for which the back-propagation operations may not be available. In solving these, we propose a training framework that includes an approach of discrete local search for gradient-free local training, along with a compression mechanism inspired from the linear word analogy that allows communicating with discretely indexed tokens, thus significantly reducing the communication cost. Experiments show that our gradient-free framework achieves superior performance compared with baselines.", }
In this paper, we study personalized federated learning for text classification with Pretrained Language Models (PLMs). We identify two challenges in efficiently leveraging PLMs for personalized federated learning: 1) Communication. PLMs are usually large in size, e.g., with hundreds of millions of parameters, inducing huge communication cost in a federated setting. 2) Local Training. Training with PLMs generally requires back-propagation, during which memory consumption can be several times that of the forward-propagation. This may not be affordable when the PLMs are trained locally on the clients that are resource constrained, e.g., mobile devices with limited access to memory resources. Additionally, the proprietary PLMs can be provided as concealed APIs, for which the back-propagation operations may not be available. In solving these, we propose a training framework that includes an approach of discrete local search for gradient-free local training, along with a compression mechanism inspired from the linear word analogy that allows communicating with discretely indexed tokens, thus significantly reducing the communication cost. Experiments show that our gradient-free framework achieves superior performance compared with baselines.
[ "Wang, Rui", "Yu, Tong", "Zhang, Ruiyi", "Kim, Sungchul", "Rossi, Ryan", "Zhao, H", "ong", "Wu, Junda", "Mitra, Subrata", "Yao, Lina", "Henao, Ricardo" ]
Personalized Federated Learning for Text Classification with Gradient-Free Prompt Tuning
findings-naacl.286
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.287.bib
https://aclanthology.org/2024.findings-naacl.287/
@inproceedings{guo-etal-2024-sgsh, title = "{SGSH}: Stimulate Large Language Models with Skeleton Heuristics for Knowledge Base Question Generation", author = "Guo, Shasha and Liao, Lizi and Zhang, Jing and Wang, Yanling and Li, Cuiping and Chen, Hong", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.287", doi = "10.18653/v1/2024.findings-naacl.287", pages = "4613--4625", abstract = "Knowledge base question generation (KBQG) aims to generate natural language questions from a set of triplet facts extracted from KB. Existing methods have significantly boosted the performance of KBQG via pre-trained language models (PLMs) thanks to the richly endowed semantic knowledge. With the advance of pre-training techniques, large language models (LLMs) (e.g., GPT-3.5) undoubtedly possess much more semantic knowledge. Therefore, how to effectively organize and exploit the abundant knowledge for KBQG becomes the focus of our study. In this work, we propose SGSH {---} a simple and effective framework to Stimulate GPT-3.5 with Skeleton Heuristics to enhance KBQG. The framework incorporates {``}skeleton heuristics{''}, which provides more fine-grained guidance associated with each input to stimulate LLMs to generate optimal questions, encompassing essential elements like the question phrase and the auxiliary verb.More specifically, we devise an automatic data construction strategy leveraging ChatGPT to construct a skeleton training dataset, based on which we employ a soft prompting approach to train a BART model dedicated to generating the skeleton associated with each input.Subsequently, skeleton heuristics are encoded into the prompt to incentivize GPT-3.5 to generate desired questions. Extensive experiments demonstrate that SGSH derives the new state-of-the-art performance on the KBQG tasks.", }
Knowledge base question generation (KBQG) aims to generate natural language questions from a set of triplet facts extracted from KB. Existing methods have significantly boosted the performance of KBQG via pre-trained language models (PLMs) thanks to the richly endowed semantic knowledge. With the advance of pre-training techniques, large language models (LLMs) (e.g., GPT-3.5) undoubtedly possess much more semantic knowledge. Therefore, how to effectively organize and exploit the abundant knowledge for KBQG becomes the focus of our study. In this work, we propose SGSH {---} a simple and effective framework to Stimulate GPT-3.5 with Skeleton Heuristics to enhance KBQG. The framework incorporates {``}skeleton heuristics{''}, which provide more fine-grained guidance associated with each input to stimulate LLMs to generate optimal questions, encompassing essential elements like the question phrase and the auxiliary verb. More specifically, we devise an automatic data construction strategy leveraging ChatGPT to construct a skeleton training dataset, based on which we employ a soft prompting approach to train a BART model dedicated to generating the skeleton associated with each input. Subsequently, skeleton heuristics are encoded into the prompt to incentivize GPT-3.5 to generate desired questions. Extensive experiments demonstrate that SGSH derives the new state-of-the-art performance on the KBQG tasks.
[ "Guo, Shasha", "Liao, Lizi", "Zhang, Jing", "Wang, Yanling", "Li, Cuiping", "Chen, Hong" ]
SGSH: Stimulate Large Language Models with Skeleton Heuristics for Knowledge Base Question Generation
findings-naacl.287
Poster
2404.01923
[ "https://github.com/ruckbreasoning/sgsh" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.288.bib
https://aclanthology.org/2024.findings-naacl.288/
@inproceedings{sakhovskiy-etal-2024-biomedical, title = "Biomedical Entity Representation with Graph-Augmented Multi-Objective Transformer", author = "Sakhovskiy, Andrey and Semenova, Natalia and Kadurin, Artur and Tutubalina, Elena", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.288", doi = "10.18653/v1/2024.findings-naacl.288", pages = "4626--4643", abstract = "Modern biomedical concept representations are mostly trained on synonymous concept names from a biomedical knowledge base, ignoring the inter-concept interactions and a concept{'}s local neighborhood in a knowledge base graph. In this paper, we introduce Biomedical Entity Representation with a Graph-Augmented Multi-Objective Transformer (BERGAMOT), which adopts the power of pre-trained language models (LMs) and graph neural networks to capture both inter-concept and intra-concept interactions from the multilingual UMLS graph. To obtain fine-grained graph representations, we introduce two additional graph-based objectives: (i) a node-level contrastive objective and (ii) the Deep Graph Infomax (DGI) loss, which maximizes the mutual information between a local subgraph and a high-level graph summary. We apply contrastive loss on textual and graph representations to make them less sensitive to surface forms and enable intermodal knowledge exchange. BERGAMOT achieves state-of-the-art results in zero-shot entity linking without task-specific supervision on 4 of 5 languages of the Mantra corpus and on 8 of 10 languages of the XL-BEL benchmark.", }
Modern biomedical concept representations are mostly trained on synonymous concept names from a biomedical knowledge base, ignoring the inter-concept interactions and a concept{'}s local neighborhood in a knowledge base graph. In this paper, we introduce Biomedical Entity Representation with a Graph-Augmented Multi-Objective Transformer (BERGAMOT), which adopts the power of pre-trained language models (LMs) and graph neural networks to capture both inter-concept and intra-concept interactions from the multilingual UMLS graph. To obtain fine-grained graph representations, we introduce two additional graph-based objectives: (i) a node-level contrastive objective and (ii) the Deep Graph Infomax (DGI) loss, which maximizes the mutual information between a local subgraph and a high-level graph summary. We apply contrastive loss on textual and graph representations to make them less sensitive to surface forms and enable intermodal knowledge exchange. BERGAMOT achieves state-of-the-art results in zero-shot entity linking without task-specific supervision on 4 of 5 languages of the Mantra corpus and on 8 of 10 languages of the XL-BEL benchmark.
[ "Sakhovskiy, Andrey", "Semenova, Natalia", "Kadurin, Artur", "Tutubalina, Elena" ]
Biomedical Entity Representation with Graph-Augmented Multi-Objective Transformer
findings-naacl.288
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.289.bib
https://aclanthology.org/2024.findings-naacl.289/
@inproceedings{le-2024-cross, title = "Cross-Lingual Summarization with Pseudo-Label Regularization", author = "Le, Thang", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.289", doi = "10.18653/v1/2024.findings-naacl.289", pages = "4644--4677", abstract = "Cross-Lingual Summarization (XLS) aims to summarize a document in the source language into a condensed version in the target language, effectively removing language barriers for non-native readers. Previous approaches, however, have the same limitation that only a single reference (gold summary) is exploited during model training, making the base model exposed to an underrepresented hypothesis space since the actual number of possible hypotheses is exponentially large. To alleviate this problem, we present a study adopting pseudo-labels in regularizing standard cross-lingual summarization training. We investigate several components leading to the gains in regularization training with verified experiments involving 8 diverse languages from different families. Conclusively, we show that pseudo-labeling is a simple and effective approach that significantly improves over standard gold reference training in XLS.", }
Cross-Lingual Summarization (XLS) aims to summarize a document in the source language into a condensed version in the target language, effectively removing language barriers for non-native readers. Previous approaches, however, have the same limitation that only a single reference (gold summary) is exploited during model training, making the base model exposed to an underrepresented hypothesis space since the actual number of possible hypotheses is exponentially large. To alleviate this problem, we present a study adopting pseudo-labels in regularizing standard cross-lingual summarization training. We investigate several components leading to the gains in regularization training with verified experiments involving 8 diverse languages from different families. Conclusively, we show that pseudo-labeling is a simple and effective approach that significantly improves over standard gold reference training in XLS.
[ "Le, Thang" ]
Cross-Lingual Summarization with Pseudo-Label Regularization
findings-naacl.289
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.290.bib
https://aclanthology.org/2024.findings-naacl.290/
@inproceedings{priya-etal-2024-way, title = "On the Way to Gentle {AI} Counselor: Politeness Cause Elicitation and Intensity Tagging in Code-mixed {H}inglish Conversations for Social Good", author = "Priya, Priyanshu and Singh, Gopendra and Firdaus, Mauajama and Agrawal, Jyotsna and Ekbal, Asif", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.290", doi = "10.18653/v1/2024.findings-naacl.290", pages = "4678--4696", abstract = "Politeness is a multifaceted concept influenced by individual perceptions of what is considered polite or impolite. With this objective, we introduce a novel task - Politeness Cause Elicitation and Intensity Tagging (PCEIT). This task focuses on conversations and aims to identify the underlying reasons behind the use of politeness and gauge the degree of politeness conveyed. To address this objective, we create HING-POEM, a new conversational dataset in Hinglish (a blend of Hindi and English) for mental health and legal counseling of crime victims. The rationale for the domain selection lies in the paramount importance of politeness in mental health and legal counseling of crime victims to ensure a compassionate and cordial atmosphere for them. We enrich the HING-POEM dataset by annotating it with politeness labels, politeness causal spans, and intensity values at the level of individual utterances. In the context of the introduced PCEIT task, we present PAANTH (Politeness CAuse ElicitAion and INtensity Tagging in Hinglish), a comprehensive framework based on Contextual Enhanced Attentive Convolution Transformer. We conduct extensive quantitative and qualitative evaluations to establish the effectiveness of our proposed approach using the newly constructed dataset. Our approach is compared against state-of-the-art baselines, and these analyses help demonstrate the superiority of our method.", }
Politeness is a multifaceted concept influenced by individual perceptions of what is considered polite or impolite. With this objective, we introduce a novel task - Politeness Cause Elicitation and Intensity Tagging (PCEIT). This task focuses on conversations and aims to identify the underlying reasons behind the use of politeness and gauge the degree of politeness conveyed. To address this objective, we create HING-POEM, a new conversational dataset in Hinglish (a blend of Hindi and English) for mental health and legal counseling of crime victims. The rationale for the domain selection lies in the paramount importance of politeness in mental health and legal counseling of crime victims to ensure a compassionate and cordial atmosphere for them. We enrich the HING-POEM dataset by annotating it with politeness labels, politeness causal spans, and intensity values at the level of individual utterances. In the context of the introduced PCEIT task, we present PAANTH (Politeness CAuse ElicitAion and INtensity Tagging in Hinglish), a comprehensive framework based on Contextual Enhanced Attentive Convolution Transformer. We conduct extensive quantitative and qualitative evaluations to establish the effectiveness of our proposed approach using the newly constructed dataset. Our approach is compared against state-of-the-art baselines, and these analyses help demonstrate the superiority of our method.
[ "Priya, Priyanshu", "Singh, Gopendra", "Firdaus, Mauajama", "Agrawal, Jyotsna", "Ekbal, Asif" ]
On the Way to Gentle AI Counselor: Politeness Cause Elicitation and Intensity Tagging in Code-mixed Hinglish Conversations for Social Good
findings-naacl.290
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.291.bib
https://aclanthology.org/2024.findings-naacl.291/
@inproceedings{artemiev-etal-2024-leveraging, title = "Leveraging Summarization for Unsupervised Dialogue Topic Segmentation", author = "Artemiev, Aleksei and Parinov, Daniil and Grishanov, Alexey and Borisov, Ivan and Vasilev, Alexey and Muravetskii, Daniil and Rezvykh, Aleksey and Goncharov, Aleksei and Savchenko, Andrey", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.291", doi = "10.18653/v1/2024.findings-naacl.291", pages = "4697--4704", abstract = "Traditional approaches to dialogue segmentation perform reasonably well on synthetic or written dialogues but suffer when dealing with spoken, noisy dialogs. In addition, such methods require careful tuning of hyperparameters. We propose to leverage a novel approach that is based on dialogue summaries. Experiments on different datasets showed that the new approach outperforms popular state-of-the-art algorithms in unsupervised topic segmentation and requires less setup.", }
Traditional approaches to dialogue segmentation perform reasonably well on synthetic or written dialogues but suffer when dealing with spoken, noisy dialogs. In addition, such methods require careful tuning of hyperparameters. We propose to leverage a novel approach that is based on dialogue summaries. Experiments on different datasets showed that the new approach outperforms popular state-of-the-art algorithms in unsupervised topic segmentation and requires less setup.
[ "Artemiev, Aleksei", "Parinov, Daniil", "Grishanov, Alexey", "Borisov, Ivan", "Vasilev, Alexey", "Muravetskii, Daniil", "Rezvykh, Aleksey", "Goncharov, Aleksei", "Savchenko, Andrey" ]
Leveraging Summarization for Unsupervised Dialogue Topic Segmentation
findings-naacl.291
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.292.bib
https://aclanthology.org/2024.findings-naacl.292/
@inproceedings{feng-etal-2024-llama, title = "{LL}a{MA}-Rider: Spurring Large Language Models to Explore the Open World", author = "Feng, Yicheng and Wang, Yuxuan and Liu, Jiazheng and Zheng, Sipeng and Lu, Zongqing", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.292", doi = "10.18653/v1/2024.findings-naacl.292", pages = "4705--4724", abstract = "Recently, various studies have leveraged Large Language Models (LLMs) to help decision-making and planning in environments and try to align the LLMs{'} knowledge with the world conditions. Nonetheless, the capacity of LLMs to continuously acquire environmental knowledge and adapt in an open world remains uncertain. In this paper, we propose an approach to spur LLMs to explore the open world, gather experiences, and learn to improve their task-solving capabilities. In this approach, a multi-round feedback-revision mechanism is utilized to encourage LLMs to actively select appropriate revision actions guided by feedback information from the environment. This facilitates exploration and enhances the model{'}s performance. Besides, we integrate sub-task relabeling to assist LLMs in maintaining consistency in sub-task planning and help the model learn the combinatorial nature between tasks, enabling it to complete a wider range of tasks through training based on the acquired exploration experiences. By evaluation in Minecraft, an open-ended sandbox world, we demonstrate that our approach LLaMA-Rider enhances the efficiency of the LLM in exploring the environment, and effectively improves the LLM{'}s ability to accomplish more tasks through fine-tuning with merely 1.3k instances of collected data, showing minimal training costs compared to the baseline using reinforcement learning. The code is available at https://github.com/PKU-RL/LLaMA-Rider.", }
Recently, various studies have leveraged Large Language Models (LLMs) to help decision-making and planning in environments and try to align the LLMs{'} knowledge with the world conditions. Nonetheless, the capacity of LLMs to continuously acquire environmental knowledge and adapt in an open world remains uncertain. In this paper, we propose an approach to spur LLMs to explore the open world, gather experiences, and learn to improve their task-solving capabilities. In this approach, a multi-round feedback-revision mechanism is utilized to encourage LLMs to actively select appropriate revision actions guided by feedback information from the environment. This facilitates exploration and enhances the model{'}s performance. Besides, we integrate sub-task relabeling to assist LLMs in maintaining consistency in sub-task planning and help the model learn the combinatorial nature between tasks, enabling it to complete a wider range of tasks through training based on the acquired exploration experiences. By evaluation in Minecraft, an open-ended sandbox world, we demonstrate that our approach LLaMA-Rider enhances the efficiency of the LLM in exploring the environment, and effectively improves the LLM{'}s ability to accomplish more tasks through fine-tuning with merely 1.3k instances of collected data, showing minimal training costs compared to the baseline using reinforcement learning. The code is available at https://github.com/PKU-RL/LLaMA-Rider.
[ "Feng, Yicheng", "Wang, Yuxuan", "Liu, Jiazheng", "Zheng, Sipeng", "Lu, Zongqing" ]
LLaMA-Rider: Spurring Large Language Models to Explore the Open World
findings-naacl.292
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.293.bib
https://aclanthology.org/2024.findings-naacl.293/
@inproceedings{park-etal-2024-contrastive, title = "Contrastive Learning as a Polarizer: Mitigating Gender Bias by Fair and Biased sentences", author = "Park, Kyungmin and Oh, Sihyun and Kim, Daehyun and Kim, Juae", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.293", doi = "10.18653/v1/2024.findings-naacl.293", pages = "4725--4736", abstract = "Recently, language models have accelerated the improvement in natural language processing. However, recent studies have highlighted a significant issue: social biases inherent in training data can lead models to learn and propagate these biases. In this study, we propose a contrastive learning method for bias mitigation, utilizing anchor points to push further negatives and pull closer positives within the representation space. This approach employs stereotypical data as negatives and stereotype-free data as positives, enhancing debiasing performance. Our model attained state-of-the-art performance in the ICAT score on the StereoSet, a benchmark for measuring bias in models. In addition, we observed that effective debiasing is achieved through an awareness of biases, as evidenced by improved hate speech detection scores. The implementation code and trained models are available at https://github.com/HUFS-NLP/CL{\_}Polarizer.git.", }
Recently, language models have accelerated the improvement in natural language processing. However, recent studies have highlighted a significant issue: social biases inherent in training data can lead models to learn and propagate these biases. In this study, we propose a contrastive learning method for bias mitigation, utilizing anchor points to push further negatives and pull closer positives within the representation space. This approach employs stereotypical data as negatives and stereotype-free data as positives, enhancing debiasing performance. Our model attained state-of-the-art performance in the ICAT score on the StereoSet, a benchmark for measuring bias in models. In addition, we observed that effective debiasing is achieved through an awareness of biases, as evidenced by improved hate speech detection scores. The implementation code and trained models are available at https://github.com/HUFS-NLP/CL{\_}Polarizer.git.
[ "Park, Kyungmin", "Oh, Sihyun", "Kim, Daehyun", "Kim, Juae" ]
Contrastive Learning as a Polarizer: Mitigating Gender Bias by Fair and Biased sentences
findings-naacl.293
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.294.bib
https://aclanthology.org/2024.findings-naacl.294/
@inproceedings{zhu-etal-2024-pollmgraph, title = "{P}o{LLM}graph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics", author = "Zhu, Derui and Chen, Dingfan and Li, Qing and Chen, Zongxiong and Ma, Lei and Grossklags, Jens and Fritz, Mario", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.294", doi = "10.18653/v1/2024.findings-naacl.294", pages = "4737--4751", abstract = "Despite tremendous advancements in large language models (LLMs) over recent years, a notably urgent challenge for their practical deployment is the phenomenon of ''$\textit{hallucination}${''}, where the model fabricates facts and produces non-factual statements. In response, we propose $\texttt{PoLLMgraph}${---}a Polygraph for LLMs{---}as an effective model-based white-box detection and forecasting approach. $\texttt{PoLLMgraph}$ distinctly differs from the large body of existing research that concentrates on addressing such challenges through black-box evaluations. In particular, we demonstrate that hallucination can be effectively detected by analyzing the LLM{'}s internal state transition dynamics during generation via tractable probabilistic models. Experimental results on various open-source LLMs confirm the efficacy of $\texttt{PoLLMgraph}$, outperforming state-of-the-art methods by a considerable margin, evidenced by over 20{\%} improvement in AUC-ROC on common benchmarking datasets like TruthfulQA. Our work paves a new way for model-based white-box analysis of LLMs, motivating the research community to further explore, understand, and refine the intricate dynamics of LLM behaviors.", }
Despite tremendous advancements in large language models (LLMs) over recent years, a notably urgent challenge for their practical deployment is the phenomenon of ''$\textit{hallucination}${''}, where the model fabricates facts and produces non-factual statements. In response, we propose $\texttt{PoLLMgraph}${---}a Polygraph for LLMs{---}as an effective model-based white-box detection and forecasting approach. $\texttt{PoLLMgraph}$ distinctly differs from the large body of existing research that concentrates on addressing such challenges through black-box evaluations. In particular, we demonstrate that hallucination can be effectively detected by analyzing the LLM{'}s internal state transition dynamics during generation via tractable probabilistic models. Experimental results on various open-source LLMs confirm the efficacy of $\texttt{PoLLMgraph}$, outperforming state-of-the-art methods by a considerable margin, evidenced by over 20{\%} improvement in AUC-ROC on common benchmarking datasets like TruthfulQA. Our work paves a new way for model-based white-box analysis of LLMs, motivating the research community to further explore, understand, and refine the intricate dynamics of LLM behaviors.
[ "Zhu, Derui", "Chen, Dingfan", "Li, Qing", "Chen, Zongxiong", "Ma, Lei", "Grossklags, Jens", "Fritz, Mario" ]
PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics
findings-naacl.294
Poster
2404.04722
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.295.bib
https://aclanthology.org/2024.findings-naacl.295/
@inproceedings{vladika-matthes-2024-improving, title = "Improving Health Question Answering with Reliable and Time-Aware Evidence Retrieval", author = "Vladika, Juraj and Matthes, Florian", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.295", doi = "10.18653/v1/2024.findings-naacl.295", pages = "4752--4763", abstract = "In today{'}s digital world, seeking answers to health questions on the Internet is a common practice. However, existing question answering (QA) systems often rely on using pre-selected and annotated evidence documents, thus making them inadequate for addressing novel questions. Our study focuses on the open-domain QA setting, where the key challenge is to first uncover relevant evidence in large knowledge bases. By utilizing the common retrieve-then-read QA pipeline and PubMed as a trustworthy collection of medical research documents, we answer health questions from three diverse datasets. We modify different retrieval settings to observe their influence on the QA pipeline{'}s performance, including the number of retrieved documents, sentence selection process, the publication year of articles, and their number of citations. Our results reveal that cutting down on the amount of retrieved documents and favoring more recent and highly cited documents can improve the final macro F1 score up to 10{\%}. We discuss the results, highlight interesting examples, and outline challenges for future research, like managing evidence disagreement and crafting user-friendly explanations.", }
In today{'}s digital world, seeking answers to health questions on the Internet is a common practice. However, existing question answering (QA) systems often rely on using pre-selected and annotated evidence documents, thus making them inadequate for addressing novel questions. Our study focuses on the open-domain QA setting, where the key challenge is to first uncover relevant evidence in large knowledge bases. By utilizing the common retrieve-then-read QA pipeline and PubMed as a trustworthy collection of medical research documents, we answer health questions from three diverse datasets. We modify different retrieval settings to observe their influence on the QA pipeline{'}s performance, including the number of retrieved documents, sentence selection process, the publication year of articles, and their number of citations. Our results reveal that cutting down on the amount of retrieved documents and favoring more recent and highly cited documents can improve the final macro F1 score up to 10{\%}. We discuss the results, highlight interesting examples, and outline challenges for future research, like managing evidence disagreement and crafting user-friendly explanations.
[ "Vladika, Juraj", "Matthes, Florian" ]
Improving Health Question Answering with Reliable and Time-Aware Evidence Retrieval
findings-naacl.295
Poster
2404.08359
[ "https://github.com/jvladika/improving-health-qa" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.296.bib
https://aclanthology.org/2024.findings-naacl.296/
@inproceedings{langedijk-etal-2024-decoderlens, title = "{D}ecoder{L}ens: Layerwise Interpretation of Encoder-Decoder Transformers", author = "Langedijk, Anna and Mohebbi, Hosein and Sarti, Gabriele and Zuidema, Willem and Jumelet, Jaap", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.296", doi = "10.18653/v1/2024.findings-naacl.296", pages = "4764--4780", abstract = "In recent years, several interpretability methods have been proposed to interpret the inner workings of Transformer models at different levels of precision and complexity. In this work, we propose a simple but effective technique to analyze encoder-decoder Transformers. Our method, which we name DecoderLens, allows the decoder to cross-attend representations of intermediate encoder activations instead of using the default final encoder output. The method thus maps uninterpretable intermediate vector representations to human-interpretable sequences of words or symbols, shedding new light on the information flow in this popular but understudied class of models. We apply DecoderLens to question answering, logical reasoning, speech recognition and machine translation models, finding that simpler subtasks are solved with high precision by low and intermediate encoder layers.", }
In recent years, several interpretability methods have been proposed to interpret the inner workings of Transformer models at different levels of precision and complexity. In this work, we propose a simple but effective technique to analyze encoder-decoder Transformers. Our method, which we name DecoderLens, allows the decoder to cross-attend representations of intermediate encoder activations instead of using the default final encoder output. The method thus maps uninterpretable intermediate vector representations to human-interpretable sequences of words or symbols, shedding new light on the information flow in this popular but understudied class of models. We apply DecoderLens to question answering, logical reasoning, speech recognition and machine translation models, finding that simpler subtasks are solved with high precision by low and intermediate encoder layers.
[ "Langedijk, Anna", "Mohebbi, Hosein", "Sarti, Gabriele", "Zuidema, Willem", "Jumelet, Jaap" ]
DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers
findings-naacl.296
Poster
2310.03686
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.1.bib
https://aclanthology.org/2024.americasnlp-1.1/
@inproceedings{gessler-von-der-wense-2024-nlp, title = "{NLP} for Language Documentation: Two Reasons for the Gap between Theory and Practice", author = "Gessler, Luke and von der Wense, Katharina", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.1", doi = "10.18653/v1/2024.americasnlp-1.1", pages = "1--6", abstract = "Both NLP researchers and linguists have expressed a desire to use language technologies in language documentation, but most documentary work still proceeds without them, presenting a lost opportunity to hasten the preservation of the world{'}s endangered languages, such as those spoken in Latin America. In this work, we empirically measure two factors that have previously been identified as explanations of this low utilization: curricular offerings in graduate programs, and rates of interdisciplinary collaboration in publications related to NLP in language documentation. Our findings verify the claim that interdisciplinary training and collaborations are scarce and support the view that interdisciplinary curricular offerings facilitate interdisciplinary collaborations.", }
Both NLP researchers and linguists have expressed a desire to use language technologies in language documentation, but most documentary work still proceeds without them, presenting a lost opportunity to hasten the preservation of the world{'}s endangered languages, such as those spoken in Latin America. In this work, we empirically measure two factors that have previously been identified as explanations of this low utilization: curricular offerings in graduate programs, and rates of interdisciplinary collaboration in publications related to NLP in language documentation. Our findings verify the claim that interdisciplinary training and collaborations are scarce and support the view that interdisciplinary curricular offerings facilitate interdisciplinary collaborations.
[ "Gessler, Luke", "von der Wense, Katharina" ]
NLP for Language Documentation: Two Reasons for the Gap between Theory and Practice
americasnlp-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.2.bib
https://aclanthology.org/2024.americasnlp-1.2/
@inproceedings{prieto-etal-2024-translation, title = "Translation systems for low-resource Colombian Indigenous languages, a first step towards cultural preservation", author = "Prieto, Juan and Martinez, Cristian and Robles, Melissa and Moreno, Alberto and Palacios, Sara and Manrique, Rub{\'e}n", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.2", doi = "10.18653/v1/2024.americasnlp-1.2", pages = "7--14", abstract = "The use of machine learning and Natural Language Processing (NLP) technologies can assist in the preservation and revitalization of indigenous languages, particularly those classified as {``}low-resource.{''} Given the increasing digitization of information, the development of translation tools for these languages is of significant importance. These tools not only facilitate better access to digital resources for indigenous communities but also stimulate language preservation efforts and potentially foster more inclusive, equitable societies, as demonstrated by the AmericasNLP workshop since 2021. The focus of this paper is Colombia, a country home to 65 distinct indigenous languages, presenting a vast spectrum of linguistic characteristics. This cultural and linguistic diversity is an inherent pillar of the nation{'}s identity, and safeguarding it has been increasingly challenging given the dwindling number of native speakers and the communities{'} inclination towards oral traditions. Considering this context, scattered initiatives exist to develop translation systems for these languages. However, these endeavors suffer from a lack of consolidated, comparable data. This paper consolidates a dataset of parallel data in four Colombian indigenous languages - Wayuunaiki, Arhuaco, Inga, and Nasa - gathered from existing digital resources. It also presents the creation of baseline models for future translation and comparison, ultimately serving as a catalyst for incorporating more digital resources progressively.", }
The use of machine learning and Natural Language Processing (NLP) technologies can assist in the preservation and revitalization of indigenous languages, particularly those classified as {``}low-resource.{''} Given the increasing digitization of information, the development of translation tools for these languages is of significant importance. These tools not only facilitate better access to digital resources for indigenous communities but also stimulate language preservation efforts and potentially foster more inclusive, equitable societies, as demonstrated by the AmericasNLP workshop since 2021. The focus of this paper is Colombia, a country home to 65 distinct indigenous languages, presenting a vast spectrum of linguistic characteristics. This cultural and linguistic diversity is an inherent pillar of the nation{'}s identity, and safeguarding it has been increasingly challenging given the dwindling number of native speakers and the communities{'} inclination towards oral traditions. Considering this context, scattered initiatives exist to develop translation systems for these languages. However, these endeavors suffer from a lack of consolidated, comparable data. This paper consolidates a dataset of parallel data in four Colombian indigenous languages - Wayuunaiki, Arhuaco, Inga, and Nasa - gathered from existing digital resources. It also presents the creation of baseline models for future translation and comparison, ultimately serving as a catalyst for incorporating more digital resources progressively.
[ "Prieto, Juan", "Martinez, Cristian", "Robles, Melissa", "Moreno, Alberto", "Palacios, Sara", "Manrique, Rub{\\'e}n" ]
Translation systems for low-resource Colombian Indigenous languages, a first step towards cultural preservation
americasnlp-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.3.bib
https://aclanthology.org/2024.americasnlp-1.3/
@inproceedings{kriukova-arppe-2024-word, title = "Word-level prediction in {P}lains {C}ree: First steps", author = "Kriukova, Olga and Arppe, Antti", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.3", doi = "10.18653/v1/2024.americasnlp-1.3", pages = "15--23", abstract = "Plains Cree (n{\^e}hiyaw{\^e}win) is a morphologically complex and predominantly prefixing language. The combinatory potential of inflectional and derivational/lexical prefixes and verb stems in Plains Cree makes it challenging for traditional auto-completion (or word suggestion) approaches to handle. The lack of a large corpus of Plains Cree also complicates the situation. This study attempts to investigate how well a BiLSTM model trained on a small Cree corpus can handle a word suggestion task. Moreover, this study evaluates whether the use of semantically and morphosyntactically refined Word2Vec embeddings can improve the overall accuracy and quality of BiLSTM suggestions. The results show that some models trained with the refined vectors provide semantically and morphosyntactically better suggestions. They are also more accurate in predictions of content words. The model trained with the non-refined vectors, in contrast, was better at predicting conjunctions, particles, and other non-inflecting words. The models trained with different refined vector combinations provide the expected next word among top-10 predictions in 36.73 to 37.88{\%} of cases (depending on the model).", }
Plains Cree (n{\^e}hiyaw{\^e}win) is a morphologically complex and predominantly prefixing language. The combinatory potential of inflectional and derivational/lexical prefixes and verb stems in Plains Cree makes it challenging for traditional auto-completion (or word suggestion) approaches to handle. The lack of a large corpus of Plains Cree also complicates the situation. This study attempts to investigate how well a BiLSTM model trained on a small Cree corpus can handle a word suggestion task. Moreover, this study evaluates whether the use of semantically and morphosyntactically refined Word2Vec embeddings can improve the overall accuracy and quality of BiLSTM suggestions. The results show that some models trained with the refined vectors provide semantically and morphosyntactically better suggestions. They are also more accurate in predictions of content words. The model trained with the non-refined vectors, in contrast, was better at predicting conjunctions, particles, and other non-inflecting words. The models trained with different refined vector combinations provide the expected next word among top-10 predictions in 36.73 to 37.88{\%} of cases (depending on the model).
[ "Kriukova, Olga", "Arppe, Antti" ]
Word-level prediction in Plains Cree: First steps
americasnlp-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.4.bib
https://aclanthology.org/2024.americasnlp-1.4/
@inproceedings{pedrazzini-2024-mapping, title = "Mapping {`}when{'}-clauses in {L}atin {A}merican and {C}aribbean languages: an experiment in subtoken-based typology", author = "Pedrazzini, Nilo", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.4", doi = "10.18653/v1/2024.americasnlp-1.4", pages = "24--33", abstract = "Languages can encode temporal subordination lexically, via subordinating conjunctions, and morphologically, by marking the relation on the predicate. Systematic cross-linguistic variation among the former can be studied using well-established token-based typological approaches to token-aligned parallel corpora. Variation among different morphological means is instead much harder to tackle and therefore more poorly understood, despite being predominant in several language groups. This paper explores variation in the expression of generic temporal subordination ({`}when{'}-clauses) among the languages of Latin America and the Caribbean, where morphological marking is particularly common. It presents probabilistic semantic maps computed on the basis of the languages of the region, thus avoiding bias towards the many world{'}s languages that exclusively use lexified connectors, incorporating associations between character n-grams and English {`}when{'}. The approach allows capturing morphological clause-linkage devices in addition to lexified connectors, paving the way for larger-scale, strategy-agnostic analyses of typological variation in temporal subordination.", }
Languages can encode temporal subordination lexically, via subordinating conjunctions, and morphologically, by marking the relation on the predicate. Systematic cross-linguistic variation among the former can be studied using well-established token-based typological approaches to token-aligned parallel corpora. Variation among different morphological means is instead much harder to tackle and therefore more poorly understood, despite being predominant in several language groups. This paper explores variation in the expression of generic temporal subordination ({`}when{'}-clauses) among the languages of Latin America and the Caribbean, where morphological marking is particularly common. It presents probabilistic semantic maps computed on the basis of the languages of the region, thus avoiding bias towards the many world{'}s languages that exclusively use lexified connectors, incorporating associations between character n-grams and English {`}when{'}. The approach allows capturing morphological clause-linkage devices in addition to lexified connectors, paving the way for larger-scale, strategy-agnostic analyses of typological variation in temporal subordination.
[ "Pedrazzini, Nilo" ]
Mapping `when'-clauses in Latin American and Caribbean languages: an experiment in subtoken-based typology
americasnlp-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.5.bib
https://aclanthology.org/2024.americasnlp-1.5/
@inproceedings{adelani-etal-2024-comparing, title = "Comparing {LLM} prompting with Cross-lingual transfer performance on Indigenous and Low-resource {B}razilian Languages", author = {Adelani, David Ifeoluwa and Do{\u{g}}ru{\"o}z, A. Seza and Coneglian, Andr{\'e} and Ojha, Atul Kr.}, editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.5", doi = "10.18653/v1/2024.americasnlp-1.5", pages = "34--41", abstract = "Large Language Models are transforming NLP for a lot of tasks. However, how LLMs perform NLP tasks for LRLs is less explored. In alliance with the theme track of the NAACL{'}24, we focus on 12 low-resource languages (LRLs) from Brazil, 2 LRLs from Africa and 2 high-resource languages (HRLs) (e.g., English and Brazilian Portuguese). Our results indicate that the LLMs perform worse for the labeling of LRLs in comparison to HRLs in general. We explain the reasons behind this failure and provide an error analyses through examples from 2 Brazilian LRLs.", }
Large Language Models are transforming NLP for a lot of tasks. However, how LLMs perform NLP tasks for LRLs is less explored. In alliance with the theme track of the NAACL{'}24, we focus on 12 low-resource languages (LRLs) from Brazil, 2 LRLs from Africa and 2 high-resource languages (HRLs) (e.g., English and Brazilian Portuguese). Our results indicate that the LLMs perform worse for the labeling of LRLs in comparison to HRLs in general. We explain the reasons behind this failure and provide an error analyses through examples from 2 Brazilian LRLs.
[ "Adelani, David Ifeoluwa", "Do{\\u{g}}ru{\\\"o}z, A. Seza", "Coneglian, Andr{\\'e}", "Ojha, Atul Kr." ]
Comparing LLM prompting with Cross-lingual transfer performance on Indigenous and Low-resource Brazilian Languages
americasnlp-1.5
Poster
2404.18286
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.6.bib
https://aclanthology.org/2024.americasnlp-1.6/
@inproceedings{webber-etal-2024-analyzing, title = "Analyzing Finetuned Vision Models for {M}ixtec Codex Interpretation", author = "Webber, Alexander and Sayers, Zachary and Wu, Amy and Thorner, Elizabeth and Witter, Justin and Ayoubi, Gabriel and Grant, Christan", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.6", doi = "10.18653/v1/2024.americasnlp-1.6", pages = "42--49", abstract = "Throughout history, pictorial record-keeping has been used to document events, stories, and concepts. A popular example of this is the Tzolk{'}in Maya Calendar. The pre-Columbian Mixtec society also recorded many works through graphical media called codices that depict both stories and real events. Mixtec codices are unique because the depicted scenes are highly structured within and across documents. As a first effort toward translation, we created two binary classification tasks over Mixtec codices, namely, gender and pose. The composition of figures within a codex is essential for understanding the codex{'}s narrative. We labeled a dataset with around 1300 figures drawn from three codices of varying qualities. We finetuned the Visual Geometry Group 16 (VGG-16) and Vision Transformer 16 (ViT-16) models, measured their performance, and compared learned features with expert opinions found in literature. The results show that when finetuned, both VGG and ViT perform well, with the transformer-based architecture (ViT) outperforming the CNN-based architecture (VGG) at higher learning rates. We are releasing this work to allow collaboration with the Mixtec community and domain scientists.", }
Throughout history, pictorial record-keeping has been used to document events, stories, and concepts. A popular example of this is the Tzolk{'}in Maya Calendar. The pre-Columbian Mixtec society also recorded many works through graphical media called codices that depict both stories and real events. Mixtec codices are unique because the depicted scenes are highly structured within and across documents. As a first effort toward translation, we created two binary classification tasks over Mixtec codices, namely, gender and pose. The composition of figures within a codex is essential for understanding the codex{'}s narrative. We labeled a dataset with around 1300 figures drawn from three codices of varying qualities. We finetuned the Visual Geometry Group 16 (VGG-16) and Vision Transformer 16 (ViT-16) models, measured their performance, and compared learned features with expert opinions found in literature. The results show that when finetuned, both VGG and ViT perform well, with the transformer-based architecture (ViT) outperforming the CNN-based architecture (VGG) at higher learning rates. We are releasing this work to allow collaboration with the Mixtec community and domain scientists.
[ "Webber, Alex", "er", "Sayers, Zachary", "Wu, Amy", "Thorner, Elizabeth", "Witter, Justin", "Ayoubi, Gabriel", "Grant, Christan" ]
Analyzing Finetuned Vision Models for Mixtec Codex Interpretation
americasnlp-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.7.bib
https://aclanthology.org/2024.americasnlp-1.7/
@inproceedings{kristensen-mclachlan-nedergard-2024-new, title = "A New Benchmark for {K}alaallisut-{D}anish Neural Machine Translation", author = "Kristensen-Mclachlan, Ross and Nederg{\aa}rd, Johanne", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.7", doi = "10.18653/v1/2024.americasnlp-1.7", pages = "50--55", abstract = "Kalaallisut, also known as (West) Greenlandic, poses a number of unique challenges to contemporary natural language processing (NLP). In particular, the language has historically lacked benchmarking datasets and robust evaluation of specific NLP tasks, such as neural machine translation (NMT). In this paper, we present a new benchmark dataset for Greenlandic to Danish NMT comprising over 1.2m words of Greenlandic and 2.1m words of parallel Danish translations. We provide initial metrics for models trained on this dataset and conclude by suggesting how these findings can be taken forward to other NLP tasks for the Greenlandic language.", }
Kalaallisut, also known as (West) Greenlandic, poses a number of unique challenges to contemporary natural language processing (NLP). In particular, the language has historically lacked benchmarking datasets and robust evaluation of specific NLP tasks, such as neural machine translation (NMT). In this paper, we present a new benchmark dataset for Greenlandic to Danish NMT comprising over 1.2m words of Greenlandic and 2.1m words of parallel Danish translations. We provide initial metrics for models trained on this dataset and conclude by suggesting how these findings can be taken forward to other NLP tasks for the Greenlandic language.
[ "Kristensen-Mclachlan, Ross", "Nederg{\\aa}rd, Johanne" ]
A New Benchmark for Kalaallisut-Danish Neural Machine Translation
americasnlp-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.8.bib
https://aclanthology.org/2024.americasnlp-1.8/
@inproceedings{karson-coto-solano-2024-morphological, title = "Morphological Tagging in {B}ribri Using {U}niversal {D}ependency Features", author = "Karson, Jessica and Coto-Solano, Rolando", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.8", doi = "10.18653/v1/2024.americasnlp-1.8", pages = "56--66", abstract = "This paper outlines the Universal Features tagging of a dependency treebank for Bribri, an Indigenous language of Costa Rica. Universal Features are a morphosyntactic tagging component of Universal Dependencies, which is a framework that aims to provide an annotation system inclusive of all languages and their diverse structures (Nivre et al., 2016; de Marneffe et al., 2021). We used a rule-based system to do a first-pass tagging of a treebank of 1572 words. After manual corrections, the treebank contained 3051 morphological features. We then used this morphologically-tagged treebank to train a UDPipe 2 parsing and tagging model. This model has a UFEATS precision of 80.5 {\mbox{$\pm$}} 3.6, which is a statistically significant improvement upon the previously available FOMA-based morphological tagger for Bribri. An error analysis suggests that missing TAM and case markers are the most common problem for the model. We hope to use this model to expand upon existing treebanks and facilitate the construction of linguistically-annotated corpora for the language.", }
This paper outlines the Universal Features tagging of a dependency treebank for Bribri, an Indigenous language of Costa Rica. Universal Features are a morphosyntactic tagging component of Universal Dependencies, which is a framework that aims to provide an annotation system inclusive of all languages and their diverse structures (Nivre et al., 2016; de Marneffe et al., 2021). We used a rule-based system to do a first-pass tagging of a treebank of 1572 words. After manual corrections, the treebank contained 3051 morphological features. We then used this morphologically-tagged treebank to train a UDPipe 2 parsing and tagging model. This model has a UFEATS precision of 80.5 {\mbox{$\pm$}} 3.6, which is a statistically significant improvement upon the previously available FOMA-based morphological tagger for Bribri. An error analysis suggests that missing TAM and case markers are the most common problem for the model. We hope to use this model to expand upon existing treebanks and facilitate the construction of linguistically-annotated corpora for the language.
[ "Karson, Jessica", "Coto-Solano, Rol", "o" ]
Morphological Tagging in Bribri Using Universal Dependency Features
americasnlp-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.9.bib
https://aclanthology.org/2024.americasnlp-1.9/
@inproceedings{coleman-etal-2024-llm, title = "{LLM}-Assisted Rule Based Machine Translation for Low/No-Resource Languages", author = "Coleman, Jared and Krishnamachari, Bhaskar and Rosales, Ruben and Iskarous, Khalil", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.9", doi = "10.18653/v1/2024.americasnlp-1.9", pages = "67--87", abstract = "We propose a new paradigm for machine translation that is particularly useful for no-resource languages (those without any publicly available bilingual or monolingual corpora): LLM-RBMT (LLM-Assisted Rule Based Machine Translation). Using the LLM-RBMT paradigm, we design the first language education/revitalization-oriented machine translator for Owens Valley Paiute (OVP), a critically endangered Indigenous American language for which there is virtually no publicly available data. We present a detailed evaluation of the translator{'}s components: a rule-based sentence builder, an OVP to English translator, and an English to OVP translator. We also discuss the potential of the paradigm, its limitations, and the many avenues for future research that it opens up.", }
We propose a new paradigm for machine translation that is particularly useful for no-resource languages (those without any publicly available bilingual or monolingual corpora): LLM-RBMT (LLM-Assisted Rule Based Machine Translation). Using the LLM-RBMT paradigm, we design the first language education/revitalization-oriented machine translator for Owens Valley Paiute (OVP), a critically endangered Indigenous American language for which there is virtually no publicly available data. We present a detailed evaluation of the translator{'}s components: a rule-based sentence builder, an OVP to English translator, and an English to OVP translator. We also discuss the potential of the paradigm, its limitations, and the many avenues for future research that it opens up.
[ "Coleman, Jared", "Krishnamachari, Bhaskar", "Rosales, Ruben", "Iskarous, Khalil" ]
LLM-Assisted Rule Based Machine Translation for Low/No-Resource Languages
americasnlp-1.9
Poster
2405.08997
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.10.bib
https://aclanthology.org/2024.americasnlp-1.10/
@inproceedings{agarwal-anastasopoulos-2024-concise, title = "A Concise Survey of {OCR} for Low-Resource Languages", author = "Agarwal, Milind and Anastasopoulos, Antonios", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.10", doi = "10.18653/v1/2024.americasnlp-1.10", pages = "88--102", abstract = "Modern natural language processing (NLP) techniques increasingly require substantial amounts of data to train robust algorithms. Building such technologies for low-resource languages requires focusing on data creation efforts and data-efficient algorithms. For a large number of low-resource languages, especially Indigenous languages of the Americas, this data exists in image-based non-machine-readable documents. This includes scanned copies of comprehensive dictionaries, linguistic field notes, children{'}s stories, and other textual material. To digitize these resources, Optical Character Recognition (OCR) has played a major role but it comes with certain challenges in low-resource settings. In this paper, we share the first survey of OCR techniques specific to low-resource data creation settings and outline several open challenges, with a special focus on Indigenous Languages of the Americas. Based on experiences and results from previous research, we conclude with recommendations on utilizing and improving OCR for the benefit of computational researchers, linguists, and language communities.", }
Modern natural language processing (NLP) techniques increasingly require substantial amounts of data to train robust algorithms. Building such technologies for low-resource languages requires focusing on data creation efforts and data-efficient algorithms. For a large number of low-resource languages, especially Indigenous languages of the Americas, this data exists in image-based non-machine-readable documents. This includes scanned copies of comprehensive dictionaries, linguistic field notes, children{'}s stories, and other textual material. To digitize these resources, Optical Character Recognition (OCR) has played a major role but it comes with certain challenges in low-resource settings. In this paper, we share the first survey of OCR techniques specific to low-resource data creation settings and outline several open challenges, with a special focus on Indigenous Languages of the Americas. Based on experiences and results from previous research, we conclude with recommendations on utilizing and improving OCR for the benefit of computational researchers, linguists, and language communities.
[ "Agarwal, Milind", "Anastasopoulos, Antonios" ]
A Concise Survey of OCR for Low-Resource Languages
americasnlp-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.11.bib
https://aclanthology.org/2024.americasnlp-1.11/
@inproceedings{sanchez-carrera-etal-2024-unlocking, title = "Unlocking Knowledge with {OCR}-Driven Document Digitization for {P}eruvian Indigenous Languages", author = "Sanchez Carrera, Shadya and Zariquiey, Roberto and Oncevay, Arturo", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.11", doi = "10.18653/v1/2024.americasnlp-1.11", pages = "103--111", abstract = "The current focus on resource-rich languages poses a challenge to linguistic diversity, affecting minority languages with limited digital presence and relatively old published and unpublished resources. In addressing this issue, this study targets the digitalization of old scanned textbooks written in four Peruvian indigenous languages (Ash{\'a}ninka, Shipibo-Konibo, Yanesha, and Yine) using Optical Character Recognition (OCR) technology. This is complemented with text correction methods to minimize extraction errors. Contributions include the creation of an annotated dataset with 454 scanned page images, for a rigorous evaluation, and the development of a module to correct OCR-generated transcription alignments.", }
The current focus on resource-rich languages poses a challenge to linguistic diversity, affecting minority languages with limited digital presence and relatively old published and unpublished resources. In addressing this issue, this study targets the digitalization of old scanned textbooks written in four Peruvian indigenous languages (Ash{\'a}ninka, Shipibo-Konibo, Yanesha, and Yine) using Optical Character Recognition (OCR) technology. This is complemented with text correction methods to minimize extraction errors. Contributions include the creation of an annotated dataset with 454 scanned page images, for a rigorous evaluation, and the development of a module to correct OCR-generated transcription alignments.
[ "Sanchez Carrera, Shadya", "Zariquiey, Roberto", "Oncevay, Arturo" ]
Unlocking Knowledge with OCR-Driven Document Digitization for Peruvian Indigenous Languages
americasnlp-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.12.bib
https://aclanthology.org/2024.americasnlp-1.12/
@inproceedings{moreno-etal-2024-awajun, title = "Awajun-{OP}: Multi-domain dataset for {S}panish{--}Awajun Machine Translation", author = "Moreno, Oscar and Atamain, Yanua and Oncevay, Arturo", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.12", doi = "10.18653/v1/2024.americasnlp-1.12", pages = "112--120", abstract = "We introduce a Spanish-Awajun parallel dataset of 22k high-quality sentence pairs with the help of the journalistic organization Company C. This dataset consists of parallel data obtained from various web sources such as poems, stories, laws, protocols, guidelines, handbooks, the Bible, and news published by Company C. The study also includes an analysis of the dataset{'}s performance for Spanish-Awajun translation using a Transformer architecture with transfer learning from a parent model, utilizing Spanish-English and Spanish-Finnish as high-resource language-pairs. As far as we know, this is the first Spanish-Awajun machine translation study, and we hope that this work will serve as a starting point for future research on this neglected Peruvian language.", }
We introduce a Spanish-Awajun parallel dataset of 22k high-quality sentence pairs with the help of the journalistic organization Company C. This dataset consists of parallel data obtained from various web sources such as poems, stories, laws, protocols, guidelines, handbooks, the Bible, and news published by Company C. The study also includes an analysis of the dataset{'}s performance for Spanish-Awajun translation using a Transformer architecture with transfer learning from a parent model, utilizing Spanish-English and Spanish-Finnish as high-resource language-pairs. As far as we know, this is the first Spanish-Awajun machine translation study, and we hope that this work will serve as a starting point for future research on this neglected Peruvian language.
[ "Moreno, Oscar", "Atamain, Yanua", "Oncevay, Arturo" ]
Awajun-OP: Multi-domain dataset for Spanish–Awajun Machine Translation
americasnlp-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.13.bib
https://aclanthology.org/2024.americasnlp-1.13/
@inproceedings{pugh-etal-2024-wav2pos, title = "Wav2pos: Exploring syntactic analysis from audio for {H}ighland {P}uebla {N}ahuatl", author = "Pugh, Robert and Sreedhar, Varun and Tyers, Francis", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.13", doi = "10.18653/v1/2024.americasnlp-1.13", pages = "121--126", abstract = "We describe an approach to part-of-speech tagging from audio with very little human-annotated data, for Highland Puebla Nahuatl, a low-resource language of Mexico. While automatic morphosyntactic analysis is typically trained on annotated textual data, large amounts of text is rarely available for low-resource, marginalized, and/or minority languages, and morphosyntactically-annotated data is even harder to come by. Much of the data from these languages may exist in the form of recordings, often only partially-transcribed or analyzed by field linguists working on language documentation projects. Given this relatively low-availability of text in the low-resource language scenario, we explore end-to-end automated morphosyntactic analysis directly from audio. The experiments described in this paper focus on one piece of morphosyntax, part-of-speech tagging, and builds on existing work in a high-resource setting. We use weak supervision to increase training volume, and explore a few techniques for generating word-level predictions from the acoustic features. Our experiments show promising results, despite less than 400 sentences of audio-aligned, manually-labeled text.", }
We describe an approach to part-of-speech tagging from audio with very little human-annotated data, for Highland Puebla Nahuatl, a low-resource language of Mexico. While automatic morphosyntactic analysis is typically trained on annotated textual data, large amounts of text is rarely available for low-resource, marginalized, and/or minority languages, and morphosyntactically-annotated data is even harder to come by. Much of the data from these languages may exist in the form of recordings, often only partially-transcribed or analyzed by field linguists working on language documentation projects. Given this relatively low-availability of text in the low-resource language scenario, we explore end-to-end automated morphosyntactic analysis directly from audio. The experiments described in this paper focus on one piece of morphosyntax, part-of-speech tagging, and builds on existing work in a high-resource setting. We use weak supervision to increase training volume, and explore a few techniques for generating word-level predictions from the acoustic features. Our experiments show promising results, despite less than 400 sentences of audio-aligned, manually-labeled text.
[ "Pugh, Robert", "Sreedhar, Varun", "Tyers, Francis" ]
Wav2pos: Exploring syntactic analysis from audio for Highland Puebla Nahuatl
americasnlp-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.14.bib
https://aclanthology.org/2024.americasnlp-1.14/
@inproceedings{reyes-garcia-2024-field, title = "From Field Linguistics to {NLP}: Creating a curated dataset in Amuzgo language", author = "Reyes, Antonio and Garc{\'\i}a, Hamlet Antonio", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.14", doi = "10.18653/v1/2024.americasnlp-1.14", pages = "127--131", abstract = "This article presents an ongoing research on one of the several native languages of the Americas: Amuzgo or jny{'}on3 nda3 . This language is spoken in Southern Mexico and belongs to the Otomanguean family. Although Amuzgo vitality is stable and there are some available resources, such as grammars, dictionaries, or literature, its digital inclusion is emerging (cf. Eberhard et al. (2024)). In this respect, here is described the creation of a curated dataset in Amuzgo. This resource is intended to contribute the development of tools for scarce resources languages by providing fine-grained linguistic information in different layers: From data collection with native speakers to data annotation. The dataset was built according to the following method: i) data collection in Amuzgo by means of linguistic fieldwork; ii) acoustic data processing; iii) data transcription; iv) glossing and translating data into Spanish; v) semiautomatic alignment of translations; and vi) data systematization. This resource is released as an open access dataset to foster the academic community to explore the richness of this language.", }
This article presents an ongoing research on one of the several native languages of the Americas: Amuzgo or jny{'}on3 nda3 . This language is spoken in Southern Mexico and belongs to the Otomanguean family. Although Amuzgo vitality is stable and there are some available resources, such as grammars, dictionaries, or literature, its digital inclusion is emerging (cf. Eberhard et al. (2024)). In this respect, here is described the creation of a curated dataset in Amuzgo. This resource is intended to contribute the development of tools for scarce resources languages by providing fine-grained linguistic information in different layers: From data collection with native speakers to data annotation. The dataset was built according to the following method: i) data collection in Amuzgo by means of linguistic fieldwork; ii) acoustic data processing; iii) data transcription; iv) glossing and translating data into Spanish; v) semiautomatic alignment of translations; and vi) data systematization. This resource is released as an open access dataset to foster the academic community to explore the richness of this language.
[ "Reyes, Antonio", "Garc{\\'\\i}a, Hamlet Antonio" ]
From Field Linguistics to NLP: Creating a curated dataset in Amuzgo language
americasnlp-1.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.15.bib
https://aclanthology.org/2024.americasnlp-1.15/
@inproceedings{le-ferrand-etal-2024-enenlhet, title = "Enenlhet as a case-study to investigate {ASR} model generalizability for language documentation", author = "Le Ferrand, {\'E}ric and Heaton, Raina and Prud{'}hommeaux, Emily", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.15", doi = "10.18653/v1/2024.americasnlp-1.15", pages = "132--137", abstract = "Although both linguists and language community members recognize the potential utility of automatic speech recognition (ASR) for documentation, one of the obstacles to using these technologies is the scarcity of data necessary to train effective systems. Recent advances in ASR, particularly the ability to fine-tune large multilingual acoustic models to small amounts of data from a new language, have demonstrated the potential of ASR for transcription. However, many proof-of-concept demonstrations of ASR in low-resource settings rely on a single data collection project, which may yield models that are biased toward that particular data scenario, whether in content, recording quality, transcription conventions, or speaker population. In this paper, we investigate the performance of two state-of-the art ASR architectures for fine-tuning acoustic models to small speech datasets with the goal of transcribing recordings of Enenlhet, an endangered Indigenous language spoken in South America. Our results suggest that while ASR offers utility for generating first-pass transcriptions of speech collected in the course of linguistic fieldwork, individual vocabulary diversity and data quality have an outsized impact on ASR accuracy.", }
Although both linguists and language community members recognize the potential utility of automatic speech recognition (ASR) for documentation, one of the obstacles to using these technologies is the scarcity of data necessary to train effective systems. Recent advances in ASR, particularly the ability to fine-tune large multilingual acoustic models to small amounts of data from a new language, have demonstrated the potential of ASR for transcription. However, many proof-of-concept demonstrations of ASR in low-resource settings rely on a single data collection project, which may yield models that are biased toward that particular data scenario, whether in content, recording quality, transcription conventions, or speaker population. In this paper, we investigate the performance of two state-of-the art ASR architectures for fine-tuning acoustic models to small speech datasets with the goal of transcribing recordings of Enenlhet, an endangered Indigenous language spoken in South America. Our results suggest that while ASR offers utility for generating first-pass transcriptions of speech collected in the course of linguistic fieldwork, individual vocabulary diversity and data quality have an outsized impact on ASR accuracy.
[ "Le Ferr", ", {\\'E}ric", "Heaton, Raina", "Prud{'}hommeaux, Emily" ]
Enenlhet as a case-study to investigate ASR model generalizability for language documentation
americasnlp-1.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.16.bib
https://aclanthology.org/2024.americasnlp-1.16/
@inproceedings{rangel-kobayashi-2024-advancing, title = "Advancing {NMT} for Indigenous Languages: A Case Study on {Y}ucatec {M}ayan and {C}hol", author = "Rangel, Julio and Kobayashi, Norio", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.16", doi = "10.18653/v1/2024.americasnlp-1.16", pages = "138--142", abstract = "This study leverages Spanish-trained large language models (LLMs) to develop neural machine translation (NMT) systems for Mayan languages. For this, we first compile and process a low-resource dataset of 28,135 translation pairs of Chol and Yucatec Mayan extracted from documents in the CPLM Corpus (Mart{\'\i}nez et al.). Then, we implement a prompt-based approach to train one-to-many and many-to-many models. By comparing several training strategies for two LLMs, we found that, on average, training multilingual models is better, as shown by the ChrF++ reaching 50 on the test set in the best case. This study reinforces the viability of using LLMs to improve accessibility and preservation for languages with limited digital resources. We share our code, datasets, and models to promote collaboration and progress in this field: https://github.com/RIKEN-DKO/iikim{\_}translator.", }
This study leverages Spanish-trained large language models (LLMs) to develop neural machine translation (NMT) systems for Mayan languages. For this, we first compile and process a low-resource dataset of 28,135 translation pairs of Chol and Yucatec Mayan extracted from documents in the CPLM Corpus (Mart{\'\i}nez et al.). Then, we implement a prompt-based approach to train one-to-many and many-to-many models. By comparing several training strategies for two LLMs, we found that, on average, training multilingual models is better, as shown by the ChrF++ reaching 50 on the test set in the best case. This study reinforces the viability of using LLMs to improve accessibility and preservation for languages with limited digital resources. We share our code, datasets, and models to promote collaboration and progress in this field: https://github.com/RIKEN-DKO/iikim{\_}translator.
[ "Rangel, Julio", "Kobayashi, Norio" ]
Advancing NMT for Indigenous Languages: A Case Study on Yucatec Mayan and Chol
americasnlp-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.17.bib
https://aclanthology.org/2024.americasnlp-1.17/
@inproceedings{garcia-gilabert-etal-2024-bsc, title = "{BSC} Submission to the {A}mericas{NLP} 2024 Shared Task", author = "Garcia Gilabert, Javier and Sant, Aleix and Escolano, Carlos and De Luca Fornaciari, Francesca and Mash, Audrey and Melero, Maite", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.17", doi = "10.18653/v1/2024.americasnlp-1.17", pages = "143--149", abstract = "This paper describes the BSC{'}s submission to the AmericasNLP 2024 Shared Task. We participated in the Spanish to Quechua and Spanish to Guarani tasks. In this paper we show that by using LoRA adapters we can achieve similar performance as a full parameter fine-tuning by only training 14.2{\%} of the total number of parameters. Our systems achieved the highest ChrF++ scores and ranked first for both directions in the final results outperforming strong baseline systems in the provided development and test datasets.", }
This paper describes the BSC{'}s submission to the AmericasNLP 2024 Shared Task. We participated in the Spanish to Quechua and Spanish to Guarani tasks. In this paper we show that by using LoRA adapters we can achieve similar performance as a full parameter fine-tuning by only training 14.2{\%} of the total number of parameters. Our systems achieved the highest ChrF++ scores and ranked first for both directions in the final results outperforming strong baseline systems in the provided development and test datasets.
[ "Garcia Gilabert, Javier", "Sant, Aleix", "Escolano, Carlos", "De Luca Fornaciari, Francesca", "Mash, Audrey", "Melero, Maite" ]
BSC Submission to the AmericasNLP 2024 Shared Task
americasnlp-1.17
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
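The LoRA result reported in the BSC record above (near full fine-tuning quality while training roughly 14.2% of the parameters) can be reproduced in outline with Hugging Face's peft library. The sketch below is illustrative only: the base checkpoint, adapter rank, and target modules are assumptions, not the team's actual configuration.

```python
# Minimal sketch of parameter-efficient fine-tuning with LoRA adapters.
# The base model and hyperparameters below are illustrative assumptions,
# not the BSC team's actual setup.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base = "facebook/nllb-200-distilled-600M"  # assumed base checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

# Wrap the frozen base model with low-rank adapters on the attention projections.
lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                # rank of the low-rank update matrices (assumed)
    lora_alpha=32,       # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_cfg)

# Only the adapter weights are trainable; this prints the trainable fraction,
# which is how figures like "14.2% of parameters" are typically reported.
model.print_trainable_parameters()
```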
https://aclanthology.org/2024.americasnlp-1.18.bib
https://aclanthology.org/2024.americasnlp-1.18/
@inproceedings{attieh-etal-2024-system, title = "System Description of the {N}ordics{A}lps Submission to the {A}mericas{NLP} 2024 Machine Translation Shared Task", author = "Attieh, Joseph and Hopton, Zachary and Scherrer, Yves and Samard{\v{z}}i{\'c}, Tanja", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.18", doi = "10.18653/v1/2024.americasnlp-1.18", pages = "150--158", abstract = "This paper presents the system description of the NordicsAlps team for the AmericasNLP 2024 Machine Translation Shared Task 1. We investigate the effect of tokenization on translation quality by exploring two different tokenization schemes: byte-level and redundancy-driven tokenization. We submitted three runs per language pair. The redundancy-driven tokenization ranked first among all submissions, scoring the highest average chrF2++, chrF, and BLEU metrics (averaged across all languages). These findings demonstrate the importance of carefully tailoring the tokenization strategies of machine translation systems, particularly in resource-constrained scenarios.", }
This paper presents the system description of the NordicsAlps team for the AmericasNLP 2024 Machine Translation Shared Task 1. We investigate the effect of tokenization on translation quality by exploring two different tokenization schemes: byte-level and redundancy-driven tokenization. We submitted three runs per language pair. The redundancy-driven tokenization ranked first among all submissions, scoring the highest average chrF2++, chrF, and BLEU metrics (averaged across all languages). These findings demonstrate the importance of carefully tailoring the tokenization strategies of machine translation systems, particularly in resource-constrained scenarios.
[ "Attieh, Joseph", "Hopton, Zachary", "Scherrer, Yves", "Samard{\\v{z}}i{\\'c}, Tanja" ]
System Description of the NordicsAlps Submission to the AmericasNLP 2024 Machine Translation Shared Task
americasnlp-1.18
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
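Several of the submissions above are ranked by chrF++, chrF, and BLEU. A minimal sketch of computing those corpus-level scores with the sacrebleu library follows; the hypothesis and reference strings are invented placeholders, not shared-task data.

```python
# Corpus-level BLEU, chrF and chrF++ with sacrebleu.
# Hypotheses and references below are placeholders, not shared-task outputs.
import sacrebleu

hypotheses = ["the model translated this sentence"]            # system outputs (placeholder)
references = [["the model translated this sentence correctly"]]  # one reference stream (placeholder)

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)                   # chrF
chrf_pp = sacrebleu.corpus_chrf(hypotheses, references, word_order=2)  # chrF++ (chrF2++)

print(f"BLEU   = {bleu.score:.2f}")
print(f"chrF   = {chrf.score:.2f}")
print(f"chrF++ = {chrf_pp.score:.2f}")
```

chrF++ (written chrF2++ in some of the papers above) is chrF with word bigrams included, which is what the `word_order=2` argument requests.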
https://aclanthology.org/2024.americasnlp-1.19.bib
https://aclanthology.org/2024.americasnlp-1.19/
@inproceedings{ginn-etal-2024-robustness, title = "On the Robustness of Neural Models for Full Sentence Transformation", author = "Ginn, Michael and Marashian, Ali and Shandilya, Bhargav and Post, Claire and Rice, Enora and V{\'a}squez, Juan and Mcgregor, Marie and Buchholz, Matthew and Hulden, Mans and Palmer, Alexis", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.19", doi = "10.18653/v1/2024.americasnlp-1.19", pages = "159--173", abstract = "This paper describes the LECS Lab submission to the AmericasNLP 2024 Shared Task on the Creation of Educational Materials for Indigenous Languages. The task requires transforming a base sentence with regards to one or more linguistic properties (such as negation or tense). We observe that this task shares many similarities with the well-studied task of word-level morphological inflection, and we explore whether the findings from inflection research are applicable to this task. In particular, we experiment with a number of augmentation strategies, finding that they can significantly benefit performance, but that not all augmented data is necessarily beneficial. Furthermore, we find that our character-level neural models show high variability with regards to performance on unseen data, and may not be the best choice when training data is limited.", }
This paper describes the LECS Lab submission to the AmericasNLP 2024 Shared Task on the Creation of Educational Materials for Indigenous Languages. The task requires transforming a base sentence with regards to one or more linguistic properties (such as negation or tense). We observe that this task shares many similarities with the well-studied task of word-level morphological inflection, and we explore whether the findings from inflection research are applicable to this task. In particular, we experiment with a number of augmentation strategies, finding that they can significantly benefit performance, but that not all augmented data is necessarily beneficial. Furthermore, we find that our character-level neural models show high variability with regards to performance on unseen data, and may not be the best choice when training data is limited.
[ "Ginn, Michael", "Marashian, Ali", "Sh", "ilya, Bhargav", "Post, Claire", "Rice, Enora", "V{\\'a}squez, Juan", "Mcgregor, Marie", "Buchholz, Matthew", "Hulden, Mans", "Palmer, Alexis" ]
On the Robustness of Neural Models for Full Sentence Transformation
americasnlp-1.19
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.20.bib
https://aclanthology.org/2024.americasnlp-1.20/
@inproceedings{haley-2024-unreasonable, title = "The unreasonable effectiveness of large language models for low-resource clause-level morphology: In-context generalization or prior exposure?", author = "Haley, Coleman", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.20", doi = "10.18653/v1/2024.americasnlp-1.20", pages = "174--178", abstract = "This paper describes the submission of Team {``}Giving it a Shot{''} to the AmericasNLP 2024 Shared Task on Creation of Educational Materials for Indigenous Languages. We use a simple few-shot prompting approach with several state of the art large language models, achieving competitive performance on the shared task, with our best system placing third overall. We perform a preliminary analysis to determine to what degree the performance of our model is due to prior exposure to the task languages, finding that generally our performance is better explained as being derived from in-context learning capabilities.", }
This paper describes the submission of Team {``}Giving it a Shot{''} to the AmericasNLP 2024 Shared Task on Creation of Educational Materials for Indigenous Languages. We use a simple few-shot prompting approach with several state of the art large language models, achieving competitive performance on the shared task, with our best system placing third overall. We perform a preliminary analysis to determine to what degree the performance of our model is due to prior exposure to the task languages, finding that generally our performance is better explained as being derived from in-context learning capabilities.
[ "Haley, Coleman" ]
The unreasonable effectiveness of large language models for low-resource clause-level morphology: In-context generalization or prior exposure?
americasnlp-1.20
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
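The few-shot prompting approach in the record above amounts to packing labelled (sentence, requested change, rewritten sentence) triples into the context before the query. A minimal sketch of such a prompt builder follows; the sentence strings and the change tag are generic placeholders, not the team's actual prompt or the shared-task data.

```python
# Sketch of an in-context learning prompt for clause-level sentence transformation.
# All example strings and tags are placeholders.
def build_prompt(examples, source, change):
    """Assemble a few-shot prompt from (source, change, target) triples."""
    lines = [
        "Rewrite the sentence so that it satisfies the requested grammatical change.",
        "",
    ]
    for src, chg, tgt in examples:
        lines += [f"Sentence: {src}", f"Change: {chg}", f"Rewritten: {tgt}", ""]
    lines += [f"Sentence: {source}", f"Change: {change}", "Rewritten:"]
    return "\n".join(lines)

demos = [
    ("SENTENCE_A", "TYPE:NEG", "SENTENCE_A_NEGATED"),  # placeholder demonstration triple
]
print(build_prompt(demos, "SENTENCE_B", "TYPE:NEG"))
```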
https://aclanthology.org/2024.americasnlp-1.21.bib
https://aclanthology.org/2024.americasnlp-1.21/
@inproceedings{su-etal-2024-comparison, title = "A Comparison of Fine-Tuning and In-Context Learning for Clause-Level Morphosyntactic Alternation", author = "Su, Jim and Ho, Justin and Broadwell, George and Moeller, Sarah and Dorr, Bonnie", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.21", doi = "10.18653/v1/2024.americasnlp-1.21", pages = "179--187", abstract = "This paper presents our submission to the AmericasNLP 2024 Shared Task on the Creation of Educational Materials for Indigenous Languages. We frame this task as one of morphological inflection generation, treating each sentence as a single word. We investigate and compare two distinct approaches: fine-tuning neural encoder-decoder models such as NLLB- 200, and in-context learning with proprietary large language models (LLMs). Our findings demonstrate that for this task, no one approach is perfect. Anthropic{'}s Claude 3 Opus, when supplied with grammatical description entries, achieves the highest performance on Bribri among the evaluated models. This outcome corroborates and extends previous research exploring the efficacy of in-context learning in low- resource settings. For Maya, fine-tuning NLLB- 200-3.3B using StemCorrupt augmented data yielded the best performance.", }
This paper presents our submission to the AmericasNLP 2024 Shared Task on the Creation of Educational Materials for Indigenous Languages. We frame this task as one of morphological inflection generation, treating each sentence as a single word. We investigate and compare two distinct approaches: fine-tuning neural encoder-decoder models such as NLLB- 200, and in-context learning with proprietary large language models (LLMs). Our findings demonstrate that for this task, no one approach is perfect. Anthropic{'}s Claude 3 Opus, when supplied with grammatical description entries, achieves the highest performance on Bribri among the evaluated models. This outcome corroborates and extends previous research exploring the efficacy of in-context learning in low- resource settings. For Maya, fine-tuning NLLB- 200-3.3B using StemCorrupt augmented data yielded the best performance.
[ "Su, Jim", "Ho, Justin", "Broadwell, George", "Moeller, Sarah", "Dorr, Bonnie" ]
A Comparison of Fine-Tuning and In-Context Learning for Clause-Level Morphosyntactic Alternation
americasnlp-1.21
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.22.bib
https://aclanthology.org/2024.americasnlp-1.22/
@inproceedings{degenaro-lupicki-2024-experiments, title = "Experiments in Mamba Sequence Modeling and {NLLB}-200 Fine-Tuning for Low Resource Multilingual Machine Translation", author = "Degenaro, Dan and Lupicki, Tom", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.22", doi = "10.18653/v1/2024.americasnlp-1.22", pages = "188--194", abstract = "This paper presents DC{\_}DMV{'}s submission to the AmericasNLP 2024 Shared Task 1: Machine Translation Systems for Indigenous Languages. Our submission consists of two multilingual approaches to building machine translation systems from Spanish to eleven Indigenous languages: fine-tuning the 600M distilled variant of NLLB-200, and an experiment in training from scratch a neural network using the Mamba State Space Modeling architecture. We achieve the best results on the test set for a total of 4 of the language pairs between two checkpoints by fine-tuning NLLB-200, and outperform the baseline score on the test set for 2 languages.", }
This paper presents DC{\_}DMV{'}s submission to the AmericasNLP 2024 Shared Task 1: Machine Translation Systems for Indigenous Languages. Our submission consists of two multilingual approaches to building machine translation systems from Spanish to eleven Indigenous languages: fine-tuning the 600M distilled variant of NLLB-200, and an experiment in training from scratch a neural network using the Mamba State Space Modeling architecture. We achieve the best results on the test set for a total of 4 of the language pairs between two checkpoints by fine-tuning NLLB-200, and outperform the baseline score on the test set for 2 languages.
[ "Degenaro, Dan", "Lupicki, Tom" ]
Experiments in Mamba Sequence Modeling and NLLB-200 Fine-Tuning for Low Resource Multilingual Machine Translation
americasnlp-1.22
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
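For reference, translating with the distilled NLLB-200 checkpoint mentioned above follows the standard Hugging Face seq2seq pattern, where the target language is selected through a forced beginning-of-sequence token. The sketch below is illustrative only: most shared-task languages have no NLLB language code, so Guarani (grn_Latn), which NLLB-200 does cover, is used purely as an example, and the fine-tuning itself is not shown.

```python
# Sketch of generation with NLLB-200; the target language code is an
# illustrative choice, not the paper's full experimental setup.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(name, src_lang="spa_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("¿Cómo estás hoy?", return_tensors="pt")
out = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("grn_Latn"),  # target language token
    max_new_tokens=40,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```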
https://aclanthology.org/2024.americasnlp-1.23.bib
https://aclanthology.org/2024.americasnlp-1.23/
@inproceedings{bui-von-der-wense-2024-jgu, title = "{JGU} Mainz{'}s Submission to the {A}mericas{NLP} 2024 Shared Task on the Creation of Educational Materials for Indigenous Languages", author = "Bui, Minh Duc and von der Wense, Katharina", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.23", doi = "10.18653/v1/2024.americasnlp-1.23", pages = "195--200", abstract = "In this paper, we present the four systems developed by the Meenzer team from JGU for the AmericasNLP 2024 shared task on the creation of educational materials for Indigenous languages. The task involves accurately applying specific grammatical modifications to given source sentences across three low-resource Indigenous languages: Bribri, Guarani, and Maya. We train two types of model architectures: finetuning a sequence-to-sequence pointer-generator LSTM and finetuning the Mixtral 8x7B model by incorporating in-context examples into the training phase. System 1, an ensemble combining finetuned LSTMs, finetuned Mixtral models, and GPT-4, achieves the best performance on Guarani. Meanwhile, system 4, another ensemble consisting solely of fine-tuned Mixtral models, outperforms all other teams on Maya and secures the second place overall. Additionally, we conduct an ablation study to understand the performance of our system 4.", }
In this paper, we present the four systems developed by the Meenzer team from JGU for the AmericasNLP 2024 shared task on the creation of educational materials for Indigenous languages. The task involves accurately applying specific grammatical modifications to given source sentences across three low-resource Indigenous languages: Bribri, Guarani, and Maya. We train two types of model architectures: finetuning a sequence-to-sequence pointer-generator LSTM and finetuning the Mixtral 8x7B model by incorporating in-context examples into the training phase. System 1, an ensemble combining finetuned LSTMs, finetuned Mixtral models, and GPT-4, achieves the best performance on Guarani. Meanwhile, system 4, another ensemble consisting solely of fine-tuned Mixtral models, outperforms all other teams on Maya and secures the second place overall. Additionally, we conduct an ablation study to understand the performance of our system 4.
[ "Bui, Minh Duc", "von der Wense, Katharina" ]
JGU Mainz's Submission to the AmericasNLP 2024 Shared Task on the Creation of Educational Materials for Indigenous Languages
americasnlp-1.23
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.24.bib
https://aclanthology.org/2024.americasnlp-1.24/
@inproceedings{vasselli-etal-2024-applying, title = "Applying Linguistic Expertise to {LLM}s for Educational Material Development in Indigenous Languages", author = "Vasselli, Justin and Mart{\'\i}nez Peguero, Arturo and Sung, Junehwan and Watanabe, Taro", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.24", doi = "10.18653/v1/2024.americasnlp-1.24", pages = "201--208", abstract = "This paper presents our approach to the AmericasNLP 2024 Shared Task 2 as the JAJ (/dʒ{\ae}z/) team. The task aimed at creating educational materials for indigenous languages, and we focused on Maya and Bribri. Given the unique linguistic features and challenges of these languages, and the limited size of the training datasets, we developed a hybrid methodology combining rule-based NLP methods with prompt-based techniques. This approach leverages the meta-linguistic capabilities of large language models, enabling us to blend broad, language-agnostic processing with customized solutions. Our approach lays a foundational framework that can be expanded to other indigenous languages languages in future work.", }
This paper presents our approach to the AmericasNLP 2024 Shared Task 2 as the JAJ (/dʒ{\ae}z/) team. The task aimed at creating educational materials for indigenous languages, and we focused on Maya and Bribri. Given the unique linguistic features and challenges of these languages, and the limited size of the training datasets, we developed a hybrid methodology combining rule-based NLP methods with prompt-based techniques. This approach leverages the meta-linguistic capabilities of large language models, enabling us to blend broad, language-agnostic processing with customized solutions. Our approach lays a foundational framework that can be expanded to other indigenous languages languages in future work.
[ "Vasselli, Justin", "Mart{\\'\\i}nez Peguero, Arturo", "Sung, Junehwan", "Watanabe, Taro" ]
Applying Linguistic Expertise to LLMs for Educational Material Development in Indigenous Languages
americasnlp-1.24
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.25.bib
https://aclanthology.org/2024.americasnlp-1.25/
@inproceedings{iyer-etal-2024-exploring, title = "Exploring Very Low-Resource Translation with {LLM}s: The {U}niversity of {E}dinburgh{'}s Submission to {A}mericas{NLP} 2024 Translation Task", author = "Iyer, Vivek and Malik, Bhavitvya and Zhu, Wenhao and Stepachev, Pavel and Chen, Pinzhen and Haddow, Barry and Birch, Alexandra", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.25", doi = "10.18653/v1/2024.americasnlp-1.25", pages = "209--220", abstract = "This paper describes the University of Edinburgh{'}s submission to the AmericasNLP 2024 shared task on the translation of Spanish into 11 indigenous American languages. We explore the ability of multilingual Large Language Models (LLMs) to model low-resource languages by continued pre-training with LoRA, and conduct instruction fine-tuning using a variety of datasets, demonstrating that this improves LLM performance. Furthermore, we demonstrate the efficacy of checkpoint averaging alongside decoding techniques like beam search and sampling, resulting in further improvements. We participate in all 11 translation directions.", }
This paper describes the University of Edinburgh{'}s submission to the AmericasNLP 2024 shared task on the translation of Spanish into 11 indigenous American languages. We explore the ability of multilingual Large Language Models (LLMs) to model low-resource languages by continued pre-training with LoRA, and conduct instruction fine-tuning using a variety of datasets, demonstrating that this improves LLM performance. Furthermore, we demonstrate the efficacy of checkpoint averaging alongside decoding techniques like beam search and sampling, resulting in further improvements. We participate in all 11 translation directions.
[ "Iyer, Vivek", "Malik, Bhavitvya", "Zhu, Wenhao", "Stepachev, Pavel", "Chen, Pinzhen", "Haddow, Barry", "Birch, Alex", "ra" ]
Exploring Very Low-Resource Translation with LLMs: The University of Edinburgh's Submission to AmericasNLP 2024 Translation Task
americasnlp-1.25
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
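Checkpoint averaging, one of the techniques credited with further gains in the Edinburgh record above, is simply an element-wise mean of the parameter tensors from several saved training checkpoints. A minimal PyTorch sketch follows; the file names are placeholders, and the checkpoints are assumed to be plain state dicts.

```python
# Average the parameters of several saved checkpoints (assumed to be state dicts).
import torch

paths = ["ckpt_step_1000.pt", "ckpt_step_2000.pt", "ckpt_step_3000.pt"]  # placeholder paths
states = [torch.load(p, map_location="cpu") for p in paths]

averaged = {}
for key in states[0]:
    # Stack the same tensor from every checkpoint and take the element-wise mean.
    averaged[key] = torch.stack([s[key].float() for s in states], dim=0).mean(dim=0)

torch.save(averaged, "ckpt_averaged.pt")
```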
https://aclanthology.org/2024.americasnlp-1.26.bib
https://aclanthology.org/2024.americasnlp-1.26/
@inproceedings{hammond-2024-role, title = "The role of morphosyntactic similarity in generating related sentences", author = "Hammond, Michael", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.26", doi = "10.18653/v1/2024.americasnlp-1.26", pages = "221--223", abstract = "In this paper we describe our work on Task{\textasciitilde}2: Creation of Educational Materials. We tried three approaches, but only the third approach yielded improvement over the baseline system. The first system was a fairly generic transformer model. The second system was our own implementation of the edit tree approach from the baseline system. Our final attempt was a version of the baseline system where if no transformation succeeded, we applied transformations from similar morphosyntactic relations. We describe all three here, but, in the end, we only submitted the third system.", }
In this paper we describe our work on Task{\textasciitilde}2: Creation of Educational Materials. We tried three approaches, but only the third approach yielded improvement over the baseline system. The first system was a fairly generic transformer model. The second system was our own implementation of the edit tree approach from the baseline system. Our final attempt was a version of the baseline system where if no transformation succeeded, we applied transformations from similar morphosyntactic relations. We describe all three here, but, in the end, we only submitted the third system.
[ "Hammond, Michael" ]
The role of morphosyntactic similarity in generating related sentences
americasnlp-1.26
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.27.bib
https://aclanthology.org/2024.americasnlp-1.27/
@inproceedings{chiruzzo-etal-2024-findings, title = "Findings of the {A}mericas{NLP} 2024 Shared Task on the Creation of Educational Materials for Indigenous Languages", author = {Chiruzzo, Luis and Denisov, Pavel and Molina-Villegas, Alejandro and Fernandez-Sabido, Silvia and Coto-Solano, Rolando and Ag{\"u}ero-Torales, Marvin and Alvarez, Aldo and Canul-Yah, Samuel and Hau-Uc{\'a}n, Lorena and Ebrahimi, Abteen and Pugh, Robert and Oncevay, Arturo and Rijhwani, Shruti and von der Wense, Katharina and Mager, Manuel}, editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.27", doi = "10.18653/v1/2024.americasnlp-1.27", pages = "224--235", abstract = "This paper presents the results of the first shared task about the creation of educational materials for three indigenous languages of the Americas.The task proposes to automatically generate variations of sentences according to linguistic features that could be used for grammar exercises.The languages involved in this task are Bribri, Maya, and Guarani.Seven teams took part in the challenge, submitting a total of 22 systems, obtaining very promising results.", }
This paper presents the results of the first shared task about the creation of educational materials for three indigenous languages of the Americas.The task proposes to automatically generate variations of sentences according to linguistic features that could be used for grammar exercises.The languages involved in this task are Bribri, Maya, and Guarani.Seven teams took part in the challenge, submitting a total of 22 systems, obtaining very promising results.
[ "Chiruzzo, Luis", "Denisov, Pavel", "Molina-Villegas, Alej", "ro", "Fern", "ez-Sabido, Silvia", "Coto-Solano, Rol", "o", "Ag{\\\"u}ero-Torales, Marvin", "Alvarez, Aldo", "Canul-Yah, Samuel", "Hau-Uc{\\'a}n, Lorena", "Ebrahimi, Abteen", "Pugh, Robert", "Oncevay, Arturo", "Rijhwani, Shruti", "von der Wense, Katharina", "Mager, Manuel" ]
Findings of the AmericasNLP 2024 Shared Task on the Creation of Educational Materials for Indigenous Languages
americasnlp-1.27
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.americasnlp-1.28.bib
https://aclanthology.org/2024.americasnlp-1.28/
@inproceedings{ebrahimi-etal-2024-findings, title = "Findings of the {A}mericas{NLP} 2024 Shared Task on Machine Translation into Indigenous Languages", author = "Ebrahimi, Abteen and de Gibert, Ona and Vazquez, Raul and Coto-Solano, Rolando and Denisov, Pavel and Pugh, Robert and Mager, Manuel and Oncevay, Arturo and Chiruzzo, Luis and von der Wense, Katharina and Rijhwani, Shruti", editor = "Mager, Manuel and Ebrahimi, Abteen and Rijhwani, Shruti and Oncevay, Arturo and Chiruzzo, Luis and Pugh, Robert and von der Wense, Katharina", booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.americasnlp-1.28", doi = "10.18653/v1/2024.americasnlp-1.28", pages = "236--246", abstract = "This paper presents the findings of the third iteration of the AmericasNLP Shared Task on Machine Translation. This year{'}s competition features eleven Indigenous languages found across North, Central, and South America. A total of six teams participate with a total of 157 submissions across all languages and models. Two baselines {--} the Sheffield and Helsinki systems from 2023 {--} are provided and represent hard-to-beat starting points for the competition. In addition to the baselines, teams are given access to a new repository of training data which consists of data collected by teams in prior shared tasks. Using ChrF++ as the main competition metric, we see improvements over the baseline for 4 languages: Chatino, Guarani, Quechua, and Rar{\'a}muri, with performance increases over the best baseline of 4.2 ChrF++. In this work, we present a summary of the submitted systems, results, and a human evaluation of system outputs for Bribri, which consists of both (1) a rating of meaning and fluency and (2) a qualitative error analysis of outputs from the best submitted system.", }
This paper presents the findings of the third iteration of the AmericasNLP Shared Task on Machine Translation. This year{'}s competition features eleven Indigenous languages found across North, Central, and South America. A total of six teams participate with a total of 157 submissions across all languages and models. Two baselines {--} the Sheffield and Helsinki systems from 2023 {--} are provided and represent hard-to-beat starting points for the competition. In addition to the baselines, teams are given access to a new repository of training data which consists of data collected by teams in prior shared tasks. Using ChrF++ as the main competition metric, we see improvements over the baseline for 4 languages: Chatino, Guarani, Quechua, and Rar{\'a}muri, with performance increases over the best baseline of 4.2 ChrF++. In this work, we present a summary of the submitted systems, results, and a human evaluation of system outputs for Bribri, which consists of both (1) a rating of meaning and fluency and (2) a qualitative error analysis of outputs from the best submitted system.
[ "Ebrahimi, Abteen", "de Gibert, Ona", "Vazquez, Raul", "Coto-Solano, Rol", "o", "Denisov, Pavel", "Pugh, Robert", "Mager, Manuel", "Oncevay, Arturo", "Chiruzzo, Luis", "von der Wense, Katharina", "Rijhwani, Shruti" ]
Findings of the AmericasNLP 2024 Shared Task on Machine Translation into Indigenous Languages
americasnlp-1.28
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.bea-1.1.bib
https://aclanthology.org/2024.bea-1.1/
@inproceedings{scaria-etal-2024-good, title = "How Good are {M}odern {LLM}s in Generating Relevant and High-Quality Questions at Different Bloom{'}s Skill Levels for {I}ndian High School Social Science Curriculum?", author = "Scaria, Nicy and Chenna, Suma Dharani and Subramani, Deepak", editor = {Kochmar, Ekaterina and Bexte, Marie and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Tack, Ana{\"\i}s and Yaneva, Victoria and Yuan, Zheng}, booktitle = "Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bea-1.1", pages = "1--10", abstract = "The creation of pedagogically effective questions is a challenge for teachers and requires significant time and meticulous planning, especially in resource-constrained economies. For example, in India, assessments for social science in high schools are characterized by rote memorization without regard to higher-order skill levels. Automated educational question generation (AEQG) using large language models (LLMs) has the potential to help teachers develop assessments at scale. However, it is important to evaluate the quality and relevance of these questions. In this study, we examine the ability of different LLMs (Falcon 40B, Llama2 70B, Palm 2, GPT 3.5, and GPT 4) to generate relevant and high-quality questions of different cognitive levels, as defined by Bloom{'}s taxonomy. We prompt each model with the same instructions and different contexts to generate 510 questions in the social science curriculum of a state educational board in India. Two human experts used a nine-item rubric to assess linguistic correctness, pedagogical relevance and quality, and adherence to Bloom{'}s skill levels. Our results showed that 91.56{\%} of the LLM-generated questions were relevant and of high quality. This suggests that LLMs can generate relevant and high-quality questions at different cognitive levels, making them useful for creating assessments for scaling education in resource-constrained economies.", }
The creation of pedagogically effective questions is a challenge for teachers and requires significant time and meticulous planning, especially in resource-constrained economies. For example, in India, assessments for social science in high schools are characterized by rote memorization without regard to higher-order skill levels. Automated educational question generation (AEQG) using large language models (LLMs) has the potential to help teachers develop assessments at scale. However, it is important to evaluate the quality and relevance of these questions. In this study, we examine the ability of different LLMs (Falcon 40B, Llama2 70B, Palm 2, GPT 3.5, and GPT 4) to generate relevant and high-quality questions of different cognitive levels, as defined by Bloom{'}s taxonomy. We prompt each model with the same instructions and different contexts to generate 510 questions in the social science curriculum of a state educational board in India. Two human experts used a nine-item rubric to assess linguistic correctness, pedagogical relevance and quality, and adherence to Bloom{'}s skill levels. Our results showed that 91.56{\%} of the LLM-generated questions were relevant and of high quality. This suggests that LLMs can generate relevant and high-quality questions at different cognitive levels, making them useful for creating assessments for scaling education in resource-constrained economies.
[ "Scaria, Nicy", "Chenna, Suma Dharani", "Subramani, Deepak" ]
How Good are Modern LLMs in Generating Relevant and High-Quality Questions at Different Bloom's Skill Levels for Indian High School Social Science Curriculum?
bea-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.bea-1.2.bib
https://aclanthology.org/2024.bea-1.2/
@inproceedings{stahlberg-kumar-2024-synthetic, title = "Synthetic Data Generation for Low-resource Grammatical Error Correction with Tagged Corruption Models", author = "Stahlberg, Felix and Kumar, Shankar", editor = {Kochmar, Ekaterina and Bexte, Marie and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Tack, Ana{\"\i}s and Yaneva, Victoria and Yuan, Zheng}, booktitle = "Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bea-1.2", pages = "11--16", abstract = "Tagged corruption models provide precise control over the introduction of grammatical errors into clean text. This capability has made them a powerful tool for generating pre-training data for grammatical error correction (GEC) in English. In this work, we demonstrate their application to four languages with substantially fewer GEC resources than English: German, Romanian, Russian, and Spanish. We release a new tagged-corruption dataset consisting of 2.5M examples per language that was generated by a fine-tuned PaLM 2 foundation model. Pre-training on tagged corruptions yields consistent gains across all four languages, especially for small model sizes and languages with limited human-labelled data.", }
Tagged corruption models provide precise control over the introduction of grammatical errors into clean text. This capability has made them a powerful tool for generating pre-training data for grammatical error correction (GEC) in English. In this work, we demonstrate their application to four languages with substantially fewer GEC resources than English: German, Romanian, Russian, and Spanish. We release a new tagged-corruption dataset consisting of 2.5M examples per language that was generated by a fine-tuned PaLM 2 foundation model. Pre-training on tagged corruptions yields consistent gains across all four languages, especially for small model sizes and languages with limited human-labelled data.
[ "Stahlberg, Felix", "Kumar, Shankar" ]
Synthetic Data Generation for Low-resource Grammatical Error Correction with Tagged Corruption Models
bea-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
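The record above generates synthetic GEC pre-training data by corrupting clean text with a fine-tuned PaLM 2 tagged corruption model, which cannot be reproduced here. The sketch below only illustrates the shape of the resulting training data, (corrupted source, clean target) pairs, using toy rule-based noise as a stand-in for the learned, tag-controlled corruption model.

```python
# Toy stand-in for error corruption: rule-based noise instead of the paper's
# fine-tuned, tag-conditioned PaLM 2 model. Shows only the data shape.
import random

def corrupt(sentence: str, p_drop: float = 0.1, p_swap: float = 0.1) -> str:
    """Introduce simple synthetic errors (word drops and adjacent swaps)."""
    words = sentence.split()
    out = []
    i = 0
    while i < len(words):
        if random.random() < p_drop:
            i += 1                           # drop a word
            continue
        if i + 1 < len(words) and random.random() < p_swap:
            out += [words[i + 1], words[i]]  # swap adjacent words
            i += 2
            continue
        out.append(words[i])
        i += 1
    return " ".join(out)

random.seed(0)
clean = "Los modelos de corrección gramatical necesitan datos de entrenamiento"
print((corrupt(clean), clean))  # (source with errors, clean target)
```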
https://aclanthology.org/2024.bea-1.3.bib
https://aclanthology.org/2024.bea-1.3/
@inproceedings{omelianchuk-etal-2024-pillars, title = "Pillars of Grammatical Error Correction: Comprehensive Inspection Of Contemporary Approaches In The Era of Large Language Models", author = "Omelianchuk, Kostiantyn and Liubonko, Andrii and Skurzhanskyi, Oleksandr and Chernodub, Artem and Korniienko, Oleksandr and Samokhin, Igor", editor = {Kochmar, Ekaterina and Bexte, Marie and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Tack, Ana{\"\i}s and Yaneva, Victoria and Yuan, Zheng}, booktitle = "Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bea-1.3", pages = "17--33", abstract = "In this paper, we carry out experimental research on Grammatical Error Correction, delving into the nuances of single-model systems, comparing the efficiency of ensembling and ranking methods, and exploring the application of large language models to GEC as single-model systems, as parts of ensembles, and as ranking methods. We set new state-of-the-art records with F{\_}0.5 scores of 72.8 on CoNLL-2014-test and 81.4 on BEA-test, respectively. To support further advancements in GEC and ensure the reproducibility of our research, we make our code, trained models, and systems{'} outputs publicly available, facilitating future findings.", }
In this paper, we carry out experimental research on Grammatical Error Correction, delving into the nuances of single-model systems, comparing the efficiency of ensembling and ranking methods, and exploring the application of large language models to GEC as single-model systems, as parts of ensembles, and as ranking methods. We set new state-of-the-art records with F{\_}0.5 scores of 72.8 on CoNLL-2014-test and 81.4 on BEA-test, respectively. To support further advancements in GEC and ensure the reproducibility of our research, we make our code, trained models, and systems{'} outputs publicly available, facilitating future findings.
[ "Omelianchuk, Kostiantyn", "Liubonko, Andrii", "Skurzhanskyi, Oleks", "r", "Chernodub, Artem", "Korniienko, Oleks", "r", "Samokhin, Igor" ]
Pillars of Grammatical Error Correction: Comprehensive Inspection Of Contemporary Approaches In The Era of Large Language Models
bea-1.3
Poster
2404.14914
[ "https://github.com/grammarly/pillars-of-gec" ]
-1
-1
-1
-1
0
[]
[]
[]
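The F_0.5 scores reported in the record above are the standard GEC variant of the F-measure, weighting precision twice as heavily as recall because spurious edits are considered more harmful than missed ones. A small worked example with toy precision and recall values:

```python
# F_beta = (1 + beta^2) * P * R / (beta^2 * P + R), with beta = 0.5 for GEC.
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Toy numbers for illustration only, not the paper's edit counts.
print(round(f_beta(0.80, 0.55), 4))
```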
https://aclanthology.org/2024.bea-1.4.bib
https://aclanthology.org/2024.bea-1.4/
@inproceedings{siyan-etal-2024-using, title = "Using Adaptive Empathetic Responses for Teaching {E}nglish", author = "Siyan, Li and Shao, Teresa and Hirschberg, Julia and Yu, Zhou", editor = {Kochmar, Ekaterina and Bexte, Marie and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Tack, Ana{\"\i}s and Yaneva, Victoria and Yuan, Zheng}, booktitle = "Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bea-1.4", pages = "34--53", abstract = "Existing English-teaching chatbots rarely incorporate empathy explicitly in their feedback, but empathetic feedback could help keep students engaged and reduce learner anxiety. Toward this end, we propose the task of negative emotion detection via audio, for recognizing empathetic feedback opportunities in language learning. We then build the first spoken English-teaching chatbot with adaptive, empathetic feedback. This feedback is synthesized through automatic prompt optimization of ChatGPT and is evaluated with English learners. We demonstrate the effectiveness of our system through a preliminary user study.", }
Existing English-teaching chatbots rarely incorporate empathy explicitly in their feedback, but empathetic feedback could help keep students engaged and reduce learner anxiety. Toward this end, we propose the task of negative emotion detection via audio, for recognizing empathetic feedback opportunities in language learning. We then build the first spoken English-teaching chatbot with adaptive, empathetic feedback. This feedback is synthesized through automatic prompt optimization of ChatGPT and is evaluated with English learners. We demonstrate the effectiveness of our system through a preliminary user study.
[ "Siyan, Li", "Shao, Teresa", "Hirschberg, Julia", "Yu, Zhou" ]
Using Adaptive Empathetic Responses for Teaching English
bea-1.4
Poster
2404.13764
[ "https://github.com/siyan-sylvia-li/adaptive_empathetic_bea2024" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.bea-1.5.bib
https://aclanthology.org/2024.bea-1.5/
@inproceedings{rooein-etal-2024-beyond, title = "Beyond Flesch-Kincaid: Prompt-based Metrics Improve Difficulty Classification of Educational Texts", author = {Rooein, Donya and R{\"o}ttger, Paul and Shaitarova, Anastassia and Hovy, Dirk}, editor = {Kochmar, Ekaterina and Bexte, Marie and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Tack, Ana{\"\i}s and Yaneva, Victoria and Yuan, Zheng}, booktitle = "Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bea-1.5", pages = "54--67", abstract = "Using large language models (LLMs) for educational applications like dialogue-based teaching is a hot topic. Effective teaching, however, requires teachers to adapt the difficulty of content and explanations to the education level of their students. Even the best LLMs today struggle to do this well. If we want to improve LLMs on this adaptation task, we need to be able to measure adaptation success reliably. However, current Static metrics for text difficulty, like the Flesch-Kincaid Reading Ease score, are known to be crude and brittle. We, therefore, introduce and evaluate a new set of Prompt-based metrics for text difficulty. Based on a user study, we create Prompt-based metrics as inputs for LLMs. They leverage LLM{'}s general language understanding capabilities to capture more abstract and complex features than Static metrics. Regression experiments show that adding our Prompt-based metrics significantly improves text difficulty classification over Static metrics alone. Our results demonstrate the promise of using LLMs to evaluate text adaptation to different education levels.", }
Using large language models (LLMs) for educational applications like dialogue-based teaching is a hot topic. Effective teaching, however, requires teachers to adapt the difficulty of content and explanations to the education level of their students. Even the best LLMs today struggle to do this well. If we want to improve LLMs on this adaptation task, we need to be able to measure adaptation success reliably. However, current Static metrics for text difficulty, like the Flesch-Kincaid Reading Ease score, are known to be crude and brittle. We, therefore, introduce and evaluate a new set of Prompt-based metrics for text difficulty. Based on a user study, we create Prompt-based metrics as inputs for LLMs. They leverage LLM{'}s general language understanding capabilities to capture more abstract and complex features than Static metrics. Regression experiments show that adding our Prompt-based metrics significantly improves text difficulty classification over Static metrics alone. Our results demonstrate the promise of using LLMs to evaluate text adaptation to different education levels.
[ "Rooein, Donya", "R{\\\"o}ttger, Paul", "Shaitarova, Anastassia", "Hovy, Dirk" ]
Beyond Flesch-Kincaid: Prompt-based Metrics Improve Difficulty Classification of Educational Texts
bea-1.5
Poster
2405.09482
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
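For context, the Flesch Reading Ease score that the record above contrasts with its prompt-based metrics is a fixed linear formula over surface counts: 206.835 - 1.015 * (words per sentence) - 84.6 * (syllables per word). The sketch below computes it with a rough vowel-group syllable heuristic; real readability tools use dictionaries or better syllable rules.

```python
# Flesch Reading Ease from surface counts, with a crude syllable heuristic.
import re

def count_syllables(word: str) -> int:
    # Vowel-group heuristic; approximate by design.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    n_words = max(1, len(words))
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

print(round(flesch_reading_ease("The cat sat on the mat. It was warm."), 1))
```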
https://aclanthology.org/2024.bea-1.6.bib
https://aclanthology.org/2024.bea-1.6/
@inproceedings{kobayashi-etal-2024-large, title = "Large Language Models Are State-of-the-Art Evaluator for Grammatical Error Correction", author = "Kobayashi, Masamune and Mita, Masato and Komachi, Mamoru", editor = {Kochmar, Ekaterina and Bexte, Marie and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Tack, Ana{\"\i}s and Yaneva, Victoria and Yuan, Zheng}, booktitle = "Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bea-1.6", pages = "68--77", abstract = "Large Language Models (LLMs) have been reported to outperform existing automatic evaluation metrics in some tasks, such as text summarization and machine translation. However, there has been a lack of research on LLMs as evaluators in grammatical error correction (GEC). In this study, we investigate the performance of LLMs in GEC evaluation by employing prompts designed to incorporate various evaluation criteria inspired by previous research. Our extensive experimental results demonstrate that GPT-4 achieved Kendall{'}s rank correlation of 0.662 with human judgments, surpassing all existing methods. Furthermore, in recent GEC evaluations, we have underscored the significance of the LLMs scale and particularly emphasized the importance of fluency among evaluation criteria.", }
Large Language Models (LLMs) have been reported to outperform existing automatic evaluation metrics in some tasks, such as text summarization and machine translation. However, there has been a lack of research on LLMs as evaluators in grammatical error correction (GEC). In this study, we investigate the performance of LLMs in GEC evaluation by employing prompts designed to incorporate various evaluation criteria inspired by previous research. Our extensive experimental results demonstrate that GPT-4 achieved Kendall{'}s rank correlation of 0.662 with human judgments, surpassing all existing methods. Furthermore, in recent GEC evaluations, we have underscored the significance of the LLMs scale and particularly emphasized the importance of fluency among evaluation criteria.
[ "Kobayashi, Masamune", "Mita, Masato", "Komachi, Mamoru" ]
Large Language Models Are State-of-the-Art Evaluator for Grammatical Error Correction
bea-1.6
Poster
2403.17540
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.bea-1.7.bib
https://aclanthology.org/2024.bea-1.7/
@inproceedings{kwako-ormerod-2024-language, title = "Can Language Models Guess Your Identity? Analyzing Demographic Biases in {AI} Essay Scoring", author = "Kwako, Alexander and Ormerod, Christopher", editor = {Kochmar, Ekaterina and Bexte, Marie and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Tack, Ana{\"\i}s and Yaneva, Victoria and Yuan, Zheng}, booktitle = "Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bea-1.7", pages = "78--86", abstract = "Large language models (LLMs) are increasingly used for automated scoring of student essays. However, these models may perpetuate societal biases if not carefully monitored. This study analyzes potential biases in an LLM (XLNet) trained to score persuasive student essays, based on data from the PERSUADE corpus. XLNet achieved strong performance based on quadratic weighted kappa, standardized mean difference, and exact agreement with human scores. Using available metadata, we performed analyses of scoring differences across gender, race/ethnicity, English language learning status, socioeconomic status, and disability status. Automated scores exhibited small magnifications of marginal differences in human scoring, favoring female students over males and White students over Black students. To further probe potential biases, we found that separate XLNet classifiers and XLNet hidden states weakly predicted demographic membership. Overall, results reinforce the need for continued fairness analyses as use of LLMs expands in education.", }
Large language models (LLMs) are increasingly used for automated scoring of student essays. However, these models may perpetuate societal biases if not carefully monitored. This study analyzes potential biases in an LLM (XLNet) trained to score persuasive student essays, based on data from the PERSUADE corpus. XLNet achieved strong performance based on quadratic weighted kappa, standardized mean difference, and exact agreement with human scores. Using available metadata, we performed analyses of scoring differences across gender, race/ethnicity, English language learning status, socioeconomic status, and disability status. Automated scores exhibited small magnifications of marginal differences in human scoring, favoring female students over males and White students over Black students. To further probe potential biases, we found that separate XLNet classifiers and XLNet hidden states weakly predicted demographic membership. Overall, results reinforce the need for continued fairness analyses as use of LLMs expands in education.
[ "Kwako, Alex", "er", "Ormerod, Christopher" ]
Can Language Models Guess Your Identity? Analyzing Demographic Biases in AI Essay Scoring
bea-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.bea-1.8.bib
https://aclanthology.org/2024.bea-1.8/
@inproceedings{yaneva-etal-2024-automated, title = "Automated Scoring of Clinical Patient Notes: Findings From the {K}aggle Competition and Their Translation into Practice", author = "Yaneva, Victoria and Suen, King Yiu and Ha, Le An and Mee, Janet and Quranda, Milton and Harik, Polina", editor = {Kochmar, Ekaterina and Bexte, Marie and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Tack, Ana{\"\i}s and Yaneva, Victoria and Yuan, Zheng}, booktitle = "Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bea-1.8", pages = "87--98", abstract = "Scoring clinical patient notes (PNs) written by medical students is a necessary but resource-intensive task in medical education. This paper describes the organization and key lessons from a Kaggle competition on automated scoring of such notes. 1,471 teams took part in the competition and developed an extensive, publicly available code repository of varying solutions evaluated over the first public dataset for this task. The most successful approaches from this community effort are described and utilized in the development of a PN scoring system. We discuss the choice of models and system architecture with a view to operational use and scalability, and evaluate its performance on both the public Kaggle data (10 clinical cases, 43,985 PNs) and an extended internal dataset (178 clinical cases, 6,940 PNs). The results show that the system significantly outperforms a state-of-the-art existing tool for PN scoring and that task-adaptive pretraining using masked language modeling can be an effective approach even for small training samples.", }
Scoring clinical patient notes (PNs) written by medical students is a necessary but resource-intensive task in medical education. This paper describes the organization and key lessons from a Kaggle competition on automated scoring of such notes. 1,471 teams took part in the competition and developed an extensive, publicly available code repository of varying solutions evaluated over the first public dataset for this task. The most successful approaches from this community effort are described and utilized in the development of a PN scoring system. We discuss the choice of models and system architecture with a view to operational use and scalability, and evaluate its performance on both the public Kaggle data (10 clinical cases, 43,985 PNs) and an extended internal dataset (178 clinical cases, 6,940 PNs). The results show that the system significantly outperforms a state-of-the-art existing tool for PN scoring and that task-adaptive pretraining using masked language modeling can be an effective approach even for small training samples.
[ "Yaneva, Victoria", "Suen, King Yiu", "Ha, Le An", "Mee, Janet", "Qur", "a, Milton", "Harik, Polina" ]
Automated Scoring of Clinical Patient Notes: Findings From the Kaggle Competition and Their Translation into Practice
bea-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.bea-1.9.bib
https://aclanthology.org/2024.bea-1.9/
@inproceedings{crossley-etal-2024-world, title = "A World {CLASSE} Student Summary Corpus", author = "Crossley, Scott and Baffour, Perpetual and Dascalu, Mihai and Ruseti, Stefan", editor = {Kochmar, Ekaterina and Bexte, Marie and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Tack, Ana{\"\i}s and Yaneva, Victoria and Yuan, Zheng}, booktitle = "Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bea-1.9", pages = "99--107", abstract = "This paper introduces the Common Lit Augmented Student Summary Evaluation (CLASSE) corpus. The corpus comprises 11,213 summaries written over six prompts by students in grades 3-12 while using the CommonLit website. Each summary was scored by expert human raters on analytic features related to main points, details, organization, voice, paraphrasing, and language beyond the source text. The human scores were aggregated into two component scores related to content and wording. The final corpus was the focus of a Kaggle competition hosted in late 2022 and completed in 2023 in which over 2,000 teams participated. The paper includes a baseline scoring model for the corpus based on a Large Language Model (Longformer model). The paper also provides an overview of the winning models from the Kaggle competition.", }
This paper introduces the Common Lit Augmented Student Summary Evaluation (CLASSE) corpus. The corpus comprises 11,213 summaries written over six prompts by students in grades 3-12 while using the CommonLit website. Each summary was scored by expert human raters on analytic features related to main points, details, organization, voice, paraphrasing, and language beyond the source text. The human scores were aggregated into two component scores related to content and wording. The final corpus was the focus of a Kaggle competition hosted in late 2022 and completed in 2023 in which over 2,000 teams participated. The paper includes a baseline scoring model for the corpus based on a Large Language Model (Longformer model). The paper also provides an overview of the winning models from the Kaggle competition.
[ "Crossley, Scott", "Baffour, Perpetual", "Dascalu, Mihai", "Ruseti, Stefan" ]
A World CLASSE Student Summary Corpus
bea-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.bea-1.10.bib
https://aclanthology.org/2024.bea-1.10/
@inproceedings{ashok-kumar-lan-2024-improving, title = "Improving Socratic Question Generation using Data Augmentation and Preference Optimization", author = "Ashok Kumar, Nischal and Lan, Andrew", editor = {Kochmar, Ekaterina and Bexte, Marie and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Tack, Ana{\"\i}s and Yaneva, Victoria and Yuan, Zheng}, booktitle = "Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bea-1.10", pages = "108--118", abstract = "The Socratic method is a way of guiding students toward solving a problem independently without directly revealing the solution to the problem by asking incremental questions. Although this method has been shown to significantly improve student learning outcomes, it remains a complex labor-intensive task for instructors. Large language models (LLMs) can be used to augment human effort by automatically generating Socratic questions for students. However, existing methods that involve prompting these LLMs sometimes produce invalid outputs, e.g., those that directly reveal the solution to the problem or provide irrelevant or premature questions. To alleviate this problem, inspired by reinforcement learning with AI feedback (RLAIF), we first propose a data augmentation method to enrich existing Socratic questioning datasets with questions that are invalid in specific ways. Also, we propose a method to optimize open-source LLMs such as LLama 2 to prefer ground-truth questions over generated invalid ones, using direct preference optimization (DPO). Our experiments on a Socratic questions dataset for student code debugging show that a DPO-optimized LLama 2-7B model can effectively avoid generating invalid questions, and as a result, outperforms existing state-of-the-art prompting methods.", }
The Socratic method is a way of guiding students toward solving a problem independently without directly revealing the solution to the problem by asking incremental questions. Although this method has been shown to significantly improve student learning outcomes, it remains a complex labor-intensive task for instructors. Large language models (LLMs) can be used to augment human effort by automatically generating Socratic questions for students. However, existing methods that involve prompting these LLMs sometimes produce invalid outputs, e.g., those that directly reveal the solution to the problem or provide irrelevant or premature questions. To alleviate this problem, inspired by reinforcement learning with AI feedback (RLAIF), we first propose a data augmentation method to enrich existing Socratic questioning datasets with questions that are invalid in specific ways. Also, we propose a method to optimize open-source LLMs such as LLama 2 to prefer ground-truth questions over generated invalid ones, using direct preference optimization (DPO). Our experiments on a Socratic questions dataset for student code debugging show that a DPO-optimized LLama 2-7B model can effectively avoid generating invalid questions, and as a result, outperforms existing state-of-the-art prompting methods.
[ "Ashok Kumar, Nischal", "Lan, Andrew" ]
Improving Socratic Question Generation using Data Augmentation and Preference Optimization
bea-1.10
Poster
2403.00199
[ "https://github.com/umass-ml4ed/socratic-quest-gen" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.bea-1.11.bib
https://aclanthology.org/2024.bea-1.11/
@inproceedings{bexte-etal-2024-scoring, title = "Scoring with Confidence? {--} Exploring High-confidence Scoring for Saving Manual Grading Effort", author = {Bexte, Marie and Horbach, Andrea and Sch{\"u}tzler, Lena and Christ, Oliver and Zesch, Torsten}, editor = {Kochmar, Ekaterina and Bexte, Marie and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Tack, Ana{\"\i}s and Yaneva, Victoria and Yuan, Zheng}, booktitle = "Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bea-1.11", pages = "119--124", abstract = "A possible way to save manual grading effort in short answer scoring is to automatically score answers for which the classifier is highly confident. We explore the feasibility of this approach in a high-stakes exam setting, evaluating three different similarity-based scoring methods, where the similarity score is a direct proxy for model confidence. The decision on an appropriate level of confidence should ideally be made before scoring a new prompt. We thus probe to what extent confidence thresholds are consistent across different datasets and prompts. We find that high-confidence thresholds vary on a prompt-to-prompt basis, and that the overall potential of increased performance at a reasonable cost of additional manual effort is limited.", }
A possible way to save manual grading effort in short answer scoring is to automatically score answers for which the classifier is highly confident. We explore the feasibility of this approach in a high-stakes exam setting, evaluating three different similarity-based scoring methods, where the similarity score is a direct proxy for model confidence. The decision on an appropriate level of confidence should ideally be made before scoring a new prompt. We thus probe to what extent confidence thresholds are consistent across different datasets and prompts. We find that high-confidence thresholds vary on a prompt-to-prompt basis, and that the overall potential of increased performance at a reasonable cost of additional manual effort is limited.
[ "Bexte, Marie", "Horbach, Andrea", "Sch{\\\"u}tzler, Lena", "Christ, Oliver", "Zesch, Torsten" ]
Scoring with Confidence? – Exploring High-confidence Scoring for Saving Manual Grading Effort
bea-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.bea-1.12.bib
https://aclanthology.org/2024.bea-1.12/
@inproceedings{de-vrindt-etal-2024-predicting, title = "Predicting Initial Essay Quality Scores to Increase the Efficiency of Comparative Judgment Assessments", author = {De Vrindt, Michiel and Tack, Ana{\"\i}s and Bouwer, Renske and Van Den Noortgate, Wim and Lesterhuis, Marije}, editor = {Kochmar, Ekaterina and Bexte, Marie and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Tack, Ana{\"\i}s and Yaneva, Victoria and Yuan, Zheng}, booktitle = "Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bea-1.12", pages = "125--136", abstract = "Comparative judgment (CJ) is a method that can be used to assess the writing quality of student essays based on repeated pairwise comparisons by multiple assessors. Although the assessment method is known to have high validity and reliability, it can be particularly inefficient, as assessors must make many judgments before the scores become reliable. Prior research has investigated methods to improve the efficiency of CJ, yet these methods introduce additional challenges, notably stemming from the initial lack of information at the start of the assessment, which is known as a cold-start problem. This paper reports on a study in which we predict the initial quality scores of essays to establish a warm start for CJ. To achieve this, we construct informative prior distributions for the quality scores based on the predicted initial quality scores. Through simulation studies, we demonstrate that our approach increases the efficiency of CJ: On average, assessors need to make 30{\%} fewer judgments for each essay to reach an overall reliability level of 0.70.", }
Comparative judgment (CJ) is a method that can be used to assess the writing quality of student essays based on repeated pairwise comparisons by multiple assessors. Although the assessment method is known to have high validity and reliability, it can be particularly inefficient, as assessors must make many judgments before the scores become reliable. Prior research has investigated methods to improve the efficiency of CJ, yet these methods introduce additional challenges, notably stemming from the initial lack of information at the start of the assessment, which is known as a cold-start problem. This paper reports on a study in which we predict the initial quality scores of essays to establish a warm start for CJ. To achieve this, we construct informative prior distributions for the quality scores based on the predicted initial quality scores. Through simulation studies, we demonstrate that our approach increases the efficiency of CJ: On average, assessors need to make 30{\%} fewer judgments for each essay to reach an overall reliability level of 0.70.
[ "De Vrindt, Michiel", "Tack, Ana{\\\"\\i}s", "Bouwer, Renske", "Van Den Noortgate, Wim", "Lesterhuis, Marije" ]
Predicting Initial Essay Quality Scores to Increase the Efficiency of Comparative Judgment Assessments
bea-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.bea-1.13.bib
https://aclanthology.org/2024.bea-1.13/
@inproceedings{hayat-etal-2024-improving, title = "Improving Transfer Learning for Early Forecasting of Academic Performance by Contextualizing Language Models", author = "Hayat, Ahatsham and Khan, Bilal and Hasan, Mohammad", editor = {Kochmar, Ekaterina and Bexte, Marie and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Tack, Ana{\"\i}s and Yaneva, Victoria and Yuan, Zheng}, booktitle = "Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bea-1.13", pages = "137--148", abstract = "This paper presents a cutting-edge method that harnesses contextualized language models (LMs) to significantly enhance the prediction of early academic performance in STEM fields. Our approach uniquely tackles the challenge of transfer learning with limited-domain data. Specifically, we overcome this challenge by contextualizing students{'} cognitive trajectory data through the integration of both distal background factors (comprising academic information, demographic details, and socioeconomic indicators) and proximal non-cognitive factors (such as emotional engagement). By tapping into the rich prior knowledge encoded within pre-trained LMs, we effectively reframe academic performance forecasting as a task ideally suited for natural language processing. Our research rigorously examines three key aspects: the impact of data contextualization on prediction improvement, the effectiveness of our approach compared to traditional numeric-based models, and the influence of LM capacity on prediction accuracy. The results underscore the significant advantages of utilizing larger LMs with contextualized inputs, representing a notable advancement in the precision of early performance forecasts. These findings emphasize the importance of employing contextualized LMs to enhance artificial intelligence-driven educational support systems and overcome data scarcity challenges.", }
This paper presents a cutting-edge method that harnesses contextualized language models (LMs) to significantly enhance the prediction of early academic performance in STEM fields. Our approach uniquely tackles the challenge of transfer learning with limited-domain data. Specifically, we overcome this challenge by contextualizing students{'} cognitive trajectory data through the integration of both distal background factors (comprising academic information, demographic details, and socioeconomic indicators) and proximal non-cognitive factors (such as emotional engagement). By tapping into the rich prior knowledge encoded within pre-trained LMs, we effectively reframe academic performance forecasting as a task ideally suited for natural language processing. Our research rigorously examines three key aspects: the impact of data contextualization on prediction improvement, the effectiveness of our approach compared to traditional numeric-based models, and the influence of LM capacity on prediction accuracy. The results underscore the significant advantages of utilizing larger LMs with contextualized inputs, representing a notable advancement in the precision of early performance forecasts. These findings emphasize the importance of employing contextualized LMs to enhance artificial intelligence-driven educational support systems and overcome data scarcity challenges.
[ "Hayat, Ahatsham", "Khan, Bilal", "Hasan, Mohammad" ]
Improving Transfer Learning for Early Forecasting of Academic Performance by Contextualizing Language Models
bea-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.bea-1.14.bib
https://aclanthology.org/2024.bea-1.14/
@inproceedings{banno-etal-2024-gpt, title = "Can {GPT}-4 do {L}2 analytic assessment?", author = "Banno, Stefano and Vydana, Hari Krishna and Knill, Kate and Gales, Mark", editor = {Kochmar, Ekaterina and Bexte, Marie and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Tack, Ana{\"\i}s and Yaneva, Victoria and Yuan, Zheng}, booktitle = "Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bea-1.14", pages = "149--164", abstract = "Automated essay scoring (AES) to evaluate second language (L2) proficiency has been a firmly established technology used in educational contexts for decades. Although holistic scoring has seen advancements in AES that match or even exceed human performance, analytic scoring still encounters issues as it inherits flaws and shortcomings from the human scoring process. The recent introduction of large language models presents new opportunities for automating the evaluation of specific aspects of L2 writing proficiency. In this paper, we perform a series of experiments using GPT-4 in a zero-shot fashion on a publicly available dataset annotated with holistic scores based on the Common European Framework of Reference and aim to extract detailed information about their underlying analytic components. We observe significant correlations between the automatically predicted analytic scores and multiple features associated with the individual proficiency components.", }
Automated essay scoring (AES) to evaluate second language (L2) proficiency has been a firmly established technology used in educational contexts for decades. Although holistic scoring has seen advancements in AES that match or even exceed human performance, analytic scoring still encounters issues as it inherits flaws and shortcomings from the human scoring process. The recent introduction of large language models presents new opportunities for automating the evaluation of specific aspects of L2 writing proficiency. In this paper, we perform a series of experiments using GPT-4 in a zero-shot fashion on a publicly available dataset annotated with holistic scores based on the Common European Framework of Reference and aim to extract detailed information about their underlying analytic components. We observe significant correlations between the automatically predicted analytic scores and multiple features associated with the individual proficiency components.
[ "Banno, Stefano", "Vydana, Hari Krishna", "Knill, Kate", "Gales, Mark" ]
Can GPT-4 do L2 analytic assessment?
bea-1.14
Poster
2404.18557
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]