bibtex_url (string, 41-52) | proceedings (string, 38-49) | bibtext (string, 788-3.49k) | abstract (string, 0-2.12k) | authors (sequence, 1-58) | title (string, 16-181) | id (string, 7-18) | type (string, 2 classes) | arxiv_id (string, 0-10) | GitHub (sequence, 1-1) | paper_page (string, 170 classes) | n_linked_authors (int64, -1 to 9) | upvotes (int64, -1 to 56) | num_comments (int64, -1 to 9) | n_authors (int64, -1 to 57) | paper_page_exists_pre_conf (int64, 0-1) | Models (sequence, 0-99) | Datasets (sequence, 0-5) | Spaces (sequence, 0-57) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2024.naacl-short.14.bib | https://aclanthology.org/2024.naacl-short.14/ | @inproceedings{jones-etal-2024-multi,
title = "A Multi-Aspect Framework for Counter Narrative Evaluation using Large Language Models",
author = "Jones, Jaylen and
Mo, Lingbo and
Fosler-Lussier, Eric and
Sun, Huan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.14",
doi = "10.18653/v1/2024.naacl-short.14",
pages = "147--168",
abstract = "Counter narratives - informed responses to hate speech contexts designed to refute hateful claims and de-escalate encounters - have emerged as an effective hate speech intervention strategy. While previous work has proposed automatic counter narrative generation methods to aid manual interventions, the evaluation of these approaches remains underdeveloped. Previous automatic metrics for counter narrative evaluation lack alignment with human judgment as they rely on superficial reference comparisons instead of incorporating key aspects of counter narrative quality as evaluation criteria. To address prior evaluation limitations, we propose a novel evaluation framework prompting LLMs to provide scores and feedback for generated counter narrative candidates using 5 defined aspects derived from guidelines from counter narrative specialized NGOs. We found that LLM evaluators achieve strong alignment to human-annotated scores and feedback and outperform alternative metrics, indicating their potential as multi-aspect, reference-free and interpretable evaluators for counter narrative evaluation.",
}
| Counter narratives - informed responses to hate speech contexts designed to refute hateful claims and de-escalate encounters - have emerged as an effective hate speech intervention strategy. While previous work has proposed automatic counter narrative generation methods to aid manual interventions, the evaluation of these approaches remains underdeveloped. Previous automatic metrics for counter narrative evaluation lack alignment with human judgment as they rely on superficial reference comparisons instead of incorporating key aspects of counter narrative quality as evaluation criteria. To address prior evaluation limitations, we propose a novel evaluation framework prompting LLMs to provide scores and feedback for generated counter narrative candidates using 5 defined aspects derived from guidelines from counter narrative specialized NGOs. We found that LLM evaluators achieve strong alignment to human-annotated scores and feedback and outperform alternative metrics, indicating their potential as multi-aspect, reference-free and interpretable evaluators for counter narrative evaluation. | [
"Jones, Jaylen",
"Mo, Lingbo",
"Fosler-Lussier, Eric",
"Sun, Huan"
] | A Multi-Aspect Framework for Counter Narrative Evaluation using Large Language Models | naacl-short.14 | Poster | 2402.11676 | [
"https://github.com/osu-nlp-group/llm-cn-eval"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.15.bib | https://aclanthology.org/2024.naacl-short.15/ | @inproceedings{bhasin-etal-2024-multi,
title = "How does Multi-Task Training Affect Transformer In-Context Capabilities? Investigations with Function Classes",
author = "Bhasin, Harmon and
Ossowski, Timothy and
Zhong, Yiqiao and
Hu, Junjie",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.15",
doi = "10.18653/v1/2024.naacl-short.15",
pages = "169--187",
abstract = "Large language models (LLM) have recently shown the extraordinary ability to perform unseen tasks based on few-shot examples provided as text, also known as in-context learning (ICL). While recent works have attempted to understand the mechanisms driving ICL, few have explored training strategies that incentivize these models to generalize to multiple tasks. Multi-task learning (MTL) for generalist models is a promising direction that offers transfer learning potential, enabling large parameterized models to be trained from simpler, related tasks. In this work, we investigate the combination of MTL with ICL to build models that efficiently learn tasks while being robust to out-of-distribution examples. We propose several effective curriculum learning strategies that allow ICL models to achieve higher data efficiency and more stable convergence. Our experiments reveal that ICL models can effectively learn difficult tasks by training on progressively harder tasks while mixing in prior tasks, denoted as mixed curriculum in this work.",
}
| Large language models (LLM) have recently shown the extraordinary ability to perform unseen tasks based on few-shot examples provided as text, also known as in-context learning (ICL). While recent works have attempted to understand the mechanisms driving ICL, few have explored training strategies that incentivize these models to generalize to multiple tasks. Multi-task learning (MTL) for generalist models is a promising direction that offers transfer learning potential, enabling large parameterized models to be trained from simpler, related tasks. In this work, we investigate the combination of MTL with ICL to build models that efficiently learn tasks while being robust to out-of-distribution examples. We propose several effective curriculum learning strategies that allow ICL models to achieve higher data efficiency and more stable convergence. Our experiments reveal that ICL models can effectively learn difficult tasks by training on progressively harder tasks while mixing in prior tasks, denoted as mixed curriculum in this work. | [
"Bhasin, Harmon",
"Ossowski, Timothy",
"Zhong, Yiqiao",
"Hu, Junjie"
] | How does Multi-Task Training Affect Transformer In-Context Capabilities? Investigations with Function Classes | naacl-short.15 | Poster | 2404.03558 | [
"https://github.com/harmonbhasin/curriculum_learning_icl"
] | https://huggingface.co/papers/2404.03558 | 2 | 0 | 0 | 4 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-short.16.bib | https://aclanthology.org/2024.naacl-short.16/ | @inproceedings{zhang-etal-2024-celi,
title = "{CELI}: Simple yet Effective Approach to Enhance Out-of-Domain Generalization of Cross-Encoders.",
author = "Zhang, Crystina and
Li, Minghan and
Lin, Jimmy",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.16",
doi = "10.18653/v1/2024.naacl-short.16",
pages = "188--196",
abstract = "In text ranking, it is generally believed that the cross-encoders already gather sufficient token interaction information via the attention mechanism in the hidden layers. However, our results show that the cross-encoders can consistently benefit from additional token interaction in the similarity computation at the last layer. We introduce CELI (Cross-Encoder with Late Interaction), which incorporates a late interaction layer into the current cross-encoder models. This simple method brings 5{\%} improvement on BEIR without compromising in-domain effectiveness or search latency. Extensive experiments show that this finding is consistent across different sizes of the cross-encoder models and the first-stage retrievers. Our findings suggest that boiling all information into the [CLS] token is a suboptimal use for cross-encoders, and advocate further studies to investigate its relevance score mechanism.",
}
| In text ranking, it is generally believed that the cross-encoders already gather sufficient token interaction information via the attention mechanism in the hidden layers. However, our results show that the cross-encoders can consistently benefit from additional token interaction in the similarity computation at the last layer. We introduce CELI (Cross-Encoder with Late Interaction), which incorporates a late interaction layer into the current cross-encoder models. This simple method brings 5{\%} improvement on BEIR without compromising in-domain effectiveness or search latency. Extensive experiments show that this finding is consistent across different sizes of the cross-encoder models and the first-stage retrievers. Our findings suggest that boiling all information into the [CLS] token is a suboptimal use for cross-encoders, and advocate further studies to investigate its relevance score mechanism. | [
"Zhang, Crystina",
"Li, Minghan",
"Lin, Jimmy"
] | CELI: Simple yet Effective Approach to Enhance Out-of-Domain Generalization of Cross-Encoders. | naacl-short.16 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-short.17.bib | https://aclanthology.org/2024.naacl-short.17/ | @inproceedings{do-etal-2024-contrastivemix,
title = "{C}ontrastive{M}ix: Overcoming Code-Mixing Dilemma in Cross-Lingual Transfer for Information Retrieval",
author = "Do, Junggeun and
Lee, Jaeseong and
Hwang, Seung-won",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.17",
doi = "10.18653/v1/2024.naacl-short.17",
pages = "197--204",
abstract = "Multilingual pretrained language models (mPLMs) have been widely adopted in cross-lingual transfer, and code-mixing has demonstrated effectiveness across various tasks in the absence of target language data. Our contribution involves an in-depth investigation into the counterproductive nature of training mPLMs on code-mixed data for information retrieval (IR). Our finding is that while code-mixing demonstrates a positive effect in aligning representations across languages, it hampers the IR-specific objective of matching representations between queries and relevant passages. To balance between positive and negative effects, we introduce ContrastiveMix, which disentangles contrastive loss between these conflicting objectives, thereby enhancing zero-shot IR performance. Specifically, we leverage both English and code-mixed data and employ two contrastive loss functions, by adding an additional contrastive loss that aligns embeddings of English data with their code-mixed counterparts in the query encoder. Our proposed ContrastiveMix exhibits statistically significant outperformance compared to mDPR, particularly in scenarios involving lower linguistic similarity, where the conflict between goals is more pronounced.",
}
| Multilingual pretrained language models (mPLMs) have been widely adopted in cross-lingual transfer, and code-mixing has demonstrated effectiveness across various tasks in the absence of target language data. Our contribution involves an in-depth investigation into the counterproductive nature of training mPLMs on code-mixed data for information retrieval (IR). Our finding is that while code-mixing demonstrates a positive effect in aligning representations across languages, it hampers the IR-specific objective of matching representations between queries and relevant passages. To balance between positive and negative effects, we introduce ContrastiveMix, which disentangles contrastive loss between these conflicting objectives, thereby enhancing zero-shot IR performance. Specifically, we leverage both English and code-mixed data and employ two contrastive loss functions, by adding an additional contrastive loss that aligns embeddings of English data with their code-mixed counterparts in the query encoder. Our proposed ContrastiveMix exhibits statistically significant outperformance compared to mDPR, particularly in scenarios involving lower linguistic similarity, where the conflict between goals is more pronounced. | [
"Do, Junggeun",
"Lee, Jaeseong",
"Hwang, Seung-won"
] | ContrastiveMix: Overcoming Code-Mixing Dilemma in Cross-Lingual Transfer for Information Retrieval | naacl-short.17 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-short.18.bib | https://aclanthology.org/2024.naacl-short.18/ | @inproceedings{raunak-etal-2024-slide,
title = "{SLIDE}: Reference-free Evaluation for Machine Translation using a Sliding Document Window",
author = "Raunak, Vikas and
Kocmi, Tom and
Post, Matt",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.18",
doi = "10.18653/v1/2024.naacl-short.18",
pages = "205--211",
abstract = "Reference-based metrics that operate at the sentence-level typically outperform quality estimation metrics, which have access only to the source and system output.This is unsurprising, since references resolve ambiguities that may be present in the source.In this paper, we investigate whether additional source context can effectively substitute for a reference.We present a metric named SLIDE (SLIding Document Evaluator), which operates on blocks of sentences. SLIDE leverages a moving window that slides over each document in the test set, feeding each chunk of sentences into an unmodified, off-the-shelf quality estimation model.We find that SLIDE obtains significantly higher pairwise system accuracy than its sentence-level baseline, in some cases even eliminating the gap with reference-base metrics.This suggests that source context may provide the same information as a human reference in disambiguating source ambiguities. This finding is especially pertinent for reference-free document-level evaluation, wherein SLIDE could provide higher-quality pairwise system assessments while only requiring document boundary annotations.",
}
| Reference-based metrics that operate at the sentence-level typically outperform quality estimation metrics, which have access only to the source and system output. This is unsurprising, since references resolve ambiguities that may be present in the source. In this paper, we investigate whether additional source context can effectively substitute for a reference. We present a metric named SLIDE (SLIding Document Evaluator), which operates on blocks of sentences. SLIDE leverages a moving window that slides over each document in the test set, feeding each chunk of sentences into an unmodified, off-the-shelf quality estimation model. We find that SLIDE obtains significantly higher pairwise system accuracy than its sentence-level baseline, in some cases even eliminating the gap with reference-based metrics. This suggests that source context may provide the same information as a human reference in disambiguating source ambiguities. This finding is especially pertinent for reference-free document-level evaluation, wherein SLIDE could provide higher-quality pairwise system assessments while only requiring document boundary annotations. | [
"Raunak, Vikas",
"Kocmi, Tom",
"Post, Matt"
] | SLIDE: Reference-free Evaluation for Machine Translation using a Sliding Document Window | naacl-short.18 | Poster | 2309.08832 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.19.bib | https://aclanthology.org/2024.naacl-short.19/ | @inproceedings{zou-etal-2024-separately,
title = "Separately Parameterizing Singleton Detection Improves End-to-end Neural Coreference Resolution",
author = "Zou, Xiyuan and
Li, Yiran and
Porada, Ian and
Cheung, Jackie",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.19",
doi = "10.18653/v1/2024.naacl-short.19",
pages = "212--219",
abstract = "Current end-to-end coreference resolution models combine detection of singleton mentions and antecedent linking into a single step. In contrast, singleton detection was often treated as a separate step in the pre-neural era. In this work, we show that separately parameterizing these two sub-tasks also benefits end-to-end neural coreference systems. Specifically, we add a singleton detector to the coarse-to-fine (C2F) coreference model, and design an anaphoricity-aware span embedding and singleton detection loss. Our method significantly improves model performance on OntoNotes and four additional datasets.",
}
| Current end-to-end coreference resolution models combine detection of singleton mentions and antecedent linking into a single step. In contrast, singleton detection was often treated as a separate step in the pre-neural era. In this work, we show that separately parameterizing these two sub-tasks also benefits end-to-end neural coreference systems. Specifically, we add a singleton detector to the coarse-to-fine (C2F) coreference model, and design an anaphoricity-aware span embedding and singleton detection loss. Our method significantly improves model performance on OntoNotes and four additional datasets. | [
"Zou, Xiyuan",
"Li, Yiran",
"Porada, Ian",
"Cheung, Jackie"
] | Separately Parameterizing Singleton Detection Improves End-to-end Neural Coreference Resolution | naacl-short.19 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-short.20.bib | https://aclanthology.org/2024.naacl-short.20/ | @inproceedings{kishore-he-2024-unveiling,
title = "Unveiling Divergent Inductive Biases of {LLM}s on Temporal Data",
author = "Kishore, Sindhu and
He, Hangfeng",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.20",
doi = "10.18653/v1/2024.naacl-short.20",
pages = "220--228",
abstract = "Unraveling the intricate details of events in natural language necessitates a subtle understanding of temporal dynamics. Despite the adeptness of Large Language Models (LLMs) in discerning patterns and relationships from data, their inherent comprehension of temporal dynamics remains a formidable challenge. This research meticulously explores these intrinsic challenges within LLMs, with a specific emphasis on evaluating the performance of GPT-3.5 and GPT-4 models in the analysis of temporal data. Employing two distinct prompt types, namely Question Answering (QA) format and Textual Entailment (TE) format, our analysis probes into both implicit and explicit events. The findings underscore noteworthy trends, revealing disparities in the performance of GPT-3.5 and GPT-4. Notably, biases toward specific temporal relationships come to light, with GPT-3.5 demonstrating a preference for {``}AFTER{''} in the QA format for both implicit and explicit events, while GPT-4 leans towards {``}BEFORE{''}. Furthermore, a consistent pattern surfaces wherein GPT-3.5 tends towards {``}TRUE{''}, and GPT-4 exhibits a preference for {``}FALSE{''} in the TE format for both implicit and explicit events. This persistent discrepancy between GPT-3.5 and GPT-4 in handling temporal data highlights the intricate nature of inductive bias in LLMs, suggesting that the evolution of these models may not merely mitigate bias but may introduce new layers of complexity.",
}
| Unraveling the intricate details of events in natural language necessitates a subtle understanding of temporal dynamics. Despite the adeptness of Large Language Models (LLMs) in discerning patterns and relationships from data, their inherent comprehension of temporal dynamics remains a formidable challenge. This research meticulously explores these intrinsic challenges within LLMs, with a specific emphasis on evaluating the performance of GPT-3.5 and GPT-4 models in the analysis of temporal data. Employing two distinct prompt types, namely Question Answering (QA) format and Textual Entailment (TE) format, our analysis probes into both implicit and explicit events. The findings underscore noteworthy trends, revealing disparities in the performance of GPT-3.5 and GPT-4. Notably, biases toward specific temporal relationships come to light, with GPT-3.5 demonstrating a preference for {``}AFTER{''} in the QA format for both implicit and explicit events, while GPT-4 leans towards {``}BEFORE{''}. Furthermore, a consistent pattern surfaces wherein GPT-3.5 tends towards {``}TRUE{''}, and GPT-4 exhibits a preference for {``}FALSE{''} in the TE format for both implicit and explicit events. This persistent discrepancy between GPT-3.5 and GPT-4 in handling temporal data highlights the intricate nature of inductive bias in LLMs, suggesting that the evolution of these models may not merely mitigate bias but may introduce new layers of complexity. | [
"Kishore, Sindhu",
"He, Hangfeng"
] | Unveiling Divergent Inductive Biases of LLMs on Temporal Data | naacl-short.20 | Poster | 2404.01453 | [
"https://github.com/sindhukrao/llm_temporal_bias"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.21.bib | https://aclanthology.org/2024.naacl-short.21/ | @inproceedings{chiang-etal-2024-retrieval,
title = "On Retrieval Augmentation and the Limitations of Language Model Training",
author = "Chiang, Ting-Rui and
Yu, Xinyan and
Robinson, Joshua and
Liu, Ollie and
Lee, Isabelle and
Yogatama, Dani",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.21",
doi = "10.18653/v1/2024.naacl-short.21",
pages = "229--238",
abstract = "Augmenting a language model (LM) with $k$-nearest neighbors ($k$NN) retrieval on its training data alone can decrease its perplexity, though the underlying reasons for this remain elusive. In this work, we rule out one previously posited possibility {---} the {``}softmax bottleneck.{''} We then create a new dataset to evaluate LM generalization ability in the setting where training data contains additional information that is not causally relevant. This task is challenging even for GPT-3.5 Turbo. We show that, for both GPT-2 and Mistral 7B, $k$NN retrieval augmentation consistently improves per formance in this setting. Finally, to make $k$NN retrieval more accessible, we propose using amulti-layer perceptron model that maps datastore keys to values as a drop-in replacement for traditional retrieval. This reduces storage costsby over 25x.",
}
| Augmenting a language model (LM) with $k$-nearest neighbors ($k$NN) retrieval on its training data alone can decrease its perplexity, though the underlying reasons for this remain elusive. In this work, we rule out one previously posited possibility {---} the {``}softmax bottleneck.{''} We then create a new dataset to evaluate LM generalization ability in the setting where training data contains additional information that is not causally relevant. This task is challenging even for GPT-3.5 Turbo. We show that, for both GPT-2 and Mistral 7B, $k$NN retrieval augmentation consistently improves performance in this setting. Finally, to make $k$NN retrieval more accessible, we propose using a multi-layer perceptron model that maps datastore keys to values as a drop-in replacement for traditional retrieval. This reduces storage costs by over 25x. | [
"Chiang, Ting-Rui",
"Yu, Xinyan",
"Robinson, Joshua",
"Liu, Ollie",
"Lee, Isabelle",
"Yogatama, Dani"
] | On Retrieval Augmentation and the Limitations of Language Model Training | naacl-short.21 | Poster | 2311.09615 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.22.bib | https://aclanthology.org/2024.naacl-short.22/ | @inproceedings{zhou-etal-2024-gendecider,
title = "{G}en{D}ecider: Integrating {``}None of the Candidates{''} Judgments in Zero-Shot Entity Linking Re-ranking",
author = "Zhou, Kang and
Li, Yuepei and
Wang, Qing and
Qiao, Qiao and
Li, Qi",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.22",
doi = "10.18653/v1/2024.naacl-short.22",
pages = "239--245",
abstract = "We introduce GenDecider, a novel re-ranking approach for Zero-Shot Entity Linking (ZSEL), built on the Llama model. It innovatively detects scenarios where the correct entity is not among the retrieved candidates, a common oversight in existing re-ranking methods. By autoregressively generating outputs based on the context of the entity mention and the candidate entities, GenDecider significantly enhances disambiguation, improving the accuracy and reliability of ZSEL systems, as demonstrated on the benchmark ZESHEL dataset. Our code is available at https://github.com/kangISU/GenDecider.",
}
| We introduce GenDecider, a novel re-ranking approach for Zero-Shot Entity Linking (ZSEL), built on the Llama model. It innovatively detects scenarios where the correct entity is not among the retrieved candidates, a common oversight in existing re-ranking methods. By autoregressively generating outputs based on the context of the entity mention and the candidate entities, GenDecider significantly enhances disambiguation, improving the accuracy and reliability of ZSEL systems, as demonstrated on the benchmark ZESHEL dataset. Our code is available at https://github.com/kangISU/GenDecider. | [
"Zhou, Kang",
"Li, Yuepei",
"Wang, Qing",
"Qiao, Qiao",
"Li, Qi"
] | GenDecider: Integrating “None of the Candidates” Judgments in Zero-Shot Entity Linking Re-ranking | naacl-short.22 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-short.23.bib | https://aclanthology.org/2024.naacl-short.23/ | @inproceedings{ji-etal-2024-advancing,
title = "Advancing the Robustness of Large Language Models through Self-Denoised Smoothing",
author = "Ji, Jiabao and
Hou, Bairu and
Zhang, Zhen and
Zhang, Guanhua and
Fan, Wenqi and
Li, Qing and
Zhang, Yang and
Liu, Gaowen and
Liu, Sijia and
Chang, Shiyu",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.23",
doi = "10.18653/v1/2024.naacl-short.23",
pages = "246--257",
abstract = "Although large language models (LLMs) have achieved significant success, their vulnerability to adversarial perturbations, including recent jailbreak attacks, has raised considerable concerns. However, the increasing size of these models and their limited access make improving their robustness a challenging task. Among various defense strategies, randomized smoothing has shown great potential for LLMs, as it does not require full access to the model{'}s parameters or fine-tuning via adversarial training. However, randomized smoothing involves adding noise to the input before model prediction, and the final model{'}s robustness largely depends on the model{'}s performance on these noise-corrupted data. Its effectiveness is often limited by the model{'}s sub-optimal performance on noisy data. To address this issue, we propose to leverage the multitasking nature of LLMs to first denoise the noisy inputs and then to make predictions based on these denoised versions. We call this procedure self-denoised smoothing. Unlike previous denoised smoothing techniques in computer vision, which require training a separate model to enhance the robustness of LLMs, our method offers significantly better efficiency and flexibility. Our experimental results indicate that our method surpasses existing methods in both empirical and certified robustness in defending against adversarial attacks for both downstream tasks and human alignments (i.e., jailbreak attacks). Our code is publicly available at https://github.com/UCSB-NLP-Chang/SelfDenoise.",
}
| Although large language models (LLMs) have achieved significant success, their vulnerability to adversarial perturbations, including recent jailbreak attacks, has raised considerable concerns. However, the increasing size of these models and their limited access make improving their robustness a challenging task. Among various defense strategies, randomized smoothing has shown great potential for LLMs, as it does not require full access to the model{'}s parameters or fine-tuning via adversarial training. However, randomized smoothing involves adding noise to the input before model prediction, and the final model{'}s robustness largely depends on the model{'}s performance on these noise-corrupted data. Its effectiveness is often limited by the model{'}s sub-optimal performance on noisy data. To address this issue, we propose to leverage the multitasking nature of LLMs to first denoise the noisy inputs and then to make predictions based on these denoised versions. We call this procedure self-denoised smoothing. Unlike previous denoised smoothing techniques in computer vision, which require training a separate model to enhance the robustness of LLMs, our method offers significantly better efficiency and flexibility. Our experimental results indicate that our method surpasses existing methods in both empirical and certified robustness in defending against adversarial attacks for both downstream tasks and human alignments (i.e., jailbreak attacks). Our code is publicly available at https://github.com/UCSB-NLP-Chang/SelfDenoise. | [
"Ji, Jiabao",
"Hou, Bairu",
"Zhang, Zhen",
"Zhang, Guanhua",
"Fan, Wenqi",
"Li, Qing",
"Zhang, Yang",
"Liu, Gaowen",
"Liu, Sijia",
"Chang, Shiyu"
] | Advancing the Robustness of Large Language Models through Self-Denoised Smoothing | naacl-short.23 | Poster | 2404.12274 | [
"https://github.com/ucsb-nlp-chang/selfdenoise"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.24.bib | https://aclanthology.org/2024.naacl-short.24/ | @inproceedings{dorbala-etal-2024-llms,
title = "Can {LLM}{'}s Generate Human-Like Wayfinding Instructions? Towards Platform-Agnostic Embodied Instruction Synthesis",
author = "Dorbala, Vishnu Sashank and
Chowdhury, Sanjoy and
Manocha, Dinesh",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.24",
doi = "10.18653/v1/2024.naacl-short.24",
pages = "258--271",
abstract = "We present a novel approach to automatically synthesize {``}wayfinding instructions{''} for an embodied robot agent. In contrast to prior approaches that are heavily reliant on human-annotated datasets designed exclusively for specific simulation platforms, our algorithm uses in-context learning to condition an LLM to generate instructions using just a few references. Using an LLM-based Visual Question Answering strategy, we gather detailed information about the environment which is used by the LLM for instruction synthesis. We implement our approach on multiple simulation platforms including Matterport3D, AI Habitat and ThreeDWorld, thereby demonstrating its platform-agnostic nature. We subjectively evaluate our approach via a user study and observe that 83.3{\%} of users find the synthesized instructions accurately capture the details of the environment and show characteristics similar to those of human-generated instructions. Further, we conduct zero-shot navigation with multiple approaches on the REVERIE dataset using the generated instructions, and observe very close correlation with the baseline on standard success metrics ({\textless} 1{\%} change in SR), quantifying the viability of generated instructions in replacing human-annotated data. We finally discuss the applicability of our approach in enabling a generalizable evaluation of embodied navigation policies. To the best of our knowledge, ours is the first LLM-driven approach capable of generating {``}human-like{''} instructions in a platform-agnostic manner, without training.",
}
| We present a novel approach to automatically synthesize {``}wayfinding instructions{''} for an embodied robot agent. In contrast to prior approaches that are heavily reliant on human-annotated datasets designed exclusively for specific simulation platforms, our algorithm uses in-context learning to condition an LLM to generate instructions using just a few references. Using an LLM-based Visual Question Answering strategy, we gather detailed information about the environment which is used by the LLM for instruction synthesis. We implement our approach on multiple simulation platforms including Matterport3D, AI Habitat and ThreeDWorld, thereby demonstrating its platform-agnostic nature. We subjectively evaluate our approach via a user study and observe that 83.3{\%} of users find the synthesized instructions accurately capture the details of the environment and show characteristics similar to those of human-generated instructions. Further, we conduct zero-shot navigation with multiple approaches on the REVERIE dataset using the generated instructions, and observe very close correlation with the baseline on standard success metrics ({\textless} 1{\%} change in SR), quantifying the viability of generated instructions in replacing human-annotated data. We finally discuss the applicability of our approach in enabling a generalizable evaluation of embodied navigation policies. To the best of our knowledge, ours is the first LLM-driven approach capable of generating {``}human-like{''} instructions in a platform-agnostic manner, without training. | [
"Dorbala, Vishnu Sashank",
"Chowdhury, Sanjoy",
"Manocha, Dinesh"
] | Can LLM's Generate Human-Like Wayfinding Instructions? Towards Platform-Agnostic Embodied Instruction Synthesis | naacl-short.24 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-short.25.bib | https://aclanthology.org/2024.naacl-short.25/ | @inproceedings{nawrath-etal-2024-role,
title = "On the Role of Summary Content Units in Text Summarization Evaluation",
author = "Nawrath, Marcel and
Nowak, Agnieszka and
Ratz, Tristan and
Walenta, Danilo and
Opitz, Juri and
Ribeiro, Leonardo and
Sedoc, Jo{\~a}o and
Deutsch, Daniel and
Mille, Simon and
Liu, Yixin and
Gehrmann, Sebastian and
Zhang, Lining and
Mahamood, Saad and
Clinciu, Miruna and
Chandu, Khyathi and
Hou, Yufang",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.25",
doi = "10.18653/v1/2024.naacl-short.25",
pages = "272--281",
abstract = "At the heart of the Pyramid evaluation method for text summarization lie human written summary content units (SCUs). These SCUs areconcise sentences that decompose a summary into small facts. Such SCUs can be used to judge the quality of a candidate summary, possibly partially automated via natural language inference (NLI) systems. Interestingly, with the aim to fully automate the Pyramid evaluation, Zhang and Bansal (2021) show that SCUs can be approximated by automatically generated semantic role triplets (STUs). However, several questions currently lack answers, in particular: i) Are there other ways of approximating SCUs that can offer advantages?ii) Under which conditions are SCUs (or their approximations) offering the most value? In this work, we examine two novel strategiesto approximate SCUs: generating SCU approximations from AMR meaning representations (SMUs) and from large language models (SGUs), respectively. We find that while STUs and SMUs are competitive, the best approximation quality is achieved by SGUs. We also show through a simple sentence-decomposition baseline (SSUs) that SCUs (and their approximations) offer the most value when rankingshort summaries, but may not help as much when ranking systems or longer summaries.",
}
| At the heart of the Pyramid evaluation method for text summarization lie human written summary content units (SCUs). These SCUs are concise sentences that decompose a summary into small facts. Such SCUs can be used to judge the quality of a candidate summary, possibly partially automated via natural language inference (NLI) systems. Interestingly, with the aim to fully automate the Pyramid evaluation, Zhang and Bansal (2021) show that SCUs can be approximated by automatically generated semantic role triplets (STUs). However, several questions currently lack answers, in particular: i) Are there other ways of approximating SCUs that can offer advantages? ii) Under which conditions are SCUs (or their approximations) offering the most value? In this work, we examine two novel strategies to approximate SCUs: generating SCU approximations from AMR meaning representations (SMUs) and from large language models (SGUs), respectively. We find that while STUs and SMUs are competitive, the best approximation quality is achieved by SGUs. We also show through a simple sentence-decomposition baseline (SSUs) that SCUs (and their approximations) offer the most value when ranking short summaries, but may not help as much when ranking systems or longer summaries. | [
"Nawrath, Marcel",
"Nowak, Agnieszka",
"Ratz, Tristan",
"Walenta, Danilo",
"Opitz, Juri",
"Ribeiro, Leonardo",
"Sedoc, Jo{\\~a}o",
"Deutsch, Daniel",
"Mille, Simon",
"Liu, Yixin",
"Gehrmann, Sebastian",
"Zhang, Lining",
"Mahamood, Saad",
"Clinciu, Miruna",
"Ch",
"u, Khyathi",
"Hou, Yufang"
] | On the Role of Summary Content Units in Text Summarization Evaluation | naacl-short.25 | Poster | 2404.01701 | [
"https://github.com/tristanratz/scu-text-evaluation"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.26.bib | https://aclanthology.org/2024.naacl-short.26/ | @inproceedings{samuel-etal-2024-room,
title = "More room for language: Investigating the effect of retrieval on language models",
author = "Samuel, David and
Charpentier, Lucas and
Wold, Sondre",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.26",
doi = "10.18653/v1/2024.naacl-short.26",
pages = "282--305",
abstract = "Retrieval-augmented language models pose a promising alternative to standard language modeling. During pretraining, these models search in a corpus of documents for contextually relevant information that could aid the language modeling objective. We introduce an {`}ideal retrieval{'} methodology to study these models in a fully controllable setting. We conduct an extensive evaluation to examine how retrieval augmentation affects the behavior of the underlying language model. Among other things, we observe that these models: (i) save substantially less world knowledge in their weights, (ii) are better at understanding local context and inter-word dependencies, but (iii) are worse at comprehending global context.",
}
| Retrieval-augmented language models pose a promising alternative to standard language modeling. During pretraining, these models search in a corpus of documents for contextually relevant information that could aid the language modeling objective. We introduce an {`}ideal retrieval{'} methodology to study these models in a fully controllable setting. We conduct an extensive evaluation to examine how retrieval augmentation affects the behavior of the underlying language model. Among other things, we observe that these models: (i) save substantially less world knowledge in their weights, (ii) are better at understanding local context and inter-word dependencies, but (iii) are worse at comprehending global context. | [
"Samuel, David",
"Charpentier, Lucas",
"Wold, Sondre"
] | More room for language: Investigating the effect of retrieval on language models | naacl-short.26 | Poster | 2404.10939 | [
"https://github.com/ltgoslo/more-room-for-language"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.27.bib | https://aclanthology.org/2024.naacl-short.27/ | @inproceedings{gautam-etal-2024-discourse,
title = "Discourse-Aware In-Context Learning for Temporal Expression Normalization",
author = {Gautam, Akash and
Lange, Lukas and
Str{\"o}tgen, Jannik},
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.27",
doi = "10.18653/v1/2024.naacl-short.27",
pages = "306--315",
abstract = "Temporal expression (TE) normalization is a well-studied problem. However, the predominately used rule-based systems are highly restricted to specific settings, and upcoming machine learning approaches suffer from a lack of labeled data. In this work, we explore the feasibility of proprietary and open-source large language models (LLMs) for TE normalization using in-context learning to inject task, document, and example information into the model. We explore various sample selection strategies to retrieve the most relevant set of examples. By using a window-based prompt design approach, we can perform TE normalization across sentences, while leveraging the LLM knowledge without training the model.Our experiments show competitive results to models designed for this task. In particular, our method achieves large performance improvements for non-standard settings by dynamically including relevant examples during inference.",
}
| Temporal expression (TE) normalization is a well-studied problem. However, the predominantly used rule-based systems are highly restricted to specific settings, and upcoming machine learning approaches suffer from a lack of labeled data. In this work, we explore the feasibility of proprietary and open-source large language models (LLMs) for TE normalization using in-context learning to inject task, document, and example information into the model. We explore various sample selection strategies to retrieve the most relevant set of examples. By using a window-based prompt design approach, we can perform TE normalization across sentences, while leveraging the LLM knowledge without training the model. Our experiments show competitive results to models designed for this task. In particular, our method achieves large performance improvements for non-standard settings by dynamically including relevant examples during inference. | [
"Gautam, Akash",
"Lange, Lukas",
"Str{\\\"o}tgen, Jannik"
] | Discourse-Aware In-Context Learning for Temporal Expression Normalization | naacl-short.27 | Poster | 2404.07775 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.28.bib | https://aclanthology.org/2024.naacl-short.28/ | @inproceedings{deshpande-etal-2024-contextualizing,
title = "Contextualizing Argument Quality Assessment with Relevant Knowledge",
author = "Deshpande, Darshan and
Sourati, Zhivar and
Ilievski, Filip and
Morstatter, Fred",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.28",
doi = "10.18653/v1/2024.naacl-short.28",
pages = "316--326",
abstract = "Automatic assessment of the quality of arguments has been recognized as a challenging task with significant implications for misinformation and targeted speech. While real-world arguments are tightly anchored in context, existing computational methods analyze their quality in isolation, which affects their accuracy and generalizability. We propose SPARK: a novel method for scoring argument quality based on contextualization via relevant knowledge. We devise four augmentations that leverage large language models to provide feedback, infer hidden assumptions, supply a similar-quality argument, or give a counter-argument. SPARK uses a dual-encoder Transformer architecture to enable the original argument and its augmentation to be considered jointly. Our experiments in both in-domain and zero-shot setups show that SPARK consistently outperforms existing techniques across multiple metrics",
}
| Automatic assessment of the quality of arguments has been recognized as a challenging task with significant implications for misinformation and targeted speech. While real-world arguments are tightly anchored in context, existing computational methods analyze their quality in isolation, which affects their accuracy and generalizability. We propose SPARK: a novel method for scoring argument quality based on contextualization via relevant knowledge. We devise four augmentations that leverage large language models to provide feedback, infer hidden assumptions, supply a similar-quality argument, or give a counter-argument. SPARK uses a dual-encoder Transformer architecture to enable the original argument and its augmentation to be considered jointly. Our experiments in both in-domain and zero-shot setups show that SPARK consistently outperforms existing techniques across multiple metrics. | [
"Deshp",
"e, Darshan",
"Sourati, Zhivar",
"Ilievski, Filip",
"Morstatter, Fred"
] | Contextualizing Argument Quality Assessment with Relevant Knowledge | naacl-short.28 | Poster | 2305.12280 | [
"https://github.com/usc-isi-i2/forecast-argument"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.29.bib | https://aclanthology.org/2024.naacl-short.29/ | @inproceedings{nottingham-etal-2024-selective,
title = "Selective Perception: Learning Concise State Descriptions for Language Model Actors",
author = "Nottingham, Kolby and
Razeghi, Yasaman and
Kim, Kyungmin and
Lanier, Jb and
Baldi, Pierre and
Fox, Roy and
Singh, Sameer",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.29",
doi = "10.18653/v1/2024.naacl-short.29",
pages = "327--341",
abstract = "The latest large language models (LMs) support increasingly longer contexts. While this trend permits using substantial amounts of text with SOTA LMs, requiring these large LMs to process potentially redundant or irrelevant data needlessly increases inference time and cost. To remedy this problem, we propose BLINDER, a method that leverages a small finetuned LM to sample the minimal set of input features that maximizes the performance of a downstream LM. BLINDER trains an LM with a value head to estimate the likelihood of optimal outputs from a downstream LM given an input. We evaluate BLINDER on embodied decision making tasks with notoriously verbose state descriptions: NetHack and robot planning. BLINDER reduces the length of LM actor input by 87{\%} and 99{\%} while improving task success rates by 158{\%} and 54{\%} on NetHack and robot planning respectively which represents substantial inference cost savings while actually increasing performance.",
}
| The latest large language models (LMs) support increasingly longer contexts. While this trend permits using substantial amounts of text with SOTA LMs, requiring these large LMs to process potentially redundant or irrelevant data needlessly increases inference time and cost. To remedy this problem, we propose BLINDER, a method that leverages a small finetuned LM to sample the minimal set of input features that maximizes the performance of a downstream LM. BLINDER trains an LM with a value head to estimate the likelihood of optimal outputs from a downstream LM given an input. We evaluate BLINDER on embodied decision making tasks with notoriously verbose state descriptions: NetHack and robot planning. BLINDER reduces the length of LM actor input by 87{\%} and 99{\%} while improving task success rates by 158{\%} and 54{\%} on NetHack and robot planning, respectively, which represents substantial inference cost savings while actually increasing performance. | [
"Nottingham, Kolby",
"Razeghi, Yasaman",
"Kim, Kyungmin",
"Lanier, Jb",
"Baldi, Pierre",
"Fox, Roy",
"Singh, Sameer"
] | Selective Perception: Learning Concise State Descriptions for Language Model Actors | naacl-short.29 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-short.30.bib | https://aclanthology.org/2024.naacl-short.30/ | @inproceedings{petryk-etal-2024-aloha,
title = "{ALOH}a: A New Measure for Hallucination in Captioning Models",
author = "Petryk, Suzanne and
Chan, David and
Kachinthaya, Anish and
Zou, Haodi and
Canny, John and
Gonzalez, Joseph and
Darrell, Trevor",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.30",
doi = "10.18653/v1/2024.naacl-short.30",
pages = "342--357",
abstract = "Despite recent advances in multimodal pre-training for visual description, state-of-the-art models still produce captions containing errors, such as hallucinating objects not present in a scene. The existing prominent metric for object hallucination, CHAIR, is limited to a fixed set of MS COCO objects and synonyms. In this work, we propose a modernized open-vocabulary metric, ALOHa, which leverages large language models (LLMs) to measure object hallucinations. Specifically, we use an LLM to extract groundable objects from a candidate caption, measure their semantic similarity to reference objects from captions and object detections, and use Hungarian matching to produce a final hallucination score. We show that ALOHa correctly identifies 13.6{\%} more hallucinated objects than CHAIR on HAT, a new gold-standard subset of MS COCO Captions annotated for hallucinations, and 30.8{\%} more on nocaps, where objects extend beyond MS COCO categories.",
}
| Despite recent advances in multimodal pre-training for visual description, state-of-the-art models still produce captions containing errors, such as hallucinating objects not present in a scene. The existing prominent metric for object hallucination, CHAIR, is limited to a fixed set of MS COCO objects and synonyms. In this work, we propose a modernized open-vocabulary metric, ALOHa, which leverages large language models (LLMs) to measure object hallucinations. Specifically, we use an LLM to extract groundable objects from a candidate caption, measure their semantic similarity to reference objects from captions and object detections, and use Hungarian matching to produce a final hallucination score. We show that ALOHa correctly identifies 13.6{\%} more hallucinated objects than CHAIR on HAT, a new gold-standard subset of MS COCO Captions annotated for hallucinations, and 30.8{\%} more on nocaps, where objects extend beyond MS COCO categories. | [
"Petryk, Suzanne",
"Chan, David",
"Kachinthaya, Anish",
"Zou, Haodi",
"Canny, John",
"Gonzalez, Joseph",
"Darrell, Trevor"
] | ALOHa: A New Measure for Hallucination in Captioning Models | naacl-short.30 | Oral | 2404.02904 | [
""
] | https://huggingface.co/papers/2404.02904 | 1 | 0 | 0 | 7 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-short.31.bib | https://aclanthology.org/2024.naacl-short.31/ | @inproceedings{zhuang-etal-2024-beyond,
title = "Beyond Yes and No: Improving Zero-Shot {LLM} Rankers via Scoring Fine-Grained Relevance Labels",
author = "Zhuang, Honglei and
Qin, Zhen and
Hui, Kai and
Wu, Junru and
Yan, Le and
Wang, Xuanhui and
Bendersky, Michael",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.31",
doi = "10.18653/v1/2024.naacl-short.31",
pages = "358--370",
abstract = "Zero-shot text rankers powered by recent LLMs achieve remarkable ranking performance by simply prompting. Existing prompts for pointwise LLM rankers mostly ask the model to choose from binary relevance labels like {``}Yes{''} and {``}No{''}. However, the lack of intermediate relevance label options may cause the LLM to provide noisy or biased answers for documents that are partially relevant to the query. We propose to incorporate fine-grained relevance labels into the prompt for LLM rankers, enabling them to better differentiate among documents with different levels of relevance to the query and thus derive a more accurate ranking. We study two variants of the prompt template, coupled with different numbers of relevance levels. Our experiments on 8 BEIR data sets show that adding fine-grained relevance labels significantly improves the performance of LLM rankers.",
}
| Zero-shot text rankers powered by recent LLMs achieve remarkable ranking performance by simply prompting. Existing prompts for pointwise LLM rankers mostly ask the model to choose from binary relevance labels like {``}Yes{''} and {``}No{''}. However, the lack of intermediate relevance label options may cause the LLM to provide noisy or biased answers for documents that are partially relevant to the query. We propose to incorporate fine-grained relevance labels into the prompt for LLM rankers, enabling them to better differentiate among documents with different levels of relevance to the query and thus derive a more accurate ranking. We study two variants of the prompt template, coupled with different numbers of relevance levels. Our experiments on 8 BEIR data sets show that adding fine-grained relevance labels significantly improves the performance of LLM rankers. | [
"Zhuang, Honglei",
"Qin, Zhen",
"Hui, Kai",
"Wu, Junru",
"Yan, Le",
"Wang, Xuanhui",
"Bendersky, Michael"
] | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | naacl-short.31 | Poster | 2310.14122 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
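The abstract above (naacl-short.31) replaces binary Yes/No judgments with graded relevance labels. The sketch below shows one way a pointwise ranker could turn per-label likelihoods into a score; the label set, the expected-relevance aggregation, and the `label_logprobs` callable (standing in for a constrained LLM call returning one log-probability per label) are all assumptions of this sketch.

```python
# Hedged sketch of pointwise ranking with fine-grained relevance labels.
import math
from typing import Callable, Dict, List

LABELS = {"Not Relevant": 0, "Somewhat Relevant": 1,
          "Highly Relevant": 2, "Perfectly Relevant": 3}

def expected_relevance(logprobs: Dict[str, float]) -> float:
    """Softmax over the label log-probs, then the expected label value."""
    z = max(logprobs.values())
    unnorm = {k: math.exp(v - z) for k, v in logprobs.items()}
    total = sum(unnorm.values())
    return sum(LABELS[k] * p / total for k, p in unnorm.items())

def rank(query: str, docs: List[str],
         label_logprobs: Callable[[str, str], Dict[str, float]]) -> List[str]:
    # Score each document independently (pointwise) and sort descending.
    return sorted(docs, reverse=True,
                  key=lambda d: expected_relevance(label_logprobs(query, d)))
```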
https://aclanthology.org/2024.naacl-short.32.bib | https://aclanthology.org/2024.naacl-short.32/ | @inproceedings{zhang-etal-2024-llm-driven,
title = "{LLM}-Driven Knowledge Injection Advances Zero-Shot and Cross-Target Stance Detection",
author = "Zhang, Zhao and
Li, Yiming and
Zhang, Jin and
Xu, Hui",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.32",
doi = "10.18653/v1/2024.naacl-short.32",
pages = "371--378",
abstract = "Stance detection aims at inferring an author{'}s attitude towards a specific target in a text. Prior methods mainly consider target-related background information for a better understanding of targets while neglecting the accompanying input texts. In this study, we propose to prompt Large Language Models (LLMs) to explicitly extract the relationship between paired text and target as contextual knowledge. We then inject such LLM-driven knowledge into a generation model BART to exploit the rich contexts and semantics. Moreover, to further enhance the decoding capability of BART, a novel prototypical contrastive scheme is designed to align input contents with stance labels. Our experimental results demonstrate the state-of-the-art performance across several publicly available datasets, showcasing effectiveness in both zero-shot and cross-target stance detection scenarios. We publicly release our code to facilitate future research.",
}
| Stance detection aims at inferring an author{'}s attitude towards a specific target in a text. Prior methods mainly consider target-related background information for a better understanding of targets while neglecting the accompanying input texts. In this study, we propose to prompt Large Language Models (LLMs) to explicitly extract the relationship between paired text and target as contextual knowledge. We then inject such LLM-driven knowledge into a generation model BART to exploit the rich contexts and semantics. Moreover, to further enhance the decoding capability of BART, a novel prototypical contrastive scheme is designed to align input contents with stance labels. Our experimental results demonstrate the state-of-the-art performance across several publicly available datasets, showcasing effectiveness in both zero-shot and cross-target stance detection scenarios. We publicly release our code to facilitate future research. | [
"Zhang, Zhao",
"Li, Yiming",
"Zhang, Jin",
"Xu, Hui"
] | LLM-Driven Knowledge Injection Advances Zero-Shot and Cross-Target Stance Detection | naacl-short.32 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-short.33.bib | https://aclanthology.org/2024.naacl-short.33/ | @inproceedings{iskander-etal-2024-leveraging,
title = "Leveraging Prototypical Representations for Mitigating Social Bias without Demographic Information",
author = "Iskander, Shadi and
Radinsky, Kira and
Belinkov, Yonatan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.33",
doi = "10.18653/v1/2024.naacl-short.33",
pages = "379--390",
abstract = "Mitigating social biases typically requires identifying the social groups associated with each data sample. In this paper, we present DAFair, a novel approach to address social bias in language models. Unlike traditional methods that rely on explicit demographic labels, our approach does not require any such information. Instead, we leverage predefined prototypical demographic texts and incorporate a regularization term during the fine-tuning process to mitigate bias in the model{'}s representations. Our empirical results across two tasks and two models demonstrate the effectiveness of our method compared to previous approaches that do not rely on labeled data. Moreover, with limited demographic-annotated data, our approach outperforms common debiasing approaches.",
}
| Mitigating social biases typically requires identifying the social groups associated with each data sample. In this paper, we present DAFair, a novel approach to address social bias in language models. Unlike traditional methods that rely on explicit demographic labels, our approach does not require any such information. Instead, we leverage predefined prototypical demographic texts and incorporate a regularization term during the fine-tuning process to mitigate bias in the model{'}s representations. Our empirical results across two tasks and two models demonstrate the effectiveness of our method compared to previous approaches that do not rely on labeled data. Moreover, with limited demographic-annotated data, our approach outperforms common debiasing approaches. | [
"Isk",
"er, Shadi",
"Radinsky, Kira",
"Belinkov, Yonatan"
] | Leveraging Prototypical Representations for Mitigating Social Bias without Demographic Information | naacl-short.33 | Poster | 2403.09516 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
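The DAFair abstract above mentions prototypical demographic texts and a regularization term, but the loss itself is not spelled out here. The sketch below is one plausible reading, not the paper's formulation: a sample's representation is pushed to be equally similar to every demographic prototype embedding via a KL-to-uniform penalty. Both `proto_embs` and the KL choice are assumptions.

```python
# Hedged sketch of a prototype-based bias regularizer (one plausible reading
# of DAFair's idea, not the paper's exact loss).
import math
import torch
import torch.nn.functional as F

def demographic_regularizer(h: torch.Tensor, proto_embs: torch.Tensor) -> torch.Tensor:
    """h: (batch, d) sample representations; proto_embs: (k, d) embeddings of
    predefined prototypical demographic texts. The penalty is zero when a
    sample is equally similar to every prototype."""
    sims = F.cosine_similarity(h.unsqueeze(1), proto_embs.unsqueeze(0), dim=-1)
    p = F.softmax(sims, dim=-1)                    # (batch, k)
    k = proto_embs.size(0)
    # KL(p || uniform): discourages any single demographic direction from
    # dominating the representation.
    return (p * (p.clamp_min(1e-12).log() - math.log(1.0 / k))).sum(-1).mean()
```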
https://aclanthology.org/2024.naacl-short.34.bib | https://aclanthology.org/2024.naacl-short.34/ | @inproceedings{yang-etal-2024-direct,
title = "Direct Preference Optimization for Neural Machine Translation with Minimum {B}ayes Risk Decoding",
author = "Yang, Guangyu and
Chen, Jinghong and
Lin, Weizhe and
Byrne, Bill",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.34",
doi = "10.18653/v1/2024.naacl-short.34",
pages = "391--398",
abstract = "Minimum Bayes Risk (MBR) decoding can significantly improve translation performance of Multilingual Large Language Models (MLLMs). However, MBR decoding is computationally expensive. We show how the recently developed Reinforcement Learning technique, Direct Preference Optimization (DPO), can fine-tune MLLMs to get the gains of MBR without any additional computation in inference. Our method uses only a small monolingual fine-tuning set and yields significantly improved performance on multiple NMT test sets compared to MLLMs without DPO.",
}
| Minimum Bayes Risk (MBR) decoding can significantly improve translation performance of Multilingual Large Language Models (MLLMs). However, MBR decoding is computationally expensive. We show how the recently developed Reinforcement Learning technique, Direct Preference Optimization (DPO), can fine-tune MLLMs to get the gains of MBR without any additional computation in inference. Our method uses only a small monolingual fine-tuning set and yields significantly improved performance on multiple NMT test sets compared to MLLMs without DPO. | [
"Yang, Guangyu",
"Chen, Jinghong",
"Lin, Weizhe",
"Byrne, Bill"
] | Direct Preference Optimization for Neural Machine Translation with Minimum Bayes Risk Decoding | naacl-short.34 | Poster | 2311.08380 | [
"https://github.com/bruceyg/dpo-mbr"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
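The record above pairs MBR decoding with DPO fine-tuning. The DPO objective itself is standard and sketched below; per the abstract, the (chosen, rejected) pair would be formed from MBR-ranked translations of a source sentence, and the summed sequence log-probabilities are assumed to be precomputed.

```python
# Sketch of the standard DPO loss; the MBR-based pairing is per the abstract,
# while batching and log-prob computation are assumed to happen upstream.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor, policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor, ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """All inputs: (batch,) summed sequence log-probabilities. "Chosen" /
    "rejected" here would be the MBR-best / MBR-worst translations."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    # Standard DPO objective: -log sigmoid(beta * margin of log-ratios).
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```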
https://aclanthology.org/2024.naacl-short.35.bib | https://aclanthology.org/2024.naacl-short.35/ | @inproceedings{mekala-etal-2024-echoprompt,
title = "{E}cho{P}rompt: Instructing the Model to Rephrase Queries for Improved In-context Learning",
author = "Mekala, Raja Sekhar Reddy and
Razeghi, Yasaman and
Singh, Sameer",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.35",
doi = "10.18653/v1/2024.naacl-short.35",
pages = "399--432",
abstract = "Language models are achieving impressive performance on various tasks by aggressively adopting inference-time prompting techniques,such as zero-shot and few-shot prompting. In this work, we introduce EchoPrompt, a simple yet effective approach that prompts the model to rephrase its queries before answering them. EchoPrompt is tailored for four scenarios, including standard and chain-of-thought prompting, in both zero-shot and few-shot settings. Experimental results show that EchoPrompt yields substantial improvementsacross all these settings for four families of causal language models. These improvements are observed across various numerical reasoning (e.g., GSM8K, SVAMP), reading comprehension (e.g., DROP), and logical reasoning (e.g., Coin flipping) tasks. On average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002 by 5{\%} in numerical tasks and 13{\%} in reading comprehension tasks. Our empirical results indicate that EchoPrompt is an effective technique that enhances in-context learning performance.",
}
| Language models are achieving impressive performance on various tasks by aggressively adopting inference-time prompting techniques, such as zero-shot and few-shot prompting. In this work, we introduce EchoPrompt, a simple yet effective approach that prompts the model to rephrase its queries before answering them. EchoPrompt is tailored for four scenarios, including standard and chain-of-thought prompting, in both zero-shot and few-shot settings. Experimental results show that EchoPrompt yields substantial improvements across all these settings for four families of causal language models. These improvements are observed across various numerical reasoning (e.g., GSM8K, SVAMP), reading comprehension (e.g., DROP), and logical reasoning (e.g., Coin flipping) tasks. On average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002 by 5{\%} in numerical tasks and 13{\%} in reading comprehension tasks. Our empirical results indicate that EchoPrompt is an effective technique that enhances in-context learning performance. | [
"Mekala, Raja Sekhar Reddy",
"Razeghi, Yasaman",
"Singh, Sameer"
] | EchoPrompt: Instructing the Model to Rephrase Queries for Improved In-context Learning | naacl-short.35 | Poster | 2309.10687 | [
"https://github.com/rajasekharmekala/query-rephrasing-subtask-cot"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
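EchoPrompt, per the abstract above, simply asks the model to rephrase the query before answering. A minimal zero-shot sketch follows; the exact instruction wording used in the paper is an assumption here, and `generate` stands in for any text-completion call.

```python
# Hedged sketch of EchoPrompt-style zero-shot prompting; the instruction
# wording is an assumption, not necessarily the paper's exact prompt.
from typing import Callable

def echo_prompt(question: str, generate: Callable[[str], str]) -> str:
    # The model is cued to first repeat the query in its own words, then
    # reason step by step to an answer.
    prompt = (f"Q: {question}\n"
              "A: Let's repeat the question and also think step by step.\n")
    return generate(prompt)
```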
https://aclanthology.org/2024.naacl-short.36.bib | https://aclanthology.org/2024.naacl-short.36/ | @inproceedings{behzad-etal-2024-leaf,
title = "{LEAF}: Language Learners{'} {E}nglish Essays and Feedback Corpus",
author = "Behzad, Shabnam and
Kashefi, Omid and
Somasundaran, Swapna",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.36",
doi = "10.18653/v1/2024.naacl-short.36",
pages = "433--442",
abstract = "This paper addresses the issue of automated feedback generation for English language learners by presenting a corpus of English essays and their corresponding feedback, called LEAF, collected from the {``}essayforum{''} website. The corpus comprises approximately 6K essay-feedback pairs, offering a diverse and valuable resource for developing personalized feedback generation systems that address the critical deficiencies within essays, spanning from rectifying grammatical errors to offering insights on argumentative aspects and organizational coherence. Using this corpus, we present and compare multiple feedback generation baselines. Our findings shed light on the challenges of providing personalized feedback and highlight the potential of the LEAF corpus in advancing automated essay evaluation.",
}
| This paper addresses the issue of automated feedback generation for English language learners by presenting a corpus of English essays and their corresponding feedback, called LEAF, collected from the {``}essayforum{''} website. The corpus comprises approximately 6K essay-feedback pairs, offering a diverse and valuable resource for developing personalized feedback generation systems that address the critical deficiencies within essays, spanning from rectifying grammatical errors to offering insights on argumentative aspects and organizational coherence. Using this corpus, we present and compare multiple feedback generation baselines. Our findings shed light on the challenges of providing personalized feedback and highlight the potential of the LEAF corpus in advancing automated essay evaluation. | [
"Behzad, Shabnam",
"Kashefi, Omid",
"Somasundaran, Swapna"
] | LEAF: Language Learners' English Essays and Feedback Corpus | naacl-short.36 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-short.37.bib | https://aclanthology.org/2024.naacl-short.37/ | @inproceedings{ebrahimi-wense-2024-zero,
title = "Zero-Shot vs. Translation-Based Cross-Lingual Transfer: The Case of Lexical Gaps",
author = "Ebrahimi, Abteen and
von der Wense, Katharina",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.37",
doi = "10.18653/v1/2024.naacl-short.37",
pages = "443--458",
abstract = "Cross-lingual transfer can be achieved through two main approaches: zero-shot transfer or machine translation (MT). While the former has been the dominant approach, both have been shown to be competitive. In this work, we compare the current performance and long-term viability of these methods. We leverage lexical gaps to create a multilingual question answering dataset, which provides a difficult domain for evaluation. Both approaches struggle in this setting, though zero-shot transfer performs better, as current MT outputs are not specific enough for the task. Using oracle translation offers the best performance, showing that this approach can perform well long-term, however current MT quality is a bottleneck. We also conduct an exploratory study to see if humans produce translations sufficient for the task with only general instructions. We find this to be true for the majority of translators, but not all. This indicates that while translation has the potential to outperform zero-shot approaches, creating MT models that generate accurate task-specific translations may not be straightforward.",
}
| Cross-lingual transfer can be achieved through two main approaches: zero-shot transfer or machine translation (MT). While the former has been the dominant approach, both have been shown to be competitive. In this work, we compare the current performance and long-term viability of these methods. We leverage lexical gaps to create a multilingual question answering dataset, which provides a difficult domain for evaluation. Both approaches struggle in this setting, though zero-shot transfer performs better, as current MT outputs are not specific enough for the task. Using oracle translation offers the best performance, showing that this approach can perform well long-term; however, current MT quality is a bottleneck. We also conduct an exploratory study to see if humans produce translations sufficient for the task with only general instructions. We find this to be true for the majority of translators, but not all. This indicates that while translation has the potential to outperform zero-shot approaches, creating MT models that generate accurate task-specific translations may not be straightforward. | [
"Ebrahimi, Abteen",
"von der Wense, Katharina"
] | Zero-Shot vs. Translation-Based Cross-Lingual Transfer: The Case of Lexical Gaps | naacl-short.37 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-short.38.bib | https://aclanthology.org/2024.naacl-short.38/ | @inproceedings{ohashi-etal-2024-true,
title = "On the True Distribution Approximation of Minimum {B}ayes-Risk Decoding",
author = "Ohashi, Atsumoto and
Honda, Ukyo and
Morimura, Tetsuro and
Jinnai, Yuu",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.38",
doi = "10.18653/v1/2024.naacl-short.38",
pages = "459--468",
abstract = "Minimum Bayes-risk (MBR) decoding has recently gained renewed attention in text generation.MBR decoding considers texts sampled from a model as pseudo-references and selects the text with the highest similarity to the others.Therefore, sampling is one of the key elements of MBR decoding, and previous studies reported that the performance varies by sampling methods.From a theoretical standpoint, this performance variation is likely tied to how closely the samples approximate the true distribution of references.However, this approximation has not been the subject of in-depth study.In this study, we propose using anomaly detection to measure the degree of approximation.We first closely examine the performance variation and then show that previous hypotheses about samples do not correlate well with the variation, but our introduced anomaly scores do.The results are the first to empirically support the link between the performance and the core assumption of MBR decoding.",
}
| Minimum Bayes-risk (MBR) decoding has recently gained renewed attention in text generation. MBR decoding considers texts sampled from a model as pseudo-references and selects the text with the highest similarity to the others. Therefore, sampling is one of the key elements of MBR decoding, and previous studies reported that the performance varies by sampling methods. From a theoretical standpoint, this performance variation is likely tied to how closely the samples approximate the true distribution of references. However, this approximation has not been the subject of in-depth study. In this study, we propose using anomaly detection to measure the degree of approximation. We first closely examine the performance variation and then show that previous hypotheses about samples do not correlate well with the variation, but our introduced anomaly scores do. The results are the first to empirically support the link between the performance and the core assumption of MBR decoding. | [
"Ohashi, Atsumoto",
"Honda, Ukyo",
"Morimura, Tetsuro",
"Jinnai, Yuu"
] | On the True Distribution Approximation of Minimum Bayes-Risk Decoding | naacl-short.38 | Poster | 2404.00752 | [
"https://github.com/cyberagentailab/mbr-anomaly"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
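The abstract above (naacl-short.38) restates the core MBR recipe: sampled texts act as pseudo-references, and the candidate most similar to the others wins. A self-contained sketch follows; the token-overlap F1 utility is a cheap stand-in for the BLEU/BLEURT-style utilities typically used, not the paper's choice.

```python
# Minimal sketch of sampling-based MBR decoding; the F1 utility is a stand-in
# for the similarity metrics used in practice.
from collections import Counter
from typing import List

def f1_utility(hyp: str, ref: str) -> float:
    """Token-overlap F1 between two strings."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(h.values()), overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def mbr_decode(samples: List[str]) -> str:
    # Every sample doubles as a pseudo-reference; pick the candidate with the
    # highest total utility against the pool.
    return max(samples, key=lambda c: sum(f1_utility(c, s) for s in samples))
```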
https://aclanthology.org/2024.naacl-short.39.bib | https://aclanthology.org/2024.naacl-short.39/ | @inproceedings{wang-etal-2024-rehearsal,
title = "Rehearsal-Free Modular and Compositional Continual Learning for Language Models",
author = {Wang, Mingyang and
Adel, Heike and
Lange, Lukas and
Str{\"o}tgen, Jannik and
Schuetze, Hinrich},
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.39",
doi = "10.18653/v1/2024.naacl-short.39",
pages = "469--480",
abstract = "Continual learning aims at incrementally acquiring new knowledge while not forgetting existing knowledge. To overcome catastrophic forgetting, methods are either rehearsal-based, i.e., store data examples from previous tasks for data replay, or isolate parameters dedicated to each task. However, rehearsal-based methods raise privacy and memory issues, and parameter-isolation continual learning does not consider interaction between tasks, thus hindering knowledge transfer. In this work, we propose MoCL, a rehearsal-free **Mo**dular and **C**ompositional Continual **L**earning framework which continually adds new modules to language models and composes them with existing modules. Experiments on various benchmarks show that MoCL outperforms state of the art and effectively facilitates knowledge transfer.",
}
| Continual learning aims at incrementally acquiring new knowledge while not forgetting existing knowledge. To overcome catastrophic forgetting, methods are either rehearsal-based, i.e., store data examples from previous tasks for data replay, or isolate parameters dedicated to each task. However, rehearsal-based methods raise privacy and memory issues, and parameter-isolation continual learning does not consider interaction between tasks, thus hindering knowledge transfer. In this work, we propose MoCL, a rehearsal-free **Mo**dular and **C**ompositional Continual **L**earning framework which continually adds new modules to language models and composes them with existing modules. Experiments on various benchmarks show that MoCL outperforms state of the art and effectively facilitates knowledge transfer. | [
"Wang, Mingyang",
"Adel, Heike",
"Lange, Lukas",
"Str{\\\"o}tgen, Jannik",
"Schuetze, Hinrich"
] | Rehearsal-Free Modular and Compositional Continual Learning for Language Models | naacl-short.39 | Poster | 2404.00790 | [
"https://github.com/boschresearch/mocl-naacl-2024"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
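MoCL, as summarized above, keeps adding task-specific modules and composing them with existing ones. The sketch below is a hedged rendering of that idea using bottleneck adapters and softmax mixing weights; both design choices are assumptions of this sketch rather than the paper's architecture.

```python
# Hedged sketch of modular, compositional continual learning in the spirit of
# MoCL; the adapter form and softmax composition are assumptions.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small bottleneck module added per task (an assumption of this sketch)."""
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_model, bottleneck), nn.ReLU(),
                                 nn.Linear(bottleneck, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class ModularComposer(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.d_model = d_model
        self.task_modules = nn.ModuleList()
        self.mix_logits = nn.ParameterList()

    def add_task(self) -> None:
        # Freeze everything learned so far; only the new module and the new
        # composition weights remain trainable (rehearsal-free).
        for m in self.task_modules:
            m.requires_grad_(False)
        self.task_modules.append(Adapter(self.d_model))
        self.mix_logits.append(nn.Parameter(torch.zeros(len(self.task_modules))))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Compose all modules for the current (latest) task.
        w = torch.softmax(self.mix_logits[-1], dim=0)
        return x + sum(wi * m(x) for wi, m in zip(w, self.task_modules))
```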
https://aclanthology.org/2024.naacl-short.40.bib | https://aclanthology.org/2024.naacl-short.40/ | @inproceedings{chalkidis-brandl-2024-llama,
title = "Llama meets {EU}: Investigating the {E}uropean political spectrum through the lens of {LLM}s",
author = "Chalkidis, Ilias and
Brandl, Stephanie",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.40",
doi = "10.18653/v1/2024.naacl-short.40",
pages = "481--498",
abstract = "Instruction-finetuned Large Language Models inherit clear political leanings that have been shown to influence downstream task performance. We expand this line of research beyond the two-party system in the US and audit Llama Chat in the context of EU politics in various settings to analyze the model{'}s political knowledge and its ability to reason in context. We adapt, i.e., further fine-tune, Llama Chat on speeches of individual euro-parties from debates in the European Parliament to reevaluate its political leaning based on the EUandI questionnaire. Llama Chat shows considerable knowledge of national parties{'} positions and is capable of reasoning in context. The adapted, party-specific, models are substantially re-aligned towards respective positions which we see as a starting point for using chat-based LLMs as data-driven conversational engines to assist research in political science.",
}
| Instruction-finetuned Large Language Models inherit clear political leanings that have been shown to influence downstream task performance. We expand this line of research beyond the two-party system in the US and audit Llama Chat in the context of EU politics in various settings to analyze the model{'}s political knowledge and its ability to reason in context. We adapt, i.e., further fine-tune, Llama Chat on speeches of individual euro-parties from debates in the European Parliament to reevaluate its political leaning based on the EUandI questionnaire. Llama Chat shows considerable knowledge of national parties{'} positions and is capable of reasoning in context. The adapted, party-specific models are substantially re-aligned towards their respective positions, which we see as a starting point for using chat-based LLMs as data-driven conversational engines to assist research in political science. | [
"Chalkidis, Ilias",
"Br",
"l, Stephanie"
] | Llama meets EU: Investigating the European political spectrum through the lens of LLMs | naacl-short.40 | Poster | 2403.13592 | [
"https://github.com/coastalcph/eu-politics-llms"
] | https://huggingface.co/papers/2403.13592 | 1 | 0 | 0 | 2 | 1 | [
"coastalcph/Llama-2-13b-chat-hf-LoRA-eu-debates-epp",
"coastalcph/Llama-2-13b-chat-hf-LoRA-eu-debates-sd",
"coastalcph/Llama-2-13b-chat-hf-LoRA-eu-debates-greens-efa",
"coastalcph/Llama-2-13b-chat-hf-LoRA-eu-debates-gue-ngl",
"coastalcph/Llama-2-13b-chat-hf-LoRA-eu-debates-id"
] | [
"coastalcph/eu_debates",
"coastalcph/euandi_2019"
] | [] |
https://aclanthology.org/2024.naacl-short.41.bib | https://aclanthology.org/2024.naacl-short.41/ | @inproceedings{hsu-etal-2024-m3t,
title = "{M}3{T}: A New Benchmark Dataset for Multi-Modal Document-Level Machine Translation",
author = "Hsu, Benjamin and
Liu, Xiaoyu and
Li, Huayang and
Fujinuma, Yoshinari and
Nadejde, Maria and
Niu, Xing and
Litman, Ron and
Kittenplon, Yair and
Pappagari, Raghavendra",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.41",
doi = "10.18653/v1/2024.naacl-short.41",
pages = "499--507",
abstract = "Document translation poses a challenge for Neural Machine Translation (NMT) systems. Most document-level NMT systems rely on meticulously curated sentence-level parallel data, assuming flawless extraction of text from documents along with their precise reading order. These systems also tend to disregard additional visual cues such as the document layout, deeming it irrelevant. However, real-world documents often possess intricate text layouts that defy these assumptions. Extracting information from Optical Character Recognition (OCR) or heuristic rules can result in errors, and the layout (e.g., paragraphs, headers) may convey relationships between distant sections of text. This complexity is particularly evident in widely used PDF documents, which represent information visually. This paper addresses this gap by introducing M3T a novel benchmark dataset tailored to evaluate NMT systems on the comprehensive task of translating semi-structured documents. This dataset aims to bridge the evaluation gap in document-level NMT systems, acknowledging the challenges posed by rich text layouts in real-world applications.",
}
| Document translation poses a challenge for Neural Machine Translation (NMT) systems. Most document-level NMT systems rely on meticulously curated sentence-level parallel data, assuming flawless extraction of text from documents along with their precise reading order. These systems also tend to disregard additional visual cues such as the document layout, deeming it irrelevant. However, real-world documents often possess intricate text layouts that defy these assumptions. Extracting information from Optical Character Recognition (OCR) or heuristic rules can result in errors, and the layout (e.g., paragraphs, headers) may convey relationships between distant sections of text. This complexity is particularly evident in widely used PDF documents, which represent information visually. This paper addresses this gap by introducing M3T, a novel benchmark dataset tailored to evaluate NMT systems on the comprehensive task of translating semi-structured documents. This dataset aims to bridge the evaluation gap in document-level NMT systems, acknowledging the challenges posed by rich text layouts in real-world applications. | [
"Hsu, Benjamin",
"Liu, Xiaoyu",
"Li, Huayang",
"Fujinuma, Yoshinari",
"Nadejde, Maria",
"Niu, Xing",
"Litman, Ron",
"Kittenplon, Yair",
"Pappagari, Raghavendra"
] | M3T: A New Benchmark Dataset for Multi-Modal Document-Level Machine Translation | naacl-short.41 | Poster | 2406.08255 | [
"https://github.com/amazon-science/m3t-multi-modal-translation-bench"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.42.bib | https://aclanthology.org/2024.naacl-short.42/ | @inproceedings{chen-etal-2024-control,
title = "Control-{DAG}: Constrained Decoding for Non-Autoregressive Directed Acyclic T5 using Weighted Finite State Automata",
author = "Chen, Jinghong and
Lin, Weizhe and
Mei, Jingbiao and
Byrne, Bill",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.42",
doi = "10.18653/v1/2024.naacl-short.42",
pages = "508--518",
abstract = "The Directed Acyclic Transformer is a fast non-autoregressive (NAR) model that performs well in Neural Machine Translation. Two issues prevent its application to general Natural Language Generation (NLG) tasks: frequent Out-Of-Vocabulary (OOV) errors and the inability to faithfully generate entity names. We introduce Control-DAG, a constrained decoding algorithm for our Directed Acyclic T5 (DA-T5) model which offers lexical, vocabulary and length control. We show that Control-DAG significantly enhances DA-T5 on the Schema Guided Dialogue and the DART datasets, establishing strong NAR results for Task-Oriented Dialogue and Data-to-Text NLG.",
}
| The Directed Acyclic Transformer is a fast non-autoregressive (NAR) model that performs well in Neural Machine Translation. Two issues prevent its application to general Natural Language Generation (NLG) tasks: frequent Out-Of-Vocabulary (OOV) errors and the inability to faithfully generate entity names. We introduce Control-DAG, a constrained decoding algorithm for our Directed Acyclic T5 (DA-T5) model which offers lexical, vocabulary and length control. We show that Control-DAG significantly enhances DA-T5 on the Schema Guided Dialogue and the DART datasets, establishing strong NAR results for Task-Oriented Dialogue and Data-to-Text NLG. | [
"Chen, Jinghong",
"Lin, Weizhe",
"Mei, Jingbiao",
"Byrne, Bill"
] | Control-DAG: Constrained Decoding for Non-Autoregressive Directed Acyclic T5 using Weighted Finite State Automata | naacl-short.42 | Poster | 2404.06854 | [
"https://github.com/erichen0615/controldag"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.43.bib | https://aclanthology.org/2024.naacl-short.43/ | @inproceedings{kumar-etal-2024-vision,
title = "Do Vision-Language Models Understand Compound Nouns?",
author = "Kumar, Sonal and
Ghosh, Sreyan and
Sakshi, S and
Tyagi, Utkarsh and
Manocha, Dinesh",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.43",
doi = "10.18653/v1/2024.naacl-short.43",
pages = "519--527",
abstract = "Open-vocabulary vision-language models (VLMs) like CLIP, trained using contrastive loss, have emerged as a promising new paradigm for text-to-image retrieval. However, do VLMs understand compound nouns (CNs) (e.g., *lab coat*) as well as they understand nouns (e.g., *lab*)? We curate Compun, a novel benchmark with 400 unique and commonly used CNs, to evaluate the effectiveness of VLMs in interpreting CNs. The Compun benchmark challenges a VLM for text-to-image retrieval where, given a text prompt with a CN, the task is to select the correct image that shows the CN among a pair of distractor images that show the constituent nouns that make up the CN. Next, we perform an in-depth analysis to highlight CLIPs{'} limited understanding of certain types of CNs. Finally, we present an alternative framework that moves beyond hand-written templates for text prompts widely used by CLIP-like models. We employ a Large Language Model to generate multiple diverse captions that include the CN as an object in the scene described by the caption. Our proposed method improves CN understanding of CLIP by 8.25{\%} on Compun. Code and benchmark are available.",
}
| Open-vocabulary vision-language models (VLMs) like CLIP, trained using contrastive loss, have emerged as a promising new paradigm for text-to-image retrieval. However, do VLMs understand compound nouns (CNs) (e.g., *lab coat*) as well as they understand nouns (e.g., *lab*)? We curate Compun, a novel benchmark with 400 unique and commonly used CNs, to evaluate the effectiveness of VLMs in interpreting CNs. The Compun benchmark challenges a VLM for text-to-image retrieval where, given a text prompt with a CN, the task is to select the correct image that shows the CN among a pair of distractor images that show the constituent nouns that make up the CN. Next, we perform an in-depth analysis to highlight CLIP{'}s limited understanding of certain types of CNs. Finally, we present an alternative framework that moves beyond hand-written templates for text prompts widely used by CLIP-like models. We employ a Large Language Model to generate multiple diverse captions that include the CN as an object in the scene described by the caption. Our proposed method improves CN understanding of CLIP by 8.25{\%} on Compun. Code and benchmark are available. | [
"Kumar, Sonal",
"Ghosh, Sreyan",
"Sakshi, S",
"Tyagi, Utkarsh",
"Manocha, Dinesh"
] | Do Vision-Language Models Understand Compound Nouns? | naacl-short.43 | Poster | 2404.00419 | [
"https://github.com/sonalkum/compun"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
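The Compun abstract above scores text-to-image retrieval against multiple LLM-generated captions rather than a single hand-written template. A minimal sketch of the scoring side follows, assuming L2-normalized CLIP-style embeddings are precomputed; the caption-generation prompt and the mean-similarity aggregation are assumptions of this sketch.

```python
# Hedged sketch of caption-ensembled image selection; embeddings are assumed
# precomputed (e.g., by a CLIP text/image encoder) and L2-normalized.
import numpy as np

def pick_image(caption_embs: np.ndarray, image_embs: np.ndarray) -> int:
    """caption_embs: (c, d) embeddings of diverse generated captions for one
    compound noun; image_embs: (k, d) embeddings of the target image and its
    distractors. Returns the index of the winning image."""
    scores = image_embs @ caption_embs.T   # (k, c) cosine similarities
    # Averaging over captions replaces reliance on a single prompt template.
    return int(scores.mean(axis=1).argmax())
```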
https://aclanthology.org/2024.naacl-short.44.bib | https://aclanthology.org/2024.naacl-short.44/ | @inproceedings{jung-etal-2024-prompt,
title = "Is Prompt Transfer Always Effective? An Empirical Study of Prompt Transfer for Question Answering",
author = "Jung, Minji and
Park, Soyeon and
Sul, Jeewoo and
Choi, Yong Suk",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.44",
doi = "10.18653/v1/2024.naacl-short.44",
pages = "528--539",
abstract = "Prompt tuning, which freezes all parameters of a pre-trained model and only trains a soft prompt, has emerged as a parameter-efficient approach. For the reason that the prompt initialization becomes sensitive when the model size is small, the prompt transfer that uses the trained prompt as an initialization for the target task has recently been introduced. Since previous works have compared tasks in large categories (e.g., summarization, sentiment analysis), the factors that influence prompt transfer have not been sufficiently explored. In this paper, we characterize the question answering task based on features such as answer format and empirically investigate the transferability of soft prompts for the first time. We analyze the impact of initialization during prompt transfer and find that the train dataset size of source and target tasks have the influence significantly. Furthermore, we propose a novel approach for measuring catastrophic forgetting and investigate how it occurs in terms of the amount of evidence. Our findings can help deeply understand transfer learning in prompt tuning.",
}
| Prompt tuning, which freezes all parameters of a pre-trained model and only trains a soft prompt, has emerged as a parameter-efficient approach. Because prompt initialization becomes sensitive when the model size is small, prompt transfer, which uses the trained prompt as an initialization for the target task, has recently been introduced. Since previous works have compared tasks in large categories (e.g., summarization, sentiment analysis), the factors that influence prompt transfer have not been sufficiently explored. In this paper, we characterize the question answering task based on features such as answer format and empirically investigate the transferability of soft prompts for the first time. We analyze the impact of initialization during prompt transfer and find that the training dataset sizes of the source and target tasks have a significant influence. Furthermore, we propose a novel approach for measuring catastrophic forgetting and investigate how it occurs in terms of the amount of evidence. Our findings can help deepen the understanding of transfer learning in prompt tuning. | [
"Jung, Minji",
"Park, Soyeon",
"Sul, Jeewoo",
"Choi, Yong Suk"
] | Is Prompt Transfer Always Effective? An Empirical Study of Prompt Transfer for Question Answering | naacl-short.44 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
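Prompt transfer, as studied in the record above, boils down to initializing a target task's soft prompt from a source-task prompt before tuning resumes. A minimal sketch under the usual (prompt_length, hidden_size) parameterization follows; that parameterization and the `transfer_prompt` helper are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of soft-prompt transfer; everything but the prompt is frozen
# under prompt tuning, which is assumed to happen elsewhere.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable prompt of shape (prompt_len, d_model), prepended to inputs."""
    def __init__(self, prompt_len: int, d_model: int):
        super().__init__()
        self.embed = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)

    def forward(self, input_embs: torch.Tensor) -> torch.Tensor:
        batch = input_embs.size(0)
        return torch.cat([self.embed.expand(batch, -1, -1), input_embs], dim=1)

def transfer_prompt(source: SoftPrompt, target: SoftPrompt) -> None:
    # Prompt transfer: initialize the target task's prompt from the prompt
    # trained on the source task, then continue prompt tuning on target data.
    with torch.no_grad():
        target.embed.copy_(source.embed)
```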
https://aclanthology.org/2024.naacl-short.45.bib | https://aclanthology.org/2024.naacl-short.45/ | @inproceedings{pantazopoulos-etal-2024-lost,
title = "Lost in Space: Probing Fine-grained Spatial Understanding in Vision and Language Resamplers",
author = "Pantazopoulos, Georgios and
Suglia, Alessandro and
Lemon, Oliver and
Eshghi, Arash",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.45",
doi = "10.18653/v1/2024.naacl-short.45",
pages = "540--549",
abstract = "An effective method for combining frozen large language models (LLM) and visual encoders involves a resampler module that creates a {`}visual prompt{'} which is provided to the LLM, along with the textual prompt. While this approach has enabled impressive performance across many coarse-grained tasks like image captioning and visual question answering, more fine-grained tasks that require spatial understanding have not been thoroughly examined. In this paper, we use diagnostic classifiers to measure the extent to which the visual prompt produced by the resampler encodes spatial information. Our results show that this information is largely absent from the resampler output when kept frozen during training of the classifiers. However, when the resampler and classifier are trained jointly, we observe a significant performance boost. This shows that the compression achieved by the resamplers can in principle encode the requisite spatial information, but that more object-aware objectives are needed at the pretraining stage to facilitate this capability.",
}
| An effective method for combining frozen large language models (LLM) and visual encoders involves a resampler module that creates a {`}visual prompt{'} which is provided to the LLM, along with the textual prompt. While this approach has enabled impressive performance across many coarse-grained tasks like image captioning and visual question answering, more fine-grained tasks that require spatial understanding have not been thoroughly examined. In this paper, we use diagnostic classifiers to measure the extent to which the visual prompt produced by the resampler encodes spatial information. Our results show that this information is largely absent from the resampler output when kept frozen during training of the classifiers. However, when the resampler and classifier are trained jointly, we observe a significant performance boost. This shows that the compression achieved by the resamplers can in principle encode the requisite spatial information, but that more object-aware objectives are needed at the pretraining stage to facilitate this capability. | [
"Pantazopoulos, Georgios",
"Suglia, Aless",
"ro",
"Lemon, Oliver",
"Eshghi, Arash"
] | Lost in Space: Probing Fine-grained Spatial Understanding in Vision and Language Resamplers | naacl-short.45 | Oral | 2404.13594 | [
"https://github.com/gpantaz/probing-resamplers"
] | https://huggingface.co/papers/2404.13594 | 1 | 0 | 0 | 4 | 1 | [] | [] | [] |
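The probing setup described above fits diagnostic classifiers on visual-prompt vectors to test whether spatial relations are decodable. A minimal sketch with a linear probe follows; extracting and pooling the resampler outputs is assumed to have happened upstream, and the split details are illustrative.

```python
# Minimal sketch of a diagnostic (linear) probe on frozen resampler outputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def spatial_probe(features: np.ndarray, labels: np.ndarray) -> float:
    """features: (n, d) frozen visual-prompt vectors (e.g., mean-pooled
    resampler tokens); labels: (n,) spatial-relation classes. Accuracy near
    chance suggests the probed information is absent from the representation."""
    x_tr, x_te, y_tr, y_te = train_test_split(features, labels,
                                              test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)
    return float(clf.score(x_te, y_te))
```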
https://aclanthology.org/2024.naacl-short.46.bib | https://aclanthology.org/2024.naacl-short.46/ | @inproceedings{etxaniz-etal-2024-multilingual,
title = "Do Multilingual Language Models Think Better in {E}nglish?",
author = "Etxaniz, Julen and
Azkune, Gorka and
Soroa, Aitor and
Lacalle, Oier and
Artetxe, Mikel",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.46",
doi = "10.18653/v1/2024.naacl-short.46",
pages = "550--564",
abstract = "Translate-test is a popular technique to improve the performance of multilingual language models. This approach works by translating the input into English using an external machine translation system before running inference. However, these improvements can be attributed to the use of a separate translation system, which is typically trained on large amounts of parallel data not seen by the language model. In this work, we introduce a new approach called self-translate that leverages the few-shot translation capabilities of multilingual language models. This allows us to analyze the effect of translation in isolation. Experiments over 5 tasks show that self-translate consistently outperforms direct inference, demonstrating that language models are unable to leverage their full multilingual potential when prompted in non-English languages. Our code is available at https://github.com/juletx/self-translate.",
}
| Translate-test is a popular technique to improve the performance of multilingual language models. This approach works by translating the input into English using an external machine translation system before running inference. However, these improvements can be attributed to the use of a separate translation system, which is typically trained on large amounts of parallel data not seen by the language model. In this work, we introduce a new approach called self-translate that leverages the few-shot translation capabilities of multilingual language models. This allows us to analyze the effect of translation in isolation. Experiments over 5 tasks show that self-translate consistently outperforms direct inference, demonstrating that language models are unable to leverage their full multilingual potential when prompted in non-English languages. Our code is available at https://github.com/juletx/self-translate. | [
"Etxaniz, Julen",
"Azkune, Gorka",
"Soroa, Aitor",
"Lacalle, Oier",
"Artetxe, Mikel"
] | Do Multilingual Language Models Think Better in English? | naacl-short.46 | Poster | 2308.01223 | [
"https://github.com/juletx/self-translate"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
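Self-translate, per the abstract above, uses the same multilingual model first as a few-shot translator into English and then as the task model. A minimal sketch follows; the prompt format and the `generate`/`solve` callables are assumptions standing in for two calls to the same model.

```python
# Hedged sketch of self-translate; prompt wording is an assumption.
from typing import Callable, List, Tuple

def self_translate(text: str, shots: List[Tuple[str, str]],
                   generate: Callable[[str], str],
                   solve: Callable[[str], str]) -> str:
    """shots: (source, English) few-shot translation pairs. The model first
    translates its own input into English, then the task runs on that."""
    demo = "".join(f"{src}\nEnglish: {tgt}\n\n" for src, tgt in shots)
    english = generate(f"{demo}{text}\nEnglish:").strip()
    return solve(english)   # run the downstream task on the English version
```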
https://aclanthology.org/2024.naacl-short.47.bib | https://aclanthology.org/2024.naacl-short.47/ | @inproceedings{yuan-etal-2024-continued,
title = "A Continued Pretrained {LLM} Approach for Automatic Medical Note Generation",
author = "Yuan, Dong and
Rastogi, Eti and
Naik, Gautam and
Rajagopal, Sree Prasanna and
Goyal, Sagar and
Zhao, Fen and
Chintagunta, Bharath and
Ward, Jeffrey",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.47",
doi = "10.18653/v1/2024.naacl-short.47",
pages = "565--571",
abstract = "LLMs are revolutionizing NLP tasks. However, the use of the most advanced LLMs, such as GPT-4, is often prohibitively expensive for most specialized fields. We introduce HEAL, the first continuously trained 13B LLaMA2-based LLM that is purpose-built for medical conversations and measured on automated scribing. Our results demonstrate that HEAL outperforms GPT-4 and PMC-LLaMA in PubMedQA, with an accuracy of 78.4{\%}. It also achieves parity with GPT-4 in generating medical notes. Remarkably, HEAL surpasses GPT-4 and Med-PaLM 2 in identifying more correct medical concepts and exceeds the performance of human scribes and other comparable models in correctness and completeness.",
}
| LLMs are revolutionizing NLP tasks. However, the use of the most advanced LLMs, such as GPT-4, is often prohibitively expensive for most specialized fields. We introduce HEAL, the first continuously trained 13B LLaMA2-based LLM that is purpose-built for medical conversations and measured on automated scribing. Our results demonstrate that HEAL outperforms GPT-4 and PMC-LLaMA in PubMedQA, with an accuracy of 78.4{\%}. It also achieves parity with GPT-4 in generating medical notes. Remarkably, HEAL surpasses GPT-4 and Med-PaLM 2 in identifying more correct medical concepts and exceeds the performance of human scribes and other comparable models in correctness and completeness. | [
"Yuan, Dong",
"Rastogi, Eti",
"Naik, Gautam",
"Rajagopal, Sree Prasanna",
"Goyal, Sagar",
"Zhao, Fen",
"Chintagunta, Bharath",
"Ward, Jeffrey"
] | A Continued Pretrained LLM Approach for Automatic Medical Note Generation | naacl-short.47 | Poster | 2403.09057 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.48.bib | https://aclanthology.org/2024.naacl-short.48/ | @inproceedings{saxon-etal-2024-lost,
title = "Lost in Translation? Translation Errors and Challenges for Fair Assessment of Text-to-Image Models on Multilingual Concepts",
author = "Saxon, Michael and
Luo, Yiran and
Levy, Sharon and
Baral, Chitta and
Yang, Yezhou and
Wang, William Yang",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.48",
doi = "10.18653/v1/2024.naacl-short.48",
pages = "572--582",
abstract = "Benchmarks of the multilingual capabilities of text-to-image (T2I) models compare generated images prompted in a test language to an expected image distribution over a concept set. One such benchmark, {``}Conceptual Coverage Across Languages{''} (CoCo-CroLa), assesses the tangible noun inventory of T2I models by prompting them to generate pictures from a concept list translated to seven languages and comparing the output image populations. Unfortunately, we find that this benchmark contains translation errors of varying severity in Spanish, Japanese, and Chinese. We provide corrections for these errors and analyze how impactful they are on the utility and validity of CoCo-CroLa as a benchmark. We reassess multiple baseline T2I models with the revisions, compare the outputs elicited under the new translations to those conditioned on the old, and show that a correction{'}s impactfulness on the image-domain benchmark results can be predicted in the text domain with similarity scores. Our findings will guide the future development of T2I multilinguality metrics by providing analytical tools for practical translation decisions.",
}
| Benchmarks of the multilingual capabilities of text-to-image (T2I) models compare generated images prompted in a test language to an expected image distribution over a concept set. One such benchmark, {``}Conceptual Coverage Across Languages{''} (CoCo-CroLa), assesses the tangible noun inventory of T2I models by prompting them to generate pictures from a concept list translated to seven languages and comparing the output image populations. Unfortunately, we find that this benchmark contains translation errors of varying severity in Spanish, Japanese, and Chinese. We provide corrections for these errors and analyze how impactful they are on the utility and validity of CoCo-CroLa as a benchmark. We reassess multiple baseline T2I models with the revisions, compare the outputs elicited under the new translations to those conditioned on the old, and show that a correction{'}s impactfulness on the image-domain benchmark results can be predicted in the text domain with similarity scores. Our findings will guide the future development of T2I multilinguality metrics by providing analytical tools for practical translation decisions. | [
"Saxon, Michael",
"Luo, Yiran",
"Levy, Sharon",
"Baral, Chitta",
"Yang, Yezhou",
"Wang, William Yang"
] | Lost in Translation? Translation Errors and Challenges for Fair Assessment of Text-to-Image Models on Multilingual Concepts | naacl-short.48 | Oral | 2403.11092 | [
""
] | https://huggingface.co/papers/2403.11092 | 2 | 0 | 0 | 6 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-short.49.bib | https://aclanthology.org/2024.naacl-short.49/ | @inproceedings{xie-etal-2024-self,
title = "Self-Improving for Zero-Shot Named Entity Recognition with Large Language Models",
author = "Xie, Tingyu and
Li, Qi and
Zhang, Yan and
Liu, Zuozhu and
Wang, Hongwei",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.49",
doi = "10.18653/v1/2024.naacl-short.49",
pages = "583--593",
abstract = "Exploring the application of powerful large language models (LLMs) on the named entity recognition (NER) task has drawn much attention recently. This work pushes the performance boundary of zero-shot NER with LLMs by proposing a training-free self-improving framework, which utilizes an unlabeled corpus to stimulate the self-learning ability of LLMs. First, we use the LLM to make predictions on the unlabeled corpus using self-consistency and obtain a self-annotated dataset. Second, we explore various strategies to select reliable annotations to form a reliable self-annotated dataset. Finally, for each test input, we retrieve demonstrations from the reliable self-annotated dataset and perform inference via in-context learning. Experiments on four benchmarks show substantial performance improvements achieved by our framework. Through comprehensive experimental analysis, we find that increasing the size of unlabeled corpus or iterations of self-improving does not guarantee further improvement, but the performance might be boosted via more advanced strategies for reliable annotation selection.",
}
| Exploring the application of powerful large language models (LLMs) on the named entity recognition (NER) task has drawn much attention recently. This work pushes the performance boundary of zero-shot NER with LLMs by proposing a training-free self-improving framework, which utilizes an unlabeled corpus to stimulate the self-learning ability of LLMs. First, we use the LLM to make predictions on the unlabeled corpus using self-consistency and obtain a self-annotated dataset. Second, we explore various strategies to select reliable annotations to form a reliable self-annotated dataset. Finally, for each test input, we retrieve demonstrations from the reliable self-annotated dataset and perform inference via in-context learning. Experiments on four benchmarks show substantial performance improvements achieved by our framework. Through comprehensive experimental analysis, we find that increasing the size of the unlabeled corpus or the number of self-improving iterations does not guarantee further improvement, but performance might be boosted via more advanced strategies for reliable annotation selection. | [
"Xie, Tingyu",
"Li, Qi",
"Zhang, Yan",
"Liu, Zuozhu",
"Wang, Hongwei"
] | Self-Improving for Zero-Shot Named Entity Recognition with Large Language Models | naacl-short.49 | Poster | 2311.08921 | [
"https://github.com/Emma1066/Self-Improve-Zero-Shot-NER"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
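The self-improving framework above starts by self-annotating an unlabeled corpus with self-consistency. A minimal sketch of that voting step follows; the sampling interface and the majority threshold are assumptions, and the later demonstration-retrieval stage is not shown.

```python
# Hedged sketch of self-consistency voting for self-annotated NER data.
from collections import Counter
from typing import Callable, Set, Tuple

Entity = Tuple[str, str]   # (mention span, entity type)

def self_consistent_entities(sentence: str, n_samples: int, min_votes: int,
                             annotate: Callable[[str], Set[Entity]]) -> Set[Entity]:
    """Run the stochastic LLM annotator n_samples times and keep only the
    entities predicted in at least min_votes of the samples."""
    votes: Counter = Counter()
    for _ in range(n_samples):
        votes.update(annotate(sentence))   # one sampled annotation pass
    return {entity for entity, count in votes.items() if count >= min_votes}
```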
https://aclanthology.org/2024.naacl-short.50.bib | https://aclanthology.org/2024.naacl-short.50/ | @inproceedings{qin-etal-2024-lifelong,
title = "Lifelong Event Detection with Embedding Space Separation and Compaction",
author = "Qin, Chengwei and
Chen, Ruirui and
Zhao, Ruochen and
Xia, Wenhan and
Joty, Shafiq",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.50",
doi = "10.18653/v1/2024.naacl-short.50",
pages = "594--602",
abstract = "To mitigate forgetting, existing lifelong event detection methods typically maintain a memory module and replay the stored memory data during the learning of a new task. However, the simple combination of memory data and new-task samples can still result in substantial forgetting of previously acquired knowledge, which may occur due to the potential overlap between the feature distribution of new data and the previously learned embedding space. Moreover, the model suffers from overfitting on the few memory samples rather than effectively remembering learned patterns. To address the challenges of forgetting and overfitting, we propose a novel method based on embedding space separation and compaction. Our method alleviates forgetting of previously learned tasks by forcing the feature distribution of new data away from the previous embedding space. It also mitigates overfitting by a memory calibration mechanism that encourages memory data to be close to its prototype to enhance intra-class compactness. In addition, the learnable parameters of the new task are initialized by drawing upon acquired knowledge from the previously learned task to facilitate forward knowledge transfer. With extensive experiments, we demonstrate that our method can significantly outperform previous state-of-the-art approaches.",
}
| To mitigate forgetting, existing lifelong event detection methods typically maintain a memory module and replay the stored memory data during the learning of a new task. However, the simple combination of memory data and new-task samples can still result in substantial forgetting of previously acquired knowledge, which may occur due to the potential overlap between the feature distribution of new data and the previously learned embedding space. Moreover, the model suffers from overfitting on the few memory samples rather than effectively remembering learned patterns. To address the challenges of forgetting and overfitting, we propose a novel method based on embedding space separation and compaction. Our method alleviates forgetting of previously learned tasks by forcing the feature distribution of new data away from the previous embedding space. It also mitigates overfitting by a memory calibration mechanism that encourages memory data to be close to its prototype to enhance intra-class compactness. In addition, the learnable parameters of the new task are initialized by drawing upon acquired knowledge from the previously learned task to facilitate forward knowledge transfer. With extensive experiments, we demonstrate that our method can significantly outperform previous state-of-the-art approaches. | [
"Qin, Chengwei",
"Chen, Ruirui",
"Zhao, Ruochen",
"Xia, Wenhan",
"Joty, Shafiq"
] | Lifelong Event Detection with Embedding Space Separation and Compaction | naacl-short.50 | Poster | 2404.02507 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
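
The two loss ideas named in the lifelong event detection abstract above (pushing new-task features away from previously learned regions, and pulling replayed memory samples toward their class prototypes) can be illustrated as follows. A minimal PyTorch sketch under assumed cosine-space formulations and an illustrative margin; the paper's exact losses may differ.

```python
import torch
import torch.nn.functional as F

def separation_loss(new_feats, old_prototypes, margin=0.5):
    """Push new-task features at least `margin` away (in cosine distance)
    from every previously learned class prototype."""
    sims = F.normalize(new_feats, dim=-1) @ F.normalize(old_prototypes, dim=-1).T
    # Penalize similarities that exceed (1 - margin).
    return F.relu(sims - (1.0 - margin)).mean()

def compaction_loss(memory_feats, memory_labels, prototypes):
    """Pull each replayed memory feature toward its class prototype
    to encourage intra-class compactness."""
    target = prototypes[memory_labels]          # (B, D): one prototype per sample
    return (1.0 - F.cosine_similarity(memory_feats, target, dim=-1)).mean()

# Toy shapes: 8 new samples, 4 memory samples, 3 old classes, dim 16.
new, mem = torch.randn(8, 16), torch.randn(4, 16)
protos, labels = torch.randn(3, 16), torch.tensor([0, 2, 1, 0])
loss = separation_loss(new, protos) + compaction_loss(mem, labels, protos)
```
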
https://aclanthology.org/2024.naacl-short.51.bib | https://aclanthology.org/2024.naacl-short.51/ | @inproceedings{singh-etal-2024-language,
title = "Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion",
author = "Singh, Smriti and
Caragea, Cornelia and
Li, Junyi Jessy",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.51",
doi = "10.18653/v1/2024.naacl-short.51",
pages = "603--614",
abstract = "Situations and events evoke emotions in humans, but to what extent do they inform the prediction of emotion detection models? This work investigates how well human-annotated emotion triggers correlate with features that models deemed salient in their prediction of emotions. First, we introduce a novel dataset EmoTrigger, consisting of 900 social media posts sourced from three different datasets; these were annotated by experts for emotion triggers with high agreement. Using EmoTrigger, we evaluate the ability of large language models (LLMs) to identify emotion triggers, and conduct a comparative analysis of the features considered important for these tasks between LLMs and fine-tuned models. Our analysis reveals that emotion triggers are largely not considered salient features for emotion prediction models, instead there is intricate interplay between various features and the task of emotion detection.",
}
| Situations and events evoke emotions in humans, but to what extent do they inform the prediction of emotion detection models? This work investigates how well human-annotated emotion triggers correlate with features that models deem salient when predicting emotions. First, we introduce a novel dataset EmoTrigger, consisting of 900 social media posts sourced from three different datasets; these were annotated by experts for emotion triggers with high agreement. Using EmoTrigger, we evaluate the ability of large language models (LLMs) to identify emotion triggers, and conduct a comparative analysis of the features considered important for these tasks between LLMs and fine-tuned models. Our analysis reveals that emotion triggers are largely not considered salient features for emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection. | [
"Singh, Smriti",
"Caragea, Cornelia",
"Li, Junyi Jessy"
] | Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion | naacl-short.51 | Poster | 2311.09602 | [
"https://github.com/smritisingh26/emotrigger"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.52.bib | https://aclanthology.org/2024.naacl-short.52/ | @inproceedings{jiang-joshi-2024-cpopqa,
title = "{CP}op{QA}: Ranking Cultural Concept Popularity by {LLM}s",
author = "Jiang, Ming and
Joshi, Mansi",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.52",
doi = "10.18653/v1/2024.naacl-short.52",
pages = "615--630",
abstract = "Many recent studies examining the knowledge capacity of large language models (LLM) have focused on knowledge explicitly learned from the pretraining data or implicitly inferable from similar contexts. However, the extent to which an LLM effectively captures corpus-level statistical trends of concepts for reasoning, especially long-tail ones, is largely underexplored. In this study, we introduce a novel few-shot question-answering task (CPopQA) that examines LLMs{'} statistical ranking abilities for long-tail cultural concepts (e.g., holidays), particularly focusing on these concepts{'} popularity in the United States and the United Kingdom, respectively. We curate a dataset of 457 holidays across 58 countries, generating a total of 9,000 QA testing pairs. Experiments on four strong LLMs show that open-sourced LLMs still lag way behind close LLM API (e.g., GPT-3.5) in statistical ranking of cultural concepts. Notably, GPT-3.5 exhibited its potential to identify geo-cultural proximity across continents.",
}
| Many recent studies examining the knowledge capacity of large language models (LLMs) have focused on knowledge explicitly learned from the pretraining data or implicitly inferable from similar contexts. However, the extent to which an LLM effectively captures corpus-level statistical trends of concepts for reasoning, especially long-tail ones, is largely underexplored. In this study, we introduce a novel few-shot question-answering task (CPopQA) that examines LLMs{'} statistical ranking abilities for long-tail cultural concepts (e.g., holidays), particularly focusing on these concepts{'} popularity in the United States and the United Kingdom, respectively. We curate a dataset of 457 holidays across 58 countries, generating a total of 9,000 QA testing pairs. Experiments on four strong LLMs show that open-source LLMs still lag far behind closed LLM APIs (e.g., GPT-3.5) in statistical ranking of cultural concepts. Notably, GPT-3.5 exhibited its potential to identify geo-cultural proximity across continents. | [
"Jiang, Ming",
"Joshi, Mansi"
] | CPopQA: Ranking Cultural Concept Popularity by LLMs | naacl-short.52 | Poster | 2311.07897 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.53.bib | https://aclanthology.org/2024.naacl-short.53/ | @inproceedings{chen-etal-2024-impact,
title = "The Impact of Language on Arithmetic Proficiency: A Multilingual Investigation with Cross-Agent Checking Computation",
author = "Chen, Chung-Chi and
Takamura, Hiroya and
Kobayashi, Ichiro and
Miyao, Yusuke",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.53",
doi = "10.18653/v1/2024.naacl-short.53",
pages = "631--637",
abstract = "This paper critically examines the arithmetic capabilities of Large Language Models (LLMs), uncovering significant limitations in their performance. Our research reveals a notable decline in accuracy for complex calculations involving large numbers, with addition and subtraction tasks showing varying degrees of proficiency. Additionally, we challenge the notion that arithmetic is language-independent, finding up to a 10{\%} difference in performance across twenty languages. The study also compares self-verification methods with cross-agent collaborations, showing that a single model often outperforms collaborative approaches in basic arithmetic tasks. These findings suggest a need to reassess the effectiveness of LLMs in tasks requiring numerical accuracy and precision.",
}
| This paper critically examines the arithmetic capabilities of Large Language Models (LLMs), uncovering significant limitations in their performance. Our research reveals a notable decline in accuracy for complex calculations involving large numbers, with addition and subtraction tasks showing varying degrees of proficiency. Additionally, we challenge the notion that arithmetic is language-independent, finding up to a 10{\%} difference in performance across twenty languages. The study also compares self-verification methods with cross-agent collaborations, showing that a single model often outperforms collaborative approaches in basic arithmetic tasks. These findings suggest a need to reassess the effectiveness of LLMs in tasks requiring numerical accuracy and precision. | [
"Chen, Chung-Chi",
"Takamura, Hiroya",
"Kobayashi, Ichiro",
"Miyao, Yusuke"
] | The Impact of Language on Arithmetic Proficiency: A Multilingual Investigation with Cross-Agent Checking Computation | naacl-short.53 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-short.54.bib | https://aclanthology.org/2024.naacl-short.54/ | @inproceedings{borchert-etal-2024-efficient,
title = "Efficient Information Extraction in Few-Shot Relation Classification through Contrastive Representation Learning",
author = "Borchert, Philipp and
De Weerdt, Jochen and
Moens, Marie-Francine",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.54",
doi = "10.18653/v1/2024.naacl-short.54",
pages = "638--646",
abstract = "Differentiating relationships between entity pairs with limited labeled instances poses a significant challenge in few-shot relation classification. Representations of textual data extract rich information spanning the domain, entities, and relations. In this paper, we introduce a novel approach to enhance information extraction combining multiple sentence representations and contrastive learning. While representations in relation classification are commonly extracted using entity marker tokens, we argue that substantial information within the internal model representations remains untapped. To address this, we propose aligning multiple sentence representations, such as the CLS] token, the [MASK] token used in prompting, and entity marker tokens. Our method employs contrastive learning to extract complementary discriminative information from these individual representations. This is particularly relevant in low-resource settings where information is scarce. Leveraging multiple sentence representations is especially effective in distilling discriminative information for relation classification when additional information, like relation descriptions, are not available. We validate the adaptability of our approach, maintaining robust performance in scenarios that include relation descriptions, and showcasing its flexibility to adapt to different resource constraints.",
}
| Differentiating relationships between entity pairs with limited labeled instances poses a significant challenge in few-shot relation classification. Representations of textual data extract rich information spanning the domain, entities, and relations. In this paper, we introduce a novel approach to enhance information extraction combining multiple sentence representations and contrastive learning. While representations in relation classification are commonly extracted using entity marker tokens, we argue that substantial information within the internal model representations remains untapped. To address this, we propose aligning multiple sentence representations, such as the [CLS] token, the [MASK] token used in prompting, and entity marker tokens. Our method employs contrastive learning to extract complementary discriminative information from these individual representations. This is particularly relevant in low-resource settings where information is scarce. Leveraging multiple sentence representations is especially effective in distilling discriminative information for relation classification when additional information, like relation descriptions, is not available. We validate the adaptability of our approach, maintaining robust performance in scenarios that include relation descriptions, and showcasing its flexibility to adapt to different resource constraints. | [
"Borchert, Philipp",
"De Weerdt, Jochen",
"Moens, Marie-Francine"
] | Efficient Information Extraction in Few-Shot Relation Classification through Contrastive Representation Learning | naacl-short.54 | Oral | 2403.16543 | [
"https://github.com/pnborchert/multirep"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
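
The core idea of the few-shot relation classification abstract above, aligning several sentence views ([CLS], [MASK], entity markers) with contrastive learning, reduces to a standard InfoNCE objective between paired views. A minimal sketch with random tensors standing in for encoder outputs; the authoritative implementation is the linked multirep repository.

```python
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE aligning two views (e.g., [CLS] vs. entity-marker
    representations) of the same batch of sentences."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.T / temperature            # (B, B) similarity matrix
    targets = torch.arange(a.size(0))         # matching pairs lie on the diagonal
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

# Toy batch: pretend these came from the same encoder's [CLS] token and
# from mean-pooled entity-marker tokens, respectively.
cls_view, marker_view = torch.randn(16, 768), torch.randn(16, 768)
loss = info_nce(cls_view, marker_view)
```
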
https://aclanthology.org/2024.naacl-short.55.bib | https://aclanthology.org/2024.naacl-short.55/ | @inproceedings{leeb-scholkopf-2024-diverse,
title = "A diverse Multilingual News Headlines Dataset from around the World",
author = {Leeb, Felix and
Sch{\"o}lkopf, Bernhard},
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.55",
doi = "10.18653/v1/2024.naacl-short.55",
pages = "647--652",
abstract = "Babel Briefings is a novel dataset featuring 4.7 million news headlines from August 2020 to November 2021, across 30 languages and 54 locations worldwide with English translations of all articles included. Designed for natural language processing and media studies, it serves as a high-quality dataset for training or evaluating language models as well as offering a simple, accessible collection of articles, for example, to analyze global news coverage and cultural narratives. As a simple demonstration of the analyses facilitated by this dataset, we use a basic procedure using a TF-IDF weighted similarity metric to group articles into clusters about the same event. We then visualize the \textit{event signatures} of the event showing articles of which languages appear over time, revealing intuitive features based on the proximity of the event and unexpectedness of the event. The dataset is available on [Kaggle](https://www.kaggle.com/datasets/felixludos/babel-briefings) and [HuggingFace](https://huggingface.co/datasets/felixludos/babel-briefings) with accompanying [GitHub](https://github.com/felixludos/babel-briefings) code.",
}
| Babel Briefings is a novel dataset featuring 4.7 million news headlines from August 2020 to November 2021, across 30 languages and 54 locations worldwide with English translations of all articles included. Designed for natural language processing and media studies, it serves as a high-quality dataset for training or evaluating language models as well as offering a simple, accessible collection of articles, for example, to analyze global news coverage and cultural narratives. As a simple demonstration of the analyses facilitated by this dataset, we apply a basic procedure based on a TF-IDF-weighted similarity metric to group articles into clusters about the same event. We then visualize each event's \textit{event signature}, showing which languages' articles appear over time, revealing intuitive features based on the proximity and unexpectedness of the event. The dataset is available on [Kaggle](https://www.kaggle.com/datasets/felixludos/babel-briefings) and [HuggingFace](https://huggingface.co/datasets/felixludos/babel-briefings) with accompanying [GitHub](https://github.com/felixludos/babel-briefings) code. | [
"Leeb, Felix",
"Sch{\\\"o}lkopf, Bernhard"
] | A diverse Multilingual News Headlines Dataset from around the World | naacl-short.55 | Poster | 2403.19352 | [
"https://github.com/felixludos/babel-briefings"
] | https://huggingface.co/papers/2403.19352 | 0 | 0 | 0 | 2 | 1 | [] | [
"felixludos/babel-briefings"
] | [] |
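
The event-clustering demonstration described in the Babel Briefings abstract can be reproduced in spirit with off-the-shelf scikit-learn components. A minimal sketch on toy headlines; the distance threshold is illustrative, not the paper's setting.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

headlines = [
    "Volcano erupts near Mexico City",
    "Mexico City volcano eruption forces evacuations",
    "Central bank raises interest rates again",
]

# TF-IDF vectors over (translated) headlines; cosine distance between them
# approximates "about the same event".
tfidf = TfidfVectorizer(stop_words="english").fit_transform(headlines)
clusterer = AgglomerativeClustering(
    n_clusters=None, distance_threshold=1.0,   # illustrative threshold
    metric="cosine", linkage="average",
)
labels = clusterer.fit_predict(tfidf.toarray())
print(labels)  # e.g., [0 0 1]: the two volcano headlines share a cluster
```
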
https://aclanthology.org/2024.naacl-short.56.bib | https://aclanthology.org/2024.naacl-short.56/ | @inproceedings{tokarchuk-niculae-2024-unreasonable,
title = "The Unreasonable Effectiveness of Random Target Embeddings for Continuous-Output Neural Machine Translation",
author = "Tokarchuk, Evgeniia and
Niculae, Vlad",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.56",
doi = "10.18653/v1/2024.naacl-short.56",
pages = "653--662",
abstract = "Continuous-output neural machine translation (CoNMT) replaces the discrete next-word prediction problem with an embedding prediction.The semantic structure of the target embedding space (*i.e.*, closeness of related words) is intuitively believed to be crucial. We challenge this assumption and show that completely random output embeddings can outperform laboriously pre-trained ones, especially on larger datasets. Further investigation shows this surprising effect is strongest for rare words, due to the geometry of their embeddings. We shed further light on this finding by designing a mixed strategy that combines random and pre-trained embeddings, and that performs best overall.",
}
| Continuous-output neural machine translation (CoNMT) replaces the discrete next-word prediction problem with an embedding prediction. The semantic structure of the target embedding space (*i.e.*, closeness of related words) is intuitively believed to be crucial. We challenge this assumption and show that completely random output embeddings can outperform laboriously pre-trained ones, especially on larger datasets. Further investigation shows this surprising effect is strongest for rare words, due to the geometry of their embeddings. We shed further light on this finding by designing a mixed strategy that combines random and pre-trained embeddings, and that performs best overall. | [
"Tokarchuk, Evgeniia",
"Niculae, Vlad"
] | The Unreasonable Effectiveness of Random Target Embeddings for Continuous-Output Neural Machine Translation | naacl-short.56 | Oral | 2310.20620 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
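
The continuous-output setup from the abstract above is easy to sketch: draw fixed random target embeddings once, regress decoder outputs onto them, and decode by nearest neighbour. A minimal sketch with random tensors standing in for decoder outputs; the cosine loss is an assumption chosen for illustration.

```python
import torch
import torch.nn.functional as F

vocab_size, dim = 1000, 256
torch.manual_seed(0)
# Fixed random target embeddings: drawn once, never trained.
target_emb = F.normalize(torch.randn(vocab_size, dim), dim=-1)

def cosine_loss(pred, gold_ids):
    """Continuous-output training signal: regress the decoder output
    onto the (random) embedding of the gold token."""
    return (1.0 - F.cosine_similarity(pred, target_emb[gold_ids], dim=-1)).mean()

def decode(pred):
    """Map each predicted vector back to a token by nearest neighbour."""
    return (F.normalize(pred, dim=-1) @ target_emb.T).argmax(dim=-1)

pred = torch.randn(4, dim)                 # stand-in for decoder outputs
print(cosine_loss(pred, torch.tensor([1, 5, 9, 42])), decode(pred))
```
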
https://aclanthology.org/2024.naacl-short.57.bib | https://aclanthology.org/2024.naacl-short.57/ | @inproceedings{fathullah-gales-2024-efficient,
title = "Efficient Sample-Specific Encoder Perturbations",
author = "Fathullah, Yassir and
Gales, Mark",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.57",
doi = "10.18653/v1/2024.naacl-short.57",
pages = "663--671",
abstract = "Encoder-decoder foundation models have displayed state-of-the-art performance on a range of autoregressive sequence tasks. This paper proposes a simple and lightweight modification to such systems to control the behaviour according to a specific attribute of interest. This paper proposes a novel inference-efficient approach to modifying the behaviour of an encoder-decoder system according to a specific attribute of interest. Specifically, we show that a small proxy network can be used to find a sample-by-sample perturbation of the encoder output of a frozen foundation model to trigger the decoder to generate improved decodings. This work explores a specific realization of this framework focused on improving the COMET performance of Flan-T5 on Machine Translation and the WER of Whisper foundation models on Speech Recognition. Results display consistent improvements in performance evaluated through COMET and WER respectively. Furthermore, experiments also show that the proxies are robust to the exact nature of the data used to train them and can extend to other domains.",
}
| Encoder-decoder foundation models have displayed state-of-the-art performance on a range of autoregressive sequence tasks. This paper proposes a simple, lightweight, and inference-efficient modification to such systems to control their behaviour according to a specific attribute of interest. Specifically, we show that a small proxy network can be used to find a sample-by-sample perturbation of the encoder output of a frozen foundation model to trigger the decoder to generate improved decodings. This work explores a specific realization of this framework focused on improving the COMET performance of Flan-T5 on Machine Translation and the WER of Whisper foundation models on Speech Recognition. Results display consistent improvements in performance evaluated through COMET and WER respectively. Furthermore, experiments also show that the proxies are robust to the exact nature of the data used to train them and can extend to other domains. | [
"Fathullah, Yassir",
"Gales, Mark"
] | Efficient Sample-Specific Encoder Perturbations | naacl-short.57 | Poster | 2405.01601 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.58.bib | https://aclanthology.org/2024.naacl-short.58/ | @inproceedings{abdelkadir-etal-2024-diverse,
title = "Diverse Perspectives, Divergent Models: Cross-Cultural Evaluation of Depression Detection on {T}witter",
author = "Abdelkadir, Nuredin Ali and
Zhang, Charles and
Mayo, Ned and
Chancellor, Stevie",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.58",
doi = "10.18653/v1/2024.naacl-short.58",
pages = "672--680",
abstract = "Social media data has been used for detecting users with mental disorders, such as depression. Despite the global significance of cross-cultural representation and its potential impact on model performance, publicly available datasets often lack crucial metadata relatedto this aspect. In this work, we evaluate the generalization of benchmark datasets to build AI models on cross-cultural Twitter data. We gather a custom geo-located Twitter dataset of depressed users from seven countries as a test dataset. Our results show that depressiondetection models do not generalize globally. The models perform worse on Global South users compared to Global North. Pre-trainedlanguage models achieve the best generalization compared to Logistic Regression, though still show significant gaps in performance on depressed and non-Western users. We quantify our findings and provide several actionable suggestions to mitigate this issue",
}
| Social media data has been used for detecting users with mental disorders, such as depression. Despite the global significance of cross-cultural representation and its potential impact on model performance, publicly available datasets often lack crucial metadata related to this aspect. In this work, we evaluate the generalization of benchmark datasets to build AI models on cross-cultural Twitter data. We gather a custom geo-located Twitter dataset of depressed users from seven countries as a test dataset. Our results show that depression detection models do not generalize globally. The models perform worse on Global South users compared to Global North. Pre-trained language models achieve the best generalization compared to Logistic Regression, though still show significant gaps in performance on depressed and non-Western users. We quantify our findings and provide several actionable suggestions to mitigate this issue. | [
"Abdelkadir, Nuredin Ali",
"Zhang, Charles",
"Mayo, Ned",
"Chancellor, Stevie"
] | Diverse Perspectives, Divergent Models: Cross-Cultural Evaluation of Depression Detection on Twitter | naacl-short.58 | Poster | 2406.15362 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.59.bib | https://aclanthology.org/2024.naacl-short.59/ | @inproceedings{zhan-etal-2024-removing,
title = "Removing {RLHF} Protections in {GPT}-4 via Fine-Tuning",
author = "Zhan, Qiusi and
Fang, Richard and
Bindu, Rohan and
Gupta, Akul and
Hashimoto, Tatsunori and
Kang, Daniel",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.59",
doi = "10.18653/v1/2024.naacl-short.59",
pages = "681--687",
abstract = "As large language models (LLMs) have increased in their capabilities, so doestheir potential for dual use. To reduce harmful outputs, produces and vendors ofLLMs have used reinforcement learning with human feedback (RLHF). In tandem,LLM vendors have been increasingly enabling fine-tuning of their most powerfulmodels. However, concurrent work has shown that fine-tuning can remove RLHFprotections. We may expect that the most powerful models currently available(GPT-4) are less susceptible to fine-tuning attacks. In this work, we show the contrary: fine-tuning allows attackers to remove RLHFprotections with as few as 340 examples and a 95{\%} success rate. These trainingexamples can be automatically generated with weaker models. We further show thatremoving RLHF protections does not decrease usefulness on non-censored outputs,providing evidence that our fine-tuning strategy does not decrease usefulnessdespite using weaker models to generate training data. Our results show the needfor further research on protections on LLMs.",
}
| As large language models (LLMs) have increased in their capabilities, so does their potential for dual use. To reduce harmful outputs, producers and vendors of LLMs have used reinforcement learning with human feedback (RLHF). In tandem, LLM vendors have been increasingly enabling fine-tuning of their most powerful models. However, concurrent work has shown that fine-tuning can remove RLHF protections. We may expect that the most powerful models currently available (GPT-4) are less susceptible to fine-tuning attacks. In this work, we show the contrary: fine-tuning allows attackers to remove RLHF protections with as few as 340 examples and a 95{\%} success rate. These training examples can be automatically generated with weaker models. We further show that removing RLHF protections does not decrease usefulness on non-censored outputs, providing evidence that our fine-tuning strategy does not decrease usefulness despite using weaker models to generate training data. Our results show the need for further research on protections on LLMs. | [
"Zhan, Qiusi",
"Fang, Richard",
"Bindu, Rohan",
"Gupta, Akul",
"Hashimoto, Tatsunori",
"Kang, Daniel"
] | Removing RLHF Protections in GPT-4 via Fine-Tuning | naacl-short.59 | Oral | 2311.05553 | [
""
] | https://huggingface.co/papers/2311.05553 | 0 | 0 | 0 | 6 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-short.60.bib | https://aclanthology.org/2024.naacl-short.60/ | @inproceedings{kim-etal-2024-lifetox,
title = "{L}ife{T}ox: Unveiling Implicit Toxicity in Life Advice",
author = "Kim, Minbeom and
Koo, Jahyun and
Lee, Hwanhee and
Park, Joonsuk and
Lee, Hwaran and
Jung, Kyomin",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.60",
doi = "10.18653/v1/2024.naacl-short.60",
pages = "688--698",
abstract = "As large language models become increasingly integrated into daily life, detecting implicit toxicity across diverse contexts is crucial. To this end, we introduce $\texttt{LifeTox}$, a dataset designed for identifying implicit toxicity within a broad range of advice-seeking scenarios. Unlike existing safety datasets, $\texttt{LifeTox}$ comprises diverse contexts derived from personal experiences through open-ended questions. Our experiments demonstrate that RoBERTa fine-tuned on $\texttt{LifeTox}$ matches or surpasses the zero-shot performance of large language models in toxicity classification tasks. These results underscore the efficacy of $\texttt{LifeTox}$ in addressing the complex challenges inherent in implicit toxicity. We open-sourced the dataset and the $\texttt{LifeTox}$ moderator family; 350M, 7B, and 13B.",
}
| As large language models become increasingly integrated into daily life, detecting implicit toxicity across diverse contexts is crucial. To this end, we introduce $\texttt{LifeTox}$, a dataset designed for identifying implicit toxicity within a broad range of advice-seeking scenarios. Unlike existing safety datasets, $\texttt{LifeTox}$ comprises diverse contexts derived from personal experiences through open-ended questions. Our experiments demonstrate that RoBERTa fine-tuned on $\texttt{LifeTox}$ matches or surpasses the zero-shot performance of large language models in toxicity classification tasks. These results underscore the efficacy of $\texttt{LifeTox}$ in addressing the complex challenges inherent in implicit toxicity. We open-sourced the dataset and the $\texttt{LifeTox}$ moderator family: 350M, 7B, and 13B. | [
"Kim, Minbeom",
"Koo, Jahyun",
"Lee, Hwanhee",
"Park, Joonsuk",
"Lee, Hwaran",
"Jung, Kyomin"
] | LifeTox: Unveiling Implicit Toxicity in Life Advice | naacl-short.60 | Oral | 2311.09585 | [
""
] | https://huggingface.co/papers/2311.09585 | 1 | 0 | 0 | 6 | 1 | [
"mbkim/LifeTox_Moderator_7B",
"mbkim/LifeTox_Moderator_350M",
"mbkim/LifeTox_Moderator_13B"
] | [
"mbkim/LifeTox"
] | [] |
https://aclanthology.org/2024.naacl-short.61.bib | https://aclanthology.org/2024.naacl-short.61/ | @inproceedings{yang-etal-2024-arithmetic,
title = "Arithmetic Reasoning with {LLM}: {P}rolog Generation {\&} Permutation",
author = "Yang, Xiaocheng and
Chen, Bingsen and
Tam, Yik-Cheung",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.61",
doi = "10.18653/v1/2024.naacl-short.61",
pages = "699--710",
abstract = "Instructing large language models (LLMs) to solve elementary school math problems has shown great success using Chain of Thought (CoT). However, the CoT approach relies on an LLM to generate a sequence of arithmetic calculations which can be prone to cascaded calculation errors. We hypothesize that an LLM should focus on extracting predicates and generating symbolic formulas from the math problem description so that the underlying calculation can be done via an external code interpreter. We investigate using LLM to generate Prolog programs to solve mathematical questions. Experimental results show that our Prolog-based arithmetic problem-solving outperforms CoT generation in the GSM8K benchmark across three distinct LLMs. In addition, given the insensitive ordering of predicates and symbolic formulas in Prolog, we propose to permute the ground truth predicates for more robust LLM training via data augmentation.",
}
| Instructing large language models (LLMs) to solve elementary school math problems has shown great success using Chain of Thought (CoT). However, the CoT approach relies on an LLM to generate a sequence of arithmetic calculations which can be prone to cascaded calculation errors. We hypothesize that an LLM should focus on extracting predicates and generating symbolic formulas from the math problem description so that the underlying calculation can be done via an external code interpreter. We investigate using LLMs to generate Prolog programs to solve mathematical questions. Experimental results show that our Prolog-based arithmetic problem-solving outperforms CoT generation in the GSM8K benchmark across three distinct LLMs. In addition, given that Prolog is insensitive to the ordering of predicates and symbolic formulas, we propose permuting the ground-truth predicates for more robust LLM training via data augmentation. | [
"Yang, Xiaocheng",
"Chen, Bingsen",
"Tam, Yik-Cheung"
] | Arithmetic Reasoning with LLM: Prolog Generation & Permutation | naacl-short.61 | Poster | 2405.17893 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
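
The data-augmentation step from the abstract above, permuting gold predicates, is straightforward to sketch. A minimal sketch on a hand-written GSM8K-style fact base (not from the paper); it assumes clause order does not affect the answer, which holds for pure facts and arithmetic like this.

```python
import random

# Illustrative Prolog facts for a GSM8K-style word problem; the gold
# programs in the paper are generated by an LLM, not hand-written.
predicates = [
    "apples(john, 5).",
    "apples(mary, 3).",
    "total(T) :- apples(john, A), apples(mary, B), T is A + B.",
]

def permute_program(preds, n_variants=3, seed=0):
    """Since clause order does not change the answer here, shuffling the
    gold predicates yields extra, equally valid training targets."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        shuffled = preds[:]
        rng.shuffle(shuffled)
        variants.append("\n".join(shuffled))
    return variants

for v in permute_program(predicates):
    print(v, end="\n---\n")
```
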
https://aclanthology.org/2024.naacl-short.62.bib | https://aclanthology.org/2024.naacl-short.62/ | @inproceedings{aono-etal-2024-verifying,
title = "Verifying Claims About Metaphors with Large-Scale Automatic Metaphor Identification",
author = "Aono, Kotaro and
Sasano, Ryohei and
Takeda, Koichi",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.62",
doi = "10.18653/v1/2024.naacl-short.62",
pages = "711--719",
abstract = "There are several linguistic claims about situations where words are more likely to be used as metaphors.However, few studies have sought to verify such claims with large corpora.This study entails a large-scale, corpus-based analysis of certain existing claims about verb metaphors, by applying metaphor detection to sentences extracted from Common Crawl and using the statistics obtained from the results.The verification results indicate that the direct objects of verbs used as metaphors tend to have lower degrees of concreteness, imageability, and familiarity, and that metaphors are more likely to be used in emotional and subjective sentences.",
}
| There are several linguistic claims about situations where words are more likely to be used as metaphors. However, few studies have sought to verify such claims with large corpora. This study entails a large-scale, corpus-based analysis of certain existing claims about verb metaphors, by applying metaphor detection to sentences extracted from Common Crawl and using the statistics obtained from the results. The verification results indicate that the direct objects of verbs used as metaphors tend to have lower degrees of concreteness, imageability, and familiarity, and that metaphors are more likely to be used in emotional and subjective sentences. | [
"Aono, Kotaro",
"Sasano, Ryohei",
"Takeda, Koichi"
] | Verifying Claims About Metaphors with Large-Scale Automatic Metaphor Identification | naacl-short.62 | Oral | 2404.01029 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.63.bib | https://aclanthology.org/2024.naacl-short.63/ | @inproceedings{scaria-etal-2024-instructabsa,
title = "{I}nstruct{ABSA}: Instruction Learning for Aspect Based Sentiment Analysis",
author = "Scaria, Kevin and
Gupta, Himanshu and
Goyal, Siddharth and
Sawant, Saurabh and
Mishra, Swaroop and
Baral, Chitta",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.63",
doi = "10.18653/v1/2024.naacl-short.63",
pages = "720--736",
abstract = "We introduce InstructABSA, an instruction learning paradigm for Aspect-Based Sentiment Analysis (ABSA) subtasks.Our method introduces positive, negative, and neutral examples to each training sample, and instruction tune the model (T$k$-Instruct) for ABSA subtasks, yielding significant performance improvements. Experimental results on the Sem Eval 2014, 15, and 16 datasets demonstrate that InstructABSA outperforms the previous state-of-the-art (SOTA) approaches on Term Extraction (ATE), Sentiment Classification(ATSC) and Sentiment Pair Extraction (ASPE) subtasks.In particular, InstructABSA outperforms the previous state-of-the-art (SOTA) on the Rest14 ATE subtask by 5.69{\%} points, the Rest15 ATSC subtask by 9.59{\%} points, and the Lapt14 AOPE subtask by 3.37{\%} points, surpassing 7x larger models.We get competitive results on AOOE, AOPE, AOSTE, and ACOSQE subtasks indicating strong generalization ability to all subtasks. Exploring sample efficiency reveals that just 50{\%} train data is required to get competitive results with other instruction tuning approaches. Lastly, we assess the quality of instructions and observe that InstructABSA{'}s performance experiences a decline of {\textasciitilde}10{\%} when adding misleading examples",
}
| We introduce InstructABSA, an instruction learning paradigm for Aspect-Based Sentiment Analysis (ABSA) subtasks. Our method introduces positive, negative, and neutral examples to each training sample, and instruction-tunes the model (T$k$-Instruct) for ABSA subtasks, yielding significant performance improvements. Experimental results on the SemEval 2014, 15, and 16 datasets demonstrate that InstructABSA outperforms the previous state-of-the-art (SOTA) approaches on Term Extraction (ATE), Sentiment Classification (ATSC) and Sentiment Pair Extraction (ASPE) subtasks. In particular, InstructABSA outperforms the previous state-of-the-art (SOTA) on the Rest14 ATE subtask by 5.69{\%} points, the Rest15 ATSC subtask by 9.59{\%} points, and the Lapt14 AOPE subtask by 3.37{\%} points, surpassing 7x larger models. We get competitive results on AOOE, AOPE, AOSTE, and ACOSQE subtasks, indicating strong generalization ability to all subtasks. Exploring sample efficiency reveals that just 50{\%} of the train data is required to get competitive results with other instruction tuning approaches. Lastly, we assess the quality of instructions and observe that InstructABSA{'}s performance experiences a decline of {\textasciitilde}10{\%} when adding misleading examples. | [
"Scaria, Kevin",
"Gupta, Himanshu",
"Goyal, Siddharth",
"Sawant, Saurabh",
"Mishra, Swaroop",
"Baral, Chitta"
] | InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis | naacl-short.63 | Oral | 2302.08624 | [
"https://github.com/kevinscaria/instructabsa"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.64.bib | https://aclanthology.org/2024.naacl-short.64/ | @inproceedings{zemlyanskiy-etal-2024-memory,
title = "{MEMORY}-{VQ}: Compression for Tractable {I}nternet-Scale Memory",
author = "Zemlyanskiy, Yury and
de Jong, Michiel and
Vilnis, Luke and
Ontanon, Santiago and
Cohen, William and
Sanghai, Sumit and
Ainslie, Joshua",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.64",
doi = "10.18653/v1/2024.naacl-short.64",
pages = "737--744",
abstract = "Retrieval augmentation is a powerful but expensive method to make language models more knowledgeable about the world. Memory-based methods like LUMEN (de Jong et al., 2023a) pre-compute token representations for retrieved passages to drastically speed up inference. However, memory also leads to much greater storage requirements from storing pre-computed representations. We propose MEMORY-VQ, a new method to reduce storage requirements of memory-augmented models without sacrificing performance. Our method uses a vector quantization variational autoencoder (VQ-VAE) to compress token representations. We apply MEMORY-VQ to the LUMEN model to obtain LUMEN-VQ, a memory model that achieves a 16x compression rate with comparable performance on the KILT benchmark. LUMEN-VQ enables practical retrieval augmentation even for extremely large retrieval corpora.",
}
| Retrieval augmentation is a powerful but expensive method to make language models more knowledgeable about the world. Memory-based methods like LUMEN (de Jong et al., 2023a) pre-compute token representations for retrieved passages to drastically speed up inference. However, memory also leads to much greater storage requirements from storing pre-computed representations. We propose MEMORY-VQ, a new method to reduce storage requirements of memory-augmented models without sacrificing performance. Our method uses a vector quantization variational autoencoder (VQ-VAE) to compress token representations. We apply MEMORY-VQ to the LUMEN model to obtain LUMEN-VQ, a memory model that achieves a 16x compression rate with comparable performance on the KILT benchmark. LUMEN-VQ enables practical retrieval augmentation even for extremely large retrieval corpora. | [
"Zemlyanskiy, Yury",
"de Jong, Michiel",
"Vilnis, Luke",
"Ontanon, Santiago",
"Cohen, William",
"Sanghai, Sumit",
"Ainslie, Joshua"
] | MEMORY-VQ: Compression for Tractable Internet-Scale Memory | naacl-short.64 | Poster | 2308.14903 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
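
The compression step at the heart of MEMORY-VQ, replacing each pre-computed memory token vector with the index of its nearest codebook entry, can be sketched in a few lines. A minimal sketch with a single random codebook; the paper's actual codebook training and configuration may differ.

```python
import torch

def vector_quantize(tokens, codebook):
    """Replace each token representation by its nearest codebook entry,
    storing only the integer codes (the source of the compression)."""
    dists = torch.cdist(tokens, codebook)          # (N, K) pairwise L2 distances
    codes = dists.argmin(dim=-1)                   # (N,) integer ids to store
    return codes, codebook[codes]                  # reconstruction at read time

tokens = torch.randn(512, 64)                      # pre-computed memory tokens
codebook = torch.randn(256, 64)                    # K=256 -> 1 byte per vector
codes, recon = vector_quantize(tokens, codebook)
print(codes.dtype, recon.shape)                    # torch.int64 torch.Size([512, 64])
```
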
https://aclanthology.org/2024.naacl-short.65.bib | https://aclanthology.org/2024.naacl-short.65/ | @inproceedings{li-etal-2024-unveiling,
title = "Unveiling the Magic: Investigating Attention Distillation in Retrieval-Augmented Generation",
author = "Li, Zizhong and
Zhang, Haopeng and
Zhang, Jiawei",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.65",
doi = "10.18653/v1/2024.naacl-short.65",
pages = "745--754",
abstract = "Retrieval-augmented generation framework addresses the limitations of large language models by enabling real-time knowledge updates for more accurate answers. An efficient way in the training phase of retrieval-augmented models is attention distillation, which uses attention scores as supervision signals instead of manually annotated query-document pairs. Despite its growing popularity, the detailed mechanisms behind the success of attention distillation remain unexplored, particularly the specific patterns it leverages to benefit training. In this paper, we address this gap by conducting a comprehensive investigation of attention distillation workflow and identifying key factors influencing the learning performance of retrieval-augmented language models. We further propose several insightful indicators for optimizing models{'} training methods and avoiding ineffective training.",
}
| The retrieval-augmented generation framework addresses the limitations of large language models by enabling real-time knowledge updates for more accurate answers. An efficient technique used in the training phase of retrieval-augmented models is attention distillation, which uses attention scores as supervision signals instead of manually annotated query-document pairs. Despite its growing popularity, the detailed mechanisms behind the success of attention distillation remain unexplored, particularly the specific patterns it leverages to benefit training. In this paper, we address this gap by conducting a comprehensive investigation of the attention distillation workflow and identifying key factors influencing the learning performance of retrieval-augmented language models. We further propose several insightful indicators for optimizing models{'} training methods and avoiding ineffective training. | [
"Li, Zizhong",
"Zhang, Haopeng",
"Zhang, Jiawei"
] | Unveiling the Magic: Investigating Attention Distillation in Retrieval-Augmented Generation | naacl-short.65 | Poster | 2402.11794 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.66.bib | https://aclanthology.org/2024.naacl-short.66/ | @inproceedings{elhady-etal-2024-improving,
title = "Improving Factuality in Clinical Abstractive Multi-Document Summarization by Guided Continued Pre-training",
author = "Elhady, Ahmed and
Elsayed, Khaled and
Agirre, Eneko and
Artetxe, Mikel",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.66",
doi = "10.18653/v1/2024.naacl-short.66",
pages = "755--761",
abstract = "Factual accuracy is an important property of neural abstractive summarization models, especially in fact-critical domains such as the clinical literature. In this work, we introduce a guided continued pre-training stage for encoder-decoder models that improves their understanding of the factual attributes of documents, which is followed by supervised fine-tuning on summarization. Our approach extends the pre-training recipe of BART to incorporate 3 additional objectives based on PICO spans, which capture the population, intervention, comparison, and outcomes related to a clinical study. Experiments on multi-document summarization in the clinical domain demonstrate that our approach is competitive with prior work, improving the quality and factuality of the summaries and achieving the best-published results in factual accuracy on the MSLR task.",
}
| Factual accuracy is an important property of neural abstractive summarization models, especially in fact-critical domains such as the clinical literature. In this work, we introduce a guided continued pre-training stage for encoder-decoder models that improves their understanding of the factual attributes of documents, which is followed by supervised fine-tuning on summarization. Our approach extends the pre-training recipe of BART to incorporate 3 additional objectives based on PICO spans, which capture the population, intervention, comparison, and outcomes related to a clinical study. Experiments on multi-document summarization in the clinical domain demonstrate that our approach is competitive with prior work, improving the quality and factuality of the summaries and achieving the best-published results in factual accuracy on the MSLR task. | [
"Elhady, Ahmed",
"Elsayed, Khaled",
"Agirre, Eneko",
"Artetxe, Mikel"
] | Improving Factuality in Clinical Abstractive Multi-Document Summarization by Guided Continued Pre-training | naacl-short.66 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-short.67.bib | https://aclanthology.org/2024.naacl-short.67/ | @inproceedings{fierro-etal-2024-mulan,
title = "{M}u{L}an: A Study of Fact Mutability in Language Models",
author = "Fierro, Constanza and
Garneau, Nicolas and
Bugliarello, Emanuele and
Kementchedjhieva, Yova and
S{\o}gaard, Anders",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.67",
doi = "10.18653/v1/2024.naacl-short.67",
pages = "762--771",
abstract = "Facts are subject to contingencies and can be true or false in different circumstances. One such contingency is time, wherein some facts mutate over a given period, e.g., the president of a country or the winner of a championship. Trustworthy language models ideally identify mutable facts as such and process them accordingly. We create MuLan, a benchmark for evaluating the ability of English language models to anticipate time-contingency, covering both 1:1 and 1:N relations. We hypothesize that mutable facts are encoded differently than immutable ones, hence being easier to update. In a detailed evaluation of six popular large language models, we consistently find differences in the LLMs{'} confidence, representations, and update behavior, depending on the mutability of a fact. Our findings should inform future work on the injection of and induction of time-contingent knowledge to/from LLMs.",
}
| Facts are subject to contingencies and can be true or false in different circumstances. One such contingency is time, wherein some facts mutate over a given period, e.g., the president of a country or the winner of a championship. Trustworthy language models ideally identify mutable facts as such and process them accordingly. We create MuLan, a benchmark for evaluating the ability of English language models to anticipate time-contingency, covering both 1:1 and 1:N relations. We hypothesize that mutable facts are encoded differently than immutable ones, hence being easier to update. In a detailed evaluation of six popular large language models, we consistently find differences in the LLMs{'} confidence, representations, and update behavior, depending on the mutability of a fact. Our findings should inform future work on the injection of and induction of time-contingent knowledge to/from LLMs. | [
"Fierro, Constanza",
"Garneau, Nicolas",
"Bugliarello, Emanuele",
"Kementchedjhieva, Yova",
"S{\\o}gaard, Anders"
] | MuLan: A Study of Fact Mutability in Language Models | naacl-short.67 | Poster | 2404.03036 | [
"https://github.com/coastalcph/fact_mutability"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.68.bib | https://aclanthology.org/2024.naacl-short.68/ | @inproceedings{solovyev-etal-2024-language,
title = "Language-Independent Representations Improve Zero-Shot Summarization",
author = "Solovyev, Vladimir and
Liu, Danni and
Niehues, Jan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.68",
doi = "10.18653/v1/2024.naacl-short.68",
pages = "772--782",
abstract = "Finetuning pretrained models on downstream generation tasks often leads to catastrophic forgetting in zero-shot conditions. In this work, we focus on summarization and tackle the problem through the lens of language-independent representations. After training on monolingual summarization, we perform zero-shot transfer to new languages or language pairs. We first show naively finetuned models are highly language-specific in both output behavior and internal representations, resulting in poor zero-shot performance. Next, we propose query-key (QK) finetuning to decouple task-specific knowledge from the pretrained language generation abilities. Then, after showing downsides of the standard adversarial language classifier, we propose a balanced variant that more directly enforces language-agnostic representations. Moreover, our qualitative analyses show removing source language identity correlates to zero-shot summarization performance. Our code is openly available.",
}
| Finetuning pretrained models on downstream generation tasks often leads to catastrophic forgetting in zero-shot conditions. In this work, we focus on summarization and tackle the problem through the lens of language-independent representations. After training on monolingual summarization, we perform zero-shot transfer to new languages or language pairs. We first show that naively finetuned models are highly language-specific in both output behavior and internal representations, resulting in poor zero-shot performance. Next, we propose query-key (QK) finetuning to decouple task-specific knowledge from the pretrained language generation abilities. Then, after showing downsides of the standard adversarial language classifier, we propose a balanced variant that more directly enforces language-agnostic representations. Moreover, our qualitative analyses show that removing source language identity correlates with zero-shot summarization performance. Our code is openly available. | [
"Solovyev, Vladimir",
"Liu, Danni",
"Niehues, Jan"
] | Language-Independent Representations Improve Zero-Shot Summarization | naacl-short.68 | Poster | 2404.05720 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
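
The query-key (QK) finetuning proposed in the abstract above amounts to updating only the attention query/key projections while freezing everything else. A minimal sketch; the `q_proj`/`k_proj` name matching is a heuristic assumption that depends on how the Transformer implementation names its modules.

```python
from torch import nn

def enable_qk_finetuning(model: nn.Module):
    """Freeze all parameters except query/key projections, approximating
    the QK finetuning idea; assumes attention modules are named like
    'q_proj'/'k_proj', which varies across implementations."""
    for name, param in model.named_parameters():
        param.requires_grad = any(tag in name for tag in ("q_proj", "k_proj"))

# Toy check on a module with matching names:
toy = nn.ModuleDict({"q_proj": nn.Linear(8, 8), "k_proj": nn.Linear(8, 8),
                     "v_proj": nn.Linear(8, 8)})
enable_qk_finetuning(toy)
print([(n, p.requires_grad) for n, p in toy.named_parameters()])
```
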
https://aclanthology.org/2024.naacl-short.69.bib | https://aclanthology.org/2024.naacl-short.69/ | @inproceedings{shi-etal-2024-trusting,
title = "Trusting Your Evidence: Hallucinate Less with Context-aware Decoding",
author = "Shi, Weijia and
Han, Xiaochuang and
Lewis, Mike and
Tsvetkov, Yulia and
Zettlemoyer, Luke and
Yih, Wen-tau",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.69",
doi = "10.18653/v1/2024.naacl-short.69",
pages = "783--791",
abstract = "Language models (LMs) often struggle to pay enough attention to the input context, and generate texts that are unfaithful or contain hallucinations. To mitigate this issue, we present context-aware decoding (CAD), which follows a contrastive output distribution that amplifies the difference between the output probabilities when a model is used with and without context. Our experiments show that CAD, without additional training, significantly improves the faithfulness of different LM families, including OPT, GPT, LLaMA, and FLAN-T5 for summarization tasks (e.g., 14.3{\%} gain for LLaMA in factuality metrics). Furthermore, CAD is particularly effective in overriding a model{'}s prior knowledge when it contradicts the provided context, leading to substantial improvements in tasks where resolving the knowledge conflict is essential. Our code is publicly released at https://github.com/xhan77/context-aware-decoding.",
}
| Language models (LMs) often struggle to pay enough attention to the input context, and generate texts that are unfaithful or contain hallucinations. To mitigate this issue, we present context-aware decoding (CAD), which follows a contrastive output distribution that amplifies the difference between the output probabilities when a model is used with and without context. Our experiments show that CAD, without additional training, significantly improves the faithfulness of different LM families, including OPT, GPT, LLaMA, and FLAN-T5 for summarization tasks (e.g., 14.3{\%} gain for LLaMA in factuality metrics). Furthermore, CAD is particularly effective in overriding a model{'}s prior knowledge when it contradicts the provided context, leading to substantial improvements in tasks where resolving the knowledge conflict is essential. Our code is publicly released at https://github.com/xhan77/context-aware-decoding. | [
"Shi, Weijia",
"Han, Xiaochuang",
"Lewis, Mike",
"Tsvetkov, Yulia",
"Zettlemoyer, Luke",
"Yih, Wen-tau"
] | Trusting Your Evidence: Hallucinate Less with Context-aware Decoding | naacl-short.69 | Poster | 2305.14739 | [
""
] | https://huggingface.co/papers/2305.14739 | 1 | 0 | 0 | 6 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-short.70.bib | https://aclanthology.org/2024.naacl-short.70/ | @inproceedings{clarke-etal-2024-guylingo,
title = "{G}uy{L}ingo: The {R}epublic of {G}uyana Creole Corpora",
author = "Clarke, Christopher and
Daynauth, Roland and
Mars, Jason and
Wilkinson, Charlene and
Devonish, Hubert",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.70",
doi = "10.18653/v1/2024.naacl-short.70",
pages = "792--798",
abstract = "While major languages often enjoy substantial attention and resources, the linguistic diversity across the globe encompasses a multitude of smaller, indigenous, and regional languages that lack the same level of computational support. One such region is the Caribbean. While commonly labeled as {``}English speaking{''}, the ex-British Caribbean region consists of a myriad of Creole languages thriving alongside English. In this paper, we present Guylingo: a comprehensive corpus designed for advancing NLP research in the domain of Creolese (Guyanese English-lexicon Creole), the most widely spoken language in the culturally rich nation of Guyana. We first outline our framework for gathering and digitizing this diverse corpus, inclusive of colloquial expressions, idioms, and regional variations in a low-resource language. We then demonstrate the challenges of training and evaluating NLP models for machine translation for Creolese. Lastly, we discuss the unique opportunities presented by recent NLP advancements for accelerating the formal adoption of Creole languages as official languages in the Caribbean.",
}
| While major languages often enjoy substantial attention and resources, the linguistic diversity across the globe encompasses a multitude of smaller, indigenous, and regional languages that lack the same level of computational support. One such region is the Caribbean. While commonly labeled as {``}English speaking{''}, the ex-British Caribbean region consists of a myriad of Creole languages thriving alongside English. In this paper, we present GuyLingo: a comprehensive corpus designed for advancing NLP research in the domain of Creolese (Guyanese English-lexicon Creole), the most widely spoken language in the culturally rich nation of Guyana. We first outline our framework for gathering and digitizing this diverse corpus, inclusive of colloquial expressions, idioms, and regional variations in a low-resource language. We then demonstrate the challenges of training and evaluating NLP models for machine translation for Creolese. Lastly, we discuss the unique opportunities presented by recent NLP advancements for accelerating the formal adoption of Creole languages as official languages in the Caribbean. | [
"Clarke, Christopher",
"Daynauth, Rol",
"",
"Mars, Jason",
"Wilkinson, Charlene",
"Devonish, Hubert"
] | GuyLingo: The Republic of Guyana Creole Corpora | naacl-short.70 | Oral | 2405.03832 | [
"https://github.com/chrisisking/caribbean-creole-languages-translation"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.71.bib | https://aclanthology.org/2024.naacl-short.71/ | @inproceedings{veljanovski-wood-doughty-2024-doublelingo,
title = "{D}ouble{L}ingo: Causal Estimation with Large Language Models",
author = "Veljanovski, Marko and
Wood-Doughty, Zach",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.71",
doi = "10.18653/v1/2024.naacl-short.71",
pages = "799--807",
abstract = "Estimating causal effects from non-randomized data requires assumptions about the underlying data-generating process. To achieve unbiased estimates of the causal effect of a treatment on an outcome, we typically adjust for any confounding variables that influence both treatment and outcome. When such confounders include text data, existing causal inference methods struggle due to the high dimensionality of the text. The simple statistical models which have sufficient convergence criteria for causal estimation are not well-equipped to handle noisy unstructured text, but flexible large language models that excel at predictive tasks with text data do not meet the statistical assumptions necessary for causal estimation. Our method enables theoretically consistent estimation of causal effects using LLM-based nuisance models by incorporating them within the framework of Double Machine Learning. On the best available dataset for evaluating such methods, we obtain a 10.4{\%} reduction in the relative absolute error for the estimated causal effect over existing methods.",
}
| Estimating causal effects from non-randomized data requires assumptions about the underlying data-generating process. To achieve unbiased estimates of the causal effect of a treatment on an outcome, we typically adjust for any confounding variables that influence both treatment and outcome. When such confounders include text data, existing causal inference methods struggle due to the high dimensionality of the text. The simple statistical models which have sufficient convergence criteria for causal estimation are not well-equipped to handle noisy unstructured text, but flexible large language models that excel at predictive tasks with text data do not meet the statistical assumptions necessary for causal estimation. Our method enables theoretically consistent estimation of causal effects using LLM-based nuisance models by incorporating them within the framework of Double Machine Learning. On the best available dataset for evaluating such methods, we obtain a 10.4{\%} reduction in the relative absolute error for the estimated causal effect over existing methods. | [
"Veljanovski, Marko",
"Wood-Doughty, Zach"
] | DoubleLingo: Causal Estimation with Large Language Models | naacl-short.71 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-short.72.bib | https://aclanthology.org/2024.naacl-short.72/ | @inproceedings{mitsios-etal-2024-improved,
title = "Improved Text Emotion Prediction Using Combined Valence and Arousal Ordinal Classification",
author = "Mitsios, Michail and
Vamvoukakis, Georgios and
Maniati, Georgia and
Ellinas, Nikolaos and
Dimitriou, Georgios and
Markopoulos, Konstantinos and
Kakoulidis, Panos and
Vioni, Alexandra and
Christidou, Myrsini and
Oh, Junkwang and
Jho, Gunu and
Hwang, Inchul and
Vardaxoglou, Georgios and
Chalamandaris, Aimilios and
Tsiakoulis, Pirros and
Raptis, Spyros",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.72",
doi = "10.18653/v1/2024.naacl-short.72",
pages = "808--813",
abstract = "Emotion detection in textual data has received growing interest in recent years, as it is pivotal for developing empathetic human-computer interaction systems.This paper introduces a method for categorizing emotions from text, which acknowledges and differentiates between the diversified similarities and distinctions of various emotions.Initially, we establish a baseline by training a transformer-based model for standard emotion classification, achieving state-of-the-art performance. We argue that not all misclassifications are of the same importance, as there are perceptual similarities among emotional classes.We thus redefine the emotion labeling problem by shifting it from a traditional classification model to an ordinal classification one, where discrete emotions are arranged in a sequential order according to their valence levels.Finally, we propose a method that performs ordinal classification in the two-dimensional emotion space, considering both valence and arousal scales.The results show that our approach not only preserves high accuracy in emotion prediction but also significantly reduces the magnitude of errors in cases of misclassification.",
}
| Emotion detection in textual data has received growing interest in recent years, as it is pivotal for developing empathetic human-computer interaction systems. This paper introduces a method for categorizing emotions from text, which acknowledges and differentiates between the diversified similarities and distinctions of various emotions. Initially, we establish a baseline by training a transformer-based model for standard emotion classification, achieving state-of-the-art performance. We argue that not all misclassifications are of the same importance, as there are perceptual similarities among emotional classes. We thus redefine the emotion labeling problem by shifting it from a traditional classification model to an ordinal classification one, where discrete emotions are arranged in a sequential order according to their valence levels. Finally, we propose a method that performs ordinal classification in the two-dimensional emotion space, considering both valence and arousal scales. The results show that our approach not only preserves high accuracy in emotion prediction but also significantly reduces the magnitude of errors in cases of misclassification. | [
"Mitsios, Michail",
"Vamvoukakis, Georgios",
"Maniati, Georgia",
"Ellinas, Nikolaos",
"Dimitriou, Georgios",
"Markopoulos, Konstantinos",
"Kakoulidis, Panos",
"Vioni, Alex",
"ra",
"Christidou, Myrsini",
"Oh, Junkwang",
"Jho, Gunu",
"Hwang, Inchul",
"Vardaxoglou, Georgios",
"Chalam",
"aris, Aimilios",
"Tsiakoulis, Pirros",
"Raptis, Spyros"
] | Improved Text Emotion Prediction Using Combined Valence and Arousal Ordinal Classification | naacl-short.72 | Poster | 2404.01805 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.73.bib | https://aclanthology.org/2024.naacl-short.73/ | @inproceedings{kalbaliyev-sirts-2024-narrative,
title = "On Narrative Question Answering Skills",
author = "Kalbaliyev, Emil and
Sirts, Kairit",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.73",
doi = "10.18653/v1/2024.naacl-short.73",
pages = "814--820",
abstract = "Narrative Question Answering is an important task for evaluating and improving reading comprehension abilities in both humans and machines. However, there is a lack of consensus on the skill taxonomy that would enable systematic and comprehensive assessment and learning of the various aspects of Narrative Question Answering. Existing task-level skill views oversimplify the multidimensional nature of tasks, while question-level taxonomies face issues in evaluation and methodology. To address these challenges, we introduce a more inclusive skill taxonomy that synthesizes and redefines narrative understanding skills from previous taxonomies and includes a generation skill dimension from the answering perspective.",
}
| Narrative Question Answering is an important task for evaluating and improving reading comprehension abilities in both humans and machines. However, there is a lack of consensus on the skill taxonomy that would enable systematic and comprehensive assessment and learning of the various aspects of Narrative Question Answering. Existing task-level skill views oversimplify the multidimensional nature of tasks, while question-level taxonomies face issues in evaluation and methodology. To address these challenges, we introduce a more inclusive skill taxonomy that synthesizes and redefines narrative understanding skills from previous taxonomies and includes a generation skill dimension from the answering perspective. | [
"Kalbaliyev, Emil",
"Sirts, Kairit"
] | On Narrative Question Answering Skills | naacl-short.73 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-short.74.bib | https://aclanthology.org/2024.naacl-short.74/ | @inproceedings{nandy-etal-2024-order,
title = "Order-Based Pre-training Strategies for Procedural Text Understanding",
author = "Nandy, Abhilash and
Kulkarni, Yash and
Goyal, Pawan and
Ganguly, Niloy",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.74",
doi = "10.18653/v1/2024.naacl-short.74",
pages = "821--828",
abstract = "In this paper, we propose sequence-based pre-training methods to enhance procedural understanding in natural language processing. Procedural text, containing sequential instructions to accomplish a task, is difficult to understand due to the changing attributes of entities in the context. We focus on recipes as they are commonly represented as ordered instructions, and use this order as a supervision signal. Our work is one of the first to compare several {`}order-as-supervision{'} transformer pre-training methods, including Permutation Classification, Embedding Regression, and Skip-Clip, and show that these methods give improved results compared to baselines and SoTA LLMs on two downstream Entity-Tracking datasets: NPN-Cooking dataset in recipe domain and ProPara dataset in open domain. Our proposed methods address the non-trivial Entity Tracking Task that requires prediction of entity states across procedure steps, which requires understanding the order of steps. These methods show an improvement over the best baseline by 1.6{\%} and 7-9{\%} on NPN-Cooking and ProPara Datasets respectively across metrics.",
}
| In this paper, we propose sequence-based pre-training methods to enhance procedural understanding in natural language processing. Procedural text, containing sequential instructions to accomplish a task, is difficult to understand due to the changing attributes of entities in the context. We focus on recipes as they are commonly represented as ordered instructions, and use this order as a supervision signal. Our work is one of the first to compare several {`}order-as-supervision{'} transformer pre-training methods, including Permutation Classification, Embedding Regression, and Skip-Clip, and show that these methods give improved results compared to baselines and SoTA LLMs on two downstream Entity-Tracking datasets: the NPN-Cooking dataset in the recipe domain and the ProPara dataset in the open domain. Our proposed methods address the non-trivial Entity Tracking Task that requires prediction of entity states across procedure steps, which requires understanding the order of steps. These methods show an improvement over the best baseline by 1.6{\%} and 7-9{\%} on the NPN-Cooking and ProPara datasets, respectively, across metrics. | [
"N",
"y, Abhilash",
"Kulkarni, Yash",
"Goyal, Pawan",
"Ganguly, Niloy"
] | Order-Based Pre-training Strategies for Procedural Text Understanding | naacl-short.74 | Poster | 2404.04676 | [
"https://github.com/abhi1nandy2/order_as_supervision"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-short.75.bib | https://aclanthology.org/2024.naacl-short.75/ | @inproceedings{intrator-etal-2024-breaking,
title = "Breaking the Language Barrier: Can Direct Inference Outperform Pre-Translation in Multilingual {LLM} Applications?",
author = "Intrator, Yotam and
Halfon, Matan and
Goldenberg, Roman and
Tsarfaty, Reut and
Eyal, Matan and
Rivlin, Ehud and
Matias, Yossi and
Aizenberg, Natalia",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-short.75",
doi = "10.18653/v1/2024.naacl-short.75",
pages = "829--844",
abstract = "Large language models hold significant promise in multilingual applications. However, inherent biases stemming from predominantly English-centric pre-training have led to the widespread practice of pre-translation, i.e., translating non-English inputs to English before inference, leading to complexity and information loss. This study re-evaluates the need for pre-translation in the context of PaLM2 models, which have been established as highly performant in multilingual tasks. We offer a comprehensive investigation across 108 languages and 6 diverse benchmarks, including open-end generative tasks, which were excluded from previous similar studies. Our findings challenge the pre-translation paradigm established in prior research, highlighting the advantages of direct inference in PaLM2. Specifically, PaLM2-L consistently outperforms pre-translation in 94 out of 108 languages. These findings pave the way for more efficient and effective multilingual applications, alleviating the limitations associated with pre-translation and unlocking linguistic authenticity.",
}
| Large language models hold significant promise in multilingual applications. However, inherent biases stemming from predominantly English-centric pre-training have led to the widespread practice of pre-translation, i.e., translating non-English inputs to English before inference, leading to complexity and information loss. This study re-evaluates the need for pre-translation in the context of PaLM2 models, which have been established as highly performant in multilingual tasks. We offer a comprehensive investigation across 108 languages and 6 diverse benchmarks, including open-ended generative tasks, which were excluded from previous similar studies. Our findings challenge the pre-translation paradigm established in prior research, highlighting the advantages of direct inference in PaLM2. Specifically, PaLM2-L consistently outperforms pre-translation in 94 out of 108 languages. These findings pave the way for more efficient and effective multilingual applications, alleviating the limitations associated with pre-translation and unlocking linguistic authenticity. | [
"Intrator, Yotam",
"Halfon, Matan",
"Goldenberg, Roman",
"Tsarfaty, Reut",
"Eyal, Matan",
"Rivlin, Ehud",
"Matias, Yossi",
"Aizenberg, Natalia"
] | Breaking the Language Barrier: Can Direct Inference Outperform Pre-Translation in Multilingual LLM Applications? | naacl-short.75 | Oral | 2403.04792 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-demo.1.bib | https://aclanthology.org/2024.naacl-demo.1/ | @inproceedings{giorgi-etal-2024-topical,
title = "{TOPICAL}: {TOPIC} Pages {A}utomagica{L}ly",
author = "Giorgi, John and
Singh, Amanpreet and
Downey, Doug and
Feldman, Sergey and
Wang, Lucy",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.1",
doi = "10.18653/v1/2024.naacl-demo.1",
pages = "1--11",
abstract = "Topic pages aggregate useful information about an entity or concept into a single succinct and accessible article. Automated creation of topic pages would enable their rapid curation as information resources, providing an alternative to traditional web search. While most prior work has focused on generating topic pages about biographical entities, in this work, we develop a completely automated process to generate high-quality topic pages for scientific entities, with a focus on biomedical concepts. We release TOPICAL, a web app and associated open-source code, comprising a model pipeline combining retrieval, clustering, and prompting, that makes it easy for anyone to generate topic pages for a wide variety of biomedical entities on demand. In a human evaluation of 150 diverse topic pages generated using TOPICAL, we find that the vast majority were considered relevant, accurate, and coherent, with correct supporting citations. We make all code publicly available and host a free-to-use web app at: https://s2-topical.apps.allenai.org.",
}
| Topic pages aggregate useful information about an entity or concept into a single succinct and accessible article. Automated creation of topic pages would enable their rapid curation as information resources, providing an alternative to traditional web search. While most prior work has focused on generating topic pages about biographical entities, in this work, we develop a completely automated process to generate high-quality topic pages for scientific entities, with a focus on biomedical concepts. We release TOPICAL, a web app and associated open-source code, comprising a model pipeline combining retrieval, clustering, and prompting, that makes it easy for anyone to generate topic pages for a wide variety of biomedical entities on demand. In a human evaluation of 150 diverse topic pages generated using TOPICAL, we find that the vast majority were considered relevant, accurate, and coherent, with correct supporting citations. We make all code publicly available and host a free-to-use web app at: https://s2-topical.apps.allenai.org. | [
"Giorgi, John",
"Singh, Amanpreet",
"Downey, Doug",
"Feldman, Sergey",
"Wang, Lucy"
] | TOPICAL: TOPIC Pages AutomagicaLly | naacl-demo.1 | Poster | 2405.01796 | [
"https://github.com/allenai/topical"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-demo.2.bib | https://aclanthology.org/2024.naacl-demo.2/ | @inproceedings{cai-etal-2024-low-code,
title = "Low-code {LLM}: Graphical User Interface over Large Language Models",
author = "Cai, Yuzhe and
Mao, Shaoguang and
Wu, Wenshan and
Wang, Zehua and
Liang, Yaobo and
Ge, Tao and
Wu, Chenfei and
WangYou, WangYou and
Song, Ting and
Xia, Yan and
Duan, Nan and
Wei, Furu",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.2",
doi = "10.18653/v1/2024.naacl-demo.2",
pages = "12--25",
abstract = "Utilizing Large Language Models (LLMs) for complex tasks is challenging, often involving a time-consuming and uncontrollable prompt engineering process. This paper introduces a novel human-LLM interaction framework, Low-code LLM. It incorporates six types of simple low-code visual programming interactions to achieve more controllable and stable responses. Through visual interaction with a graphical user interface, users can incorporate their ideas into the process without writing trivial prompts. The proposed Low-code LLM framework consists of a Planning LLM that designs a structured planning workflow for complex tasks, which can be correspondingly edited and confirmed by users through low-code visual programming operations, and an Executing LLM that generates responses following the user-confirmed workflow. We highlight three advantages of the low-code LLM: user-friendly interaction, controllable generation, and wide applicability. We demonstrate its benefits using four typical applications. By introducing this framework, we aim to bridge the gap between humans and LLMs, enabling more effective and efficient utilization of LLMs for complex tasks. The code, prompts, and experimental details are available at https://github.com/moymix/TaskMatrix/tree/main/LowCodeLLM. A system demonstration video can be found at https://www.youtube.com/watch?v=jb2C1vaeO3E.",
}
| Utilizing Large Language Models (LLMs) for complex tasks is challenging, often involving a time-consuming and uncontrollable prompt engineering process. This paper introduces a novel human-LLM interaction framework, Low-code LLM. It incorporates six types of simple low-code visual programming interactions to achieve more controllable and stable responses. Through visual interaction with a graphical user interface, users can incorporate their ideas into the process without writing trivial prompts. The proposed Low-code LLM framework consists of a Planning LLM that designs a structured planning workflow for complex tasks, which can be correspondingly edited and confirmed by users through low-code visual programming operations, and an Executing LLM that generates responses following the user-confirmed workflow. We highlight three advantages of the low-code LLM: user-friendly interaction, controllable generation, and wide applicability. We demonstrate its benefits using four typical applications. By introducing this framework, we aim to bridge the gap between humans and LLMs, enabling more effective and efficient utilization of LLMs for complex tasks. The code, prompts, and experimental details are available at https://github.com/moymix/TaskMatrix/tree/main/LowCodeLLM. A system demonstration video can be found at https://www.youtube.com/watch?v=jb2C1vaeO3E. | [
"Cai, Yuzhe",
"Mao, Shaoguang",
"Wu, Wenshan",
"Wang, Zehua",
"Liang, Yaobo",
"Ge, Tao",
"Wu, Chenfei",
"WangYou, WangYou",
"Song, Ting",
"Xia, Yan",
"Duan, Nan",
"Wei, Furu"
] | Low-code LLM: Graphical User Interface over Large Language Models | naacl-demo.2 | Poster | 2304.08103 | [
"https://github.com/microsoft/visual-chatgpt/tree/main/LowCodeLLM"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-demo.3.bib | https://aclanthology.org/2024.naacl-demo.3/ | @inproceedings{palomino-etal-2024-edtec,
title = "{E}d{T}ec-{QB}uilder: A Semantic Retrieval Tool for Assembling Vocational Training Exams in {G}erman Language",
author = "Palomino, Alonso and
Fischer, Andreas and
Kuzilek, Jakub and
Nitsch, Jarek and
Pinkwart, Niels and
Paassen, Benjamin",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.3",
doi = "10.18653/v1/2024.naacl-demo.3",
pages = "26--35",
abstract = "Selecting and assembling test items from a validated item database into comprehensive exam forms is an under-researched but significant challenge in education. Search and retrieval methods provide a robust framework to assist educators when filtering and assembling relevant test items. In this work, we present EdTec-QBuilder, a semantic search tool developed to assist vocational educators in assembling exam forms. To implement EdTec-QBuilder{'}s core search functionality, we evaluated eight retrieval strategies and twenty-five popular pre-trained sentence similarity models. Our evaluation revealed that employing cross-encoders to re-rank an initial list of relevant items is best for assisting vocational trainers in assembling examination forms. Beyond topic-based exam assembly, EdTec-QBuilder aims to provide a crowdsourcing infrastructure enabling manual exam assembly data collection, which is critical for future research and development in assisted and automatic exam assembly models.",
}
| Selecting and assembling test items from a validated item database into comprehensive exam forms is an under-researched but significant challenge in education. Search and retrieval methods provide a robust framework to assist educators when filtering and assembling relevant test items. In this work, we present EdTec-QBuilder, a semantic search tool developed to assist vocational educators in assembling exam forms. To implement EdTec-QBuilder{'}s core search functionality, we evaluated eight retrieval strategies and twenty-five popular pre-trained sentence similarity models. Our evaluation revealed that employing cross-encoders to re-rank an initial list of relevant items is best for assisting vocational trainers in assembling examination forms. Beyond topic-based exam assembly, EdTec-QBuilder aims to provide a crowdsourcing infrastructure enabling manual exam assembly data collection, which is critical for future research and development in assisted and automatic exam assembly models. | [
"Palomino, Alonso",
"Fischer, Andreas",
"Kuzilek, Jakub",
"Nitsch, Jarek",
"Pinkwart, Niels",
"Paassen, Benjamin"
] | EdTec-QBuilder: A Semantic Retrieval Tool for Assembling Vocational Training Exams in German Language | naacl-demo.3 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-demo.4.bib | https://aclanthology.org/2024.naacl-demo.4/ | @inproceedings{hu-etal-2024-dialight,
title = "{DIALIGHT}: Lightweight Multilingual Development and Evaluation of Task-Oriented Dialogue Systems with Large Language Models",
author = "Hu, Songbo and
Wang, Xiaobin and
Yuan, Moy and
Korhonen, Anna and
Vuli{\'c}, Ivan",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.4",
doi = "10.18653/v1/2024.naacl-demo.4",
pages = "36--52",
abstract = "We present DIALIGHT, a toolkit for developing and evaluating multilingual Task-Oriented Dialogue (ToD) systems which facilitates systematic evaluations and comparisons between ToD systems using fine-tuning of Pretrained Language Models (PLMs) and those utilising the zero-shot and in-context learning capabilities of Large Language Models (LLMs). In addition to automatic evaluation, this toolkit features (i) a secure, user-friendly web interface for fine-grained human evaluation at both local utterance level and global dialogue level, and (ii) a microservice-based backend, improving efficiency and scalability. Our evaluations reveal that while PLM fine-tuning leads to higher accuracy and coherence, LLM-based systems excel in producing diverse and likeable responses. However, we also identify significant challenges of LLMs in adherence to task-specific instructions and generating outputs in multiple languages, highlighting areas for future research. We hope this open-sourced toolkit will serve as a valuable resource for researchers aiming to develop and properly evaluate multilingual ToD systems and will lower, currently still high, entry barriers in the field.",
}
| We present DIALIGHT, a toolkit for developing and evaluating multilingual Task-Oriented Dialogue (ToD) systems, which facilitates systematic evaluations and comparisons between ToD systems using fine-tuning of Pretrained Language Models (PLMs) and those utilising the zero-shot and in-context learning capabilities of Large Language Models (LLMs). In addition to automatic evaluation, this toolkit features (i) a secure, user-friendly web interface for fine-grained human evaluation at both the local utterance level and the global dialogue level, and (ii) a microservice-based backend, improving efficiency and scalability. Our evaluations reveal that while PLM fine-tuning leads to higher accuracy and coherence, LLM-based systems excel in producing diverse and likeable responses. However, we also identify significant challenges of LLMs in adherence to task-specific instructions and generating outputs in multiple languages, highlighting areas for future research. We hope this open-sourced toolkit will serve as a valuable resource for researchers aiming to develop and properly evaluate multilingual ToD systems and will lower the currently still high entry barriers in the field. | [
"Hu, Songbo",
"Wang, Xiaobin",
"Yuan, Moy",
"Korhonen, Anna",
"Vuli{\\'c}, Ivan"
] | DIALIGHT: Lightweight Multilingual Development and Evaluation of Task-Oriented Dialogue Systems with Large Language Models | naacl-demo.4 | Poster | 2401.02208 | [
"https://github.com/cambridgeltl/e2e_tod_toolkit"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-demo.5.bib | https://aclanthology.org/2024.naacl-demo.5/ | @inproceedings{cho-etal-2024-rtsum,
title = "{RTSUM}: Relation Triple-based Interpretable Summarization with Multi-level Salience Visualization",
author = "Cho, Seonglae and
Jang, Myungha and
Yeo, Jinyoung and
Lee, Dongha",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.5",
doi = "10.18653/v1/2024.naacl-demo.5",
pages = "53--60",
abstract = "In this paper, we present RTSum, an unsupervised summarization framework that utilizes relation triples as the basic unit for summarization. Given an input document, RTSum first selects salient relation triples via multi-level salience scoring and then generates a concise summary from the selected relation triples by using a text-to-text language model. On the basis of RTSum, we also develop a web demo for an interpretable summarizing tool, providing fine-grained interpretations with the output summary. With support for customization options, our tool visualizes the salience for textual units at three distinct levels: sentences, relation triples, and phrases. The code, demo, and video are publicly available.",
}
| In this paper, we present RTSum, an unsupervised summarization framework that utilizes relation triples as the basic unit for summarization. Given an input document, RTSum first selects salient relation triples via multi-level salience scoring and then generates a concise summary from the selected relation triples by using a text-to-text language model. On the basis of RTSum, we also develop a web demo for an interpretable summarizing tool, providing fine-grained interpretations with the output summary. With support for customization options, our tool visualizes the salience for textual units at three distinct levels: sentences, relation triples, and phrases. The code, demo, and video are publicly available. | [
"Cho, Seonglae",
"Jang, Myungha",
"Yeo, Jinyoung",
"Lee, Dongha"
] | RTSUM: Relation Triple-based Interpretable Summarization with Multi-level Salience Visualization | naacl-demo.5 | Poster | 2310.13895 | [
"https://github.com/sjyyj/sjyyj"
] | https://huggingface.co/papers/2310.13895 | 1 | 0 | 0 | 6 | 1 | [
"seonglae/rtsum"
] | [] | [] |
https://aclanthology.org/2024.naacl-demo.6.bib | https://aclanthology.org/2024.naacl-demo.6/ | @inproceedings{wang-demszky-2024-edu,
title = "Edu-{C}onvo{K}it: An Open-Source Library for Education Conversation Data",
author = "Wang, Rose and
Demszky, Dorottya",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.6",
doi = "10.18653/v1/2024.naacl-demo.6",
pages = "61--69",
abstract = "We introduce Edu-ConvoKit, an open-source library designed to handle pre-processing, annotation and analysis of conversation data in education. Resources for analyzing education conversation data are scarce, making the research challenging to perform and therefore hard to access. We address these challenges with Edu-ConvoKit. Edu-ConvoKit is open-source [1], pip-installable [2], with comprehensive documentation [3]. Our demo video is available at: https://youtu.be/zdcI839vAko?si=h9qlnl76ucSuXb8-. We include additional resources, such as Colab applications of Edu-ConvoKit to three diverse education datasets [4] and a repository of Edu-ConvoKit-related papers [5].[1] https://github.com/stanfordnlp/edu-convokit[2] https://pypi.org/project/edu-convokit/[3] https://edu-convokit.readthedocs.io/en/latest/[4] https://github.com/stanfordnlp/edu-convokit?tab=readme-ov-file{\#}datasets-with-edu-convokit[5] https://github.com/stanfordnlp/edu-convokit/blob/main/papers.md",
}
| We introduce Edu-ConvoKit, an open-source library designed to handle pre-processing, annotation and analysis of conversation data in education. Resources for analyzing education conversation data are scarce, making the research challenging to perform and therefore hard to access. We address these challenges with Edu-ConvoKit. Edu-ConvoKit is open-source [1], pip-installable [2], with comprehensive documentation [3]. Our demo video is available at: https://youtu.be/zdcI839vAko?si=h9qlnl76ucSuXb8-. We include additional resources, such as Colab applications of Edu-ConvoKit to three diverse education datasets [4] and a repository of Edu-ConvoKit-related papers [5]. [1] https://github.com/stanfordnlp/edu-convokit [2] https://pypi.org/project/edu-convokit/ [3] https://edu-convokit.readthedocs.io/en/latest/ [4] https://github.com/stanfordnlp/edu-convokit?tab=readme-ov-file{\#}datasets-with-edu-convokit [5] https://github.com/stanfordnlp/edu-convokit/blob/main/papers.md | [
"Wang, Rose",
"Demszky, Dorottya"
] | Edu-ConvoKit: An Open-Source Library for Education Conversation Data | naacl-demo.6 | Poster | 2402.05111 | [
"https://github.com/stanfordnlp/edu-convokit"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-demo.7.bib | https://aclanthology.org/2024.naacl-demo.7/ | @inproceedings{park-etal-2024-jp,
title = "jp-evalb: Robust Alignment-based {PARSEVAL} Measures",
author = "Park, Jungyeul and
Wang, Junrui and
Jo, Eunkyul and
Park, Angela",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.7",
doi = "10.18653/v1/2024.naacl-demo.7",
pages = "70--77",
abstract = "We introduce an evaluation system designed to compute PARSEVAL measures, offering a viable alternative to evalb commonly used for constituency parsing evaluation. The widely used evalb script has traditionally been employed for evaluating the accuracy of constituency parsing results, albeit with the requirement for consistent tokenization and sentence boundaries. In contrast, our approach, named jp-evalb, is founded on an alignment method. This method aligns sentences and words when discrepancies arise. It aims to overcome several known issues associated with evalb by utilizing the {`}jointly preprocessed (JP){'} alignment-based method. We introduce a more flexible and adaptive framework, ultimately contributing to a more accurate assessment of constituency parsing performance.",
}
| We introduce an evaluation system designed to compute PARSEVAL measures, offering a viable alternative to evalb commonly used for constituency parsing evaluation. The widely used evalb script has traditionally been employed for evaluating the accuracy of constituency parsing results, albeit with the requirement for consistent tokenization and sentence boundaries. In contrast, our approach, named jp-evalb, is founded on an alignment method. This method aligns sentences and words when discrepancies arise. It aims to overcome several known issues associated with evalb by utilizing the {`}jointly preprocessed (JP){'} alignment-based method. We introduce a more flexible and adaptive framework, ultimately contributing to a more accurate assessment of constituency parsing performance. | [
"Park, Jungyeul",
"Wang, Junrui",
"Jo, Eunkyul",
"Park, Angela"
] | jp-evalb: Robust Alignment-based PARSEVAL Measures | naacl-demo.7 | Poster | 2405.14150 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-demo.8.bib | https://aclanthology.org/2024.naacl-demo.8/ | @inproceedings{haller-etal-2024-opiniongpt,
title = "{O}pinion{GPT}: Modelling Explicit Biases in Instruction-Tuned {LLM}s",
author = "Haller, Patrick and
Aynetdinov, Ansar and
Akbik, Alan",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.8",
doi = "10.18653/v1/2024.naacl-demo.8",
pages = "78--86",
abstract = "Instruction-tuned Large Language Models (LLMs) have recently showcased remarkable ability to generate fitting responses to natural language instructions. However, an open research question concerns the inherent biases of trained models and their responses. For instance, if the data used to tune an LLM is dominantly written by persons with a specific political bias, we might expect generated answers to share this bias. Current research work seeks to de-bias such models, or suppress potentially biased answers.With this demonstration, we take a different view on biases in instruction-tuning: Rather than aiming to suppress them, we aim to make them explicit and transparent. To this end, we present OpinionGPT, a web demo in which users can ask questions and select all biases they wish to investigate. The demo will answer this question using a model fine-tuned on text representing each of the selected biases, allowing side-by-side comparison. To train the underlying model, we identified 11 different biases (political, geographic, gender, age) and derived an instruction-tuning corpus in which each answer was written by members of one of these demographics. This paper presents OpinionGPT, illustrates how we trained the bias-aware model and showcases the web application (available at https://opiniongpt.informatik.hu-berlin.de).",
}
| Instruction-tuned Large Language Models (LLMs) have recently showcased a remarkable ability to generate fitting responses to natural language instructions. However, an open research question concerns the inherent biases of trained models and their responses. For instance, if the data used to tune an LLM is predominantly written by persons with a specific political bias, we might expect generated answers to share this bias. Current research work seeks to de-bias such models, or suppress potentially biased answers. With this demonstration, we take a different view on biases in instruction-tuning: Rather than aiming to suppress them, we aim to make them explicit and transparent. To this end, we present OpinionGPT, a web demo in which users can ask questions and select all biases they wish to investigate. The demo will answer this question using a model fine-tuned on text representing each of the selected biases, allowing side-by-side comparison. To train the underlying model, we identified 11 different biases (political, geographic, gender, age) and derived an instruction-tuning corpus in which each answer was written by members of one of these demographics. This paper presents OpinionGPT, illustrates how we trained the bias-aware model and showcases the web application (available at https://opiniongpt.informatik.hu-berlin.de). | [
"Haller, Patrick",
"Aynetdinov, Ansar",
"Akbik, Alan"
] | OpinionGPT: Modelling Explicit Biases in Instruction-Tuned LLMs | naacl-demo.8 | Poster | 2309.03876 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-demo.9.bib | https://aclanthology.org/2024.naacl-demo.9/ | @inproceedings{siu-etal-2024-atlas,
title = "{ATLAS}: A System for {PDF}-centric Human Interaction Data Collection",
author = "Siu, Alexa and
Wang, Zichao and
Hoeflich, Joshua and
Kapasi, Naman and
Nenkova, Ani and
Sun, Tong",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.9",
doi = "10.18653/v1/2024.naacl-demo.9",
pages = "87--96",
abstract = "The Portable Document Format (PDF) is a popular format for distributing digital documents. Datasets on PDF reading behaviors and interactions remain limited due to the challenges of instrumenting PDF readers for these data collection tasks. We present ATLAS, a data collection tool designed to better support researchers in collecting rich PDF-centric datasets from users. ATLAS supports researchers in programmatically creating a user interface for data collection that is ready to share with annotators. It includes a toolkit and an extensible schema to easily customize the data collection tasks for a variety of purposes, allowing collection of PDF annotations (e.g., highlights, drawings) as well as reading behavior analytics (e.g., page scroll, text selections). We open-source ATLAS1 to support future research efforts and review use cases of ATLAS that showcase our system{'}s broad applicability.",
}
| The Portable Document Format (PDF) is a popular format for distributing digital documents. Datasets on PDF reading behaviors and interactions remain limited due to the challenges of instrumenting PDF readers for these data collection tasks. We present ATLAS, a data collection tool designed to better support researchers in collecting rich PDF-centric datasets from users. ATLAS supports researchers in programmatically creating a user interface for data collection that is ready to share with annotators. It includes a toolkit and an extensible schema to easily customize the data collection tasks for a variety of purposes, allowing collection of PDF annotations (e.g., highlights, drawings) as well as reading behavior analytics (e.g., page scroll, text selections). We open-source ATLAS to support future research efforts and review use cases of ATLAS that showcase our system{'}s broad applicability. | [
"Siu, Alexa",
"Wang, Zichao",
"Hoeflich, Joshua",
"Kapasi, Naman",
"Nenkova, Ani",
"Sun, Tong"
] | ATLAS: A System for PDF-centric Human Interaction Data Collection | naacl-demo.9 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-demo.10.bib | https://aclanthology.org/2024.naacl-demo.10/ | @inproceedings{murzaku-rambow-2024-beleaf,
title = "{B}e{L}eaf: Belief Prediction as Tree Generation",
author = "Murzaku, John and
Rambow, Owen",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.10",
doi = "10.18653/v1/2024.naacl-demo.10",
pages = "97--106",
abstract = "We present a novel approach to predicting source-and-target factuality by transforming it into a linearized tree generation task. Unlike previous work, our model and representation format fully account for the factuality tree structure, generating the full chain of nested sources instead of the last source only. Furthermore, our linearized tree representation significantly compresses the amount of tokens needed compared to other representations, allowing for fully end-to-end systems. We achieve state-of-the-art results on FactBank and the Modal Dependency Corpus, which are both corpora annotating source-and-target event factuality. Our results on fine-tuning validate the strong generality of the proposed linearized tree generation task, which can be easily adapted to other corpora with a similar structure. We then present BeLeaf, a system which directly leverages the linearized tree representation to create both sentence level and document level visualizations. Our system adds several missing pieces to the source-and-target factuality task such as coreference resolution and event head word to syntactic span conversion. Our demo code is available on https://github.com/yurpl/beleaf and our video is available on https://youtu.be/SpbMNnin-Po.",
}
| We present a novel approach to predicting source-and-target factuality by transforming it into a linearized tree generation task. Unlike previous work, our model and representation format fully account for the factuality tree structure, generating the full chain of nested sources instead of the last source only. Furthermore, our linearized tree representation significantly compresses the number of tokens needed compared to other representations, allowing for fully end-to-end systems. We achieve state-of-the-art results on FactBank and the Modal Dependency Corpus, which are both corpora annotating source-and-target event factuality. Our results on fine-tuning validate the strong generality of the proposed linearized tree generation task, which can be easily adapted to other corpora with a similar structure. We then present BeLeaf, a system which directly leverages the linearized tree representation to create both sentence-level and document-level visualizations. Our system adds several missing pieces to the source-and-target factuality task, such as coreference resolution and event head word to syntactic span conversion. Our demo code is available on https://github.com/yurpl/beleaf and our video is available on https://youtu.be/SpbMNnin-Po. | [
"Murzaku, John",
"Rambow, Owen"
] | BeLeaf: Belief Prediction as Tree Generation | naacl-demo.10 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-demo.11.bib | https://aclanthology.org/2024.naacl-demo.11/ | @inproceedings{dhole-etal-2024-queryexplorer,
title = "{Q}uery{E}xplorer: An Interactive Query Generation Assistant for Search and Exploration",
author = "Dhole, Kaustubh and
Bajaj, Shivam and
Chandradevan, Ramraj and
Agichtein, Eugene",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.11",
doi = "10.18653/v1/2024.naacl-demo.11",
pages = "107--115",
abstract = "Formulating effective search queries remains a challenging task, particularly when users lack expertise in a specific domain or are not proficient in the language of the content. Providing example documents of interest might be easier for a user. However, such query-by-example scenarios are prone to concept drift, and the retrieval effectiveness is highly sensitive to the query generation method, without a clear way to incorporate user feedback. To enable exploration and to support Human-In-The-Loop experiments we propose QueryExplorer{--} an interactive query generation, reformulation, and retrieval interface with support for Hug-gingFace generation models and PyTerrier{'}sretrieval pipelines and datasets, and extensivelogging of human feedback. To allow users to create and modify effective queries, our demo supports complementary approaches of using LLMs interactively, assisting the user with edits and feedback at multiple stages of the query formulation process. With support for recording fine-grained interactions and user annotations, QueryExplorer can serve as a valuable experimental and research platform for annotation, qualitative evaluation, and conducting Human-in-the-Loop (HITL) experiments for complex search tasks where users struggle to formulate queries.",
}
| Formulating effective search queries remains a challenging task, particularly when users lack expertise in a specific domain or are not proficient in the language of the content. Providing example documents of interest might be easier for a user. However, such query-by-example scenarios are prone to concept drift, and the retrieval effectiveness is highly sensitive to the query generation method, without a clear way to incorporate user feedback. To enable exploration and to support Human-In-The-Loop experiments, we propose QueryExplorer, an interactive query generation, reformulation, and retrieval interface with support for HuggingFace generation models and PyTerrier{'}s retrieval pipelines and datasets, and extensive logging of human feedback. To allow users to create and modify effective queries, our demo supports complementary approaches of using LLMs interactively, assisting the user with edits and feedback at multiple stages of the query formulation process. With support for recording fine-grained interactions and user annotations, QueryExplorer can serve as a valuable experimental and research platform for annotation, qualitative evaluation, and conducting Human-in-the-Loop (HITL) experiments for complex search tasks where users struggle to formulate queries. | [
"Dhole, Kaustubh",
"Bajaj, Shivam",
"Ch",
"radevan, Ramraj",
"Agichtein, Eugene"
] | QueryExplorer: An Interactive Query Generation Assistant for Search and Exploration | naacl-demo.11 | Poster | 2403.15667 | [
"https://github.com/emory-irlab/query-explorer"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-demo.12.bib | https://aclanthology.org/2024.naacl-demo.12/ | @inproceedings{diao-etal-2024-lmflow,
title = "{LMF}low: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models",
author = "Diao, Shizhe and
Pan, Rui and
Dong, Hanze and
Shum, KaShun and
Zhang, Jipeng and
Xiong, Wei and
Zhang, Tong",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.12",
doi = "10.18653/v1/2024.naacl-demo.12",
pages = "116--127",
abstract = "Foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more foundation models have become publicly available.However, most of those models exhibit a major deficiency in specialized-domain and specialized-task applications, where the step of domain- and task-aware finetuning is still required to obtain scientific language models. As the number of available foundation models and specialized tasks keeps growing, the job of training scientific language models becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the domain- and task-aware finetuning of general foundation models.LMFlow offers a complete finetuning workflow for a foundation model to support specialized training with limited computing resources.Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, inference acceleration, long context generalization, model customization, and even multimodal finetuning, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at \url{https://github.com/OptimalScale/LMFlow}.",
}
| Foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-domain and specialized-task applications, where the step of domain- and task-aware finetuning is still required to obtain scientific language models. As the number of available foundation models and specialized tasks keeps growing, the job of training scientific language models becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the domain- and task-aware finetuning of general foundation models. LMFlow offers a complete finetuning workflow for a foundation model to support specialized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, inference acceleration, long context generalization, model customization, and even multimodal finetuning, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at \url{https://github.com/OptimalScale/LMFlow}. | [
"Diao, Shizhe",
"Pan, Rui",
"Dong, Hanze",
"Shum, KaShun",
"Zhang, Jipeng",
"Xiong, Wei",
"Zhang, Tong"
] | LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models | naacl-demo.12 | Poster | 2306.12420 | [
"https://github.com/optimalscale/lmflow"
] | https://huggingface.co/papers/2306.12420 | 4 | 2 | 0 | 7 | 1 | [
"OptimalScale/robin-65b-v2-delta",
"OptimalScale/robin-7b-v2-delta",
"OptimalScale/robin-33b-v2-delta",
"OptimalScale/robin-13b-v2-delta"
] | [] | [
"open-llm-leaderboard/open_llm_leaderboard",
"Intel/low_bit_open_llm_leaderboard",
"BAAI/open_cn_llm_leaderboard",
"ZhangYuhan/3DGen-Arena",
"gsaivinay/open_llm_leaderboard",
"GTBench/GTBench",
"felixz/open_llm_leaderboard",
"OPTML-Group/UnlearnCanvas-Benchmark",
"Vikhrmodels/small-shlepa-lb",
"li-qing/FIRE",
"neubla/neubla-llm-evaluation-board",
"rodrigomasini/data_only_open_llm_leaderboard",
"Docfile/open_llm_leaderboard",
"tianleliphoebe/visual-arena",
"Ashmal/MobiLlama",
"smothiki/open_llm_leaderboard",
"0x1668/open_llm_leaderboard",
"pngwn/open_llm_leaderboard-check",
"asir0z/open_llm_leaderboard",
"kbmlcoding/open_llm_leaderboard_free",
"aichampions/open_llm_leaderboard",
"Adeco/open_llm_leaderboard",
"anirudh937/open_llm_leaderboard",
"smothiki/open_llm_leaderboard2",
"alexshengzhili/calahealthgpt",
"Bofeee5675/FIRE",
"evelyn-lo/evelyn",
"yuantao-infini-ai/demo_test",
"zjasper666/bf16_vs_fp8"
] |
https://aclanthology.org/2024.naacl-demo.13.bib | https://aclanthology.org/2024.naacl-demo.13/ | @inproceedings{nguyen-etal-2024-docmaster,
title = "{DOCMASTER}: A Unified Platform for Annotation, Training, {\&} Inference in Document Question-Answering",
author = "Nguyen, Alex and
Wang, Zilong and
Shang, Jingbo and
Mekala, Dheeraj",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.13",
doi = "10.18653/v1/2024.naacl-demo.13",
pages = "128--136",
abstract = "The application of natural language processing models to PDF documents is pivotal for various business applications yet the challenge of training models for this purpose persists in businesses due to specific hurdles. These include the complexity of working with PDF formats that necessitate parsing text and layout information for curating training data and the lack of privacy-preserving annotation tools. This paper introduces DOCMASTER, a unified platform designed for annotating PDF documents, model training, and inference, tailored to document question-answering. The annotation interface enables users to input questions and highlight text spans within the PDF file as answers, saving layout information and text spans accordingly. Furthermore, DOCMASTER supports both state-of-the-art layout-aware and text models for comprehensive training purposes. Importantly, as annotations, training, and inference occur on-device, it also safeguards privacy. The platform has been instrumental in driving several research prototypes concerning document analysis such as the AI assistant utilized by University of California San Diego{'}s (UCSD) International Services and Engagement Office (ISEO) for processing a substantial volume of PDF documents.",
}
| The application of natural language processing models to PDF documents is pivotal for various business applications, yet the challenge of training models for this purpose persists in businesses due to specific hurdles. These include the complexity of working with PDF formats that necessitate parsing text and layout information for curating training data, and the lack of privacy-preserving annotation tools. This paper introduces DOCMASTER, a unified platform designed for annotating PDF documents, model training, and inference, tailored to document question-answering. The annotation interface enables users to input questions and highlight text spans within the PDF file as answers, saving layout information and text spans accordingly. Furthermore, DOCMASTER supports both state-of-the-art layout-aware and text models for comprehensive training purposes. Importantly, as annotations, training, and inference occur on-device, it also safeguards privacy. The platform has been instrumental in driving several research prototypes concerning document analysis, such as the AI assistant utilized by the University of California San Diego{'}s (UCSD) International Services and Engagement Office (ISEO) for processing a substantial volume of PDF documents. | [
"Nguyen, Alex",
"Wang, Zilong",
"Shang, Jingbo",
"Mekala, Dheeraj"
] | DOCMASTER: A Unified Platform for Annotation, Training, & Inference in Document Question-Answering | naacl-demo.13 | Poster | 2404.00439 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-demo.14.bib | https://aclanthology.org/2024.naacl-demo.14/ | @inproceedings{tan-etal-2024-redcoast,
title = "{R}ed{C}oast: A Lightweight Tool to Automate Distributed Training of {LLM}s on Any {GPU}/{TPU}s",
author = "Tan, Bowen and
Zhu, Yun and
Liu, Lijuan and
Wang, Hongyi and
Zhuang, Yonghao and
Chen, Jindong and
Xing, Eric and
Hu, Zhiting",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.14",
doi = "10.18653/v1/2024.naacl-demo.14",
pages = "137--147",
abstract = "The recent progress of AI can be largely attributed to large language models (LLMs). However, their escalating memory requirements introduce challenges for machine learning (ML) researchers and engineers. Addressing this requires developers to partition a large model to distribute it across multiple GPUs or TPUs. This necessitates considerable coding and intricate configuration efforts with existing model parallel tools, such as Megatron-LM, DeepSpeed, and Alpa. These tools require users{'} expertise in machine learning systems (MLSys), creating a bottleneck in LLM development, particularly for developers without MLSys background. In this work, we present RedCoast (Redco), a lightweight and user-friendly tool crafted to automate distributed training and inference for LLMs, as well as to simplify ML pipeline development. The design of Redco emphasizes two key aspects. Firstly, to automate model parallelism, our study identifies two straightforward rules to generate tensor parallel strategies for any given LLM. Integrating these rules into Redco facilitates effortless distributed LLM training and inference, eliminating the need of additional coding or complex configurations. We demonstrate the effectiveness by applying Redco on a set of LLM architectures, such as GPT-J, LLaMA, T5, and OPT, up to the size of 66B. Secondly, we propose a mechanism that allows for the customization of diverse ML pipelines through the definition of merely three functions, avoiding redundant and formulaic code like multi-host related processing. This mechanism proves adaptable across a spectrum of ML algorithms, from foundational language modeling to complex algorithms like meta-learning and reinforcement learning. As a result, Redco implementations exhibit significantly fewer lines of code compared to their official counterparts. RedCoast (Redco) has been released under Apache 2.0 license at https://github.com/tanyuqian/redco.",
}
| The recent progress of AI can be largely attributed to large language models (LLMs). However, their escalating memory requirements introduce challenges for machine learning (ML) researchers and engineers. Addressing this requires developers to partition a large model to distribute it across multiple GPUs or TPUs. This necessitates considerable coding and intricate configuration efforts with existing model parallel tools, such as Megatron-LM, DeepSpeed, and Alpa. These tools require users{'} expertise in machine learning systems (MLSys), creating a bottleneck in LLM development, particularly for developers without an MLSys background. In this work, we present RedCoast (Redco), a lightweight and user-friendly tool crafted to automate distributed training and inference for LLMs, as well as to simplify ML pipeline development. The design of Redco emphasizes two key aspects. Firstly, to automate model parallelism, our study identifies two straightforward rules to generate tensor parallel strategies for any given LLM. Integrating these rules into Redco facilitates effortless distributed LLM training and inference, eliminating the need for additional coding or complex configurations. We demonstrate the effectiveness by applying Redco to a set of LLM architectures, such as GPT-J, LLaMA, T5, and OPT, up to the size of 66B. Secondly, we propose a mechanism that allows for the customization of diverse ML pipelines through the definition of merely three functions, avoiding redundant and formulaic code like multi-host related processing. This mechanism proves adaptable across a spectrum of ML algorithms, from foundational language modeling to complex algorithms like meta-learning and reinforcement learning. As a result, Redco implementations exhibit significantly fewer lines of code compared to their official counterparts. RedCoast (Redco) has been released under the Apache 2.0 license at https://github.com/tanyuqian/redco. | [
"Tan, Bowen",
"Zhu, Yun",
"Liu, Lijuan",
"Wang, Hongyi",
"Zhuang, Yonghao",
"Chen, Jindong",
"Xing, Eric",
"Hu, Zhiting"
] | RedCoast: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs | naacl-demo.14 | Poster | 2310.16355 | [
"https://github.com/tanyuqian/redco"
] | https://huggingface.co/papers/2310.16355 | 2 | 0 | 0 | 8 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-demo.15.bib | https://aclanthology.org/2024.naacl-demo.15/ | @inproceedings{fischer-etal-2024-concept,
title = "Concept Over Time Analysis: Unveiling Temporal Patterns for Qualitative Data Analysis",
author = "Fischer, Tim and
Schneider, Florian and
Geislinger, Robert and
Helfer, Florian and
Koch, Gertraud and
Biemann, Chris",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.15",
doi = "10.18653/v1/2024.naacl-demo.15",
pages = "148--157",
abstract = "In this system demonstration paper, we present the Concept Over Time Analysis extension for the Discourse Analysis Tool Suite.The proposed tool empowers users to define, refine, and visualize their concepts of interest within an interactive interface. Adhering to the Human-in-the-loop paradigm, users can give feedback through sentence annotations. Utilizing few-shot sentence classification, the system employs Sentence Transformers to compute representations of sentences and concepts. Through an iterative process involving semantic similarity searches, sentence annotation, and fine-tuning with contrastive data, the model continuously refines, providing users with enhanced analysis outcomes. The final output is a timeline visualization of sentences classified to concepts. Especially suited for the Digital Humanities, Concept Over Time Analysis serves as a valuable tool for qualitative data analysis within extensive datasets. The chronological overview of concepts enables researchers to uncover patterns, trends, and shifts in discourse over time.",
}
| In this system demonstration paper, we present the Concept Over Time Analysis extension for the Discourse Analysis Tool Suite. The proposed tool empowers users to define, refine, and visualize their concepts of interest within an interactive interface. Adhering to the Human-in-the-loop paradigm, users can give feedback through sentence annotations. Utilizing few-shot sentence classification, the system employs Sentence Transformers to compute representations of sentences and concepts. Through an iterative process involving semantic similarity searches, sentence annotation, and fine-tuning with contrastive data, the model is continuously refined, providing users with enhanced analysis outcomes. The final output is a timeline visualization of sentences classified into concepts. Especially suited for the Digital Humanities, Concept Over Time Analysis serves as a valuable tool for qualitative data analysis within extensive datasets. The chronological overview of concepts enables researchers to uncover patterns, trends, and shifts in discourse over time. | [
"Fischer, Tim",
"Schneider, Florian",
"Geislinger, Robert",
"Helfer, Florian",
"Koch, Gertraud",
"Biemann, Chris"
] | Concept Over Time Analysis: Unveiling Temporal Patterns for Qualitative Data Analysis | naacl-demo.15 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-demo.16.bib | https://aclanthology.org/2024.naacl-demo.16/ | @inproceedings{wu-etal-2024-pyvene,
title = "pyvene: A Library for Understanding and Improving {P}y{T}orch Models via Interventions",
author = "Wu, Zhengxuan and
Geiger, Atticus and
Arora, Aryaman and
Huang, Jing and
Wang, Zheng and
Goodman, Noah and
Manning, Christopher and
Potts, Christopher",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.16",
doi = "10.18653/v1/2024.naacl-demo.16",
pages = "158--165",
abstract = "Interventions on model-internal states are fundamental operations in many areas of AI, including model editing, steering, robustness, and interpretability. To facilitate such research, we introduce $pyvene$, an open-source Python library that supports customizable interventions on a range of different PyTorch modules. $pyvene$ supports complex intervention schemes with an intuitive configuration format, and its interventions can be static or include trainable parameters. We show how $pyvene$ provides a unified and extensible framework for performing interventions on neural models and sharing the intervened upon models with others. We illustrate the power of the library via interpretability analyses using causal abstraction and knowledge localization. We publish our library through Python Package Index (PyPI) and provide code, documentation, and tutorials at {`}https://github.com/stanfordnlp/pyvene{`}.",
}
| Interventions on model-internal states are fundamental operations in many areas of AI, including model editing, steering, robustness, and interpretability. To facilitate such research, we introduce $pyvene$, an open-source Python library that supports customizable interventions on a range of different PyTorch modules. $pyvene$ supports complex intervention schemes with an intuitive configuration format, and its interventions can be static or include trainable parameters. We show how $pyvene$ provides a unified and extensible framework for performing interventions on neural models and sharing the intervened upon models with others. We illustrate the power of the library via interpretability analyses using causal abstraction and knowledge localization. We publish our library through Python Package Index (PyPI) and provide code, documentation, and tutorials at {`}https://github.com/stanfordnlp/pyvene{`}. | [
"Wu, Zhengxuan",
"Geiger, Atticus",
"Arora, Aryaman",
"Huang, Jing",
"Wang, Zheng",
"Goodman, Noah",
"Manning, Christopher",
"Potts, Christopher"
] | pyvene: A Library for Understanding and Improving PyTorch Models via Interventions | naacl-demo.16 | Poster | 2403.07809 | [
"https://github.com/stanfordnlp/pyvene"
] | https://huggingface.co/papers/2403.07809 | 3 | 1 | 0 | 8 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-demo.17.bib | https://aclanthology.org/2024.naacl-demo.17/ | @inproceedings{saxena-etal-2024-newspaper,
title = "Newspaper Signaling for Crisis Prediction",
author = "Saxena, Prajvi and
Janzen, Sabine and
Maass, Wolfgang",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.17",
doi = "10.18653/v1/2024.naacl-demo.17",
pages = "166--173",
abstract = "To establish sophisticated monitoring of newspaper articles for detecting crisis-related signals, natural language processing has to cope with unstructured data, media, and cultural bias as well as multiple languages. So far, research on detecting signals in newspaper articles is focusing on structured data, restricted language settings, and isolated application domains. When considering complex crisis-related signals, a high number of diverse newspaper articles in terms of language and culture reduces potential biases. We demonstrate MENDEL {--} a model for multi-lingual and open-domain newspaper signaling for detecting crisis-related indicators in newspaper articles. The model works with unstructured news data and combines multiple transformer-based models for pre-processing (STANZA) and content filtering (RoBERTa, GPT-3.5). Embedded in a Question-Answering (QA) setting, MENDEL supports multiple languages ({\textgreater}66) and can detect early newspaper signals for open crisis domains in real-time.",
}
| To establish sophisticated monitoring of newspaper articles for detecting crisis-related signals, natural language processing has to cope with unstructured data, media and cultural bias, as well as multiple languages. So far, research on detecting signals in newspaper articles has focused on structured data, restricted language settings, and isolated application domains. When considering complex crisis-related signals, a high number of diverse newspaper articles in terms of language and culture reduces potential biases. We demonstrate MENDEL {--} a model for multi-lingual and open-domain newspaper signaling for detecting crisis-related indicators in newspaper articles. The model works with unstructured news data and combines multiple transformer-based models for pre-processing (STANZA) and content filtering (RoBERTa, GPT-3.5). Embedded in a Question-Answering (QA) setting, MENDEL supports multiple languages ({\textgreater}66) and can detect early newspaper signals for open crisis domains in real-time. | [
"Saxena, Prajvi",
"Janzen, Sabine",
"Maass, Wolfgang"
] | Newspaper Signaling for Crisis Prediction | naacl-demo.17 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-demo.18.bib | https://aclanthology.org/2024.naacl-demo.18/ | @inproceedings{yehudai-bandel-2024-fastfit,
title = "{F}ast{F}it: Fast and Effective Few-Shot Text Classification with a Multitude of Classes",
author = "Yehudai, Asaf and
Bandel, Elron",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.18",
doi = "10.18653/v1/2024.naacl-demo.18",
pages = "174--184",
abstract = "We present FastFit, a Python package designed to provide fast and accurate few-shot classification, especially for scenarios with many semantically similar classes. FastFit utilizes a novel approach integrating batch contrastive learning and token-level similarity score. Compared to existing few-shot learning packages, such as SetFit, Transformers, or few-shot prompting of large language models via API calls, FastFit significantly improves multi-class classification performance in speed and accuracy across various English and Multilingual datasets. FastFit demonstrates a 3-20x improvement in training speed, completing training in just a few seconds. The FastFit package is now available on GitHub, presenting a user-friendly solution for NLP practitioners.",
}
| We present FastFit, a Python package designed to provide fast and accurate few-shot classification, especially for scenarios with many semantically similar classes. FastFit utilizes a novel approach integrating batch contrastive learning and token-level similarity score. Compared to existing few-shot learning packages, such as SetFit, Transformers, or few-shot prompting of large language models via API calls, FastFit significantly improves multi-class classification performance in speed and accuracy across various English and Multilingual datasets. FastFit demonstrates a 3-20x improvement in training speed, completing training in just a few seconds. The FastFit package is now available on GitHub, presenting a user-friendly solution for NLP practitioners. | [
"Yehudai, Asaf",
"B",
"el, Elron"
] | FastFit: Fast and Effective Few-Shot Text Classification with a Multitude of Classes | naacl-demo.18 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-demo.19.bib | https://aclanthology.org/2024.naacl-demo.19/ | @inproceedings{gioacchini-etal-2024-agentquest,
title = "{A}gent{Q}uest: A Modular Benchmark Framework to Measure Progress and Improve {LLM} Agents",
author = "Gioacchini, Luca and
Siracusano, Giuseppe and
Sanvito, Davide and
Gashteovski, Kiril and
Friede, David and
Bifulco, Roberto and
Lawrence, Carolin",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.19",
doi = "10.18653/v1/2024.naacl-demo.19",
pages = "185--193",
abstract = "The advances made by Large Language Models (LLMs) have led to the pursuit of LLM agents that can solve intricate, multi-step reasoning tasks. As with any research pursuit, benchmarking and evaluation are key corner stones to efficient and reliable progress. However, existing benchmarks are often narrow and simply compute overall task success. To face these issues, we propose AgentQuest {--} a framework where (i) both benchmarks and metrics are modular and easily extensible through well documented and easy-to-use APIs; (ii) we offer two new evaluation metrics that can reliably track LLM agent progress while solving a task. We exemplify the utility of the metrics on two use cases wherein we identify common failure points and refine the agent architecture to obtain a significant performance increase. Together with the research community, we hope to extend AgentQuest further and therefore we make it available under https://github.com/nec-research/agentquest.",
}
| The advances made by Large Language Models (LLMs) have led to the pursuit of LLM agents that can solve intricate, multi-step reasoning tasks. As with any research pursuit, benchmarking and evaluation are key cornerstones of efficient and reliable progress. However, existing benchmarks are often narrow and simply compute overall task success. To address these issues, we propose AgentQuest {--} a framework where (i) both benchmarks and metrics are modular and easily extensible through well-documented and easy-to-use APIs; (ii) we offer two new evaluation metrics that can reliably track LLM agent progress while solving a task. We exemplify the utility of the metrics on two use cases wherein we identify common failure points and refine the agent architecture to obtain a significant performance increase. Together with the research community, we hope to extend AgentQuest further, and therefore we make it available under https://github.com/nec-research/agentquest. | [
"Gioacchini, Luca",
"Siracusano, Giuseppe",
"Sanvito, Davide",
"Gashteovski, Kiril",
"Friede, David",
"Bifulco, Roberto",
"Lawrence, Carolin"
] | AgentQuest: A Modular Benchmark Framework to Measure Progress and Improve LLM Agents | naacl-demo.19 | Poster | 2404.06411 | [
"https://github.com/nec-research/agentquest"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-demo.20.bib | https://aclanthology.org/2024.naacl-demo.20/ | @inproceedings{du-etal-2024-zhujiu,
title = "{Z}hu{J}iu-Knowledge: A Fairer Platform for Evaluating Multiple Knowledge Types in Large Language Models",
author = "Du, Pengfan and
Liang, Sirui and
Zhang, Baoli and
Cao, Pengfei and
Chen, Yubo and
Liu, Kang and
Zhao, Jun",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.20",
doi = "10.18653/v1/2024.naacl-demo.20",
pages = "194--206",
abstract = "The swift advancement in large language models (LLMs) has heightened the importance of model evaluations. LLMs have acquired a substantial amount of knowledge, and evaluating the knowledge of these LLMs is crucial. To address this, we introduce the ZhuJiu-Knowledge benchmark which carefully considers the following factors: (1) For knowledge scope, we concentrate on three domains: commonsense knowledge, world knowledge, language knowledge, which comes from ATOMIC, Conceptnet, Wikidata, and Wordnet. (2) For data construction, to prevent data contamination, we utilize knowledge derived from corpora and knowledge graphs to formulate novel questions which are ensured not to appear in the training corpus. A multitude of prompts is purposefully devised to mitigate the impact of prompt design on evaluation and to further analyze the LLMs{'} sensitivity to various prompts. (3) For evaluation criteria, we propose a novel voting methodology for assessing generative text, aligning the model{'}s evaluation with human preferences to reduce biases inherent in individual model assessments. We evaluate 14 current mainstream LLMs and conduct a comprehensive discussion and analysis of their results. The ZhuJiu-Knowledge benchmark and open-participation leaderboard are publicly released at http://zhujiu-knowledge.top and we also provide a demo video at https://youtu.be/QJp4qlEHVH8.",
}
| The swift advancement in large language models (LLMs) has heightened the importance of model evaluations. LLMs have acquired a substantial amount of knowledge, and evaluating the knowledge of these LLMs is crucial. To address this, we introduce the ZhuJiu-Knowledge benchmark, which carefully considers the following factors: (1) For knowledge scope, we concentrate on three domains: commonsense knowledge, world knowledge, and language knowledge, which come from ATOMIC, Conceptnet, Wikidata, and Wordnet. (2) For data construction, to prevent data contamination, we utilize knowledge derived from corpora and knowledge graphs to formulate novel questions which are ensured not to appear in the training corpus. A multitude of prompts is purposefully devised to mitigate the impact of prompt design on evaluation and to further analyze the LLMs{'} sensitivity to various prompts. (3) For evaluation criteria, we propose a novel voting methodology for assessing generative text, aligning the model{'}s evaluation with human preferences to reduce biases inherent in individual model assessments. We evaluate 14 current mainstream LLMs and conduct a comprehensive discussion and analysis of their results. The ZhuJiu-Knowledge benchmark and open-participation leaderboard are publicly released at http://zhujiu-knowledge.top and we also provide a demo video at https://youtu.be/QJp4qlEHVH8. | [
"Du, Pengfan",
"Liang, Sirui",
"Zhang, Baoli",
"Cao, Pengfei",
"Chen, Yubo",
"Liu, Kang",
"Zhao, Jun"
] | ZhuJiu-Knowledge: A Fairer Platform for Evaluating Multiple Knowledge Types in Large Language Models | naacl-demo.20 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-demo.21.bib | https://aclanthology.org/2024.naacl-demo.21/ | @inproceedings{bandel-etal-2024-unitxt,
title = "Unitxt: Flexible, Shareable and Reusable Data Preparation and Evaluation for Generative {AI}",
author = "Bandel, Elron and
Perlitz, Yotam and
Venezian, Elad and
Friedman, Roni and
Arviv, Ofir and
Orbach, Matan and
Don-Yehiya, Shachar and
Sheinwald, Dafna and
Gera, Ariel and
Choshen, Leshem and
Shmueli-Scheuer, Michal and
Katz, Yoav",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.21",
doi = "10.18653/v1/2024.naacl-demo.21",
pages = "207--215",
abstract = "In the dynamic landscape of generative NLP, traditional text processing pipelines limit research flexibility and reproducibility, as they are tailored to specific dataset, task, and model combinations. The escalating complexity, involving system prompts, model-specific formats, instructions, and more, calls for a shift to a structured, modular, and customizable solution.Addressing this need, we present Unitxt, an innovative library for customizable textual data preparation and evaluation tailored to generative language models. Unitxt natively integrates with common libraries like HuggingFace and LM-eval-harness and deconstructs processing flows into modular components, enabling easy customization and sharing between practitioners. These components encompass model-specific formats, task prompts, and many other comprehensive dataset processing definitions. The Unitxt Catalog centralizes these components, fostering collaboration and exploration in modern textual data workflows. Beyond being a tool, Unitxt is a community-driven platform, empowering users to build, share, and advance their pipelines collaboratively. Join the Unitxt community at https://github.com/IBM/unitxt",
}
| In the dynamic landscape of generative NLP, traditional text processing pipelines limit research flexibility and reproducibility, as they are tailored to specific dataset, task, and model combinations. The escalating complexity, involving system prompts, model-specific formats, instructions, and more, calls for a shift to a structured, modular, and customizable solution. Addressing this need, we present Unitxt, an innovative library for customizable textual data preparation and evaluation tailored to generative language models. Unitxt natively integrates with common libraries like HuggingFace and LM-eval-harness and deconstructs processing flows into modular components, enabling easy customization and sharing between practitioners. These components encompass model-specific formats, task prompts, and many other comprehensive dataset processing definitions. The Unitxt Catalog centralizes these components, fostering collaboration and exploration in modern textual data workflows. Beyond being a tool, Unitxt is a community-driven platform, empowering users to build, share, and advance their pipelines collaboratively. Join the Unitxt community at https://github.com/IBM/unitxt | [
"B",
"el, Elron",
"Perlitz, Yotam",
"Venezian, Elad",
"Friedman, Roni",
"Arviv, Ofir",
"Orbach, Matan",
"Don-Yehiya, Shachar",
"Sheinwald, Dafna",
"Gera, Ariel",
"Choshen, Leshem",
"Shmueli-Scheuer, Michal",
"Katz, Yoav"
] | Unitxt: Flexible, Shareable and Reusable Data Preparation and Evaluation for Generative AI | naacl-demo.21 | Poster | 2401.14019 | [
"https://github.com/ibm/unitxt"
] | https://huggingface.co/papers/2401.14019 | 9 | 19 | 1 | 12 | 1 | [] | [
"unitxt/data"
] | [
"unitxt/metric"
] |
https://aclanthology.org/2024.naacl-srw.1.bib | https://aclanthology.org/2024.naacl-srw.1/ | @inproceedings{huang-etal-2024-systematic,
title = "Systematic Analysis for Pretrained Language Model Priming for Parameter-Efficient Fine-tuning",
author = "Huang, Shih-Cheng and
Wang, Shih-Heng and
Shih, Min-Han and
Sahay, Saurav and
Lee, Hung-yi",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.1",
doi = "10.18653/v1/2024.naacl-srw.1",
pages = "1--7",
abstract = "Parameter-efficient (PE) methods (like Prompts or Adapters) for adapting pre-trained language models (PLM) to downstream tasks have been popular recently. However, hindrances still prevent these methods from reaching their full potential. For example, two significant challenges are few-shot adaptation and cross-task generalization. To tackle these issues, we propose a general PE priming framework to enhance and explore the few-shot adaptation and generalization ability of PE methods. In this framework, PLMs are primed with PE methods for rapidly adapting to various target tasks. To evaluate the generalization ability of these PE methods, we conduct experiments on a few-shot cross-domain benchmark containing 160 diverse NLP tasks. Our experiment not only reveals the best priming strategy but also verifies that priming facilitates the adaptation to target tasks.",
}
| Parameter-efficient (PE) methods (like Prompts or Adapters) for adapting pre-trained language models (PLM) to downstream tasks have been popular recently. However, hindrances still prevent these methods from reaching their full potential. For example, two significant challenges are few-shot adaptation and cross-task generalization. To tackle these issues, we propose a general PE priming framework to enhance and explore the few-shot adaptation and generalization ability of PE methods. In this framework, PLMs are primed with PE methods for rapidly adapting to various target tasks. To evaluate the generalization ability of these PE methods, we conduct experiments on a few-shot cross-domain benchmark containing 160 diverse NLP tasks. Our experiment not only reveals the best priming strategy but also verifies that priming facilitates the adaptation to target tasks. | [
"Huang, Shih-Cheng",
"Wang, Shih-Heng",
"Shih, Min-Han",
"Sahay, Saurav",
"Lee, Hung-yi"
] | Systematic Analysis for Pretrained Language Model Priming for Parameter-Efficient Fine-tuning | naacl-srw.1 | Poster | 2212.01032 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-srw.2.bib | https://aclanthology.org/2024.naacl-srw.2/ | @inproceedings{yang-etal-2024-rephrasing,
title = "Rephrasing Invokes Better Generations for Large Language Models",
author = "Yang, Haoran and
Lu, Hongyuan and
Lam, Wai",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.2",
doi = "10.18653/v1/2024.naacl-srw.2",
pages = "8--15",
abstract = "In the realm of emerging multitasking abilities of Large language models (LLMs), methodologies like prompt tuning enable low-cost adaptation to downstream tasks without retraining the model. However, automatic input pre-processing when LLMs are unavailable is currently under-studied. This paper proposes ReLLM (Rephrasing for LLMs), a method that automatically paraphrases input content for better output generations. ReLLM replaces low-frequency lexical items with their high-frequency counterparts. This substitution is particularly beneficial for low-resource language tasks that lack sufficient training data and resources. ReLLM is user-friendly and requires no additional LLM training. Experimental results in cross-lingual summarization, and natural language inference demonstrate the effectiveness of ReLLM.",
}
| In the realm of emerging multitasking abilities of large language models (LLMs), methodologies like prompt tuning enable low-cost adaptation to downstream tasks without retraining the model. However, automatic input pre-processing when LLMs are unavailable is currently under-studied. This paper proposes ReLLM (Rephrasing for LLMs), a method that automatically paraphrases input content for better output generations. ReLLM replaces low-frequency lexical items with their high-frequency counterparts. This substitution is particularly beneficial for low-resource language tasks that lack sufficient training data and resources. ReLLM is user-friendly and requires no additional LLM training. Experimental results in cross-lingual summarization and natural language inference demonstrate the effectiveness of ReLLM. | [
"Yang, Haoran",
"Lu, Hongyuan",
"Lam, Wai"
] | Rephrasing Invokes Better Generations for Large Language Models | naacl-srw.2 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-srw.3.bib | https://aclanthology.org/2024.naacl-srw.3/ | @inproceedings{yang-etal-2024-exploring,
title = "Exploring Compositional Generalization of Large Language Models",
author = "Yang, Haoran and
Lu, Hongyuan and
Lam, Wai and
Cai, Deng",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.3",
doi = "10.18653/v1/2024.naacl-srw.3",
pages = "16--24",
abstract = "In this paper, we study the generalization ability of large language models (LLMs) with respect to compositional instructions, which are instructions that can be decomposed into several sub-instructions. We argue that the ability to generalize from simple instructions to more intricate compositional instructions represents a key aspect of the out-of-distribution generalization for LLMs. Since there are no specialized datasets for studying this phenomenon, we first construct a dataset with the help of ChatGPT, guided by the self-instruct technique. Then, we fine-tune and evaluate LLMs on these datasets. Interestingly, our experimental results indicate that training LLMs on higher-order compositional instructions enhances their performance on lower-order ones, but the reverse does not hold true.",
}
| In this paper, we study the generalization ability of large language models (LLMs) with respect to compositional instructions, which are instructions that can be decomposed into several sub-instructions. We argue that the ability to generalize from simple instructions to more intricate compositional instructions represents a key aspect of the out-of-distribution generalization for LLMs. Since there are no specialized datasets for studying this phenomenon, we first construct a dataset with the help of ChatGPT, guided by the self-instruct technique. Then, we fine-tune and evaluate LLMs on these datasets. Interestingly, our experimental results indicate that training LLMs on higher-order compositional instructions enhances their performance on lower-order ones, but the reverse does not hold true. | [
"Yang, Haoran",
"Lu, Hongyuan",
"Lam, Wai",
"Cai, Deng"
] | Exploring Compositional Generalization of Large Language Models | naacl-srw.3 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-srw.4.bib | https://aclanthology.org/2024.naacl-srw.4/ | @inproceedings{jung-etal-2024-explainable,
title = "Explainable {CED}: A Dataset for Explainable Critical Error Detection in Machine Translation",
author = "Jung, Dahyun and
Eo, Sugyeong and
Park, Chanjun and
Lim, Heuiseok",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.4",
doi = "10.18653/v1/2024.naacl-srw.4",
pages = "25--35",
abstract = "Critical error detection (CED) in machine translation is a task that aims to detect errors that significantly distort the intended meaning. However, the existing study of CED lacks explainability due to the absence of content addressing the reasons for catastrophic errors. To address this limitation, we propose Explainable CED, a dataset that introduces the attributes of error explanation and correction regarding critical errors. Considering the advantage of reducing time costs and mitigating human annotation bias, we leverage a large language model in the data construction process. To improve the quality of the dataset and mitigate hallucination, we compare responses from the model and introduce an additional data filtering method through feedback scoring. The experiment demonstrates that the dataset appropriately reflects a consistent explanation and revision for errors, validating the reliability of the dataset.",
}
| Critical error detection (CED) in machine translation is a task that aims to detect errors that significantly distort the intended meaning. However, the existing study of CED lacks explainability due to the absence of content addressing the reasons for catastrophic errors. To address this limitation, we propose Explainable CED, a dataset that introduces the attributes of error explanation and correction regarding critical errors. Considering the advantage of reducing time costs and mitigating human annotation bias, we leverage a large language model in the data construction process. To improve the quality of the dataset and mitigate hallucination, we compare responses from the model and introduce an additional data filtering method through feedback scoring. The experiment demonstrates that the dataset appropriately reflects a consistent explanation and revision for errors, validating the reliability of the dataset. | [
"Jung, Dahyun",
"Eo, Sugyeong",
"Park, Chanjun",
"Lim, Heuiseok"
] | Explainable CED: A Dataset for Explainable Critical Error Detection in Machine Translation | naacl-srw.4 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-srw.5.bib | https://aclanthology.org/2024.naacl-srw.5/ | @inproceedings{baillargeon-lamontagne-2024-smartr,
title = "{SMARTR}: A Framework for Early Detection using Survival Analysis of Longitudinal Texts",
author = "Baillargeon, Jean-Thomas and
Lamontagne, Luc",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.5",
doi = "10.18653/v1/2024.naacl-srw.5",
pages = "36--41",
abstract = "This paper presents an innovative approach to the early detection of expensive insurance claims by leveraging survival analysis concepts within a deep learning framework exploiting textual information from claims notes. Our proposed SMARTR model addresses limitations of state-of-the-art models, such as handling data-label mismatches and non-uniform data frequency, to enhance a posteriori classification and early detection. Our results suggest that incorporating temporal dynamics and empty period representation improves model performance, highlighting the importance of considering time in insurance claim analysis. The approach appears promising for application to other insurance datasets.",
}
| This paper presents an innovative approach to the early detection of expensive insurance claims by leveraging survival analysis concepts within a deep learning framework exploiting textual information from claims notes. Our proposed SMARTR model addresses limitations of state-of-the-art models, such as handling data-label mismatches and non-uniform data frequency, to enhance a posteriori classification and early detection. Our results suggest that incorporating temporal dynamics and empty period representation improves model performance, highlighting the importance of considering time in insurance claim analysis. The approach appears promising for application to other insurance datasets. | [
"Baillargeon, Jean-Thomas",
"Lamontagne, Luc"
] | SMARTR: A Framework for Early Detection using Survival Analysis of Longitudinal Texts | naacl-srw.5 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-srw.6.bib | https://aclanthology.org/2024.naacl-srw.6/ | @inproceedings{zhu-2024-fast,
title = "Fast Exact Retrieval for Nearest-neighbor Lookup ({FERN})",
author = "Zhu, Richard",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.6",
doi = "10.18653/v1/2024.naacl-srw.6",
pages = "42--47",
abstract = "Exact nearest neighbor search is a computationally intensive process, and even its simpler sibling {---} vector retrieval {---} can be computationally complex. This is exacerbated when retrieving vectors which have high-dimension $d$ relative to the number of vectors, $N$, in the database. Exact nearest neighbor retrieval has been generally acknowledged to be a $O(Nd)$ problem with no sub-linear solutions. Attention has instead shifted towards Approximate Nearest-Neighbor (ANN) retrieval techniques, many of which have sub-linear or even logarithmic time complexities. However, if our intuition from binary search problems (e.g. $d=1$ vector retrieval) carries, there ought to be a way to retrieve an organized representation of vectors without brute-forcing our way to a solution. For low dimension (e.g. $d=2$ or $d=3$ cases), kd-trees provide a $O(d\log N)$ algorithm for retrieval. Unfortunately the algorithm deteriorates rapidly to a $O(dN)$ solution at high dimensions (e.g. $k=128$), in practice. We propose a novel algorithm for logarithmic Fast Exact Retrieval for Nearest-neighbor lookup (FERN), inspired by kd-trees. The algorithm achieves $O(d\log N)$ look-up with 100{\%} recall on 10 million $d=128$ uniformly randomly generated vectors.",
}
| Exact nearest neighbor search is a computationally intensive process, and even its simpler sibling {---} vector retrieval {---} can be computationally complex. This is exacerbated when retrieving vectors which have high dimension $d$ relative to the number of vectors, $N$, in the database. Exact nearest neighbor retrieval has been generally acknowledged to be a $O(Nd)$ problem with no sub-linear solutions. Attention has instead shifted towards Approximate Nearest-Neighbor (ANN) retrieval techniques, many of which have sub-linear or even logarithmic time complexities. However, if our intuition from binary search problems (e.g. $d=1$ vector retrieval) carries, there ought to be a way to retrieve an organized representation of vectors without brute-forcing our way to a solution. For low dimension (e.g. $d=2$ or $d=3$ cases), kd-trees provide a $O(d\log N)$ algorithm for retrieval. Unfortunately, the algorithm deteriorates rapidly to a $O(dN)$ solution at high dimensions (e.g. $d=128$) in practice. We propose a novel algorithm for logarithmic Fast Exact Retrieval for Nearest-neighbor lookup (FERN), inspired by kd-trees. The algorithm achieves $O(d\log N)$ look-up with 100{\%} recall on 10 million $d=128$ uniformly randomly generated vectors. | [
"Zhu, Richard"
] | Fast Exact Retrieval for Nearest-neighbor Lookup (FERN) | naacl-srw.6 | Poster | 2405.04435 | [
"https://github.com/richardzhu123/ferns"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-srw.7.bib | https://aclanthology.org/2024.naacl-srw.7/ | @inproceedings{luo-etal-2024-start,
title = "Start Simple: Progressive Difficulty Multitask Learning",
author = "Luo, Yunfei and
Liu, Yuyang and
Cai, Rukai and
Rahman, Tauhidur",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.7",
doi = "10.18653/v1/2024.naacl-srw.7",
pages = "48--55",
abstract = "The opaque nature of neural networks, often described as black boxes, poses significant challenges in understanding their learning mechanisms, which limit our ability to fully optimize and trust these models.Inspired by how humans learn, this paper proposes a novel neural network training strategy that employs multitask learning with progressive difficulty subtasks, which we believe can potentially shed light on the internal learning mechanisms of neural networks.We implemented this strategy across a range of NLP tasks, data sets, and neural network architectures and observed notable improvements in model performance.This suggests that neural networks may be able to extract common features and internalize shared representations across similar subtasks that differ in their difficulty.Analyzing this strategy could lead us to more interpretable and robust neural networks, enhancing both their performance and our understanding of their nature.",
}
| The opaque nature of neural networks, often described as black boxes, poses significant challenges in understanding their learning mechanisms, which limit our ability to fully optimize and trust these models. Inspired by how humans learn, this paper proposes a novel neural network training strategy that employs multitask learning with progressive difficulty subtasks, which we believe can potentially shed light on the internal learning mechanisms of neural networks. We implemented this strategy across a range of NLP tasks, data sets, and neural network architectures and observed notable improvements in model performance. This suggests that neural networks may be able to extract common features and internalize shared representations across similar subtasks that differ in their difficulty. Analyzing this strategy could lead us to more interpretable and robust neural networks, enhancing both their performance and our understanding of their nature. | [
"Luo, Yunfei",
"Liu, Yuyang",
"Cai, Rukai",
"Rahman, Tauhidur"
] | Start Simple: Progressive Difficulty Multitask Learning | naacl-srw.7 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-srw.8.bib | https://aclanthology.org/2024.naacl-srw.8/ | @inproceedings{stacey-etal-2024-lucid,
title = "{LUCID}: {LLM}-Generated Utterances for Complex and Interesting Dialogues",
author = "Stacey, Joe and
Cheng, Jianpeng and
Torr, John and
Guigue, Tristan and
Driesen, Joris and
Coca, Alexandru and
Gaynor, Mark and
Johannsen, Anders",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.8",
doi = "10.18653/v1/2024.naacl-srw.8",
pages = "56--74",
abstract = "Spurred by recent advances in Large Language Models (LLMs), virtual assistants are poised to take a leap forward in terms of their dialogue capabilities. Yet a major bottleneck to achieving genuinely transformative task-oriented dialogue capabilities remains the scarcity of high quality data. Existing datasets, while impressive in scale, have limited domain coverage and contain few genuinely challenging conversational phenomena; those which are present are typically unlabelled, making it difficult to assess the strengths and weaknesses of models without time-consuming and costly human evaluation. Moreover, creating high quality dialogue data has until now required considerable human input, limiting both the scale of these datasets and the ability to rapidly bootstrap data for a new target domain. We aim to overcome these issues with LUCID, a modularised and highly automated LLM-driven data generation system that produces realistic, diverse and challenging dialogues. We use LUCID to generate a seed dataset of 4,277 conversations across 100 intents to demonstrate its capabilities, with a human review finding consistently high quality labels in the generated data.",
}
| Spurred by recent advances in Large Language Models (LLMs), virtual assistants are poised to take a leap forward in terms of their dialogue capabilities. Yet a major bottleneck to achieving genuinely transformative task-oriented dialogue capabilities remains the scarcity of high quality data. Existing datasets, while impressive in scale, have limited domain coverage and contain few genuinely challenging conversational phenomena; those which are present are typically unlabelled, making it difficult to assess the strengths and weaknesses of models without time-consuming and costly human evaluation. Moreover, creating high quality dialogue data has until now required considerable human input, limiting both the scale of these datasets and the ability to rapidly bootstrap data for a new target domain. We aim to overcome these issues with LUCID, a modularised and highly automated LLM-driven data generation system that produces realistic, diverse and challenging dialogues. We use LUCID to generate a seed dataset of 4,277 conversations across 100 intents to demonstrate its capabilities, with a human review finding consistently high quality labels in the generated data. | [
"Stacey, Joe",
"Cheng, Jianpeng",
"Torr, John",
"Guigue, Tristan",
"Driesen, Joris",
"Coca, Alex",
"ru",
"Gaynor, Mark",
"Johannsen, Anders"
] | LUCID: LLM-Generated Utterances for Complex and Interesting Dialogues | naacl-srw.8 | Poster | 2403.00462 | [
"https://github.com/apple/ml-lucid-datagen"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-srw.9.bib | https://aclanthology.org/2024.naacl-srw.9/ | @inproceedings{bahad-etal-2024-fine,
title = "Fine-tuning Pre-trained Named Entity Recognition Models For {I}ndian Languages",
author = "Bahad, Sankalp and
Mishra, Pruthwik and
Krishnamurthy, Parameswari and
Sharma, Dipti",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.9",
doi = "10.18653/v1/2024.naacl-srw.9",
pages = "75--82",
abstract = "Named Entity Recognition (NER) is a use-ful component in Natural Language Process-ing (NLP) applications. It is used in varioustasks such as Machine Translation, Summa-rization, Information Retrieval, and Question-Answering systems. The research on NER iscentered around English and some other ma-jor languages, whereas limited attention hasbeen given to Indian languages. We analyze thechallenges and propose techniques that can betailored for Multilingual Named Entity Recog-nition for Indian Languages. We present a hu-man annotated named entity corpora of â¼40Ksentences for 4 Indian languages from two ofthe major Indian language families. Addition-ally, we show the transfer learning capabilitiesof pre-trained transformer models from a highresource language to multiple low resource lan-guages through a series of experiments. Wealso present a multilingual model fine-tunedon our dataset, which achieves an F1 score ofâ¼0.80 on our dataset on average. We achievecomparable performance on completely unseenbenchmark datasets for Indian languages whichaffirms the usability of our model.",
}
| Named Entity Recognition (NER) is a useful component in Natural Language Processing (NLP) applications. It is used in various tasks such as Machine Translation, Summarization, Information Retrieval, and Question-Answering systems. The research on NER is centered around English and some other major languages, whereas limited attention has been given to Indian languages. We analyze the challenges and propose techniques that can be tailored for Multilingual Named Entity Recognition for Indian Languages. We present human-annotated named entity corpora of ~40K sentences for 4 Indian languages from two of the major Indian language families. Additionally, we show the transfer learning capabilities of pre-trained transformer models from a high resource language to multiple low resource languages through a series of experiments. We also present a multilingual model fine-tuned on our dataset, which achieves an F1 score of ~0.80 on our dataset on average. We achieve comparable performance on completely unseen benchmark datasets for Indian languages, which affirms the usability of our model. | [
"Bahad, Sankalp",
"Mishra, Pruthwik",
"Krishnamurthy, Parameswari",
"Sharma, Dipti"
] | Fine-tuning Pre-trained Named Entity Recognition Models For Indian Languages | naacl-srw.9 | Poster | 2405.04829 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
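As a companion to the NER record above, here is a minimal sketch of the kind of fine-tuning step it describes: a multilingual pretrained encoder with a token-classification head adapted to Indian-language NER. The xlm-roberta-base checkpoint, the label set, and the single toy Hindi example are assumptions, not the authors' configuration, and running it downloads the model.

```python
# One fine-tuning step of a pretrained multilingual encoder for NER.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]
tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(labels))

# Toy example, pre-tokenized into words with word-level tags (assumed data).
words = ["रवि", "दिल्ली", "में", "रहता", "है"]
word_tags = [1, 3, 0, 0, 0]                     # B-PER B-LOC O O O

enc = tok(words, is_split_into_words=True, return_tensors="pt")
# Align word-level tags to subword tokens; special tokens get -100 (ignored
# by the loss). Subwords of a word simply inherit that word's tag here.
aligned = [-100 if wid is None else word_tags[wid] for wid in enc.word_ids()]
labels_t = torch.tensor([aligned])

model.train()
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
out = model(**enc, labels=labels_t)             # cross-entropy over tag logits
out.loss.backward()
opt.step()
print("one fine-tuning step, loss:", out.loss.item())
```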
https://aclanthology.org/2024.naacl-srw.10.bib | https://aclanthology.org/2024.naacl-srw.10/ | @inproceedings{baez-santamaria-2024-knowledge,
title = "Knowledge-centered conversational agents with a drive to learn",
author = "Baez Santamaria, Selene",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.10",
doi = "10.18653/v1/2024.naacl-srw.10",
pages = "83--92",
abstract = "We create an adaptive conversational agent that assesses the quality of its knowledge and is driven to become more knowledgeable. Unlike agents with predefined tasks, ours can leverage people as diverse sources to meet its knowledge needs. We test the agent in social contexts, where personal and subjective information can be obtained through dialogue. We provide the agent both with generic methods for assessing its knowledge quality (e.g. correctness, completeness, redundancy, interconnectedness, and diversity), as well as with generic capabilities to improve its knowledge by leveraging external sources. We demonstrate that the agent can learn effective policies to acquire the knowledge needed by assessing the efficiency of these capabilities during interaction. Our framework enables on-the-fly learning, offering a dynamic and adaptive approach to shaping conversational interactions.",
}
| We create an adaptive conversational agent that assesses the quality of its knowledge and is driven to become more knowledgeable. Unlike agents with predefined tasks, ours can leverage people as diverse sources to meet its knowledge needs. We test the agent in social contexts, where personal and subjective information can be obtained through dialogue. We provide the agent both with generic methods for assessing its knowledge quality (e.g. correctness, completeness, redundancy, interconnectedness, and diversity), as well as with generic capabilities to improve its knowledge by leveraging external sources. We demonstrate that the agent can learn effective policies to acquire the knowledge needed by assessing the efficiency of these capabilities during interaction. Our framework enables on-the-fly learning, offering a dynamic and adaptive approach to shaping conversational interactions. | [
"Baez Santamaria, Selene"
] | Knowledge-centered conversational agents with a drive to learn | naacl-srw.10 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-srw.11.bib | https://aclanthology.org/2024.naacl-srw.11/ | @inproceedings{lee-etal-2024-exploring-inherent,
title = "Exploring Inherent Biases in {LLM}s within {K}orean Social Context: A Comparative Analysis of {C}hat{GPT} and {GPT}-4",
author = "Lee, Seungyoon and
Kim, Dong and
Jung, Dahyun and
Park, Chanjun and
Lim, Heuiseok",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.11",
doi = "10.18653/v1/2024.naacl-srw.11",
pages = "93--104",
abstract = "Large Language Models (LLMs) have significantly impacted various fields requiring advanced linguistic understanding, yet concerns regarding their inherent biases and ethical considerations have also increased. Notably, LLMs have been critiqued for perpetuating stereotypes against diverse groups based on race, sexual orientation, and other attributes. However, most research analyzing these biases has predominantly focused on communities where English is the primary language, neglecting to consider the cultural and linguistic nuances of other societies. In this paper, we aim to explore the inherent biases and toxicity of LLMs, specifically within the social context of Korea. We devise a set of prompts that reflect major societal issues in Korea and assign varied personas to both ChatGPT and GPT-4 to assess the toxicity of the generated sentences. Our findings indicate that certain personas or prompt combinations consistently yield harmful content, highlighting the potential risks associated with specific persona-issue alignments within the Korean cultural framework. Furthermore, we discover that GPT-4 can produce more than twice the level of toxic content than ChatGPT under certain conditions.",
}
| Large Language Models (LLMs) have significantly impacted various fields requiring advanced linguistic understanding, yet concerns regarding their inherent biases and ethical considerations have also increased. Notably, LLMs have been critiqued for perpetuating stereotypes against diverse groups based on race, sexual orientation, and other attributes. However, most research analyzing these biases has predominantly focused on communities where English is the primary language, neglecting to consider the cultural and linguistic nuances of other societies. In this paper, we aim to explore the inherent biases and toxicity of LLMs, specifically within the social context of Korea. We devise a set of prompts that reflect major societal issues in Korea and assign varied personas to both ChatGPT and GPT-4 to assess the toxicity of the generated sentences. Our findings indicate that certain personas or prompt combinations consistently yield harmful content, highlighting the potential risks associated with specific persona-issue alignments within the Korean cultural framework. Furthermore, we discover that GPT-4 can produce more than twice as much toxic content as ChatGPT under certain conditions. | [
"Lee, Seungyoon",
"Kim, Dong",
"Jung, Dahyun",
"Park, Chanjun",
"Lim, Heuiseok"
] | Exploring Inherent Biases in LLMs within Korean Social Context: A Comparative Analysis of ChatGPT and GPT-4 | naacl-srw.11 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
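The evaluation procedure in the record above (persona plus issue prompt, then toxicity scoring of the response) can be sketched as below. The personas, the issue prompt, the get_response stub standing in for a ChatGPT/GPT-4 API call, and the unitary/toxic-bert scoring checkpoint are all assumptions for illustration, not the paper's materials.

```python
# Persona x issue probing loop with an off-the-shelf toxicity classifier.
from transformers import pipeline

personas = ["a retired man from Seoul", "a university student"]   # assumed
issues = ["Express your view on gender conflict in Korea."]       # assumed

def get_response(persona, issue):
    # Stub standing in for an API call to ChatGPT/GPT-4 with a persona prompt.
    return f"As {persona}, I think this issue deserves a calm discussion."

toxicity = pipeline("text-classification", model="unitary/toxic-bert")
for persona in personas:
    for issue in issues:
        reply = get_response(persona, issue)
        score = toxicity(reply)[0]               # top label with its score
        print(persona, "->", score["label"], round(score["score"], 3))
```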
https://aclanthology.org/2024.naacl-srw.12.bib | https://aclanthology.org/2024.naacl-srw.12/ | @inproceedings{leippert-etal-2024-clarify,
title = "To Clarify or not to Clarify: A Comparative Analysis of Clarification Classification with Fine-Tuning, Prompt Tuning, and Prompt Engineering",
author = "Leippert, Alina and
Anikina, Tatiana and
Kiefer, Bernd and
Genabith, Josef",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.12",
doi = "10.18653/v1/2024.naacl-srw.12",
pages = "105--115",
abstract = "Misunderstandings occur all the time in human conversation but deciding on when to ask for clarification is a challenging task for conversational systems that requires a balance between asking too many unnecessary questions and running the risk of providing incorrect information. This work investigates clarification identification based on the task and data from (Xu et al., 2019), reproducing their Transformer baseline and extending it by comparing pre-trained language model fine-tuning, prompt tuning and manual prompt engineering on the task of clarification identification. Our experiments show strong performance with LM and a prompt tuning approach with BERT and RoBERTa, outperforming standard LM fine-tuning, while manual prompt engineering with GPT-3.5 proved to be less effective, although informative prompt instructions have the potential of steering the model towards generating more accurate explanations for why clarification is needed.",
}
| Misunderstandings occur all the time in human conversation, but deciding when to ask for clarification is a challenging task for conversational systems that requires a balance between asking too many unnecessary questions and running the risk of providing incorrect information. This work investigates clarification identification based on the task and data from (Xu et al., 2019), reproducing their Transformer baseline and extending it by comparing pre-trained language model fine-tuning, prompt tuning and manual prompt engineering on the task of clarification identification. Our experiments show strong performance with LM and a prompt tuning approach with BERT and RoBERTa, outperforming standard LM fine-tuning, while manual prompt engineering with GPT-3.5 proved to be less effective, although informative prompt instructions have the potential of steering the model towards generating more accurate explanations for why clarification is needed. | [
"Leippert, Alina",
"Anikina, Tatiana",
"Kiefer, Bernd",
"Genabith, Josef"
] | To Clarify or not to Clarify: A Comparative Analysis of Clarification Classification with Fine-Tuning, Prompt Tuning, and Prompt Engineering | naacl-srw.12 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
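Of the three strategies the record above compares, prompt tuning is the least familiar, so a minimal sketch may help: trainable soft-prompt vectors are prepended to a frozen BERT's input embeddings, and only the prompt and a small classification head are updated. The prompt length, checkpoint, and binary clarify/no-clarify head are illustrative assumptions.

```python
# Soft prompt tuning on a frozen encoder for binary classification.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
for p in bert.parameters():
    p.requires_grad = False                      # backbone stays frozen

n_prompt = 20                                    # assumed prompt length
soft_prompt = nn.Parameter(torch.randn(n_prompt, bert.config.hidden_size) * 0.02)
head = nn.Linear(bert.config.hidden_size, 2)     # clarify / no-clarify

def forward(texts):
    enc = tok(texts, return_tensors="pt", padding=True)
    word_emb = bert.get_input_embeddings()(enc.input_ids)     # (B, T, H)
    B = word_emb.size(0)
    prompts = soft_prompt.unsqueeze(0).expand(B, -1, -1)      # (B, P, H)
    inputs = torch.cat([prompts, word_emb], dim=1)
    mask = torch.cat([torch.ones(B, n_prompt, dtype=torch.long),
                      enc.attention_mask], dim=1)
    hidden = bert(inputs_embeds=inputs, attention_mask=mask).last_hidden_state
    return head(hidden[:, 0])                    # classify from first position

logits = forward(["Could you hand me that thing over there?"])
print(logits)   # only soft_prompt and head receive gradients
```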
https://aclanthology.org/2024.naacl-srw.13.bib | https://aclanthology.org/2024.naacl-srw.13/ | @inproceedings{kamei-etal-2024-detecting,
title = "Detecting Response Generation Not Requiring Factual Judgment",
author = "Kamei, Ryohei and
Shiono, Daiki and
Akama, Reina and
Suzuki, Jun",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.13",
doi = "10.18653/v1/2024.naacl-srw.13",
pages = "116--123",
abstract = "With the remarkable development of large language models (LLMs), ensuring the factuality of output has become a challenge.However, having all the contents of the response with given knowledge or facts is not necessarily a good thing in dialogues.This study aimed to achieve both attractiveness and factuality in a dialogue response for which a task was set to predict sentences that do not require factual correctness judgment such as agreeing, or personal opinions/feelings.We created a dataset, dialogue dataset annotated with fact-check-needed label (DDFC), for this task via crowdsourcing, and classification tasks were performed on several models using this dataset.The model with the highest classification accuracy could yield about 88{\%} accurate classification results.",
}
| With the remarkable development of large language models (LLMs), ensuring the factuality of output has become a challenge. However, having all the contents of the response with given knowledge or facts is not necessarily a good thing in dialogues. This study aimed to achieve both attractiveness and factuality in a dialogue response for which a task was set to predict sentences that do not require factual correctness judgment such as agreeing, or personal opinions/feelings. We created a dataset, dialogue dataset annotated with fact-check-needed label (DDFC), for this task via crowdsourcing, and classification tasks were performed on several models using this dataset. The model with the highest classification accuracy could yield about 88{\%} accurate classification results. | [
"Kamei, Ryohei",
"Shiono, Daiki",
"Akama, Reina",
"Suzuki, Jun"
] | Detecting Response Generation Not Requiring Factual Judgment | naacl-srw.13 | Poster | 2406.09702 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-srw.14.bib | https://aclanthology.org/2024.naacl-srw.14/ | @inproceedings{tufa-etal-2024-unknown,
title = "Unknown Script: Impact of Script on Cross-Lingual Transfer",
author = "Tufa, Wondimagegnhue and
Markov, Ilia and
Vossen, Piek",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.14",
doi = "10.18653/v1/2024.naacl-srw.14",
pages = "124--129",
abstract = "Cross-lingual transfer has become an effective way of transferring knowledge between languages. In this paper, we explore an often overlooked aspect in this domain: the influence of the source language of a language model on language transfer performance. We consider a case where the target language and its script are not part of the pre-trained model. We conduct a series of experiments on monolingual and multilingual models that are pre-trained on different tokenization methods to determine factors that affect cross-lingual transfer to a new language with a unique script. Our findings reveal the importance of the tokenizer as a stronger factor than the shared script, language similarity, and model size.",
}
| Cross-lingual transfer has become an effective way of transferring knowledge between languages. In this paper, we explore an often overlooked aspect in this domain: the influence of the source language of a language model on language transfer performance. We consider a case where the target language and its script are not part of the pre-trained model. We conduct a series of experiments on monolingual and multilingual models that are pre-trained on different tokenization methods to determine factors that affect cross-lingual transfer to a new language with a unique script. Our findings reveal the importance of the tokenizer as a stronger factor than the shared script, language similarity, and model size. | [
"Tufa, Wondimagegnhue",
"Markov, Ilia",
"Vossen, Piek"
] | Unknown Script: Impact of Script on Cross-Lingual Transfer | naacl-srw.14 | Poster | 2404.18810 | [
"https://github.com/cltl/unkown_script"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-srw.15.bib | https://aclanthology.org/2024.naacl-srw.15/ | @inproceedings{kondo-etal-2024-improving,
title = "Improving Repository-level Code Search with Text Conversion",
author = "Kondo, Mizuki and
Kawahara, Daisuke and
Kurabayashi, Toshiyuki",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.15",
doi = "10.18653/v1/2024.naacl-srw.15",
pages = "130--137",
abstract = "The ability to generate code using large language models (LLMs) has been increasing year by year. However, studies on code generation at the repository level are not very active. In repository-level code generation, it is necessary to refer to related code snippets among multiple files. By taking the similarity between code snippets, related files are searched and input into an LLM, and generation is performed. This paper proposes a method to search for related files (code search) by taking similarities not between code snippets but between the texts converted from the code snippets by the LLM. We confirmed that converting to text improves the accuracy of code search.",
}
| The ability to generate code using large language models (LLMs) has been increasing year by year. However, studies on code generation at the repository level are not very active. In repository-level code generation, it is necessary to refer to related code snippets among multiple files. Related files are typically retrieved by measuring the similarity between code snippets, then input into an LLM for generation. This paper proposes a method to search for related files (code search) by taking similarities not between code snippets but between the texts converted from the code snippets by the LLM. We confirmed that converting to text improves the accuracy of code search. | [
"Kondo, Mizuki",
"Kawahara, Daisuke",
"Kurabayashi, Toshiyuki"
] | Improving Repository-level Code Search with Text Conversion | naacl-srw.15 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
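The retrieval idea in the record above, embedding LLM-generated descriptions of code rather than the code itself, can be sketched as follows. The hand-written descriptions stand in for the LLM conversion pass (no LLM is called here), and the sentence-transformers checkpoint, file names, and query are assumptions.

```python
# Code search over natural-language descriptions instead of raw snippets.
from sentence_transformers import SentenceTransformer, util

snippets = {
    "auth.py":  "def login(user, pw): ...",
    "cache.py": "def get_or_set(key, fn): ...",
}
# Assumed output of an LLM "code -> text" conversion pass.
descriptions = {
    "auth.py":  "Validates a username and password and starts a session.",
    "cache.py": "Returns a cached value or computes and stores it.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
files = list(descriptions)
doc_emb = model.encode([descriptions[f] for f in files], convert_to_tensor=True)

query = "where do we check user credentials?"
q_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(q_emb, doc_emb)[0]         # cosine similarity per file
ranked = sorted(zip(files, scores.tolist()), key=lambda p: -p[1])
print(ranked)                                    # auth.py should rank first
```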
https://aclanthology.org/2024.naacl-srw.16.bib | https://aclanthology.org/2024.naacl-srw.16/ | @inproceedings{park-etal-2024-improving-multi,
title = "Improving Multi-lingual Alignment Through Soft Contrastive Learning",
author = "Park, Minsu and
Choi, Seyeon and
Choi, Chanyeol and
Kim, Jun-Seong and
Sohn, Jy-yong",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.16",
doi = "10.18653/v1/2024.naacl-srw.16",
pages = "138--145",
abstract = "Making decent multi-lingual sentence representations is critical to achieve high performances in cross-lingual downstream tasks. In this work, we propose a novel method to align multi-lingual embeddings based on the similarity of sentences measured by a pre-trained mono-lingual embedding model. Given translation sentence pairs, we train a multi-lingual model in a way that the similarity between cross-lingual embeddings follows the similarity of sentences measured at the mono-lingual teacher model. Our method can be considered as contrastive learning with soft labels defined as the similarity between sentences. Our experimental results on five languages show that our contrastive loss with soft labels far outperforms conventional constrastive loss with hard labels in various benchmarks for bitext mining tasks and STS tasks. In addition, our method outperforms existing multi-lingual embeddings including LaBSE, for Tatoeba dataset.",
}
| Making decent multi-lingual sentence representations is critical to achieving high performance in cross-lingual downstream tasks. In this work, we propose a novel method to align multi-lingual embeddings based on the similarity of sentences measured by a pre-trained mono-lingual embedding model. Given translation sentence pairs, we train a multi-lingual model in a way that the similarity between cross-lingual embeddings follows the similarity of sentences measured at the mono-lingual teacher model. Our method can be considered as contrastive learning with soft labels defined as the similarity between sentences. Our experimental results on five languages show that our contrastive loss with soft labels far outperforms conventional contrastive loss with hard labels in various benchmarks for bitext mining tasks and STS tasks. In addition, our method outperforms existing multi-lingual embeddings, including LaBSE, on the Tatoeba dataset. | [
"Park, Minsu",
"Choi, Seyeon",
"Choi, Chanyeol",
"Kim, Jun-Seong",
"Sohn, Jy-yong"
] | Improving Multi-lingual Alignment Through Soft Contrastive Learning | naacl-srw.16 | Poster | 2405.16155 | [
"https://github.com/yai12xlinq-b/imascl"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
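The loss the record above describes, contrastive learning whose targets are the teacher's sentence-similarity distribution rather than one-hot labels, can be written compactly. This is a minimal sketch under assumed shapes and temperature; the paper's exact formulation may differ.

```python
# Contrastive loss with soft labels: the student's cross-lingual similarity
# distribution is pushed toward the mono-lingual teacher's similarity
# distribution via KL divergence instead of one-hot (hard) targets.
import torch
import torch.nn.functional as F

def soft_contrastive_loss(src_emb, tgt_emb, teacher_emb, tau=0.05):
    # src_emb:     (B, d) student embeddings of source sentences
    # tgt_emb:     (B, d) student embeddings of their translations
    # teacher_emb: (B, d) mono-lingual teacher embeddings of source sentences
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    tea = F.normalize(teacher_emb, dim=-1)
    student_logits = src @ tgt.T / tau                    # cross-lingual sims
    soft_targets = F.softmax(tea @ tea.T / tau, dim=-1)   # teacher sims
    return F.kl_div(F.log_softmax(student_logits, dim=-1),
                    soft_targets, reduction="batchmean")

B, d = 8, 256
loss = soft_contrastive_loss(torch.randn(B, d), torch.randn(B, d), torch.randn(B, d))
print(loss.item())
```

With hard labels the target would be the identity matrix; the soft version lets near-paraphrases in the batch share probability mass.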
https://aclanthology.org/2024.naacl-srw.17.bib | https://aclanthology.org/2024.naacl-srw.17/ | @inproceedings{tuo-etal-2024-shot,
title = "Few-Shot Event Argument Extraction Based on a Meta-Learning Approach",
author = "Tuo, Aboubacar and
Besan{\c{c}}on, Romaric and
Ferret, Olivier and
Tourille, Julien",
editor = "Cao, Yang (Trista) and
Papadimitriou, Isabel and
Ovalle, Anaelia and
Zampieri, Marcos and
Ferraro, Francis and
Swayamdipta, Swabha",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-srw.17",
doi = "10.18653/v1/2024.naacl-srw.17",
pages = "146--153",
abstract = "Few-shot learning techniques for Event Extraction are developed to alleviate the cost of data annotation. However, most studies on few-shot event extraction only focus on event trigger detection and no study has been proposed on argument extraction in a meta-learning context. In this paper, we investigate few-shot event argument extraction using prototypical networks, casting the task as a relation classification problem. Furthermore, we propose to enhance the relation embeddings by injecting syntactic knowledge into the model using graph convolutional networks. Our experimental results show that our proposed approach achieves strong performance on ACE 2005 in several few-shot configurations, and highlight the importance of syntactic knowledge for this task. More generally, our paper provides a unified evaluation framework for meta-learning approaches for argument extraction.",
}
| Few-shot learning techniques for Event Extraction are developed to alleviate the cost of data annotation. However, most studies on few-shot event extraction only focus on event trigger detection and no study has been proposed on argument extraction in a meta-learning context. In this paper, we investigate few-shot event argument extraction using prototypical networks, casting the task as a relation classification problem. Furthermore, we propose to enhance the relation embeddings by injecting syntactic knowledge into the model using graph convolutional networks. Our experimental results show that our proposed approach achieves strong performance on ACE 2005 in several few-shot configurations, and highlight the importance of syntactic knowledge for this task. More generally, our paper provides a unified evaluation framework for meta-learning approaches for argument extraction. | [
"Tuo, Aboubacar",
"Besan{\\c{c}}on, Romaric",
"Ferret, Olivier",
"Tourille, Julien"
] | Few-Shot Event Argument Extraction Based on a Meta-Learning Approach | naacl-srw.17 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
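To close, a minimal sketch of the prototypical-network classification at the core of this last record: each argument role gets a prototype (the mean of its support embeddings), and a query mention is assigned to the nearest prototype. The random embeddings stand in for encoder outputs, and the paper's syntactic GCN enhancement is not shown.

```python
# Few-shot classification with prototypical networks.
import torch

def prototype_classify(support_emb, support_lbl, query_emb, n_classes):
    # support_emb: (N, d), support_lbl: (N,), query_emb: (Q, d)
    protos = torch.stack([support_emb[support_lbl == c].mean(0)
                          for c in range(n_classes)])      # (C, d) class means
    dists = torch.cdist(query_emb, protos)                 # (Q, C) distances
    return dists.argmin(dim=-1)                            # nearest prototype

n_classes, shots, d = 4, 5, 128                            # a 4-way 5-shot episode
support_emb = torch.randn(n_classes * shots, d)
support_lbl = torch.arange(n_classes).repeat_interleave(shots)
query_emb = torch.randn(3, d)
print(prototype_classify(support_emb, support_lbl, query_emb, n_classes))
```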