bibtex_url | proceedings | bibtext | abstract | authors | title | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2024.emnlp-main.401.bib | https://aclanthology.org/2024.emnlp-main.401/ | @inproceedings{lin-etal-2024-towards-understanding,
title = "Towards Understanding Jailbreak Attacks in {LLM}s: A Representation Space Analysis",
author = "Lin, Yuping and
He, Pengfei and
Xu, Han and
Xing, Yue and
Yamada, Makoto and
Liu, Hui and
Tang, Jiliang",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.401",
pages = "7067--7085",
abstract = "Large language models (LLMs) are susceptible to a type of attack known as jailbreaking, which misleads LLMs to output harmful contents. Although there are diverse jailbreak attack strategies, there is no unified understanding on why some methods succeed and others fail. This paper explores the behavior of harmful and harmless prompts in the LLM{'}s representation space to investigate the intrinsic properties of successful jailbreak attacks. We hypothesize that successful attacks share some similar properties: They are effective in moving the representation of the harmful prompt towards the direction to the harmless prompts. We leverage hidden representations into the objective of existing jailbreak attacks to move the attacks along the acceptance direction, and conduct experiments to validate the above hypothesis using the proposed objective. We hope this study provides new insights into understanding how LLMs understand harmfulness information.",
}
| Large language models (LLMs) are susceptible to a type of attack known as jailbreaking, which misleads LLMs to output harmful contents. Although there are diverse jailbreak attack strategies, there is no unified understanding on why some methods succeed and others fail. This paper explores the behavior of harmful and harmless prompts in the LLM's representation space to investigate the intrinsic properties of successful jailbreak attacks. We hypothesize that successful attacks share some similar properties: They are effective in moving the representation of the harmful prompt towards the direction to the harmless prompts. We leverage hidden representations into the objective of existing jailbreak attacks to move the attacks along the acceptance direction, and conduct experiments to validate the above hypothesis using the proposed objective. We hope this study provides new insights into understanding how LLMs understand harmfulness information. | [
"Lin, Yuping",
"He, Pengfei",
"Xu, Han",
"Xing, Yue",
"Yamada, Makoto",
"Liu, Hui",
"Tang, Jiliang"
] | Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis | emnlp-main.401 | Poster | 2406.10794 | [
"https://github.com/yuplin2333/representation-space-jailbreak"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.402.bib | https://aclanthology.org/2024.emnlp-main.402/ | @inproceedings{gao-etal-2024-enhancing-legal,
title = "Enhancing Legal Case Retrieval via Scaling High-quality Synthetic Query-Candidate Pairs",
author = "Gao, Cheng and
Xiao, Chaojun and
Liu, Zhenghao and
Chen, Huimin and
Liu, Zhiyuan and
Sun, Maosong",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.402",
pages = "7086--7100",
abstract = "Legal case retrieval (LCR) aims to provide similar cases as references for a given fact description. This task is crucial for promoting consistent judgments in similar cases, effectively enhancing judicial fairness and improving work efficiency for judges. However, existing works face two main challenges for real-world applications: existing works mainly focus on case-to-case retrieval using lengthy queries, which does not match real-world scenarios; and the limited data scale, with current datasets containing only hundreds of queries, is insufficient to satisfy the training requirements of existing data-hungry neural models. To address these issues, we introduce an automated method to construct synthetic query-candidate pairs and build the largest LCR dataset to date, LEAD, which is hundreds of times larger than existing datasets. This data construction method can provide ample training signals for LCR models. Experimental results demonstrate that model training with our constructed data can achieve state-of-the-art results on two widely-used LCR benchmarks. Besides, the construction method can also be applied to civil cases and achieve promising results. The data and codes can be found in https://github.com/thunlp/LEAD.",
}
| Legal case retrieval (LCR) aims to provide similar cases as references for a given fact description. This task is crucial for promoting consistent judgments in similar cases, effectively enhancing judicial fairness and improving work efficiency for judges. However, existing works face two main challenges for real-world applications: existing works mainly focus on case-to-case retrieval using lengthy queries, which does not match real-world scenarios; and the limited data scale, with current datasets containing only hundreds of queries, is insufficient to satisfy the training requirements of existing data-hungry neural models. To address these issues, we introduce an automated method to construct synthetic query-candidate pairs and build the largest LCR dataset to date, LEAD, which is hundreds of times larger than existing datasets. This data construction method can provide ample training signals for LCR models. Experimental results demonstrate that model training with our constructed data can achieve state-of-the-art results on two widely-used LCR benchmarks. Besides, the construction method can also be applied to civil cases and achieve promising results. The data and codes can be found in https://github.com/thunlp/LEAD. | [
"Gao, Cheng",
"Xiao, Chaojun",
"Liu, Zhenghao",
"Chen, Huimin",
"Liu, Zhiyuan",
"Sun, Maosong"
] | Enhancing Legal Case Retrieval via Scaling High-quality Synthetic Query-Candidate Pairs | emnlp-main.402 | Poster | 2410.06581 | [
"https://github.com/thunlp/lead"
] | https://huggingface.co/papers/2410.06581 | 0 | 0 | 0 | 6 | [
"JamesChengGao/LEAD-model"
] | [
"JamesChengGao/LEAD"
] | [] | [
"JamesChengGao/LEAD-model"
] | [
"JamesChengGao/LEAD"
] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.403.bib | https://aclanthology.org/2024.emnlp-main.403/ | @inproceedings{song-etal-2024-large,
title = "Does Large Language Model Contain Task-Specific Neurons?",
author = "Song, Ran and
He, Shizhu and
Jiang, Shuting and
Xian, Yantuan and
Gao, Shengxiang and
Liu, Kang and
Yu, Zhengtao",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.403",
pages = "7101--7113",
abstract = "Large language models (LLMs) have demonstrated remarkable capabilities in comprehensively handling various types of natural language processing (NLP) tasks. However, there are significant differences in the knowledge and abilities required for different tasks. Therefore, it is important to understand whether the same LLM processes different tasks in the same way. Are there specific neurons in a LLM for different tasks? Inspired by neuroscience, this paper pioneers the exploration of whether distinct neurons are activated when a LLM handles different tasks. Compared with current research exploring the neurons of language and knowledge, task-specific neurons present a greater challenge due to their abstractness, diversity, and complexity. To address these challenges, this paper proposes a method for task-specific neuron localization based on Causal Gradient Variation with Special Tokens (CGVST). CGVST identifies task-specific neurons by concentrating on the most significant tokens during task processing, thereby eliminating redundant tokens and minimizing interference from non-essential neurons. Compared to traditional neuron localization methods, our approach can more effectively identify task-specific neurons. We conduct experiments across eight different public tasks. Experiments involving the inhibition and amplification of identified neurons demonstrate that our method can accurately locate task-specific neurons.",
}
| Large language models (LLMs) have demonstrated remarkable capabilities in comprehensively handling various types of natural language processing (NLP) tasks. However, there are significant differences in the knowledge and abilities required for different tasks. Therefore, it is important to understand whether the same LLM processes different tasks in the same way. Are there specific neurons in a LLM for different tasks? Inspired by neuroscience, this paper pioneers the exploration of whether distinct neurons are activated when a LLM handles different tasks. Compared with current research exploring the neurons of language and knowledge, task-specific neurons present a greater challenge due to their abstractness, diversity, and complexity. To address these challenges, this paper proposes a method for task-specific neuron localization based on Causal Gradient Variation with Special Tokens (CGVST). CGVST identifies task-specific neurons by concentrating on the most significant tokens during task processing, thereby eliminating redundant tokens and minimizing interference from non-essential neurons. Compared to traditional neuron localization methods, our approach can more effectively identify task-specific neurons. We conduct experiments across eight different public tasks. Experiments involving the inhibition and amplification of identified neurons demonstrate that our method can accurately locate task-specific neurons. | [
"Song, Ran",
"He, Shizhu",
"Jiang, Shuting",
"Xian, Yantuan",
"Gao, Shengxiang",
"Liu, Kang",
"Yu, Zhengtao"
] | Does Large Language Model Contain Task-Specific Neurons? | emnlp-main.403 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.404.bib | https://aclanthology.org/2024.emnlp-main.404/ | @inproceedings{mondorf-plank-2024-liar,
title = "Liar, Liar, Logical Mire: A Benchmark for Suppositional Reasoning in Large Language Models",
author = "Mondorf, Philipp and
Plank, Barbara",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.404",
pages = "7114--7137",
}
| No abstract found | [
"Mondorf, Philipp",
"Plank, Barbara"
] | Liar, Liar, Logical Mire: A Benchmark for Suppositional Reasoning in Large Language Models | emnlp-main.404 | Poster | 2406.12546 | [
"https://github.com/mainlp/TruthQuest"
] | https://huggingface.co/papers/2406.12546 | 0 | 0 | 0 | 2 | [] | [
"mainlp/TruthQuest-Human-Annotations",
"mainlp/TruthQuest-AI-Annotations",
"mainlp/TruthQuest"
] | [] | [] | [
"mainlp/TruthQuest-Human-Annotations",
"mainlp/TruthQuest-AI-Annotations",
"mainlp/TruthQuest"
] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.405.bib | https://aclanthology.org/2024.emnlp-main.405/ | @inproceedings{liu-etal-2024-advancing,
title = "Advancing Test-Time Adaptation in Wild Acoustic Test Settings",
author = "Liu, Hongfu and
Huang, Hengguan and
Wang, Ye",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.405",
pages = "7138--7155",
abstract = "Acoustic foundation models, fine-tuned for Automatic Speech Recognition (ASR), suffer from performance degradation in wild acoustic test settings when deployed in real-world scenarios. Stabilizing online Test-Time Adaptation (TTA) under these conditions remains an open and unexplored question. Existing wild vision TTA methods often fail to handle speech data effectively due to the unique characteristics of high-entropy speech frames, which are unreliably filtered out even when containing crucial semantic content. Furthermore, unlike static vision data, speech signals follow short-term consistency, requiring specialized adaptation strategies. In this work, we propose a novel wild acoustic TTA method tailored for ASR fine-tuned acoustic foundation models. Our method, Confidence-Enhanced Adaptation, performs frame-level adaptation using a confidence-aware weight scheme to avoid filtering out essential information in high-entropy frames. Additionally, we apply consistency regularization during test-time optimization to leverage the inherent short-term consistency of speech signals. Our experiments on both synthetic and real-world datasets demonstrate that our approach outperforms existing baselines under various wild acoustic test settings, including Gaussian noise, environmental sounds, accent variations, and sung speech.",
}
| Acoustic foundation models, fine-tuned for Automatic Speech Recognition (ASR), suffer from performance degradation in wild acoustic test settings when deployed in real-world scenarios. Stabilizing online Test-Time Adaptation (TTA) under these conditions remains an open and unexplored question. Existing wild vision TTA methods often fail to handle speech data effectively due to the unique characteristics of high-entropy speech frames, which are unreliably filtered out even when containing crucial semantic content. Furthermore, unlike static vision data, speech signals follow short-term consistency, requiring specialized adaptation strategies. In this work, we propose a novel wild acoustic TTA method tailored for ASR fine-tuned acoustic foundation models. Our method, Confidence-Enhanced Adaptation, performs frame-level adaptation using a confidence-aware weight scheme to avoid filtering out essential information in high-entropy frames. Additionally, we apply consistency regularization during test-time optimization to leverage the inherent short-term consistency of speech signals. Our experiments on both synthetic and real-world datasets demonstrate that our approach outperforms existing baselines under various wild acoustic test settings, including Gaussian noise, environmental sounds, accent variations, and sung speech. | [
"Liu, Hongfu",
"Huang, Hengguan",
"Wang, Ye"
] | Advancing Test-Time Adaptation in Wild Acoustic Test Settings | emnlp-main.405 | Oral | 2310.09505 | [
"https://github.com/Waffle-Liu/CEA"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.406.bib | https://aclanthology.org/2024.emnlp-main.406/ | @inproceedings{chen-etal-2024-learning-retrieve,
title = "Learning to Retrieve Iteratively for In-Context Learning",
author = "Chen, Yunmo and
Chen, Tongfei and
Jhamtani, Harsh and
Xia, Patrick and
Shin, Richard and
Eisner, Jason and
Van Durme, Benjamin",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.406",
pages = "7156--7168",
abstract = "We introduce iterative retrieval, a novel framework that empowers retrievers to make iterative decisions through policy optimization. Finding an optimal portfolio of retrieved items is a combinatorial optimization problem, generally considered NP-hard. This approach provides a learned approximation to such a solution, meeting specific task requirements under a given family of large language models (LLMs). We propose a training procedure based on reinforcement learning, incorporating feedback from LLMs. We instantiate an iterative retriever for composing in-context learning (ICL) exemplars and apply it to various semantic parsing tasks that demand synthesized programs as outputs. By adding only 4M additional parameters for state encoding, we convert an off-the-shelf dense retriever into a stateful iterative retriever, outperforming previous methods in selecting ICL exemplars on semantic parsing datasets such as CalFlow, TreeDST, and MTOP. Additionally, the trained iterative retriever generalizes across different inference LLMs beyond the one used during training.",
}
| We introduce iterative retrieval, a novel framework that empowers retrievers to make iterative decisions through policy optimization. Finding an optimal portfolio of retrieved items is a combinatorial optimization problem, generally considered NP-hard. This approach provides a learned approximation to such a solution, meeting specific task requirements under a given family of large language models (LLMs). We propose a training procedure based on reinforcement learning, incorporating feedback from LLMs. We instantiate an iterative retriever for composing in-context learning (ICL) exemplars and apply it to various semantic parsing tasks that demand synthesized programs as outputs. By adding only 4M additional parameters for state encoding, we convert an off-the-shelf dense retriever into a stateful iterative retriever, outperforming previous methods in selecting ICL exemplars on semantic parsing datasets such as CalFlow, TreeDST, and MTOP. Additionally, the trained iterative retriever generalizes across different inference LLMs beyond the one used during training. | [
"Chen, Yunmo",
"Chen, Tongfei",
"Jhamtani, Harsh",
"Xia, Patrick",
"Shin, Richard",
"Eisner, Jason",
"Van Durme, Benjamin"
] | Learning to Retrieve Iteratively for In-Context Learning | emnlp-main.406 | Poster | 2406.14739 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.407.bib | https://aclanthology.org/2024.emnlp-main.407/ | @inproceedings{kang-etal-2024-taxonomy,
title = "Taxonomy-guided Semantic Indexing for Academic Paper Search",
author = "Kang, SeongKu and
Zhang, Yunyi and
Jiang, Pengcheng and
Lee, Dongha and
Han, Jiawei and
Yu, Hwanjo",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.407",
pages = "7169--7184",
abstract = "Academic paper search is an essential task for efficient literature discovery and scientific advancement. While dense retrieval has advanced various ad-hoc searches, it often struggles to match the underlying academic concepts between queries and documents, which is critical for paper search. To enable effective academic concept matching for paper search, we propose Taxonomy-guided Semantic Indexing (TaxoIndex) framework. TaxoIndex extracts key concepts from papers and organizes them as a semantic index guided by an academic taxonomy, and then leverages this index as foundational knowledge to identify academic concepts and link queries and documents. As a plug-and-play framework, TaxoIndex can be flexibly employed to enhance existing dense retrievers. Extensive experiments show that TaxoIndex brings significant improvements, even with highly limited training data, and greatly enhances interpretability.",
}
| Academic paper search is an essential task for efficient literature discovery and scientific advancement. While dense retrieval has advanced various ad-hoc searches, it often struggles to match the underlying academic concepts between queries and documents, which is critical for paper search. To enable effective academic concept matching for paper search, we propose Taxonomy-guided Semantic Indexing (TaxoIndex) framework. TaxoIndex extracts key concepts from papers and organizes them as a semantic index guided by an academic taxonomy, and then leverages this index as foundational knowledge to identify academic concepts and link queries and documents. As a plug-and-play framework, TaxoIndex can be flexibly employed to enhance existing dense retrievers. Extensive experiments show that TaxoIndex brings significant improvements, even with highly limited training data, and greatly enhances interpretability. | [
"Kang, SeongKu",
"Zhang, Yunyi",
"Jiang, Pengcheng",
"Lee, Dongha",
"Han, Jiawei",
"Yu, Hwanjo"
] | Taxonomy-guided Semantic Indexing for Academic Paper Search | emnlp-main.407 | Oral | 2410.19218 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.408.bib | https://aclanthology.org/2024.emnlp-main.408/ | @inproceedings{luo-etal-2024-python,
title = "Python is Not Always the Best Choice: Embracing Multilingual Program of Thoughts",
author = "Luo, Xianzhen and
Zhu, Qingfu and
Zhang, Zhiming and
Qin, Libo and
Zhang, Xuanyu and
Yang, Qing and
Xu, Dongliang and
Che, Wanxiang",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.408",
pages = "7185--7212",
abstract = "Program of Thoughts (PoT) is an approach characterized by its executable intermediate steps, which ensure the accuracy of the logical calculations in the reasoning process. Currently, PoT primarily uses Python. However, relying solely on a single language may result in suboptimal solutions and overlook the potential benefits of other programming languages. In this paper, we conduct comprehensive experiments on the programming languages used in PoT and find that no single language consistently delivers optimal performance across all tasks and models. The effectiveness of each language varies depending on the specific scenarios. Inspired by this, we propose a task and model agnostic approach called MultiPoT, which harnesses strength and diversity from various languages. Experimental results reveal that it significantly outperforms Python Self-Consistency. Furthermore, it achieves comparable or superior performance compared to the best monolingual PoT in almost all tasks across all models. In particular, MultiPoT achieves more than 4.6{\%} improvement on average on ChatGPT (gpt-3.5-turbo-0701).",
}
| Program of Thoughts (PoT) is an approach characterized by its executable intermediate steps, which ensure the accuracy of the logical calculations in the reasoning process. Currently, PoT primarily uses Python. However, relying solely on a single language may result in suboptimal solutions and overlook the potential benefits of other programming languages. In this paper, we conduct comprehensive experiments on the programming languages used in PoT and find that no single language consistently delivers optimal performance across all tasks and models. The effectiveness of each language varies depending on the specific scenarios. Inspired by this, we propose a task and model agnostic approach called MultiPoT, which harnesses strength and diversity from various languages. Experimental results reveal that it significantly outperforms Python Self-Consistency. Furthermore, it achieves comparable or superior performance compared to the best monolingual PoT in almost all tasks across all models. In particular, MultiPoT achieves more than 4.6% improvement on average on ChatGPT (gpt-3.5-turbo-0701). | [
"Luo, Xianzhen",
"Zhu, Qingfu",
"Zhang, Zhiming",
"Qin, Libo",
"Zhang, Xuanyu",
"Yang, Qing",
"Xu, Dongliang",
"Che, Wanxiang"
] | Python is Not Always the Best Choice: Embracing Multilingual Program of Thoughts | emnlp-main.408 | Poster | 2402.10691 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.409.bib | https://aclanthology.org/2024.emnlp-main.409/ | @inproceedings{liu-etal-2024-advancing-adversarial,
title = "Advancing Adversarial Suffix Transfer Learning on Aligned Large Language Models",
author = "Liu, Hongfu and
Xie, Yuxi and
Wang, Ye and
Shieh, Michael",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.409",
pages = "7213--7224",
abstract = "Language Language Models (LLMs) face safety concerns due to potential misuse by malicious users. Recent red-teaming efforts have identified adversarial suffixes capable of jailbreaking LLMs using the gradient-based search algorithm Greedy Coordinate Gradient (GCG). However, GCG struggles with computational inefficiency, limiting further investigations regarding suffix transferability and scalability across models and data. In this work, we bridge the connection between search efficiency and suffix transferability. We propose a two-stage transfer learning framework, DeGCG, which decouples the search process into behavior-agnostic pre-searching and behavior-relevant post-searching. Specifically, we employ direct first target token optimization in pre-searching to facilitate the search process. We apply our approach to cross-model, cross-data, and self-transfer scenarios. Furthermore, we introduce an interleaved variant of our approach, i-DeGCG, which iteratively leverages self-transferability to accelerate the search process. Experiments on HarmBench demonstrate the efficiency of our approach across various models and domains. Notably, our i-DeGCG outperforms the baseline on Llama2-chat-7b with ASRs of 43.9 ($+ 22.2$) and 39.0 ($+19.5$) on valid and test sets, respectively. Further analysis on cross-model transfer indicates the pivotal role of first target token optimization in leveraging suffix transferability for efficient searching.",
}
| Large Language Models (LLMs) face safety concerns due to potential misuse by malicious users. Recent red-teaming efforts have identified adversarial suffixes capable of jailbreaking LLMs using the gradient-based search algorithm Greedy Coordinate Gradient (GCG). However, GCG struggles with computational inefficiency, limiting further investigations regarding suffix transferability and scalability across models and data. In this work, we bridge the connection between search efficiency and suffix transferability. We propose a two-stage transfer learning framework, DeGCG, which decouples the search process into behavior-agnostic pre-searching and behavior-relevant post-searching. Specifically, we employ direct first target token optimization in pre-searching to facilitate the search process. We apply our approach to cross-model, cross-data, and self-transfer scenarios. Furthermore, we introduce an interleaved variant of our approach, i-DeGCG, which iteratively leverages self-transferability to accelerate the search process. Experiments on HarmBench demonstrate the efficiency of our approach across various models and domains. Notably, our i-DeGCG outperforms the baseline on Llama2-chat-7b with ASRs of 43.9 ($+ 22.2$) and 39.0 ($+19.5$) on valid and test sets, respectively. Further analysis on cross-model transfer indicates the pivotal role of first target token optimization in leveraging suffix transferability for efficient searching. | [
"Liu, Hongfu",
"Xie, Yuxi",
"Wang, Ye",
"Shieh, Michael"
] | Advancing Adversarial Suffix Transfer Learning on Aligned Large Language Models | emnlp-main.409 | Poster | 2408.14866 | [
"https://github.com/Waffle-Liu/DeGCG"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.410.bib | https://aclanthology.org/2024.emnlp-main.410/ | @inproceedings{cao-etal-2024-incomplete,
title = "Incomplete Utterance Rewriting with Editing Operation Guidance and Utterance Augmentation",
author = "Cao, Zhiyu and
Li, Peifeng and
Fan, Yaxin and
Zhu, Qiaoming",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.410",
pages = "7225--7238",
abstract = "Although existing fashionable generation methods on Incomplete Utterance Rewriting (IUR) can generate coherent utterances, they often result in the inclusion of irrelevant and redundant tokens in rewritten utterances due to their inability to focus on critical tokens in dialogue context. Furthermore, the limited size of the training datasets also contributes to the insufficient training of the IUR model. To address the first issue, we propose a multi-task learning framework EO-IUR (Editing Operation-guided Incomplete Utterance Rewriting) that introduces the editing operation labels generated by sequence labeling module to guide generation model to focus on critical tokens. Furthermore, we introduce a token-level heterogeneous graph to represent dialogues. To address the second issue, we propose a two-dimensional utterance augmentation strategy, namely editing operation-based incomplete utterance augmentation and LLM-based historical utterance augmentation. The experimental results on three datasets demonstrate that our EO-IUR outperforms previous state-of-the-art (SOTA) baselines in both open-domain and task-oriented dialogue.",
}
| Although existing fashionable generation methods on Incomplete Utterance Rewriting (IUR) can generate coherent utterances, they often result in the inclusion of irrelevant and redundant tokens in rewritten utterances due to their inability to focus on critical tokens in dialogue context. Furthermore, the limited size of the training datasets also contributes to the insufficient training of the IUR model. To address the first issue, we propose a multi-task learning framework EO-IUR (Editing Operation-guided Incomplete Utterance Rewriting) that introduces the editing operation labels generated by sequence labeling module to guide generation model to focus on critical tokens. Furthermore, we introduce a token-level heterogeneous graph to represent dialogues. To address the second issue, we propose a two-dimensional utterance augmentation strategy, namely editing operation-based incomplete utterance augmentation and LLM-based historical utterance augmentation. The experimental results on three datasets demonstrate that our EO-IUR outperforms previous state-of-the-art (SOTA) baselines in both open-domain and task-oriented dialogue. | [
"Cao, Zhiyu",
"Li, Peifeng",
"Fan, Yaxin",
"Zhu, Qiaoming"
] | Incomplete Utterance Rewriting with Editing Operation Guidance and Utterance Augmentation | emnlp-main.410 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.411.bib | https://aclanthology.org/2024.emnlp-main.411/ | @inproceedings{li-etal-2024-frog,
title = "{FR}o{G}: Evaluating Fuzzy Reasoning of Generalized Quantifiers in {LLM}s",
author = "Li, Yiyuan and
Sun, Shichao and
Liu, Pengfei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.411",
pages = "7239--7256",
abstract = "Fuzzy reasoning is vital due to the frequent use of imprecise information in daily contexts. However, the ability of current large language models (LLMs) to handle such reasoning remains largely uncharted. In this paper, we introduce a new benchmark, FRoG, for fuzzy reasoning, featuring real-world mathematical word problems that incorporate generalized quantifiers. Our experimental findings reveal that fuzzy reasoning continues to pose significant challenges for LLMs. Moreover, we find that existing methods designed to enhance reasoning do not consistently improve performance in tasks involving fuzzy logic. Additionally, our results show an inverse scaling effect in the performance of LLMs on FRoG. Interestingly, we also demonstrate that strong mathematical reasoning skills are not necessarily indicative of success on our benchmark.",
}
| Fuzzy reasoning is vital due to the frequent use of imprecise information in daily contexts. However, the ability of current large language models (LLMs) to handle such reasoning remains largely uncharted. In this paper, we introduce a new benchmark, FRoG, for fuzzy reasoning, featuring real-world mathematical word problems that incorporate generalized quantifiers. Our experimental findings reveal that fuzzy reasoning continues to pose significant challenges for LLMs. Moreover, we find that existing methods designed to enhance reasoning do not consistently improve performance in tasks involving fuzzy logic. Additionally, our results show an inverse scaling effect in the performance of LLMs on FRoG. Interestingly, we also demonstrate that strong mathematical reasoning skills are not necessarily indicative of success on our benchmark. | [
"Li, Yiyuan",
"Sun, Shichao",
"Liu, Pengfei"
] | FRoG: Evaluating Fuzzy Reasoning of Generalized Quantifiers in LLMs | emnlp-main.411 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.412.bib | https://aclanthology.org/2024.emnlp-main.412/ | @inproceedings{stammbach-etal-2024-aligning,
title = "Aligning Large Language Models with Diverse Political Viewpoints",
author = "Stammbach, Dominik and
Widmer, Philine and
Cho, Eunjung and
Gulcehre, Caglar and
Ash, Elliott",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.412",
pages = "7257--7267",
abstract = "Large language models such as ChatGPT exhibit striking political biases. If users query them about political information, they often take a normative stance. To overcome this, we align LLMs with diverse political viewpoints from 100,000 comments written by candidates running for national parliament in Switzerland. Models aligned with this data can generate more accurate political viewpoints from Swiss parties, compared to commercial models such as ChatGPT. We also propose a procedure to generate balanced overviews summarizing multiple viewpoints using such models. The replication package contains all code and data.",
}
| Large language models such as ChatGPT exhibit striking political biases. If users query them about political information, they often take a normative stance. To overcome this, we align LLMs with diverse political viewpoints from 100,000 comments written by candidates running for national parliament in Switzerland. Models aligned with this data can generate more accurate political viewpoints from Swiss parties, compared to commercial models such as ChatGPT. We also propose a procedure to generate balanced overviews summarizing multiple viewpoints using such models. The replication package contains all code and data. | [
"Stammbach, Dominik",
"Widmer, Philine",
"Cho, Eunjung",
"Gulcehre, Caglar",
"Ash, Elliott"
] | Aligning Large Language Models with Diverse Political Viewpoints | emnlp-main.412 | Poster | 2406.14155 | [
"https://github.com/dominiksinsaarland/aligning-LLMs-with-political-views"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.413.bib | https://aclanthology.org/2024.emnlp-main.413/ | @inproceedings{nghiem-etal-2024-gotta,
title = "{``}You Gotta be a Doctor, Lin{''} : An Investigation of Name-Based Bias of Large Language Models in Employment Recommendations",
author = "Nghiem, Huy and
Prindle, John and
Zhao, Jieyu and
Daum{\'e} Iii, Hal",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.413",
pages = "7268--7287",
abstract = "Social science research has shown that candidates with names indicative of certain races or genders often face discrimination in employment practices. Similarly, Large Language Models (LLMs) have demonstrated racial and gender biases in various applications. In this study, we utilize GPT-3.5-Turbo and Llama 3-70B-Instruct to simulate hiring decisions and salary recommendations for candidates with 320 first names that strongly signal their race and gender, across over 750,000 prompts. Our empirical results indicate a preference among these models for hiring candidates with White female-sounding names over other demographic groups across 40 occupations. Additionally, even among candidates with identical qualifications, salary recommendations vary by as much as 5{\%} between different subgroups. A comparison with real-world labor data reveals inconsistent alignment with U.S. labor market characteristics, underscoring the necessity of risk investigation of LLM-powered systems.",
}
| Social science research has shown that candidates with names indicative of certain races or genders often face discrimination in employment practices. Similarly, Large Language Models (LLMs) have demonstrated racial and gender biases in various applications. In this study, we utilize GPT-3.5-Turbo and Llama 3-70B-Instruct to simulate hiring decisions and salary recommendations for candidates with 320 first names that strongly signal their race and gender, across over 750,000 prompts. Our empirical results indicate a preference among these models for hiring candidates with White female-sounding names over other demographic groups across 40 occupations. Additionally, even among candidates with identical qualifications, salary recommendations vary by as much as 5% between different subgroups. A comparison with real-world labor data reveals inconsistent alignment with U.S. labor market characteristics, underscoring the necessity of risk investigation of LLM-powered systems. | [
"Nghiem, Huy",
"Prindle, John",
"Zhao, Jieyu",
"Daum{\\'e} Iii, Hal"
] | “You Gotta be a Doctor, Lin” : An Investigation of Name-Based Bias of Large Language Models in Employment Recommendations | emnlp-main.413 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.414.bib | https://aclanthology.org/2024.emnlp-main.414/ | @inproceedings{wu-etal-2024-extending,
title = "Extending Context Window of Large Language Models from a Distributional Perspective",
author = "Wu, Yingsheng and
Gu, Yuxuan and
Feng, Xiaocheng and
Zhong, Weihong and
Xu, Dongliang and
Yang, Qing and
Liu, Hongtao and
Qin, Bing",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.414",
pages = "7288--7301",
abstract = "Scaling the rotary position embedding (RoPE) has become a common method for extending the context window of RoPE-based large language models (LLMs). However, existing scaling methods often rely on empirical approaches and lack a profound understanding of the internal distribution within RoPE, resulting in suboptimal performance in extending the context window length. In this paper, we propose to optimize the context window extending task from the view of rotary angle distribution. Specifically, we first estimate the distribution of the rotary angles within the model and analyze the extent to which length extension perturbs this distribution. Then, we present a novel extension strategy that minimizes the disturbance between rotary angle distributions to maintain consistency with the pre-training phase, enhancing the model{'}s capability to generalize to longer sequences. Experimental results compared to the strong baseline methods demonstrate that our approach reduces by up to 72{\%} of the distributional disturbance when extending LLaMA2{'}s context window to 8k, and reduces by up to 32{\%} when extending to 16k. On the LongBench-E benchmark, our method achieves an average improvement of up to 4.33{\%} over existing state-of-the-art methods. Furthermore, Our method maintains the model{'}s performance on the Hugging Face Open LLM benchmark after context window extension, with only an average performance fluctuation ranging from -0.12 to +0.22.",
}
| Scaling the rotary position embedding (RoPE) has become a common method for extending the context window of RoPE-based large language models (LLMs). However, existing scaling methods often rely on empirical approaches and lack a profound understanding of the internal distribution within RoPE, resulting in suboptimal performance in extending the context window length. In this paper, we propose to optimize the context window extending task from the view of rotary angle distribution. Specifically, we first estimate the distribution of the rotary angles within the model and analyze the extent to which length extension perturbs this distribution. Then, we present a novel extension strategy that minimizes the disturbance between rotary angle distributions to maintain consistency with the pre-training phase, enhancing the model's capability to generalize to longer sequences. Experimental results compared to the strong baseline methods demonstrate that our approach reduces by up to 72% of the distributional disturbance when extending LLaMA2's context window to 8k, and reduces by up to 32% when extending to 16k. On the LongBench-E benchmark, our method achieves an average improvement of up to 4.33% over existing state-of-the-art methods. Furthermore, our method maintains the model's performance on the Hugging Face Open LLM benchmark after context window extension, with only an average performance fluctuation ranging from -0.12 to +0.22. | [
"Wu, Yingsheng",
"Gu, Yuxuan",
"Feng, Xiaocheng",
"Zhong, Weihong",
"Xu, Dongliang",
"Yang, Qing",
"Liu, Hongtao",
"Qin, Bing"
] | Extending Context Window of Large Language Models from a Distributional Perspective | emnlp-main.414 | Poster | 2410.01490 | [
""
] | https://huggingface.co/papers/2410.01490 | 0 | 0 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.415.bib | https://aclanthology.org/2024.emnlp-main.415/ | @inproceedings{sung-kyle-2024-leveraging,
title = "Leveraging pre-trained language models for linguistic analysis: A case of argument structure constructions",
author = "Sung, Hakyung and
Kyle, Kristopher",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.415",
pages = "7302--7314",
abstract = "This study evaluates the effectiveness of pre-trained language models in identifying argument structure constructions, important for modeling both first and second language learning. We examine three methodologies: (1) supervised training with RoBERTa using a gold-standard ASC treebank, including by-tag accuracy evaluation for sentences from both native and non-native English speakers, (2) prompt-guided annotation with GPT-4, and (3) generating training data through prompts with GPT-4, followed by RoBERTa training. Our findings indicate that RoBERTa trained on gold-standard data shows the best performance. While data generated through GPT-4 enhances training, it does not exceed the benchmarks set by gold-standard data.",
}
| This study evaluates the effectiveness of pre-trained language models in identifying argument structure constructions, important for modeling both first and second language learning. We examine three methodologies: (1) supervised training with RoBERTa using a gold-standard ASC treebank, including by-tag accuracy evaluation for sentences from both native and non-native English speakers, (2) prompt-guided annotation with GPT-4, and (3) generating training data through prompts with GPT-4, followed by RoBERTa training. Our findings indicate that RoBERTa trained on gold-standard data shows the best performance. While data generated through GPT-4 enhances training, it does not exceed the benchmarks set by gold-standard data. | [
"Sung, Hakyung",
"Kyle, Kristopher"
] | Leveraging pre-trained language models for linguistic analysis: A case of argument structure constructions | emnlp-main.415 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.416.bib | https://aclanthology.org/2024.emnlp-main.416/ | @inproceedings{xu-etal-2024-magic,
title = "{MA}g{IC}: Investigation of Large Language Model Powered Multi-Agent in Cognition, Adaptability, Rationality and Collaboration",
author = "Xu, Lin and
Hu, Zhiyuan and
Zhou, Daquan and
Ren, Hongyu and
Dong, Zhen and
Keutzer, Kurt and
Ng, See-Kiong and
Feng, Jiashi",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.416",
pages = "7315--7332",
abstract = "Large Language Models (LLMs) have significantly advanced natural language processing, demonstrating exceptional reasoning, tool usage, and memory capabilities. As their applications expand into multi-agent environments, there arises a need for a comprehensive evaluation framework that captures LLMs{'} reasoning, planning, collaboration, and other social abilities. This work introduces a novel competition-based benchmark framework specifically designed to assess LLMs within multi-agent settings, providing quantitative metrics to evaluate their judgment, reasoning, deception, self-awareness, cooperation, coordination, and rationality.We utilize two social deduction games alongside three game-theory scenarios to create diverse environments.Our frame is fortified with the probabilistic graphic modeling (PGM) method, enhancing the LLMs{'} capabilities in navigating complex social and cognitive dimensions. We evaluate seven LLMs, quantitatively highlighting a significant capability gap of over threefold between the strongest, GPT o1, and the weakest, Llama-2-70B. It also confirms that our PGM enhancement boosts the abilities of all selected models by an average of 37{\%}. Our data and code can be found here https://github.com/cathyxl/MAgIC.",
}
| Large Language Models (LLMs) have significantly advanced natural language processing, demonstrating exceptional reasoning, tool usage, and memory capabilities. As their applications expand into multi-agent environments, there arises a need for a comprehensive evaluation framework that captures LLMs' reasoning, planning, collaboration, and other social abilities. This work introduces a novel competition-based benchmark framework specifically designed to assess LLMs within multi-agent settings, providing quantitative metrics to evaluate their judgment, reasoning, deception, self-awareness, cooperation, coordination, and rationality. We utilize two social deduction games alongside three game-theory scenarios to create diverse environments. Our framework is fortified with the probabilistic graphic modeling (PGM) method, enhancing the LLMs' capabilities in navigating complex social and cognitive dimensions. We evaluate seven LLMs, quantitatively highlighting a significant capability gap of over threefold between the strongest, GPT o1, and the weakest, Llama-2-70B. It also confirms that our PGM enhancement boosts the abilities of all selected models by an average of 37%. Our data and code can be found here https://github.com/cathyxl/MAgIC. | [
"Xu, Lin",
"Hu, Zhiyuan",
"Zhou, Daquan",
"Ren, Hongyu",
"Dong, Zhen",
"Keutzer, Kurt",
"Ng, See-Kiong",
"Feng, Jiashi"
] | MAgIC: Investigation of Large Language Model Powered Multi-Agent in Cognition, Adaptability, Rationality and Collaboration | emnlp-main.416 | Poster | 2311.08562 | [
"https://github.com/cathyxl/magic"
] | https://huggingface.co/papers/2311.08562 | 1 | 0 | 0 | 8 | [] | [] | [
"agentharbor/agenta"
] | [] | [] | [
"agentharbor/agenta"
] | 1 |
https://aclanthology.org/2024.emnlp-main.417.bib | https://aclanthology.org/2024.emnlp-main.417/ | @inproceedings{he-etal-2024-position,
title = "Position Engineering: Boosting Large Language Models through Positional Information Manipulation",
author = "He, Zhiyuan and
Jiang, Huiqiang and
Wang, Zilong and
Yang, Yuqing and
Qiu, Luna K. and
Qiu, Lili",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.417",
pages = "7333--7345",
abstract = "The performance of large language models (LLMs) is significantly influenced by the quality of the prompts provided. In response, researchers have developed enormous prompt engineering strategies aimed at modifying the prompt text to enhance task performance. In this paper, we introduce a novel technique termed position engineering, which offers a more efficient way to guide large language models. Unlike prompt engineering, which requires substantial effort to modify the text provided to LLMs, position engineering merely involves altering the positional information in the prompt without modifying the text itself. We have evaluated position engineering in two widely-used LLM scenarios: retrieval-augmented generation (RAG) and in-context learning (ICL). Our findings show that position engineering substantially improves upon the baseline in both cases. Position engineering thus represents a promising new strategy for exploiting the capabilities of large language models.",
}
| The performance of large language models (LLMs) is significantly influenced by the quality of the prompts provided. In response, researchers have developed enormous prompt engineering strategies aimed at modifying the prompt text to enhance task performance. In this paper, we introduce a novel technique termed position engineering, which offers a more efficient way to guide large language models. Unlike prompt engineering, which requires substantial effort to modify the text provided to LLMs, position engineering merely involves altering the positional information in the prompt without modifying the text itself. We have evaluated position engineering in two widely-used LLM scenarios: retrieval-augmented generation (RAG) and in-context learning (ICL). Our findings show that position engineering substantially improves upon the baseline in both cases. Position engineering thus represents a promising new strategy for exploiting the capabilities of large language models. | [
"He, Zhiyuan",
"Jiang, Huiqiang",
"Wang, Zilong",
"Yang, Yuqing",
"Qiu, Luna K.",
"Qiu, Lili"
] | Position Engineering: Boosting Large Language Models through Positional Information Manipulation | emnlp-main.417 | Poster | 2404.11216 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.418.bib | https://aclanthology.org/2024.emnlp-main.418/ | @inproceedings{chen-etal-2024-towards-injecting,
title = "Towards Injecting Medical Visual Knowledge into Multimodal {LLM}s at Scale",
author = "Chen, Junying and
Gui, Chi and
Ouyang, Ruyi and
Gao, Anningzhe and
Chen, Shunian and
Chen, Guiming Hardy and
Wang, Xidong and
Cai, Zhenyang and
Ji, Ke and
Wan, Xiang and
Wang, Benyou",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.418",
pages = "7346--7370",
abstract = "The rapid development of multimodal large language models (MLLMs), such as GPT-4V, has led to significant advancements. However, these models still face challenges in medical multimodal capabilities due to limitations in the quantity and quality of medical vision-text data, stemming from data privacy concerns and high annotation costs. While pioneering approaches utilize PubMed{'}s large-scale, de-identified medical image-text pairs to address these limitations, they often fall short due to inherent data noise. To tackle this, we refined medical image-text pairs from PubMed and employed MLLMs (GPT-4V) in an {`}unblinded{'} capacity to denoise and reformat the data, resulting in the creation of the **PubMedVision** dataset with 1.3 million medical VQA samples. Our validation demonstrates that: (1) PubMedVision can significantly enhance the medical multimodal capabilities of MLLMs, showing significant improvement in benchmarks including the MMMU Health {\&} Medicine track; (2) manual checks by medical experts and empirical results validate the superior data quality of our dataset compared to other data construction methods. Using PubMedVision, we train a 34B medical MLLM **HuatuoGPT-Vision**, which shows superior performance in medical multimodal scenarios among open-source MLLMs. Our code and data are available at https://github.com/FreedomIntelligence/HuatuoGPT-Vision.",
}
| The rapid development of multimodal large language models (MLLMs), such as GPT-4V, has led to significant advancements. However, these models still face challenges in medical multimodal capabilities due to limitations in the quantity and quality of medical vision-text data, stemming from data privacy concerns and high annotation costs. While pioneering approaches utilize PubMed's large-scale, de-identified medical image-text pairs to address these limitations, they often fall short due to inherent data noise. To tackle this, we refined medical image-text pairs from PubMed and employed MLLMs (GPT-4V) in an 'unblinded' capacity to denoise and reformat the data, resulting in the creation of the **PubMedVision** dataset with 1.3 million medical VQA samples. Our validation demonstrates that: (1) PubMedVision can significantly enhance the medical multimodal capabilities of MLLMs, showing significant improvement in benchmarks including the MMMU Health & Medicine track; (2) manual checks by medical experts and empirical results validate the superior data quality of our dataset compared to other data construction methods. Using PubMedVision, we train a 34B medical MLLM **HuatuoGPT-Vision**, which shows superior performance in medical multimodal scenarios among open-source MLLMs. Our code and data are available at https://github.com/FreedomIntelligence/HuatuoGPT-Vision. | [
"Chen, Junying",
"Gui, Chi",
"Ouyang, Ruyi",
"Gao, Anningzhe",
"Chen, Shunian",
"Chen, Guiming Hardy",
"Wang, Xidong",
"Cai, Zhenyang",
"Ji, Ke",
"Wan, Xiang",
"Wang, Benyou"
] | Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale | emnlp-main.418 | Poster | [
"https://github.com/freedomintelligence/huatuogpt-vision"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.419.bib | https://aclanthology.org/2024.emnlp-main.419/ | @inproceedings{qi-etal-2024-adelie,
title = "{ADELIE}: Aligning Large Language Models on Information Extraction",
author = "Qi, Yunjia and
Peng, Hao and
Wang, Xiaozhi and
Xu, Bin and
Hou, Lei and
Li, Juanzi",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.419",
pages = "7371--7387",
}
| No abstract found | [
"Qi, Yunjia",
"Peng, Hao",
"Wang, Xiaozhi",
"Xu, Bin",
"Hou, Lei",
"Li, Juanzi"
] | ADELIE: Aligning Large Language Models on Information Extraction | emnlp-main.419 | Oral | 2405.05008 | [
"https://github.com/THU-KEG/ADELIE"
] | https://huggingface.co/papers/2405.05008 | 1 | 1 | 0 | 6 | [
"THU-KEG/ADELIE-SFT",
"THU-KEG/ADELIE-DPO",
"THU-KEG/ADELIE-SFT-3B",
"THU-KEG/ADELIE-DPO-3B",
"THU-KEG/ADELIE-SFT-1.5B",
"THU-KEG/ADELIE-DPO-1.5B"
] | [] | [] | [
"THU-KEG/ADELIE-SFT",
"THU-KEG/ADELIE-DPO",
"THU-KEG/ADELIE-SFT-3B",
"THU-KEG/ADELIE-DPO-3B",
"THU-KEG/ADELIE-SFT-1.5B",
"THU-KEG/ADELIE-DPO-1.5B"
] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.420.bib | https://aclanthology.org/2024.emnlp-main.420/ | @inproceedings{wang-etal-2024-unveiling,
title = "Unveiling Factual Recall Behaviors of Large Language Models through Knowledge Neurons",
author = "Wang, Yifei and
Chen, Yuheng and
Wen, Wanting and
Sheng, Yu and
Li, Linjing and
Zeng, Daniel Dajun",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.420",
pages = "7388--7402",
abstract = "In this paper, we investigate whether Large Language Models (LLMs) actively recall or retrieve their internal repositories of factual knowledge when faced with reasoning tasks. Through an analysis of LLMs{'} internal factual recall at each reasoning step via Knowledge Neurons, we reveal that LLMs fail to harness the critical factual associations under certain circumstances. Instead, they tend to opt for alternative, shortcut-like pathways to answer reasoning questions. By manually manipulating the recall process of parametric knowledge in LLMs, we demonstrate that enhancing this recall process directly improves reasoning performance whereas suppressing it leads to notable degradation. Furthermore, we assess the effect of Chain-of-Thought (CoT) prompting, a powerful technique for addressing complex reasoning tasks. Our findings indicate that CoT can intensify the recall of factual knowledge by encouraging LLMs to engage in orderly and reliable reasoning. Furthermore, we explored how contextual conflicts affect the retrieval of facts during the reasoning process to gain a comprehensive understanding of the factual recall behaviors of LLMs. Code and data will be available soon.",
}
| In this paper, we investigate whether Large Language Models (LLMs) actively recall or retrieve their internal repositories of factual knowledge when faced with reasoning tasks. Through an analysis of LLMs{'} internal factual recall at each reasoning step via Knowledge Neurons, we reveal that LLMs fail to harness the critical factual associations under certain circumstances. Instead, they tend to opt for alternative, shortcut-like pathways to answer reasoning questions. By manually manipulating the recall process of parametric knowledge in LLMs, we demonstrate that enhancing this recall process directly improves reasoning performance, whereas suppressing it leads to notable degradation. Furthermore, we assess the effect of Chain-of-Thought (CoT) prompting, a powerful technique for addressing complex reasoning tasks. Our findings indicate that CoT can intensify the recall of factual knowledge by encouraging LLMs to engage in orderly and reliable reasoning. Finally, we explore how contextual conflicts affect the retrieval of facts during the reasoning process to gain a comprehensive understanding of the factual recall behaviors of LLMs. Code and data will be available soon. | [
"Wang, Yifei",
"Chen, Yuheng",
"Wen, Wanting",
"Sheng, Yu",
"Li, Linjing",
"Zeng, Daniel Dajun"
] | Unveiling Factual Recall Behaviors of Large Language Models through Knowledge Neurons | emnlp-main.420 | Poster | 2408.03247 | [
"https://github.com/wangyifei0047/tfrkn"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.421.bib | https://aclanthology.org/2024.emnlp-main.421/ | @inproceedings{libovicky-helcl-2024-lexically,
title = "Lexically Grounded Subword Segmentation",
author = "Libovick{\'y}, Jind{\v{r}}ich and
Helcl, Jind{\v{r}}ich",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.421",
pages = "7403--7420",
abstract = "We present three innovations in tokenization and subword segmentation. First, we propose to use unsupervised morphological analysis with Morfessor as pre-tokenization. Second, we present an algebraic method for obtaining subword embeddings grounded in a word embedding space. Based on that, we design a novel subword segmentation algorithm that uses the embeddings, ensuring that the procedure considers lexical meaning. Third, we introduce an efficient segmentation algorithm based on a subword bigram model that can be initialized with the lexically aware segmentation method to avoid using Morfessor and large embedding tables at inference time. We evaluate the proposed approaches using two intrinsic metrics and measure their performance on two downstream tasks: part-of-speech tagging and machine translation. Our experiments show significant improvements in the morphological plausibility of the segmentation when evaluated using segmentation precision on morpheme boundaries and improved R{\'e}nyi efficiency in 8 languages. Although the proposed tokenization methods do not have a large impact on automatic translation quality, we observe consistent performance gains in the arguably more morphological task of part-of-speech tagging.",
}
| We present three innovations in tokenization and subword segmentation. First, we propose to use unsupervised morphological analysis with Morfessor as pre-tokenization. Second, we present an algebraic method for obtaining subword embeddings grounded in a word embedding space. Based on that, we design a novel subword segmentation algorithm that uses the embeddings, ensuring that the procedure considers lexical meaning. Third, we introduce an efficient segmentation algorithm based on a subword bigram model that can be initialized with the lexically aware segmentation method to avoid using Morfessor and large embedding tables at inference time. We evaluate the proposed approaches using two intrinsic metrics and measure their performance on two downstream tasks: part-of-speech tagging and machine translation. Our experiments show significant improvements in the morphological plausibility of the segmentation when evaluated using segmentation precision on morpheme boundaries and improved R{\'e}nyi efficiency in 8 languages. Although the proposed tokenization methods do not have a large impact on automatic translation quality, we observe consistent performance gains in the arguably more morphological task of part-of-speech tagging. | [
"Libovick{\\'y}, Jind{\\v{r}}ich",
"Helcl, Jind{\\v{r}}ich"
] | Lexically Grounded Subword Segmentation | emnlp-main.421 | Poster | 2406.13560 | [
"https://github.com/ufal/legros-paper"
] | https://huggingface.co/papers/2406.13560 | 1 | 0 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.422.bib | https://aclanthology.org/2024.emnlp-main.422/ | @inproceedings{li-etal-2024-eagle,
title = "{EAGLE}-2: Faster Inference of Language Models with Dynamic Draft Trees",
author = "Li, Yuhui and
Wei, Fangyun and
Zhang, Chao and
Zhang, Hongyang",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.422",
pages = "7421--7432",
abstract = "Inference with modern Large Language Models (LLMs) is expensive and time-consuming, and speculative sampling has proven to be an effective solution. Most speculative sampling methods such as EAGLE use a static draft tree, implicitly assuming that the acceptance rate of draft tokens depends only on their position. Interestingly, we found that the acceptance rate of draft tokens is also context-dependent. In this paper, building upon EAGLE, we propose EAGLE-2, which introduces a new technique of context-aware dynamic draft tree into drafting modeling. This improvement leverages the fact that the draft model of EAGLE is well-calibrated: the confidence scores from the draft model approximate acceptance rates with small errors. We conducted extensive evaluations on three series of LLMs and six tasks, with EAGLE-2 achieving speedup ratios of up to **5x**, which is 1.3x that of EAGLE. EAGLE-2 also ensures that the distribution of the generated text remains unchanged, making it a **lossless** acceleration algorithm.",
}
| Inference with modern Large Language Models (LLMs) is expensive and time-consuming, and speculative sampling has proven to be an effective solution. Most speculative sampling methods such as EAGLE use a static draft tree, implicitly assuming that the acceptance rate of draft tokens depends only on their position. Interestingly, we found that the acceptance rate of draft tokens is also context-dependent. In this paper, building upon EAGLE, we propose EAGLE-2, which introduces a new context-aware dynamic draft tree technique into draft modeling. This improvement leverages the fact that the draft model of EAGLE is well-calibrated: the confidence scores from the draft model approximate acceptance rates with small errors. We conducted extensive evaluations on three series of LLMs and six tasks, with EAGLE-2 achieving speedup ratios of up to **5x**, which is 1.3x that of EAGLE. EAGLE-2 also ensures that the distribution of the generated text remains unchanged, making it a **lossless** acceleration algorithm. | [
"Li, Yuhui",
"Wei, Fangyun",
"Zhang, Chao",
"Zhang, Hongyang"
] | EAGLE-2: Faster Inference of Language Models with Dynamic Draft Trees | emnlp-main.422 | Poster | 2406.16858 | [
"https://github.com/safeailab/eagle"
] | https://huggingface.co/papers/2406.16858 | 0 | 1 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.423.bib | https://aclanthology.org/2024.emnlp-main.423/ | @inproceedings{nguyen-etal-2024-text,
title = "Do Text-to-Vis Benchmarks Test Real Use of Visualisations?",
author = "Nguyen, Hy and
He, Xuefei and
Reeson, Andrew and
Paris, Cecile and
Poon, Josiah and
Kummerfeld, Jonathan K.",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.423",
pages = "7433--7441",
abstract = "Large language models are able to generate code for visualisations in response to simple user requests.This is a useful application and an appealing one for NLP research because plots of data provide grounding for language.However, there are relatively few benchmarks, and those that exist may not be representative of what users do in practice.This paper investigates whether benchmarks reflect real-world use through an empirical study comparing benchmark datasets with code from public repositories.Our findings reveal a substantial gap, with evaluations not testing the same distribution of chart types, attributes, and actions as real-world examples.One dataset is representative, but requires extensive modification to become a practical end-to-end benchmark. This shows that new benchmarks are needed to support the development of systems that truly address users{'} visualisation needs.These observations will guide future data creation, highlighting which features hold genuine significance for users.",
}
| Large language models are able to generate code for visualisations in response to simple user requests. This is a useful application and an appealing one for NLP research because plots of data provide grounding for language. However, there are relatively few benchmarks, and those that exist may not be representative of what users do in practice. This paper investigates whether benchmarks reflect real-world use through an empirical study comparing benchmark datasets with code from public repositories. Our findings reveal a substantial gap, with evaluations not testing the same distribution of chart types, attributes, and actions as real-world examples. One dataset is representative, but requires extensive modification to become a practical end-to-end benchmark. This shows that new benchmarks are needed to support the development of systems that truly address users{'} visualisation needs. These observations will guide future data creation, highlighting which features hold genuine significance for users. | [
"Nguyen, Hy",
"He, Xuefei",
"Reeson, Andrew",
"Paris, Cecile",
"Poon, Josiah",
"Kummerfeld, Jonathan K."
] | Do Text-to-Vis Benchmarks Test Real Use of Visualisations? | emnlp-main.423 | Poster | 2407.19726 | [
"https://github.com/giahy2507/text-to-vis-benchmarks-assessment"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.424.bib | https://aclanthology.org/2024.emnlp-main.424/ | @inproceedings{liu-etal-2024-gold,
title = "Gold Panning in Vocabulary: An Adaptive Method for Vocabulary Expansion of Domain-Specific {LLM}s",
author = "Liu, Chengyuan and
Wang, Shihang and
Qing, Lizhi and
Kuang, Kun and
Kang, Yangyang and
Sun, Changlong and
Wu, Fei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.424",
pages = "7442--7459",
abstract = "While Large Language Models (LLMs) demonstrate impressive generation abilities, they frequently struggle when it comes to specialized domains due to their limited domain-specific knowledge. Studies on domain-specific LLMs resort to expanding the vocabulary before fine-tuning on domain-specific corpus, aiming to decrease the sequence length and enhance efficiency during decoding, without thoroughly investigating the results of vocabulary expansion to LLMs over different domains. Our pilot study reveals that expansion with only a subset of the entire vocabulary may lead to superior performance. Guided by the discovery, this paper explores how to identify a vocabulary subset to achieve the optimal results. We introduce VEGAD, an adaptive method that automatically identifies valuable words from a given domain vocabulary. Our method has been validated through experiments on three Chinese datasets, demonstrating its effectiveness. Additionally, we have undertaken comprehensive analyses of the method. The selection of a optimal subset for expansion has shown to enhance performance on both domain-specific tasks and general tasks, showcasing the potential of VEGAD.",
}
| While Large Language Models (LLMs) demonstrate impressive generation abilities, they frequently struggle when it comes to specialized domains due to their limited domain-specific knowledge. Studies on domain-specific LLMs resort to expanding the vocabulary before fine-tuning on a domain-specific corpus, aiming to decrease the sequence length and enhance efficiency during decoding, without thoroughly investigating the effects of vocabulary expansion on LLMs across different domains. Our pilot study reveals that expansion with only a subset of the entire vocabulary may lead to superior performance. Guided by this discovery, this paper explores how to identify a vocabulary subset that achieves optimal results. We introduce VEGAD, an adaptive method that automatically identifies valuable words from a given domain vocabulary. Our method has been validated through experiments on three Chinese datasets, demonstrating its effectiveness. Additionally, we have undertaken comprehensive analyses of the method. Selecting an optimal subset for expansion has been shown to enhance performance on both domain-specific tasks and general tasks, showcasing the potential of VEGAD. | [
"Liu, Chengyuan",
"Wang, Shihang",
"Qing, Lizhi",
"Kuang, Kun",
"Kang, Yangyang",
"Sun, Changlong",
"Wu, Fei"
] | Gold Panning in Vocabulary: An Adaptive Method for Vocabulary Expansion of Domain-Specific LLMs | emnlp-main.424 | Poster | 2410.01188 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.425.bib | https://aclanthology.org/2024.emnlp-main.425/ | @inproceedings{hu-etal-2024-strategic,
title = "Strategic Demonstration Selection for Improved Fairness in {LLM} In-Context Learning",
author = "Hu, Jingyu and
Liu, Weiru and
Du, Mengnan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.425",
pages = "7460--7475",
abstract = "Recent studies highlight the effectiveness of using in-context learning (ICL) to steer large language models (LLMs) in processing tabular data, a challenging task given the structured nature of such data. Despite advancements in performance, the fairness implications of these methods are less understood. This study investigates how varying demonstrations within ICL prompts influence the fairness outcomes of LLMs. Our findings reveal that deliberately including minority group samples in prompts significantly boosts fairness without sacrificing predictive accuracy. Further experiments demonstrate that the proportion of minority to majority samples in demonstrations affects the trade-off between fairness and prediction accuracy. Based on these insights, we introduce a mitigation technique that employs clustering and evolutionary strategies to curate a diverse and representative sample set from the training data. This approach aims to enhance both predictive performance and fairness in ICL applications. Experimental results validate that our proposed method dramatically improves fairness across various metrics, showing its efficacy in real-world scenarios.",
}
| Recent studies highlight the effectiveness of using in-context learning (ICL) to steer large language models (LLMs) in processing tabular data, a challenging task given the structured nature of such data. Despite advancements in performance, the fairness implications of these methods are less understood. This study investigates how varying demonstrations within ICL prompts influence the fairness outcomes of LLMs. Our findings reveal that deliberately including minority group samples in prompts significantly boosts fairness without sacrificing predictive accuracy. Further experiments demonstrate that the proportion of minority to majority samples in demonstrations affects the trade-off between fairness and prediction accuracy. Based on these insights, we introduce a mitigation technique that employs clustering and evolutionary strategies to curate a diverse and representative sample set from the training data. This approach aims to enhance both predictive performance and fairness in ICL applications. Experimental results validate that our proposed method dramatically improves fairness across various metrics, showing its efficacy in real-world scenarios. | [
"Hu, Jingyu",
"Liu, Weiru",
"Du, Mengnan"
] | Strategic Demonstration Selection for Improved Fairness in LLM In-Context Learning | emnlp-main.425 | Poster | 2408.09757 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.426.bib | https://aclanthology.org/2024.emnlp-main.426/ | @inproceedings{dinh-etal-2024-multi,
title = "Multi-Dialect {V}ietnamese: Task, Dataset, Baseline Models and Challenges",
author = "Dinh, Nguyen Van and
Dang, Thanh Chi and
Thanh Nguyen, Luan and
Nguyen, Kiet Van",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.426",
pages = "7476--7498",
abstract = "Vietnamese, a low-resource language, is typically categorized into three primary dialect groups that belong to Northern, Central, and Southern Vietnam. However, each province within these regions exhibits its own distinct pronunciation variations. Despite the existence of various speech recognition datasets, none of them has provided a fine-grained classification of the 63 dialects specific to individual provinces of Vietnam. To address this gap, we introduce Vietnamese Multi-Dialect (ViMD) dataset, a novel comprehensive dataset capturing the rich diversity of 63 provincial dialects spoken across Vietnam. Our dataset comprises 102.56 hours of audio, consisting of approximately 19,000 utterances, and the associated transcripts contain over 1.2 million words. To provide benchmarks and simultaneously demonstrate the challenges of our dataset, we fine-tune state-of-the-art pre-trained models for two downstream tasks: (1) Dialect identification and (2) Speech recognition. The empirical results suggest two implications including the influence of geographical factors on dialects, and the constraints of current approaches in speech recognition tasks involving multi-dialect speech data. Our dataset is available for research purposes.",
}
| Vietnamese, a low-resource language, is typically categorized into three primary dialect groups that belong to Northern, Central, and Southern Vietnam. However, each province within these regions exhibits its own distinct pronunciation variations. Despite the existence of various speech recognition datasets, none of them has provided a fine-grained classification of the 63 dialects specific to individual provinces of Vietnam. To address this gap, we introduce the Vietnamese Multi-Dialect (ViMD) dataset, a novel comprehensive dataset capturing the rich diversity of 63 provincial dialects spoken across Vietnam. Our dataset comprises 102.56 hours of audio, consisting of approximately 19,000 utterances, and the associated transcripts contain over 1.2 million words. To provide benchmarks and simultaneously demonstrate the challenges of our dataset, we fine-tune state-of-the-art pre-trained models for two downstream tasks: (1) dialect identification and (2) speech recognition. The empirical results suggest two implications: the influence of geographical factors on dialects, and the constraints of current approaches to speech recognition tasks involving multi-dialect speech data. Our dataset is available for research purposes. | [
"Dinh, Nguyen Van",
"Dang, Thanh Chi",
"Thanh Nguyen, Luan",
"Nguyen, Kiet Van"
] | Multi-Dialect Vietnamese: Task, Dataset, Baseline Models and Challenges | emnlp-main.426 | Poster | 2410.03458 | [
"https://github.com/nguyen-dv/ViMD_Dataset"
] | https://huggingface.co/papers/2410.03458 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.427.bib | https://aclanthology.org/2024.emnlp-main.427/ | @inproceedings{raina-etal-2024-llm,
title = "Is {LLM}-as-a-Judge Robust? Investigating Universal Adversarial Attacks on Zero-shot {LLM} Assessment",
author = "Raina, Vyas and
Liusie, Adian and
Gales, Mark",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.427",
pages = "7499--7517",
abstract = "Large Language Models (LLMs) are powerful zero-shot assessors used in real-world situations such as assessing written exams and benchmarking systems. Despite these critical applications, no existing work has analyzed the vulnerability of judge-LLMs to adversarial manipulation. This work presents the first study on the adversarial robustness of assessment LLMs, where we demonstrate that short universal adversarial phrases can be concatenated to deceive judge LLMs to predict inflated scores. Since adversaries may not know or have access to the judge-LLMs, we propose a simple surrogate attack where a surrogate model is first attacked, and the learned attack phrase then transferred to unknown judge-LLMs. We propose a practical algorithm to determine the short universal attack phrases and demonstrate that when transferred to unseen models, scores can be drastically inflated such that irrespective of the assessed text, maximum scores are predicted. It is found that judge-LLMs are significantly more susceptible to these adversarial attacks when used for absolute scoring, as opposed to comparative assessment. Our findings raise concerns on the reliability of LLM-as-a-judge methods, and emphasize the importance of addressing vulnerabilities in LLM assessment methods before deployment in high-stakes real-world scenarios.",
}
| Large Language Models (LLMs) are powerful zero-shot assessors used in real-world situations such as assessing written exams and benchmarking systems. Despite these critical applications, no existing work has analyzed the vulnerability of judge-LLMs to adversarial manipulation. This work presents the first study on the adversarial robustness of assessment LLMs, where we demonstrate that short universal adversarial phrases can be concatenated to deceive judge LLMs into predicting inflated scores. Since adversaries may not know or have access to the judge-LLMs, we propose a simple surrogate attack where a surrogate model is first attacked, and the learned attack phrase is then transferred to unknown judge-LLMs. We propose a practical algorithm to determine the short universal attack phrases and demonstrate that when transferred to unseen models, scores can be drastically inflated such that irrespective of the assessed text, maximum scores are predicted. It is found that judge-LLMs are significantly more susceptible to these adversarial attacks when used for absolute scoring, as opposed to comparative assessment. Our findings raise concerns about the reliability of LLM-as-a-judge methods, and emphasize the importance of addressing vulnerabilities in LLM assessment methods before deployment in high-stakes real-world scenarios. | [
"Raina, Vyas",
"Liusie, Adian",
"Gales, Mark"
] | Is LLM-as-a-Judge Robust? Investigating Universal Adversarial Attacks on Zero-shot LLM Assessment | emnlp-main.427 | Poster | 2402.14016 | [
"https://github.com/rainavyas/attack-comparative-assessment"
] | https://huggingface.co/papers/2402.14016 | 0 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.428.bib | https://aclanthology.org/2024.emnlp-main.428/ | @inproceedings{lu-etal-2024-rethinking,
title = "Rethinking the Reversal Curse of {LLM}s: a Prescription from Human Knowledge Reversal",
author = "Lu, Zhicong and
Jin, Li and
Li, Peiguang and
Tian, Yu and
Zhang, Linhao and
Wang, Sirui and
Xu, Guangluan and
Tian, Changyuan and
Cai, Xunliang",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.428",
pages = "7518--7530",
abstract = "Large Language Models (LLMs) have exhibited exceptional performance across diverse domains. However, recent studies reveal that LLMs are plagued by the {``}reversal curse{''}. Most existing methods rely on aggressive sample permutation and pay little attention to delving into the underlying reasons for this issue, resulting in only partial mitigation. In this paper, inspired by human knowledge reversal, we investigate and quantify the individual influence of three potential reasons on the reversal curse: 1) knowledge clarity, 2) entity correlation modeling, and 3) pairwise relationship reasoning capability. Motivated by the analysis of these reasons, we propose a novel **P**airwise entity **O**rder- and **R**elationship-**E**nhanced (**PORE**) data strategy, which facilitates bidirectional entity correlation modeling and pairwise relationship reasoning to overcome the reversal curse. Specifically, PORE augments the samples with entity order-reversal and semantically preserved question-answer pairs, enhancing the encoding of entity correlations in both directions. PORE also employs entity-interleaved pairwise relationship data, which elevates the model{'}s capability for relationship reasoning. Additionally, to improve the recall of reverse relationships, we leverage knowledge clarity to construct high-clarity data for PORE. Extensive experimental results on available and two newly assembled datasets demonstrate the effectiveness and generalization of our method in both data-sufficient and -constrained situations.",
}
| Large Language Models (LLMs) have exhibited exceptional performance across diverse domains. However, recent studies reveal that LLMs are plagued by the {``}reversal curse{''}. Most existing methods rely on aggressive sample permutation and pay little attention to delving into the underlying reasons for this issue, resulting in only partial mitigation. In this paper, inspired by human knowledge reversal, we investigate and quantify the individual influence of three potential reasons on the reversal curse: 1) knowledge clarity, 2) entity correlation modeling, and 3) pairwise relationship reasoning capability. Motivated by the analysis of these reasons, we propose a novel **P**airwise entity **O**rder- and **R**elationship-**E**nhanced (**PORE**) data strategy, which facilitates bidirectional entity correlation modeling and pairwise relationship reasoning to overcome the reversal curse. Specifically, PORE augments the samples with entity order-reversal and semantically preserved question-answer pairs, enhancing the encoding of entity correlations in both directions. PORE also employs entity-interleaved pairwise relationship data, which elevates the model{'}s capability for relationship reasoning. Additionally, to improve the recall of reverse relationships, we leverage knowledge clarity to construct high-clarity data for PORE. Extensive experimental results on available and two newly assembled datasets demonstrate the effectiveness and generalization of our method in both data-sufficient and -constrained situations. | [
"Lu, Zhicong",
"Jin, Li",
"Li, Peiguang",
"Tian, Yu",
"Zhang, Linhao",
"Wang, Sirui",
"Xu, Guangluan",
"Tian, Changyuan",
"Cai, Xunliang"
] | Rethinking the Reversal Curse of LLMs: a Prescription from Human Knowledge Reversal | emnlp-main.428 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.429.bib | https://aclanthology.org/2024.emnlp-main.429/ | @inproceedings{liu-etal-2024-catastrophic,
title = "More Than Catastrophic Forgetting: Integrating General Capabilities For Domain-Specific {LLM}s",
author = "Liu, Chengyuan and
Kang, Yangyang and
Wang, Shihang and
Qing, Lizhi and
Zhao, Fubang and
Wu, Chao and
Sun, Changlong and
Kuang, Kun and
Wu, Fei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.429",
pages = "7531--7548",
abstract = "The performance on general tasks decreases after Large Language Models (LLMs) are fine-tuned on domain-specific tasks, the phenomenon is known as Catastrophic Forgetting (CF). However, this paper presents a further challenge for real application of domain-specific LLMs beyond CF, called General Capabilities Integration (GCI), which necessitates the integration of both the general capabilities and domain knowledge within a single instance. The objective of GCI is not merely to retain previously acquired general capabilities alongside new domain knowledge, but to harmonize and utilize both sets of skills in a cohesive manner to enhance performance on domain-specific tasks. Taking legal domain as an example, we carefully design three groups of training and testing tasks without lacking practicability, and construct the corresponding datasets. To better incorporate general capabilities across domain-specific scenarios, we introduce ALoRA, which utilizes a multi-head attention module upon LoRA, facilitating direct information transfer from preceding tokens to the current one. This enhancement permits the representation to dynamically switch between domain-specific knowledge and general competencies according to the attention. Extensive experiments are conducted on the proposed tasks. The results exhibit the significance of our setting, and the effectiveness of our method.",
}
| The performance on general tasks decreases after Large Language Models (LLMs) are fine-tuned on domain-specific tasks, a phenomenon known as Catastrophic Forgetting (CF). However, this paper presents a further challenge for the real-world application of domain-specific LLMs beyond CF, called General Capabilities Integration (GCI), which necessitates the integration of both the general capabilities and domain knowledge within a single instance. The objective of GCI is not merely to retain previously acquired general capabilities alongside new domain knowledge, but to harmonize and utilize both sets of skills in a cohesive manner to enhance performance on domain-specific tasks. Taking the legal domain as an example, we carefully design three groups of training and testing tasks without sacrificing practicality, and construct the corresponding datasets. To better incorporate general capabilities across domain-specific scenarios, we introduce ALoRA, which utilizes a multi-head attention module upon LoRA, facilitating direct information transfer from preceding tokens to the current one. This enhancement permits the representation to dynamically switch between domain-specific knowledge and general competencies according to the attention. Extensive experiments are conducted on the proposed tasks. The results demonstrate the significance of our setting and the effectiveness of our method. | [
"Liu, Chengyuan",
"Kang, Yangyang",
"Wang, Shihang",
"Qing, Lizhi",
"Zhao, Fubang",
"Wu, Chao",
"Sun, Changlong",
"Kuang, Kun",
"Wu, Fei"
] | More Than Catastrophic Forgetting: Integrating General Capabilities For Domain-Specific LLMs | emnlp-main.429 | Poster | 2405.17830 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.430.bib | https://aclanthology.org/2024.emnlp-main.430/ | @inproceedings{raina-etal-2024-muting,
title = "Muting Whisper: A Universal Acoustic Adversarial Attack on Speech Foundation Models",
author = "Raina, Vyas and
Ma, Rao and
McGhee, Charles and
Knill, Kate and
Gales, Mark",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.430",
pages = "7549--7565",
}
| No abstract found | [
"Raina, Vyas",
"Ma, Rao",
"McGhee, Charles",
"Knill, Kate",
"Gales, Mark"
] | Muting Whisper: A Universal Acoustic Adversarial Attack on Speech Foundation Models | emnlp-main.430 | Poster | 2405.06134 | [
"https://github.com/rainavyas/prepend_acoustic_attack"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.431.bib | https://aclanthology.org/2024.emnlp-main.431/ | @inproceedings{katsimpras-paliouras-2024-genra,
title = "{GENRA}: Enhancing Zero-shot Retrieval with Rank Aggregation",
author = "Katsimpras, Georgios and
Paliouras, Georgios",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.431",
pages = "7566--7577",
abstract = "Large Language Models (LLMs) have been shown to effectively perform zero-shot document retrieval, a process that typically consists of two steps: i) retrieving relevant documents, and ii) re-ranking them based on their relevance to the query. This paper presents GENRA, a new approach to zero-shot document retrieval that incorporates rank aggregation to improve retrieval effectiveness. Given a query, GENRA first utilizes LLMs to generate informative passages that capture the query{'}s intent. These passages are then employed to guide the retrieval process, selecting similar documents from the corpus. Next, we use LLMs again for a second refinement step. This step can be configured for either direct relevance assessment of each retrieved document or for re-ranking the retrieved documents. Ultimately, both approaches ensure that only the most relevant documents are kept. Upon this filtered set of documents, we perform multi-document retrieval, generating individual rankings for each document. As a final step, GENRA leverages rank aggregation, combining the individual rankings to produce a single refined ranking. Extensive experiments on benchmark datasets demonstrate that GENRA improves existing approaches, highlighting the effectiveness of the proposed methodology in zero-shot retrieval.",
}
| Large Language Models (LLMs) have been shown to effectively perform zero-shot document retrieval, a process that typically consists of two steps: i) retrieving relevant documents, and ii) re-ranking them based on their relevance to the query. This paper presents GENRA, a new approach to zero-shot document retrieval that incorporates rank aggregation to improve retrieval effectiveness. Given a query, GENRA first utilizes LLMs to generate informative passages that capture the query{'}s intent. These passages are then employed to guide the retrieval process, selecting similar documents from the corpus. Next, we use LLMs again for a second refinement step. This step can be configured for either direct relevance assessment of each retrieved document or for re-ranking the retrieved documents. Ultimately, both approaches ensure that only the most relevant documents are kept. Upon this filtered set of documents, we perform multi-document retrieval, generating individual rankings for each document. As a final step, GENRA leverages rank aggregation, combining the individual rankings to produce a single refined ranking. Extensive experiments on benchmark datasets demonstrate that GENRA improves existing approaches, highlighting the effectiveness of the proposed methodology in zero-shot retrieval. | [
"Katsimpras, Georgios",
"Paliouras, Georgios"
] | GENRA: Enhancing Zero-shot Retrieval with Rank Aggregation | emnlp-main.431 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.432.bib | https://aclanthology.org/2024.emnlp-main.432/ | @inproceedings{chen-etal-2024-xplainllm,
title = "{X}plain{LLM}: A Knowledge-Augmented Dataset for Reliable Grounded Explanations in {LLM}s",
author = "Chen, Zichen and
Chen, Jianda and
Singh, Ambuj and
Sra, Misha",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.432",
pages = "7578--7596",
abstract = "Large Language Models (LLMs) have achieved remarkable success in natural language tasks, yet understanding their reasoning processes remains a significant challenge. We address this by introducing XplainLLM, a dataset accompanying an explanation framework designed to enhance LLM transparency and reliability. Our dataset comprises 24,204 instances where each instance interprets the LLM{'}s reasoning behavior using knowledge graphs (KGs) and graph attention networks (GAT), and includes explanations of LLMs such as the decoder-only Llama-3 and the encoder-only RoBERTa. XplainLLM also features a framework for generating grounded explanations and the \textit{debugger-scores} for multidimensional quality analysis. Our explanations include \textit{why-choose} and \textit{why-not-choose} components, \textit{reason-elements}, and \textit{debugger-scores} that collectively illuminate the LLM{'}s reasoning behavior. Our evaluations demonstrate XplainLLM{'}s potential to reduce hallucinations and improve grounded explanation generation in LLMs. XplainLLM is a resource for researchers and practitioners to build trust and verify the reliability of LLM outputs. Our code and dataset are publicly available.",
}
| Large Language Models (LLMs) have achieved remarkable success in natural language tasks, yet understanding their reasoning processes remains a significant challenge. We address this by introducing XplainLLM, a dataset accompanying an explanation framework designed to enhance LLM transparency and reliability. Our dataset comprises 24,204 instances where each instance interprets the LLM{'}s reasoning behavior using knowledge graphs (KGs) and graph attention networks (GAT), and includes explanations of LLMs such as the decoder-only Llama-3 and the encoder-only RoBERTa. XplainLLM also features a framework for generating grounded explanations and the \textit{debugger-scores} for multidimensional quality analysis. Our explanations include \textit{why-choose} and \textit{why-not-choose} components, \textit{reason-elements}, and \textit{debugger-scores} that collectively illuminate the LLM{'}s reasoning behavior. Our evaluations demonstrate XplainLLM{'}s potential to reduce hallucinations and improve grounded explanation generation in LLMs. XplainLLM is a resource for researchers and practitioners to build trust and verify the reliability of LLM outputs. Our code and dataset are publicly available. | [
"Chen, Zichen",
"Chen, Ji",
"a",
"Singh, Ambuj",
"Sra, Misha"
] | XplainLLM: A Knowledge-Augmented Dataset for Reliable Grounded Explanations in LLMs | emnlp-main.432 | Poster | 2311.08614 | [
"https://github.com/chen-zichen/xplainllm_dataset"
] | https://huggingface.co/papers/2311.08614 | 1 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.433.bib | https://aclanthology.org/2024.emnlp-main.433/ | @inproceedings{zhou-wang-2024-divide,
title = "Divide and Conquer Radiology Report Generation via Observation Level Fine-grained Pretraining and Prompt Tuning",
author = "Zhou, Yuanpin and
Wang, Huogen",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.433",
pages = "7597--7610",
abstract = "The automation of radiology report generation (RRG) holds immense potential to alleviate radiologists{'} workloads and improve diagnostic accuracy. Despite advancements in image captioning and vision-language pretraining, RRG remains challenging due to the lengthy and complex nature of radiology reports. In this work, we proposes the Divide and Conquer Radiology Report Generation (DCRRG) model, which breaks down full-text radiology reports into concise observation descriptions. This approach enables the model to capture fine-grained representations from each observation through a two-stage process: an encoding stage focusing on observation prediction tasks to learn fine-grained representations, and a decoding stage for integrating these descriptions into cohesive and comprehensive radiology reports. Experimental results on two benchmark datasets demonstrate that DCRRG achieves significant improvements across all evaluated metrics, underscoring its capability to generate semantically coherent and clinically accurate radiology reports.",
}
| The automation of radiology report generation (RRG) holds immense potential to alleviate radiologists{'} workloads and improve diagnostic accuracy. Despite advancements in image captioning and vision-language pretraining, RRG remains challenging due to the lengthy and complex nature of radiology reports. In this work, we propose the Divide and Conquer Radiology Report Generation (DCRRG) model, which breaks down full-text radiology reports into concise observation descriptions. This approach enables the model to capture fine-grained representations from each observation through a two-stage process: an encoding stage focusing on observation prediction tasks to learn fine-grained representations, and a decoding stage for integrating these descriptions into cohesive and comprehensive radiology reports. Experimental results on two benchmark datasets demonstrate that DCRRG achieves significant improvements across all evaluated metrics, underscoring its capability to generate semantically coherent and clinically accurate radiology reports. | [
"Zhou, Yuanpin",
"Wang, Huogen"
] | Divide and Conquer Radiology Report Generation via Observation Level Fine-grained Pretraining and Prompt Tuning | emnlp-main.433 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.434.bib | https://aclanthology.org/2024.emnlp-main.434/ | @inproceedings{sun-etal-2024-surf,
title = "{SUR}f: Teaching Large Vision-Language Models to Selectively Utilize Retrieved Information",
author = "Sun, Jiashuo and
Zhang, Jihai and
Zhou, Yucheng and
Su, Zhaochen and
Qu, Xiaoye and
Cheng, Yu",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.434",
pages = "7611--7629",
abstract = "Large Vision-Language Models (LVLMs) have become pivotal at the intersection of computer vision and natural language processing. However, the full potential of LVLMs{'} Retrieval-Augmented Generation (RAG) capabilities remains underutilized. Existing works either focus solely on the text modality or are limited to specific tasks. Moreover, most LVLMs struggle to selectively utilize retrieved information and are sensitive to irrelevant or misleading references. To address these challenges, we propose a self-refinement framework designed to teach LVLMs to \textbf{S}electively \textbf{U}tilize \textbf{R}etrieved In\textbf{f}ormation (SURf). Specifically, when given questions that are incorrectly answered by the LVLM backbone, we obtain references that help correct the answers (positive references) and those that do not (negative references). We then fine-tune the LVLM backbone using a combination of these positive and negative references. Our experiments across three tasks and seven datasets demonstrate that our framework significantly enhances LVLMs{'} ability to effectively utilize retrieved multimodal references and improves their robustness against irrelevant or misleading information. The source code is available at https://anonymous.4open.science/r/SURf-6433.",
}
| Large Vision-Language Models (LVLMs) have become pivotal at the intersection of computer vision and natural language processing. However, the full potential of LVLMs{'} Retrieval-Augmented Generation (RAG) capabilities remains underutilized. Existing works either focus solely on the text modality or are limited to specific tasks. Moreover, most LVLMs struggle to selectively utilize retrieved information and are sensitive to irrelevant or misleading references. To address these challenges, we propose a self-refinement framework designed to teach LVLMs to \textbf{S}electively \textbf{U}tilize \textbf{R}etrieved In\textbf{f}ormation (SURf). Specifically, when given questions that are incorrectly answered by the LVLM backbone, we obtain references that help correct the answers (positive references) and those that do not (negative references). We then fine-tune the LVLM backbone using a combination of these positive and negative references. Our experiments across three tasks and seven datasets demonstrate that our framework significantly enhances LVLMs{'} ability to effectively utilize retrieved multimodal references and improves their robustness against irrelevant or misleading information. The source code is available at https://anonymous.4open.science/r/SURf-6433. | [
"Sun, Jiashuo",
"Zhang, Jihai",
"Zhou, Yucheng",
"Su, Zhaochen",
"Qu, Xiaoye",
"Cheng, Yu"
] | SURf: Teaching Large Vision-Language Models to Selectively Utilize Retrieved Information | emnlp-main.434 | Poster | 2409.14083 | [
"https://github.com/gasolsun36/surf"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.435.bib | https://aclanthology.org/2024.emnlp-main.435/ | @inproceedings{qin-etal-2024-uno,
title = "{UNO} Arena for Evaluating Sequential Decision-Making Capability of Large Language Models",
author = "Qin, Zhanyue and
Wang, Haochuan and
Liu, Deyuan and
Song, Ziyang and
Fan, Cunhang and
Lv, Zhao and
Wu, Jinlin and
Lei, Zhen and
Tu, Zhiying and
Chu, Dianhui and
Yu, Xiaoyan and
Sui, Dianbo",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.435",
pages = "7630--7645",
abstract = "Sequential decision-making refers to algorithms that take into account the dynamics of the environment, where early decisions affect subsequent decisions. With large language models (LLMs) demonstrating powerful capabilities between tasks, we can{'}t help but ask: Can Current LLMs Effectively Make Sequential Decisions? In order to answer this question, we propose the UNO Arena based on the card game UNO to evaluate the sequential decision-making capability of LLMs and explain in detail why we choose UNO. In UNO Arena, We evaluate the sequential decision-making capability of LLMs dynamically with novel metrics based Monte Carlo methods. We set up random players, DQN-based reinforcement learning players, and LLM players (e.g. GPT-4, Gemini-pro) for comparison testing. Furthermore, in order to improve the sequential decision-making capability of LLMs, we propose the TUTRI player, which can involves having LLMs reflect their own actions with the summary of game history and the game strategy. Numerous experiments demonstrate that the TUTRI player achieves a notable breakthrough in the performance of sequential decision-making compared to the vanilla LLM player.",
}
| Sequential decision-making refers to algorithms that take into account the dynamics of the environment, where early decisions affect subsequent decisions. With large language models (LLMs) demonstrating powerful capabilities across tasks, we can{'}t help but ask: Can current LLMs effectively make sequential decisions? In order to answer this question, we propose the UNO Arena based on the card game UNO to evaluate the sequential decision-making capability of LLMs and explain in detail why we choose UNO. In UNO Arena, we evaluate the sequential decision-making capability of LLMs dynamically with novel metrics based on Monte Carlo methods. We set up random players, DQN-based reinforcement learning players, and LLM players (e.g., GPT-4, Gemini-pro) for comparison testing. Furthermore, in order to improve the sequential decision-making capability of LLMs, we propose the TUTRI player, which involves having LLMs reflect on their own actions with a summary of the game history and the game strategy. Numerous experiments demonstrate that the TUTRI player achieves a notable breakthrough in the performance of sequential decision-making compared to the vanilla LLM player. | [
"Qin, Zhanyue",
"Wang, Haochuan",
"Liu, Deyuan",
"Song, Ziyang",
"Fan, Cunhang",
"Lv, Zhao",
"Wu, Jinlin",
"Lei, Zhen",
"Tu, Zhiying",
"Chu, Dianhui",
"Yu, Xiaoyan",
"Sui, Dianbo"
] | UNO Arena for Evaluating Sequential Decision-Making Capability of Large Language Models | emnlp-main.435 | Poster | 2406.16382 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.436.bib | https://aclanthology.org/2024.emnlp-main.436/ | @inproceedings{gu-etal-2024-middleware,
title = "Middleware for {LLM}s: Tools Are Instrumental for Language Agents in Complex Environments",
author = "Gu, Yu and
Shu, Yiheng and
Yu, Hao and
Liu, Xiao and
Dong, Yuxiao and
Tang, Jie and
Srinivasa, Jayanth and
Latapie, Hugo and
Su, Yu",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.436",
pages = "7646--7663",
abstract = "The applications of large language models (LLMs) have expanded well beyond the confines of text processing, signaling a new era where LLMs are envisioned as generalist agents capable of operating within complex environments. These environments are often highly expansive, making it impossible for the LLM to process them within its short-term memory. Motivated by recent research on extending the capabilities of LLMs with tools, we seek to investigate the intriguing potential of tools to augment LLMs in handling such complexity by introducing a novel class of tools, termed *middleware*, to aid in the proactive exploration within these massive environments. Such specialized tools can serve as a middleware layer shielding the LLM from environmental complexity. In two representative complex environments{---}knowledge bases (KBs) and databases{---}we demonstrate the significant potential of augmenting language agents with tools in complex environments. Notably, equipped with the middleware, GPT-4 achieves **2.8**X the performance of the best baseline in tasks requiring access to database content and **2.2**X in KB tasks. Our findings illuminate the path for advancing language agents in real-world applications.",
}
| The applications of large language models (LLMs) have expanded well beyond the confines of text processing, signaling a new era where LLMs are envisioned as generalist agents capable of operating within complex environments. These environments are often highly expansive, making it impossible for the LLM to process them within its short-term memory. Motivated by recent research on extending the capabilities of LLMs with tools, we seek to investigate the intriguing potential of tools to augment LLMs in handling such complexity by introducing a novel class of tools, termed *middleware*, to aid in the proactive exploration within these massive environments. Such specialized tools can serve as a middleware layer shielding the LLM from environmental complexity. In two representative complex environments{---}knowledge bases (KBs) and databases{---}we demonstrate the significant potential of augmenting language agents with tools in complex environments. Notably, equipped with the middleware, GPT-4 achieves **2.8**X the performance of the best baseline in tasks requiring access to database content and **2.2**X in KB tasks. Our findings illuminate the path for advancing language agents in real-world applications. | [
"Gu, Yu",
"Shu, Yiheng",
"Yu, Hao",
"Liu, Xiao",
"Dong, Yuxiao",
"Tang, Jie",
"Srinivasa, Jayanth",
"Latapie, Hugo",
"Su, Yu"
] | Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments | emnlp-main.436 | Poster | 2402.14672 | [
""
] | https://huggingface.co/papers/2402.14672 | 4 | 0 | 0 | 9 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.437.bib | https://aclanthology.org/2024.emnlp-main.437/ | @inproceedings{tang-etal-2024-morpheus,
title = "{MORPHEUS}: Modeling Role from Personalized Dialogue History by Exploring and Utilizing Latent Space",
author = "Tang, Yihong and
Wang, Bo and
Zhao, Dongming and
Jinxiaojia, Jinxiaojia and
Zhangjijun, Zhangjijun and
He, Ruifang and
Hou, Yuexian",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.437",
pages = "7664--7676",
abstract = "Personalized Dialogue Generation (PDG) aims to create coherent responses according to roles or personas. Traditional PDG relies on external role data, which can be scarce and raise privacy concerns. Approaches address these issues by extracting role information from dialogue history, which often fail to generically model roles in continuous space. To overcome these limitations, we introduce a novel framework Models Roles from Personalized Dialogue History by Exploring and Utilizing Latent Space (MORPHEUS) through a three-stage training process. Specifically, we create a persona codebook to represent roles in latent space compactly, and this codebook is used to construct a posterior distribution of role information. This method enables the model to generalize across roles, allowing the generation of personalized dialogues even for unseen roles. Experiments on both Chinese and English datasets demonstrate that MORPHEUS enhances the extraction of role information, and improves response generation without external role data. Additionally, MORPHEUS can be considered an efficient fine-tuning for large language models.",
}
| Personalized Dialogue Generation (PDG) aims to create coherent responses according to roles or personas. Traditional PDG relies on external role data, which can be scarce and raise privacy concerns. Recent approaches address these issues by extracting role information from dialogue history, but they often fail to model roles generically in a continuous space. To overcome these limitations, we introduce MORPHEUS (Models Roles from Personalized Dialogue History by Exploring and Utilizing Latent Space), a novel framework built through a three-stage training process. Specifically, we create a persona codebook to represent roles compactly in latent space, and this codebook is used to construct a posterior distribution of role information. This method enables the model to generalize across roles, allowing the generation of personalized dialogues even for unseen roles. Experiments on both Chinese and English datasets demonstrate that MORPHEUS enhances the extraction of role information and improves response generation without external role data. Additionally, MORPHEUS can be considered an efficient fine-tuning method for large language models. | [
"Tang, Yihong",
"Wang, Bo",
"Zhao, Dongming",
"Jinxiaojia, Jinxiaojia",
"Zhangjijun, Zhangjijun",
"He, Ruifang",
"Hou, Yuexian"
] | MORPHEUS: Modeling Role from Personalized Dialogue History by Exploring and Utilizing Latent Space | emnlp-main.437 | Poster | 2407.02345 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
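The persona-codebook mechanism described in the MORPHEUS abstract above can be pictured with a small vector-quantization sketch. Everything below (shapes, names, the nearest-neighbour lookup) is an illustrative assumption, not the authors' released implementation:

```python
# Minimal sketch of a persona-codebook lookup in the spirit of the MORPHEUS
# abstract (compact latent roles + nearest-code quantization). All names and
# shapes are hypothetical assumptions, not the paper's actual code.
import numpy as np

rng = np.random.default_rng(0)
K, d = 64, 128                                # assumed codebook size / latent dim
persona_codebook = rng.normal(size=(K, d))    # learned jointly in the real system

def quantize_role(history_embedding: np.ndarray) -> tuple[int, np.ndarray]:
    """Map a dialogue-history embedding to its nearest persona code."""
    dists = np.linalg.norm(persona_codebook - history_embedding, axis=1)
    idx = int(np.argmin(dists))
    return idx, persona_codebook[idx]

h = rng.normal(size=d)                        # stand-in for an encoded dialogue history
code_id, role_vec = quantize_role(h)
print(code_id, role_vec[:4])
```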
https://aclanthology.org/2024.emnlp-main.438.bib | https://aclanthology.org/2024.emnlp-main.438/ | @inproceedings{wang-etal-2024-knowledgesg,
title = "{K}nowledge{SG}: Privacy-Preserving Synthetic Text Generation with Knowledge Distillation from Server",
author = "Wang, WenHao and
Liang, Xiaoyu and
Ye, Rui and
Chai, Jingyi and
Chen, Siheng and
Wang, Yanfeng",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.438",
pages = "7677--7695",
abstract = "The success of large language models (LLMs) facilitate many parties to fine-tune LLMs on their own private data. However, this practice raises privacy concerns due to the memorization of LLMs. Existing solutions, such as utilizing synthetic data for substitution, struggle to simultaneously improve performance and preserve privacy.They either rely on a local model for generation, resulting in a performance decline, or take advantage of APIs, directly exposing the data to API servers. To address this issue, we propose \textit{KnowledgeSG}, a novel client-server framework which enhances synthetic data quality and improves model performance while ensuring privacy. We achieve this by learning local knowledge from the private data with differential privacy (DP) and distilling professional knowledge from the server. Additionally, inspired by federated learning, we transmit models rather than data between the client and server to prevent privacy leakage.Extensive experiments in medical and financial domains demonstrate the effectiveness of *KnowledgeSG*. Our code is now publicly available at https://github.com/wwh0411/KnowledgeSG.",
}
| The success of large language models (LLMs) has enabled many parties to fine-tune LLMs on their own private data. However, this practice raises privacy concerns due to the memorization of LLMs. Existing solutions, such as utilizing synthetic data for substitution, struggle to simultaneously improve performance and preserve privacy. They either rely on a local model for generation, resulting in a performance decline, or take advantage of APIs, directly exposing the data to API servers. To address this issue, we propose *KnowledgeSG*, a novel client-server framework that enhances synthetic data quality and improves model performance while ensuring privacy. We achieve this by learning local knowledge from the private data with differential privacy (DP) and distilling professional knowledge from the server. Additionally, inspired by federated learning, we transmit models rather than data between the client and server to prevent privacy leakage. Extensive experiments in medical and financial domains demonstrate the effectiveness of *KnowledgeSG*. Our code is now publicly available at https://github.com/wwh0411/KnowledgeSG. | [
"Wang, WenHao",
"Liang, Xiaoyu",
"Ye, Rui",
"Chai, Jingyi",
"Chen, Siheng",
"Wang, Yanfeng"
] | KnowledgeSG: Privacy-Preserving Synthetic Text Generation with Knowledge Distillation from Server | emnlp-main.438 | Poster | 2410.05725 | [
"https://github.com/wwh0411/knowledgesg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.439.bib | https://aclanthology.org/2024.emnlp-main.439/ | @inproceedings{gong-etal-2024-damro,
title = "{DAMRO}: Dive into the Attention Mechanism of {LVLM} to Reduce Object Hallucination",
author = "Gong, Xuan and
Ming, Tianshi and
Wang, Xinpeng and
Wei, Zhihua",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.439",
pages = "7696--7712",
abstract = "Despite the great success of Large Vision-Language Models (LVLMs), they inevitably suffer from hallucination. As we know, both the visual encoder and the Large Language Model (LLM) decoder in LVLMs are Transformer-based, allowing the model to extract visual information and generate text outputs via attention mechanisms. We find that the attention distribution of LLM decoder on image tokens is highly consistent with the visual encoder and both distributions tend to focus on particular background tokens rather than the referred objects in the image. We attribute to the unexpected attention distribution to an inherent flaw in the visual encoder itself, which misguides LLMs to over emphasize the redundant information and generate object hallucination. To address the issue, we propose DAMRO, a novel training-free strategy that **D**ive into **A**ttention **M**echanism of LVLM to **R**educe **O**bject Hallucination. Specifically, our approach employs classification token (CLS) of ViT to filter out high-attention tokens scattered in the background and then eliminate their influence during decoding stage. We evaluate our method on LVLMs including LLaVA-1.5, LLaVA-NeXT and InstructBLIP, using various benchmarks such as POPE, CHAIR, MME and GPT-4V Aided Evaluation. The results demonstrate that our approach significantly reduces the impact of these outlier tokens, thus effectively alleviating the hallucination of LVLMs.",
}
| Despite the great success of Large Vision-Language Models (LVLMs), they inevitably suffer from hallucination. Both the visual encoder and the Large Language Model (LLM) decoder in LVLMs are Transformer-based, allowing the model to extract visual information and generate text outputs via attention mechanisms. We find that the attention distribution of the LLM decoder over image tokens is highly consistent with that of the visual encoder, and that both distributions tend to focus on particular background tokens rather than the objects referred to in the image. We attribute this unexpected attention distribution to an inherent flaw in the visual encoder itself, which misguides LLMs to overemphasize redundant information and generate object hallucinations. To address the issue, we propose DAMRO, a novel training-free strategy that **D**ives into the **A**ttention **M**echanism of LVLMs to **R**educe **O**bject Hallucination. Specifically, our approach employs the classification (CLS) token of the ViT to filter out high-attention tokens scattered in the background and then eliminates their influence during the decoding stage. We evaluate our method on LVLMs including LLaVA-1.5, LLaVA-NeXT and InstructBLIP, using various benchmarks such as POPE, CHAIR, MME and GPT-4V Aided Evaluation. The results demonstrate that our approach significantly reduces the impact of these outlier tokens, thus effectively alleviating the hallucination of LVLMs. | [
"Gong, Xuan",
"Ming, Tianshi",
"Wang, Xinpeng",
"Wei, Zhihua"
] | DAMRO: Dive into the Attention Mechanism of LVLM to Reduce Object Hallucination | emnlp-main.439 | Poster | 2410.04514 | [
""
] | https://huggingface.co/papers/2410.04514 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 |
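The DAMRO abstract above describes a concrete, training-free recipe: rank image tokens by the ViT CLS attention, treat the top ones as background outliers, and cancel their influence at decoding time. Below is a minimal numpy sketch of that idea with an assumed contrastive logit adjustment; the paper's exact formulation and hyperparameters may differ:

```python
# Sketch of CLS-attention outlier filtering in the spirit of DAMRO.
# Token counts, top_k, and the contrastive form are illustrative assumptions.
import numpy as np

def select_outlier_tokens(cls_attn: np.ndarray, top_k: int = 8) -> np.ndarray:
    """Indices of image tokens the ViT CLS token attends to most strongly."""
    return np.argsort(cls_attn)[-top_k:]

def adjusted_logits(logits_full: np.ndarray,
                    logits_outliers_only: np.ndarray,
                    alpha: float = 1.0) -> np.ndarray:
    """Push next-token logits away from what the outlier tokens alone induce."""
    return (1 + alpha) * logits_full - alpha * logits_outliers_only

rng = np.random.default_rng(0)
cls_attn = rng.random(576)                       # e.g. 24x24 ViT patch tokens
outliers = select_outlier_tokens(cls_attn)       # tokens to downweight
vocab = 32000
out = adjusted_logits(rng.normal(size=vocab), rng.normal(size=vocab))
print(outliers, out.shape)
```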
https://aclanthology.org/2024.emnlp-main.440.bib | https://aclanthology.org/2024.emnlp-main.440/ | @inproceedings{men-etal-2024-unlocking,
title = "Unlocking the Future: Exploring Look-Ahead Planning Mechanistic Interpretability in Large Language Models",
author = "Men, Tianyi and
Cao, Pengfei and
Jin, Zhuoran and
Chen, Yubo and
Liu, Kang and
Zhao, Jun",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.440",
pages = "7713--7724",
abstract = "Planning, as the core module of agents, is crucial in various fields such as embodied agents, web navigation, and tool using. With the development of large language models (LLMs), some researchers treat large language models as intelligent agents to stimulate and evaluate their planning capabilities. However, the planning mechanism is still unclear. In this work, we focus on exploring the look-ahead planning mechanism in large language models from the perspectives of information flow and internal representations. First, we study how planning is done internally by analyzing the multi-layer perception (MLP) and multi-head self-attention (MHSA) components at the last token. We find that the output of MHSA in the middle layers at the last token can directly decode the decision to some extent. Based on this discovery, we further trace the source of MHSA by information flow, and we reveal that MHSA extracts information from spans of the goal states and recent steps. According to information flow, we continue to study what information is encoded within it. Specifically, we explore whether future decisions have been considered in advance in the representation of flow. We demonstrate that the middle and upper layers encode a few short-term future decisions. Overall, our research analyzes the look-ahead planning mechanisms of LLMs, facilitating future research on LLMs performing planning tasks.",
}
| Planning, as the core module of agents, is crucial in various fields such as embodied agents, web navigation, and tool use. With the development of large language models (LLMs), some researchers treat large language models as intelligent agents in order to stimulate and evaluate their planning capabilities. However, the planning mechanism is still unclear. In this work, we focus on exploring the look-ahead planning mechanism in large language models from the perspectives of information flow and internal representations. First, we study how planning is done internally by analyzing the multi-layer perceptron (MLP) and multi-head self-attention (MHSA) components at the last token. We find that the output of MHSA in the middle layers at the last token can directly decode the decision to some extent. Based on this discovery, we further trace the sources of MHSA via information flow, and we reveal that MHSA extracts information from spans of the goal states and recent steps. Following the information flow, we then study what information is encoded within it. Specifically, we explore whether future decisions have been considered in advance in the representations along this flow. We demonstrate that the middle and upper layers encode a few short-term future decisions. Overall, our research analyzes the look-ahead planning mechanisms of LLMs, facilitating future research on LLMs performing planning tasks. | [
"Men, Tianyi",
"Cao, Pengfei",
"Jin, Zhuoran",
"Chen, Yubo",
"Liu, Kang",
"Zhao, Jun"
] | Unlocking the Future: Exploring Look-Ahead Planning Mechanistic Interpretability in Large Language Models | emnlp-main.440 | Poster | 2406.16033 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.441.bib | https://aclanthology.org/2024.emnlp-main.441/ | @inproceedings{zheng-etal-2024-breaking,
title = "Breaking Language Barriers: Cross-Lingual Continual Pre-Training at Scale",
author = "Zheng, Wenzhen and
Pan, Wenbo and
Xu, Xu and
Qin, Libo and
Yue, Li and
Zhou, Ming",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.441",
pages = "7725--7738",
abstract = "In recent years, Large Language Models (LLMs) have made significant strides towards Artificial General Intelligence. However, training these models from scratch requires substantial computational resources and vast amounts of text data. In this paper, we explores an alternative approach to constructing a LLM for a new language by continually pre-training (CPT) from existing pre-trained LLMs, instead of using randomly initialized parameters. Based on parallel experiments on 40 model sizes ranging from 40M to 5B parameters, we find that 1) CPT converges faster and saves significant resources in a scalable manner. 2) CPT adheres to an extended scaling law derived from with a joint data-parameter scaling term. 3) The compute-optimal data-parameter allocation for CPT markedly differs based on our estimated scaling factors. 4) The effectiveness of transfer scale is influenced by training duration and linguistic properties, while robust to data replaying, a method that effectively mitigates catastrophic forgetting in CPT. We hope our findings provide deeper insights into the transferability of LLMs at scale for the research community.",
}
| In recent years, Large Language Models (LLMs) have made significant strides towards Artificial General Intelligence. However, training these models from scratch requires substantial computational resources and vast amounts of text data. In this paper, we explore an alternative approach to constructing an LLM for a new language by continually pre-training (CPT) from existing pre-trained LLMs, instead of using randomly initialized parameters. Based on parallel experiments on 40 model sizes ranging from 40M to 5B parameters, we find that 1) CPT converges faster and saves significant resources in a scalable manner; 2) CPT adheres to an extended scaling law with a joint data-parameter scaling term; 3) the compute-optimal data-parameter allocation for CPT markedly differs based on our estimated scaling factors; and 4) the effectiveness of transfer at scale is influenced by training duration and linguistic properties, while remaining robust to data replaying, a method that effectively mitigates catastrophic forgetting in CPT. We hope our findings provide deeper insights into the transferability of LLMs at scale for the research community. | [
"Zheng, Wenzhen",
"Pan, Wenbo",
"Xu, Xu",
"Qin, Libo",
"Yue, Li",
"Zhou, Ming"
] | Breaking Language Barriers: Cross-Lingual Continual Pre-Training at Scale | emnlp-main.441 | Poster | 2407.02118 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.442.bib | https://aclanthology.org/2024.emnlp-main.442/ | @inproceedings{payoungkhamdee-etal-2024-empirical,
title = "An Empirical Study of Multilingual Reasoning Distillation for Question Answering",
author = "Payoungkhamdee, Patomporn and
Limkonchotiwat, Peerat and
Baek, Jinheon and
Manakul, Potsawee and
Udomcharoenchaikit, Can and
Chuangsuwanich, Ekapol and
Nutanong, Sarana",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.442",
pages = "7739--7751",
abstract = "Reasoning is one crucial capability in Large Language Models (LLMs), allowing them to perform complex tasks such as solving math problems and multi-step planning. While reasoning capability can emerge in larger models, smaller ones usually have to rely on distillation to transfer this capability from a larger model. However, recent efforts to distill reasoning capabilities have focused mainly on English, leaving multilingual distillation underexplored. To address this gap, this paper examines existing English reasoning distillation methods that utilize a variety of positive rationales in multilingual settings and proposes d-CoT-nR, a novel approach that incorporates incorrect rationales as additional guidance. Empirical results from multilingual high-school examinations show that d-CoT-nR significantly surpasses the baseline, improving accuracy in unseen languages and correctness in step-by-step reasoning.",
}
| Reasoning is one crucial capability in Large Language Models (LLMs), allowing them to perform complex tasks such as solving math problems and multi-step planning. While reasoning capability can emerge in larger models, smaller ones usually have to rely on distillation to transfer this capability from a larger model. However, recent efforts to distill reasoning capabilities have focused mainly on English, leaving multilingual distillation underexplored. To address this gap, this paper examines existing English reasoning distillation methods that utilize a variety of positive rationales in multilingual settings and proposes d-CoT-nR, a novel approach that incorporates incorrect rationales as additional guidance. Empirical results from multilingual high-school examinations show that d-CoT-nR significantly surpasses the baseline, improving accuracy in unseen languages and correctness in step-by-step reasoning. | [
"Payoungkhamdee, Patomporn",
"Limkonchotiwat, Peerat",
"Baek, Jinheon",
"Manakul, Potsawee",
"Udomcharoenchaikit, Can",
"Chuangsuwanich, Ekapol",
"Nutanong, Sarana"
] | An Empirical Study of Multilingual Reasoning Distillation for Question Answering | emnlp-main.442 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.443.bib | https://aclanthology.org/2024.emnlp-main.443/ | @inproceedings{yona-etal-2024-large,
title = "Can Large Language Models Faithfully Express Their Intrinsic Uncertainty in Words?",
author = "Yona, Gal and
Aharoni, Roee and
Geva, Mor",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.443",
pages = "7752--7764",
abstract = "We posit that large language models (LLMs) should be capable of expressing their intrinsic uncertainty in natural language. For example, if the LLM is equally likely to output two contradicting answers to the same question, then its generated response should reflect this uncertainty by hedging its answer (e.g., {``}I{'}m not sure, but I think...{''}). We formalize faithful response uncertainty based on the gap between the model{'}s intrinsic confidence in the assertions it makes and the decisiveness by which they are conveyed. This example-level metric reliably indicates whether the model reflects its uncertainty, as it penalizes both excessive and insufficient hedging. We evaluate a variety of aligned LLMs at faithfully conveying uncertainty on several knowledge-intensive question answering tasks. Our results provide strong evidence that modern LLMs are poor at faithfully conveying their uncertainty, and that better alignment is necessary to improve their trustworthiness.",
}
| We posit that large language models (LLMs) should be capable of expressing their intrinsic uncertainty in natural language. For example, if the LLM is equally likely to output two contradicting answers to the same question, then its generated response should reflect this uncertainty by hedging its answer (e.g., "I'm not sure, but I think..."). We formalize faithful response uncertainty based on the gap between the model's intrinsic confidence in the assertions it makes and the decisiveness by which they are conveyed. This example-level metric reliably indicates whether the model reflects its uncertainty, as it penalizes both excessive and insufficient hedging. We evaluate a variety of aligned LLMs at faithfully conveying uncertainty on several knowledge-intensive question answering tasks. Our results provide strong evidence that modern LLMs are poor at faithfully conveying their uncertainty, and that better alignment is necessary to improve their trustworthiness. | [
"Yona, Gal",
"Aharoni, Roee",
"Geva, Mor"
] | Can Large Language Models Faithfully Express Their Intrinsic Uncertainty in Words? | emnlp-main.443 | Poster | 2405.16908 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
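The metric formalized in the abstract above compares intrinsic confidence with verbal decisiveness. The toy sketch below approximates intrinsic confidence by agreement among sampled answers and decisiveness by a hedging-phrase lexicon; both proxies and the scoring formula are illustrative assumptions, not the paper's definition:

```python
# Toy sketch of "faithful response uncertainty": decisiveness of the wording
# should track the model's intrinsic confidence. Lexicon and formulas assumed.
from collections import Counter

HEDGES = ("i'm not sure", "i think", "possibly", "it may be")

def intrinsic_confidence(sampled_answers: list[str]) -> float:
    """Fraction of samples agreeing with the most common answer."""
    counts = Counter(a.strip().lower() for a in sampled_answers)
    return counts.most_common(1)[0][1] / len(sampled_answers)

def decisiveness(response: str) -> float:
    """1.0 for a blunt assertion; lower when hedging phrases appear."""
    r = response.lower()
    return max(0.0, 1.0 - 0.5 * sum(h in r for h in HEDGES))

def faithfulness_gap(response: str, samples: list[str]) -> float:
    """Smaller is better: penalizes both excessive and insufficient hedging."""
    return abs(decisiveness(response) - intrinsic_confidence(samples))

print(faithfulness_gap("I'm not sure, but I think it's Paris.",
                       ["Paris", "Paris", "Lyon", "Paris"]))  # gap = 0.75
```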
https://aclanthology.org/2024.emnlp-main.444.bib | https://aclanthology.org/2024.emnlp-main.444/ | @inproceedings{gekhman-etal-2024-fine,
title = "Does Fine-Tuning {LLM}s on New Knowledge Encourage Hallucinations?",
author = "Gekhman, Zorik and
Yona, Gal and
Aharoni, Roee and
Eyal, Matan and
Feder, Amir and
Reichart, Roi and
Herzig, Jonathan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.444",
pages = "7765--7784",
abstract = "When large language models are aligned via supervised fine-tuning, they may encounter new factual information that was not acquired through pre-training. It is often conjectured that this can teach the model the behavior of hallucinating factually incorrect responses, as the model is trained to generate facts that are not grounded in its pre-existing knowledge. In this work, we study the impact of such exposure to new knowledge on the capability of the fine-tuned model to utilize its pre-existing knowledge. To this end, we design a controlled setup, focused on closed-book QA, where we vary the proportion of the fine-tuning examples that introduce new knowledge. We demonstrate that large language models struggle to acquire new factual knowledge through fine-tuning, as fine-tuning examples that introduce new knowledge are learned significantly slower than those consistent with the model{'}s knowledge. However, we also find that as the examples with new knowledge are eventually learned, they linearly increase the model{'}s tendency to hallucinate. Taken together, our results highlight the risk in introducing new factual knowledge through fine-tuning, and support the view that large language models mostly acquire factual knowledge through pre-training, whereas fine-tuning teaches them to use it more efficiently.",
}
| When large language models are aligned via supervised fine-tuning, they may encounter new factual information that was not acquired through pre-training. It is often conjectured that this can teach the model the behavior of hallucinating factually incorrect responses, as the model is trained to generate facts that are not grounded in its pre-existing knowledge. In this work, we study the impact of such exposure to new knowledge on the capability of the fine-tuned model to utilize its pre-existing knowledge. To this end, we design a controlled setup, focused on closed-book QA, where we vary the proportion of the fine-tuning examples that introduce new knowledge. We demonstrate that large language models struggle to acquire new factual knowledge through fine-tuning, as fine-tuning examples that introduce new knowledge are learned significantly slower than those consistent with the model{'}s knowledge. However, we also find that as the examples with new knowledge are eventually learned, they linearly increase the model{'}s tendency to hallucinate. Taken together, our results highlight the risk in introducing new factual knowledge through fine-tuning, and support the view that large language models mostly acquire factual knowledge through pre-training, whereas fine-tuning teaches them to use it more efficiently. | [
"Gekhman, Zorik",
"Yona, Gal",
"Aharoni, Roee",
"Eyal, Matan",
"Feder, Amir",
"Reichart, Roi",
"Herzig, Jonathan"
] | Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? | emnlp-main.444 | Poster | 2405.05904 | [
""
] | https://huggingface.co/papers/2405.05904 | 1 | 6 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.445.bib | https://aclanthology.org/2024.emnlp-main.445/ | @inproceedings{hee-etal-2024-bridging,
title = "Bridging Modalities: Enhancing Cross-Modality Hate Speech Detection with Few-Shot In-Context Learning",
author = "Hee, Ming Shan and
Kumaresan, Aditi and
Lee, Roy Ka-Wei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.445",
pages = "7785--7799",
abstract = "The widespread presence of hate speech on the internet, including formats such as text-based tweets and multimodal memes, poses a significant challenge to digital platform safety. Recent research has developed detection models tailored to specific modalities; however, there is a notable gap in transferring detection capabilities across different formats. This study conducts extensive experiments using few-shot in-context learning with large language models to explore the transferability of hate speech detection between modalities. Our findings demonstrate that text-based hate speech examples can significantly enhance the classification accuracy of vision-language hate speech. Moreover, text-based demonstrations outperform vision-language demonstrations in few-shot learning settings. These results highlight the effectiveness of cross-modality knowledge transfer and offer valuable insights for improving hate speech detection systems.",
}
| The widespread presence of hate speech on the internet, including formats such as text-based tweets and multimodal memes, poses a significant challenge to digital platform safety. Recent research has developed detection models tailored to specific modalities; however, there is a notable gap in transferring detection capabilities across different formats. This study conducts extensive experiments using few-shot in-context learning with large language models to explore the transferability of hate speech detection between modalities. Our findings demonstrate that text-based hate speech examples can significantly enhance the classification accuracy of vision-language hate speech. Moreover, text-based demonstrations outperform vision-language demonstrations in few-shot learning settings. These results highlight the effectiveness of cross-modality knowledge transfer and offer valuable insights for improving hate speech detection systems. | [
"Hee, Ming Shan",
"Kumaresan, Aditi",
"Lee, Roy Ka-Wei"
] | Bridging Modalities: Enhancing Cross-Modality Hate Speech Detection with Few-Shot In-Context Learning | emnlp-main.445 | Poster | 2410.05600 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.446.bib | https://aclanthology.org/2024.emnlp-main.446/ | @inproceedings{xu-etal-2024-mind,
title = "{MIND}: Multimodal Shopping Intention Distillation from Large Vision-language Models for {E}-commerce Purchase Understanding",
author = "Xu, Baixuan and
Wang, Weiqi and
Shi, Haochen and
Ding, Wenxuan and
Jing, Huihao and
Fang, Tianqing and
Bai, Jiaxin and
Liu, Xin and
Yu, Changlong and
Li, Zheng and
Luo, Chen and
Yin, Qingyu and
Yin, Bing and
Chen, Long and
Song, Yangqiu",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.446",
pages = "7800--7815",
abstract = "Improving user experience and providing personalized search results in E-commerce platforms heavily rely on understanding purchase intention. However, existing methods for acquiring large-scale intentions bank on distilling large language models with human annotation for verification. Such an approach tends to generate product-centric intentions, overlook valuable visual information from product images, and incurs high costs for scalability. To address these issues, we introduce MIND, a multimodal framework that allows Large Vision-Language Models (LVLMs) to infer purchase intentions from multimodal product metadata and prioritize human-centric ones. Using Amazon Review data, we apply MIND and create a multimodal intention knowledge base, which contains 1,264,441 intentions derived from 126,142 co-buy shopping records across 107,215 products. Extensive human evaluations demonstrate the high plausibility and typicality of our obtained intentions and validate the effectiveness of our distillation framework and filtering mechanism. Further experiments reveal the positive downstream benefits that MIND brings to intention comprehension tasks and highlight the importance of multimodal generation and role-aware filtering. Additionally, MIND shows robustness to different prompts and superior generation quality compared to previous methods.",
}
| Improving user experience and providing personalized search results in E-commerce platforms heavily rely on understanding purchase intention. However, existing methods for acquiring large-scale intentions rely on distilling large language models with human annotation for verification. Such an approach tends to generate product-centric intentions, overlooks valuable visual information from product images, and incurs high costs to scale. To address these issues, we introduce MIND, a multimodal framework that allows Large Vision-Language Models (LVLMs) to infer purchase intentions from multimodal product metadata and prioritize human-centric ones. Using Amazon Review data, we apply MIND and create a multimodal intention knowledge base, which contains 1,264,441 intentions derived from 126,142 co-buy shopping records across 107,215 products. Extensive human evaluations demonstrate the high plausibility and typicality of our obtained intentions and validate the effectiveness of our distillation framework and filtering mechanism. Further experiments reveal the positive downstream benefits that MIND brings to intention comprehension tasks and highlight the importance of multimodal generation and role-aware filtering. Additionally, MIND shows robustness to different prompts and superior generation quality compared to previous methods. | [
"Xu, Baixuan",
"Wang, Weiqi",
"Shi, Haochen",
"Ding, Wenxuan",
"Jing, Huihao",
"Fang, Tianqing",
"Bai, Jiaxin",
"Liu, Xin",
"Yu, Changlong",
"Li, Zheng",
"Luo, Chen",
"Yin, Qingyu",
"Yin, Bing",
"Chen, Long",
"Song, Yangqiu"
] | MIND: Multimodal Shopping Intention Distillation from Large Vision-language Models for E-commerce Purchase Understanding | emnlp-main.446 | Poster | 2406.10701 | [
"https://github.com/HKUST-KnowComp/MIND_Distillation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.447.bib | https://aclanthology.org/2024.emnlp-main.447/ | @inproceedings{jiayang-etal-2024-econ,
title = "{ECON}: On the Detection and Resolution of Evidence Conflicts",
author = "Jiayang, Cheng and
Chan, Chunkit and
Zhuang, Qianqian and
Qiu, Lin and
Zhang, Tianhang and
Liu, Tengxiao and
Song, Yangqiu and
Zhang, Yue and
Liu, Pengfei and
Zhang, Zheng",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.447",
pages = "7816--7844",
abstract = "The rise of large language models (LLMs) has significantly influenced the quality of information in decision-making systems, leading to the prevalence of AI-generated content and challenges in detecting misinformation and managing conflicting information, or {``}inter-evidence conflicts.{''} This study introduces a method for generating diverse, validated evidence conflicts to simulate real-world misinformation scenarios. We evaluate conflict detection methods, including Natural Language Inference (NLI) models, factual consistency (FC) models, and LLMs, on these conflicts (RQ1) and analyze LLMs{'} conflict resolution behaviors (RQ2). Our key findings include: (1) NLI and LLM models exhibit high precision in detecting answer conflicts, though weaker models suffer from low recall; (2) FC models struggle with lexically similar answer conflicts, while NLI and LLM models handle these better; and (3) stronger models like GPT-4 show robust performance, especially with nuanced conflicts. For conflict resolution, LLMs often favor one piece of conflicting evidence without justification and rely on internal knowledge if they have prior beliefs.",
}
| The rise of large language models (LLMs) has significantly influenced the quality of information in decision-making systems, leading to the prevalence of AI-generated content and challenges in detecting misinformation and managing conflicting information, or "inter-evidence conflicts." This study introduces a method for generating diverse, validated evidence conflicts to simulate real-world misinformation scenarios. We evaluate conflict detection methods, including Natural Language Inference (NLI) models, factual consistency (FC) models, and LLMs, on these conflicts (RQ1) and analyze LLMs' conflict resolution behaviors (RQ2). Our key findings include: (1) NLI and LLM models exhibit high precision in detecting answer conflicts, though weaker models suffer from low recall; (2) FC models struggle with lexically similar answer conflicts, while NLI and LLM models handle these better; and (3) stronger models like GPT-4 show robust performance, especially with nuanced conflicts. For conflict resolution, LLMs often favor one piece of conflicting evidence without justification and rely on internal knowledge if they have prior beliefs. | [
"Jiayang, Cheng",
"Chan, Chunkit",
"Zhuang, Qianqian",
"Qiu, Lin",
"Zhang, Tianhang",
"Liu, Tengxiao",
"Song, Yangqiu",
"Zhang, Yue",
"Liu, Pengfei",
"Zhang, Zheng"
] | ECON: On the Detection and Resolution of Evidence Conflicts | emnlp-main.447 | Poster | 2410.04068 | [
"https://github.com/HKUST-KnowComp/EvidenceConflict"
] | https://huggingface.co/papers/2410.04068 | 0 | 0 | 0 | 10 | [] | [] | [] | [] | [] | [] | 1 |
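Among the conflict detectors evaluated in the ECON abstract above, the NLI route is the most direct to sketch with an off-the-shelf model: score the pair of evidence snippets and flag a conflict when the contradiction probability is high. The checkpoint and threshold below are arbitrary choices for illustration, not the paper's setup:

```python
# Hedged sketch of NLI-based evidence-conflict detection (the paper's RQ1
# setting). Model choice and threshold are assumptions, not ECON's pipeline.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def evidence_conflict(evidence_a: str, evidence_b: str,
                      threshold: float = 0.5) -> bool:
    """Flag a conflict when NLI rates the pair as a likely contradiction."""
    scores = nli({"text": evidence_a, "text_pair": evidence_b}, top_k=None)
    contra = next(s["score"] for s in scores if s["label"] == "CONTRADICTION")
    return contra >= threshold

print(evidence_conflict("The vaccine was approved in 2021.",
                        "The vaccine has never been approved."))
```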
https://aclanthology.org/2024.emnlp-main.448.bib | https://aclanthology.org/2024.emnlp-main.448/ | @inproceedings{tonglet-etal-2024-image,
title = "{``}Image, Tell me your story!{''} Predicting the original meta-context of visual misinformation",
author = "Tonglet, Jonathan and
Moens, Marie-Francine and
Gurevych, Iryna",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.448",
pages = "7845--7864",
abstract = "To assist human fact-checkers, researchers have developed automated approaches for visual misinformation detection. These methods assign veracity scores by identifying inconsistencies between the image and its caption, or by detecting forgeries in the image. However, they neglect a crucial point of the human fact-checking process: identifying the original meta-context of the image. By explaining what is actually true about the image, fact-checkers can better detect misinformation, focus their efforts on check-worthy visual content, engage in counter-messaging before misinformation spreads widely, and make their explanation more convincing. Here, we fill this gap by introducing the task of automated image contextualization. We create 5Pils, a dataset of 1,676 fact-checked images with question-answer pairs about their original meta-context. Annotations are based on the 5 Pillars fact-checking framework. We implement a first baseline that grounds the image in its original meta-context using the content of the image and textual evidence retrieved from the open web. Our experiments show promising results while highlighting several open challenges in retrieval and reasoning.",
}
| To assist human fact-checkers, researchers have developed automated approaches for visual misinformation detection. These methods assign veracity scores by identifying inconsistencies between the image and its caption, or by detecting forgeries in the image. However, they neglect a crucial point of the human fact-checking process: identifying the original meta-context of the image. By explaining what is actually true about the image, fact-checkers can better detect misinformation, focus their efforts on check-worthy visual content, engage in counter-messaging before misinformation spreads widely, and make their explanation more convincing. Here, we fill this gap by introducing the task of automated image contextualization. We create 5Pils, a dataset of 1,676 fact-checked images with question-answer pairs about their original meta-context. Annotations are based on the 5 Pillars fact-checking framework. We implement a first baseline that grounds the image in its original meta-context using the content of the image and textual evidence retrieved from the open web. Our experiments show promising results while highlighting several open challenges in retrieval and reasoning. | [
"Tonglet, Jonathan",
"Moens, Marie-Francine",
"Gurevych, Iryna"
] | “Image, Tell me your story!” Predicting the original meta-context of visual misinformation | emnlp-main.448 | Poster | [
"https://github.com/ukplab/5pils"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.449.bib | https://aclanthology.org/2024.emnlp-main.449/ | @inproceedings{shen-etal-2024-improving,
title = "Improving Retrieval-augmented Text-to-{SQL} with {AST}-based Ranking and Schema Pruning",
author = "Shen, Zhili and
Vougiouklis, Pavlos and
Diao, Chenxin and
Vyas, Kaustubh and
Ji, Yuanyi and
Pan, Jeff Z.",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.449",
pages = "7865--7879",
}
| No abstract found | [
"Shen, Zhili",
"Vougiouklis, Pavlos",
"Diao, Chenxin",
"Vyas, Kaustubh",
"Ji, Yuanyi",
"Pan, Jeff Z."
] | Improving Retrieval-augmented Text-to-SQL with AST-based Ranking and Schema Pruning | emnlp-main.449 | Poster | 2407.03227 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.450.bib | https://aclanthology.org/2024.emnlp-main.450/ | @inproceedings{wu-etal-2024-mixture-subspaces,
title = "Mixture-of-Subspaces in Low-Rank Adaptation",
author = "Wu, Taiqiang and
Wang, Jiahao and
Zhao, Zhe and
Wong, Ngai",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.450",
pages = "7880--7899",
abstract = "In this paper, we introduce a subspace-inspired Low-Rank Adaptation (LoRA) method, which is computationally efficient, easy to implement, and readily applicable to large language, multimodal, and diffusion models. Initially, we equivalently decompose the weights of LoRA into two subspaces, and find that simply mixing them can enhance performance. To study such a phenomenon, we revisit it through a fine-grained subspace lens, showing that such modification is equivalent to employing a fixed mixer to fuse the subspaces. To be more flexible, we jointly learn the mixer with the original LoRA weights, and term the method as Mixture-of-Subspaces LoRA (MoSLoRA). MoSLoRA consistently outperforms LoRA on tasks in different modalities, including commonsense reasoning, visual instruction tuning, and subject-driven text-to-image generation, demonstrating its effectiveness and robustness.",
}
| In this paper, we introduce a subspace-inspired Low-Rank Adaptation (LoRA) method, which is computationally efficient, easy to implement, and readily applicable to large language, multimodal, and diffusion models. Initially, we equivalently decompose the weights of LoRA into two subspaces, and find that simply mixing them can enhance performance. To study such a phenomenon, we revisit it through a fine-grained subspace lens, showing that such modification is equivalent to employing a fixed mixer to fuse the subspaces. To be more flexible, we jointly learn the mixer with the original LoRA weights, and term the method as Mixture-of-Subspaces LoRA (MoSLoRA). MoSLoRA consistently outperforms LoRA on tasks in different modalities, including commonsense reasoning, visual instruction tuning, and subject-driven text-to-image generation, demonstrating its effectiveness and robustness. | [
"Wu, Taiqiang",
"Wang, Jiahao",
"Zhao, Zhe",
"Wong, Ngai"
] | Mixture-of-Subspaces in Low-Rank Adaptation | emnlp-main.450 | Oral | 2406.11909 | [
"https://github.com/wutaiqiang/moslora"
] | https://huggingface.co/papers/2406.11909 | 1 | 3 | 1 | 4 | [] | [] | [] | [] | [] | [] | 1 |
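The MoSLoRA abstract above reduces to a small change over vanilla LoRA: insert an r x r mixer between the down- and up-projections and learn it jointly with the LoRA weights. A minimal numpy sketch under assumed shapes follows; it is not the authors' code (that is linked in the row above):

```python
# Sketch of the MoSLoRA forward pass: delta(W) = B @ M @ A, where M is a
# learnable rank-r mixer. M = I recovers vanilla LoRA. Shapes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 256, 256, 8

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01     # LoRA down-projection
B = np.zeros((d_out, r))                  # LoRA up-projection (zero init)
M = np.eye(r)                             # mixer; learned jointly in MoSLoRA

def moslora_forward(x: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """y = x W^T + scale * x (B M A)^T, fusing the rank-r subspaces via M."""
    return x @ W.T + scale * (x @ A.T @ M.T @ B.T)

x = rng.normal(size=(4, d_in))
print(moslora_forward(x).shape)           # (4, 256)
```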
https://aclanthology.org/2024.emnlp-main.451.bib | https://aclanthology.org/2024.emnlp-main.451/ | @inproceedings{watts-etal-2024-pariksha,
title = "{PARIKSHA}: A Large-Scale Investigation of Human-{LLM} Evaluator Agreement on Multilingual and Multi-Cultural Data",
author = "Watts, Ishaan and
Gumma, Varun and
Yadavalli, Aditya and
Seshadri, Vivek and
Swaminathan, Manohar and
Sitaram, Sunayana",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.451",
pages = "7900--7932",
abstract = "Evaluation of multilingual Large Language Models (LLMs) is challenging due to a variety of factors {--} the lack of benchmarks with sufficient linguistic diversity, contamination of popular benchmarks into LLM pre-training data and the lack of local, cultural nuances in translated benchmarks. In this work, we study human and LLM-based evaluation in a multilingual, multi-cultural setting. We evaluate 30 models across 10 Indic languages by conducting 90K human evaluations and 30K LLM-based evaluations and find that models such as GPT-4o and Llama-3 70B consistently perform best for most Indic languages. We build leaderboards for two evaluation settings - pairwise comparison and direct assessment and analyse the agreement between humans and LLMs. We find that humans and LLMs agree fairly well in the pairwise setting but the agreement drops for direct assessment evaluation especially for languages such as Bengali and Odia. We also check for various biases in human and LLM-based evaluation and find evidence of self-bias in the GPT-based evaluator. Our work presents a significant step towards scaling up multilingual evaluation of LLMs.",
}
| Evaluation of multilingual Large Language Models (LLMs) is challenging due to a variety of factors – the lack of benchmarks with sufficient linguistic diversity, contamination of popular benchmarks into LLM pre-training data and the lack of local, cultural nuances in translated benchmarks. In this work, we study human and LLM-based evaluation in a multilingual, multi-cultural setting. We evaluate 30 models across 10 Indic languages by conducting 90K human evaluations and 30K LLM-based evaluations and find that models such as GPT-4o and Llama-3 70B consistently perform best for most Indic languages. We build leaderboards for two evaluation settings - pairwise comparison and direct assessment and analyse the agreement between humans and LLMs. We find that humans and LLMs agree fairly well in the pairwise setting but the agreement drops for direct assessment evaluation especially for languages such as Bengali and Odia. We also check for various biases in human and LLM-based evaluation and find evidence of self-bias in the GPT-based evaluator. Our work presents a significant step towards scaling up multilingual evaluation of LLMs. | [
"Watts, Ishaan",
"Gumma, Varun",
"Yadavalli, Aditya",
"Seshadri, Vivek",
"Swaminathan, Manohar",
"Sitaram, Sunayana"
] | PARIKSHA: A Large-Scale Investigation of Human-LLM Evaluator Agreement on Multilingual and Multi-Cultural Data | emnlp-main.451 | Poster | 2406.15053 | [
""
] | https://huggingface.co/papers/2406.15053 | 0 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.452.bib | https://aclanthology.org/2024.emnlp-main.452/ | @inproceedings{fei-etal-2024-lawbench,
title = "{L}aw{B}ench: Benchmarking Legal Knowledge of Large Language Models",
author = "Fei, Zhiwei and
Shen, Xiaoyu and
Zhu, Dawei and
Zhou, Fengzhe and
Han, Zhuo and
Huang, Alan and
Zhang, Songyang and
Chen, Kai and
Yin, Zhixin and
Shen, Zongwen and
Ge, Jidong and
Ng, Vincent",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.452",
pages = "7933--7962",
abstract = "We present LawBench, the first evaluation benchmark composed of 20 tasks aimed to assess the ability of Large Language Models (LLMs) to perform Chinese legal-related tasks. LawBench is meticulously crafted to enable precise assessment of LLMs{'} legal capabilities from three cognitive levels that correspond to the widely accepted Bloom{'}s cognitive taxonomy. Using LawBench, we present a comprehensive evaluation of 21 popular LLMs and the first comparative analysis of the empirical results in order to reveal their relative strengths and weaknesses. All data, model predictions and evaluation code are accessible from https://github.com/open-compass/LawBench.",
}
| We present LawBench, the first evaluation benchmark composed of 20 tasks designed to assess the ability of Large Language Models (LLMs) to perform Chinese legal-related tasks. LawBench is meticulously crafted to enable precise assessment of LLMs' legal capabilities from three cognitive levels that correspond to the widely accepted Bloom's cognitive taxonomy. Using LawBench, we present a comprehensive evaluation of 21 popular LLMs and the first comparative analysis of the empirical results in order to reveal their relative strengths and weaknesses. All data, model predictions and evaluation code are available at https://github.com/open-compass/LawBench. | [
"Fei, Zhiwei",
"Shen, Xiaoyu",
"Zhu, Dawei",
"Zhou, Fengzhe",
"Han, Zhuo",
"Huang, Alan",
"Zhang, Songyang",
"Chen, Kai",
"Yin, Zhixin",
"Shen, Zongwen",
"Ge, Jidong",
"Ng, Vincent"
] | LawBench: Benchmarking Legal Knowledge of Large Language Models | emnlp-main.452 | Poster | 2309.16289 | [
"https://github.com/open-compass/lawbench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.453.bib | https://aclanthology.org/2024.emnlp-main.453/ | @inproceedings{sahinuc-etal-2024-efficient,
title = "Efficient Performance Tracking: Leveraging Large Language Models for Automated Construction of Scientific Leaderboards",
author = "{\c{S}}ahinu{\c{c}}, Furkan and
Tran, Thy Thy and
Grishina, Yulia and
Hou, Yufang and
Chen, Bei and
Gurevych, Iryna",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.453",
pages = "7963--7977",
abstract = "Scientific leaderboards are standardized ranking systems that facilitate evaluating and comparing competitive methods. Typically, a leaderboard is defined by a task, dataset, and evaluation metric (TDM) triple, allowing objective performance assessment and fostering innovation through benchmarking. However, the exponential increase in publications has made it infeasible to construct and maintain these leaderboards manually. Automatic leaderboard construction has emerged as a solution to reduce manual labor. Existing datasets for this task are based on the community-contributed leaderboards without additional curation. Our analysis shows that a large portion of these leaderboards are incomplete, and some of them contain incorrect information. In this work, we present SciLead, a manually-curated Scientific Leaderboard dataset that overcomes the aforementioned problems. Building on this dataset, we propose three experimental settings that simulate real-world scenarios where TDM triples are fully defined, partially defined, or undefined during leaderboard construction. While previous research has only explored the first setting, the latter two are more representative of real-world applications. To address these diverse settings, we develop a comprehensive LLM-based framework for constructing leaderboards. Our experiments and analysis reveal that various LLMs often correctly identify TDM triples while struggling to extract result values from publications. We make our code and data publicly available.",
}
| Scientific leaderboards are standardized ranking systems that facilitate evaluating and comparing competitive methods. Typically, a leaderboard is defined by a task, dataset, and evaluation metric (TDM) triple, allowing objective performance assessment and fostering innovation through benchmarking. However, the exponential increase in publications has made it infeasible to construct and maintain these leaderboards manually. Automatic leaderboard construction has emerged as a solution to reduce manual labor. Existing datasets for this task are based on the community-contributed leaderboards without additional curation. Our analysis shows that a large portion of these leaderboards are incomplete, and some of them contain incorrect information. In this work, we present SciLead, a manually-curated Scientific Leaderboard dataset that overcomes the aforementioned problems. Building on this dataset, we propose three experimental settings that simulate real-world scenarios where TDM triples are fully defined, partially defined, or undefined during leaderboard construction. While previous research has only explored the first setting, the latter two are more representative of real-world applications. To address these diverse settings, we develop a comprehensive LLM-based framework for constructing leaderboards. Our experiments and analysis reveal that various LLMs often correctly identify TDM triples while struggling to extract result values from publications. We make our code and data publicly available. | [
"{\\c{S}}ahinu{\\c{c}}, Furkan",
"Tran, Thy Thy",
"Grishina, Yulia",
"Hou, Yufang",
"Chen, Bei",
"Gurevych, Iryna"
] | Efficient Performance Tracking: Leveraging Large Language Models for Automated Construction of Scientific Leaderboards | emnlp-main.453 | Poster | 2409.12656 | [
"https://github.com/ukplab/arxiv2024-leaderboard-generation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.454.bib | https://aclanthology.org/2024.emnlp-main.454/ | @inproceedings{bulat-etal-2024-efficient,
title = "Efficient Vision-Language pre-training via domain-specific learning for human activities",
author = "Bulat, Adrian and
Ouali, Yassine and
Guerrero, Ricardo and
Martinez, Brais and
Tzimiropoulos, Georgios",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.454",
pages = "7978--8000",
abstract = "Current Vision-Language (VL) models owe their success to large-scale pre-training on web-collected data, which in turn requires high-capacity architectures and large compute resources for training. We posit that when the downstream tasks are known in advance, which is in practice common, the pretraining process can be aligned to the downstream domain, leading to more efficient and accurate models, while shortening the pretraining step. To this end, we introduce a domain-aligned pretraining strategy that, without additional data collection, improves the accuracy on a domain of interest, herein, that of human activities, while largely preserving the generalist knowledge. At the core of our approach stands a new LLM-based method that, provided with a simple set of concept seeds, produces a concept hierarchy with high coverage of the target domain.The concept hierarchy is used to filter a large-scale web-crawled dataset and, then, enhance the resulting instances with targeted synthetic labels. We study in depth how to train such approaches and their resulting behavior. We further show generalization to video-based data by introducing a fast adaptation approach for transitioning from a static (image) model to a dynamic one (i.e. with temporal modeling). On the domain of interest, our approach significantly outperforms models trained on up to $60\times$ more samples and between $10-100\times$ shorter training schedules for image retrieval, video retrieval and action recognition. Code will be released.",
}
| Current Vision-Language (VL) models owe their success to large-scale pre-training on web-collected data, which in turn requires high-capacity architectures and large compute resources for training. We posit that when the downstream tasks are known in advance, which is in practice common, the pretraining process can be aligned to the downstream domain, leading to more efficient and accurate models, while shortening the pretraining step. To this end, we introduce a domain-aligned pretraining strategy that, without additional data collection, improves the accuracy on a domain of interest, herein, that of human activities, while largely preserving the generalist knowledge. At the core of our approach stands a new LLM-based method that, provided with a simple set of concept seeds, produces a concept hierarchy with high coverage of the target domain. The concept hierarchy is used to filter a large-scale web-crawled dataset and, then, enhance the resulting instances with targeted synthetic labels. We study in depth how to train such approaches and their resulting behavior. We further show generalization to video-based data by introducing a fast adaptation approach for transitioning from a static (image) model to a dynamic one (i.e. with temporal modeling). On the domain of interest, our approach significantly outperforms models trained on up to $60\times$ more samples and between $10-100\times$ shorter training schedules for image retrieval, video retrieval and action recognition. Code will be released. | [
"Bulat, Adrian",
"Ouali, Yassine",
"Guerrero, Ricardo",
"Martinez, Brais",
"Tzimiropoulos, Georgios"
] | Efficient Vision-Language pre-training via domain-specific learning for human activities | emnlp-main.454 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.455.bib | https://aclanthology.org/2024.emnlp-main.455/ | @inproceedings{li-etal-2024-empowering-backbone,
title = "Empowering Backbone Models for Visual Text Generation with Input Granularity Control and Glyph-Aware Training",
author = "Li, Wenbo and
Li, Guohao and
Lan, Zhibin and
Xu, Xue and
Zhuang, Wanru and
Liu, Jiachen and
Xiao, Xinyan and
Su, Jinsong",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.455",
pages = "8001--8014",
abstract = "Diffusion-based text-to-image models have demonstrated impressive achievements in diversity and aesthetics but struggle to generate images with legible visual texts. Existing backbone models have limitations such as misspelling, failing to generate texts, and lack of support for Chinese texts, but their development shows promising potential. In this paper, we propose a series of methods, aiming to empower backbone models to generate visual texts in English and Chinese. We first conduct a preliminary study revealing that BPE tokenization and insufficient learning of cross-attention modules restrict the performance of the backbone models. Based on these observations, we make the following improvements: (1) We design a mixed granularity input strategy to provide more suitable text representations; (2) We propose to augment the conventional training objective with three glyph-aware training losses, which enhance the learning of cross-attention modules and encourage the model to focus on visual texts. Through experiments, we demonstrate that our methods can effectively empower backbone models to generate semantic relevant, aesthetically appealing, and accurate visual text images, while maintaining their fundamental image generation quality.",
}
| Diffusion-based text-to-image models have demonstrated impressive achievements in diversity and aesthetics but struggle to generate images with legible visual texts. Existing backbone models have limitations such as misspelling, failing to generate texts, and lack of support for Chinese texts, but their development shows promising potential. In this paper, we propose a series of methods, aiming to empower backbone models to generate visual texts in English and Chinese. We first conduct a preliminary study revealing that BPE tokenization and insufficient learning of cross-attention modules restrict the performance of the backbone models. Based on these observations, we make the following improvements: (1) We design a mixed granularity input strategy to provide more suitable text representations; (2) We propose to augment the conventional training objective with three glyph-aware training losses, which enhance the learning of cross-attention modules and encourage the model to focus on visual texts. Through experiments, we demonstrate that our methods can effectively empower backbone models to generate semantically relevant, aesthetically appealing, and accurate visual text images, while maintaining their fundamental image generation quality. | [
"Li, Wenbo",
"Li, Guohao",
"Lan, Zhibin",
"Xu, Xue",
"Zhuang, Wanru",
"Liu, Jiachen",
"Xiao, Xinyan",
"Su, Jinsong"
] | Empowering Backbone Models for Visual Text Generation with Input Granularity Control and Glyph-Aware Training | emnlp-main.455 | Poster | 2410.04439 | [
""
] | https://huggingface.co/papers/2410.04439 | 0 | 0 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 |
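
The mixed-granularity input strategy mentioned in this abstract lends itself to a small illustration. Below is a hedged sketch, not the paper's implementation: the quote-marker convention and the function name are invented here, and the idea is simply that spans meant to be rendered inside the image are handed over at character granularity while the rest of the prompt stays on the ordinary BPE path.

```python
import re

def mixed_granularity_segments(prompt: str):
    """Split a prompt into (segment, granularity) pairs.

    Quoted spans are treated as visual text and emitted character by
    character; everything else is left whole for BPE tokenization.
    """
    segments = []
    for part in re.split(r'("[^"]*")', prompt):
        if part.startswith('"') and part.endswith('"') and len(part) > 1:
            # character granularity for the visual-text span
            segments.extend((ch, "char") for ch in part.strip('"'))
        elif part:
            segments.append((part, "bpe"))
    return segments

print(mixed_granularity_segments('A neon sign that says "OPEN 24H" above a door'))
```
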
https://aclanthology.org/2024.emnlp-main.456.bib | https://aclanthology.org/2024.emnlp-main.456/ | @inproceedings{yuan-etal-2024-evaluating,
title = "Evaluating Character Understanding of Large Language Models via Character Profiling from Fictional Works",
author = "Yuan, Xinfeng and
Yuan, Siyu and
Cui, Yuhan and
Lin, Tianhe and
Wang, Xintao and
Xu, Rui and
Chen, Jiangjie and
Yang, Deqing",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.456",
pages = "8015--8036",
abstract = "Large language models (LLMs) have demonstrated impressive performance and spurred numerous AI applications, in which role-playing agents (RPAs) are particularly popular, especially for fictional characters. The prerequisite for these RPAs lies in the capability of LLMs to understand characters from fictional works. Previous efforts have evaluated this capability via basic classification tasks or characteristic imitation, failing to capture the nuanced character understanding with LLMs. In this paper, we propose evaluating LLMs{'} character understanding capability via the character profiling task, i.e., summarizing character profiles from corresponding materials, a widely adopted yet understudied practice for RPA development. Specifically, we construct the CROSS dataset from literature experts and assess the generated profiles by comparing them with ground truth references and evaluating their applicability in downstream tasks. Our experiments, which cover various summarization methods and LLMs, have yielded promising results. These results strongly validate the character understanding capability of LLMs. Resources are available at https://github.com/Joanna0123/character{\_}profiling.",
}
| Large language models (LLMs) have demonstrated impressive performance and spurred numerous AI applications, in which role-playing agents (RPAs) are particularly popular, especially for fictional characters. The prerequisite for these RPAs lies in the capability of LLMs to understand characters from fictional works. Previous efforts have evaluated this capability via basic classification tasks or characteristic imitation, failing to capture the nuanced character understanding with LLMs. In this paper, we propose evaluating LLMs{'} character understanding capability via the character profiling task, i.e., summarizing character profiles from corresponding materials, a widely adopted yet understudied practice for RPA development. Specifically, we construct the CROSS dataset from literature experts and assess the generated profiles by comparing them with ground truth references and evaluating their applicability in downstream tasks. Our experiments, which cover various summarization methods and LLMs, have yielded promising results. These results strongly validate the character understanding capability of LLMs. Resources are available at https://github.com/Joanna0123/character{\_}profiling. | [
"Yuan, Xinfeng",
"Yuan, Siyu",
"Cui, Yuhan",
"Lin, Tianhe",
"Wang, Xintao",
"Xu, Rui",
"Chen, Jiangjie",
"Yang, Deqing"
] | Evaluating Character Understanding of Large Language Models via Character Profiling from Fictional Works | emnlp-main.456 | Poster | 2404.12726 | [
"https://github.com/joanna0123/character_profiling"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.457.bib | https://aclanthology.org/2024.emnlp-main.457/ | @inproceedings{zhang-etal-2024-getting,
title = "Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners",
author = "Zhang, Shimao and
Gao, Changjiang and
Zhu, Wenhao and
Chen, Jiajun and
Huang, Xin and
Han, Xue and
Feng, Junlan and
Deng, Chao and
Huang, Shujian",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.457",
pages = "8037--8051",
abstract = "Recently, Large Language Models (LLMs) have shown impressive language capabilities, while most of them have very unbalanced performance across different languages. Multilingual alignment based on the translation parallel data is an effective method to enhance LLMs{'} multilingual capabilities. In this work, we first discover and comprehensively investigate the spontaneous multilingual alignment of LLMs. Firstly, we find that LLMs instruction-tuned on the question translation data (i.e. without annotated answers) are able to encourage the alignment between English and a wide range of languages, even including those unseen during instruction-tuning. Additionally, we utilize different settings and mechanistic interpretability methods to analyze the LLM{'}s performance in the multilingual scenario comprehensively. Our work suggests that LLMs have enormous potential for improving multilingual alignment efficiently with great language generalization and task generalization.",
}
| Recently, Large Language Models (LLMs) have shown impressive language capabilities, while most of them have very unbalanced performance across different languages. Multilingual alignment based on translation parallel data is an effective method to enhance LLMs{'} multilingual capabilities. In this work, we first discover and comprehensively investigate the spontaneous multilingual alignment of LLMs. Specifically, we find that LLMs instruction-tuned on question translation data (i.e. without annotated answers) are able to encourage the alignment between English and a wide range of languages, even including those unseen during instruction-tuning. Additionally, we utilize different settings and mechanistic interpretability methods to analyze the LLM{'}s performance in the multilingual scenario comprehensively. Our work suggests that LLMs have enormous potential for improving multilingual alignment efficiently with great language generalization and task generalization. | [
"Zhang, Shimao",
"Gao, Changjiang",
"Zhu, Wenhao",
"Chen, Jiajun",
"Huang, Xin",
"Han, Xue",
"Feng, Junlan",
"Deng, Chao",
"Huang, Shujian"
] | Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners | emnlp-main.457 | Oral | 2405.13816 | [
"https://github.com/shimao-zhang/llm-multilingual-learner"
] | https://huggingface.co/papers/2405.13816 | 1 | 0 | 0 | 9 | [
"sunatte/txt2sql",
"MachoMaheen/devdock4bit"
] | [] | [
"smarttang/blingsec"
] | [
"sunatte/txt2sql",
"MachoMaheen/devdock4bit"
] | [] | [
"smarttang/blingsec"
] | 1 |
https://aclanthology.org/2024.emnlp-main.458.bib | https://aclanthology.org/2024.emnlp-main.458/ | @inproceedings{sun-etal-2024-adaswitch,
title = "{A}da{S}witch: Adaptive Switching between Small and Large Agents for Effective Cloud-Local Collaborative Learning",
author = "Sun, Hao and
Wu, Jiayi and
Cai, Hengyi and
Wei, Xiaochi and
Feng, Yue and
Wang, Bo and
Wang, Shuaiqiang and
Zhang, Yan and
Yin, Dawei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.458",
pages = "8052--8062",
abstract = "Recent advancements in large language models (LLMs) have been remarkable. Users face a choice between using cloud-based LLMs for generation quality and deploying local-based LLMs for lower computational cost. The former option is typically costly and inefficient, while the latter usually fails to deliver satisfactory performance for reasoning steps requiring deliberate thought processes. In this work, we propose a novel LLM utilization paradigm that facilitates the collaborative operation of large cloud-based LLMs and smaller local-deployed LLMs. Our framework comprises two primary modules: the local agent instantiated with a relatively smaller LLM, handling less complex reasoning steps, and the cloud agent equipped with a larger LLM, managing more intricate reasoning steps. This collaborative processing is enabled through an adaptive mechanism where the local agent introspectively identifies errors and proactively seeks assistance from the cloud agent, thereby effectively integrating the strengths of both locally-deployed and cloud-based LLMs, resulting in significant enhancements in task completion performance and efficiency. We evaluate AdaSwitch across 7 benchmarks, ranging from mathematical reasoning and complex question answering, using various types of LLMs to instantiate the local and cloud agents. The empirical results show that AdaSwitch effectively improves the performance of the local agent, and sometimes achieves competitive results compared to the cloud agent while utilizing much less computational overhead.",
}
| Recent advancements in large language models (LLMs) have been remarkable. Users face a choice between using cloud-based LLMs for generation quality and deploying local-based LLMs for lower computational cost. The former option is typically costly and inefficient, while the latter usually fails to deliver satisfactory performance for reasoning steps requiring deliberate thought processes. In this work, we propose a novel LLM utilization paradigm that facilitates the collaborative operation of large cloud-based LLMs and smaller local-deployed LLMs. Our framework comprises two primary modules: the local agent instantiated with a relatively smaller LLM, handling less complex reasoning steps, and the cloud agent equipped with a larger LLM, managing more intricate reasoning steps. This collaborative processing is enabled through an adaptive mechanism where the local agent introspectively identifies errors and proactively seeks assistance from the cloud agent, thereby effectively integrating the strengths of both locally-deployed and cloud-based LLMs, resulting in significant enhancements in task completion performance and efficiency. We evaluate AdaSwitch across 7 benchmarks, ranging from mathematical reasoning to complex question answering, using various types of LLMs to instantiate the local and cloud agents. The empirical results show that AdaSwitch effectively improves the performance of the local agent, and sometimes achieves competitive results compared to the cloud agent while incurring much less computational overhead. | [
"Sun, Hao",
"Wu, Jiayi",
"Cai, Hengyi",
"Wei, Xiaochi",
"Feng, Yue",
"Wang, Bo",
"Wang, Shuaiqiang",
"Zhang, Yan",
"Yin, Dawei"
] | AdaSwitch: Adaptive Switching between Small and Large Agents for Effective Cloud-Local Collaborative Learning | emnlp-main.458 | Poster | 2410.13181 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
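
The adaptive switching loop described in the AdaSwitch abstract is easy to picture. A minimal sketch follows; all callables are hypothetical placeholders (not the paper's API), and the toy stand-ins at the bottom exist only to exercise the control flow.

```python
def solve(question, n_steps, local_llm, cloud_llm, self_check):
    """Cloud-local collaborative reasoning: draft locally, escalate on doubt."""
    context = question
    for _ in range(n_steps):
        step = local_llm(context)          # cheap draft from the small model
        if not self_check(context, step):  # introspective error detection
            step = cloud_llm(context)      # ask the larger cloud model instead
        context += "\n" + step
    return context

# Toy stand-ins just to demonstrate the switching behavior.
print(solve("2 + 2 * 3 = ?", 2,
            local_llm=lambda c: "local step",
            cloud_llm=lambda c: "cloud step",
            self_check=lambda c, s: len(s) % 2 == 0))
```
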
https://aclanthology.org/2024.emnlp-main.459.bib | https://aclanthology.org/2024.emnlp-main.459/ | @inproceedings{gong-etal-2024-coba,
title = "{C}o{B}a: Convergence Balancer for Multitask Finetuning of Large Language Models",
author = "Gong, Zi and
Yu, Hang and
Liao, Cong and
Liu, Bingchang and
Chen, Chaoyu and
Li, Jianguo",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.459",
pages = "8063--8077",
abstract = "Multi-task learning (MTL) benefits the fine-tuning of large language models (LLMs) by providing a single model with improved performance and generalization ability across tasks, presenting a resource-efficient alternative to developing separate models for each task. Yet, existing MTL strategies for LLMs often fall short by either being computationally intensive or failing to ensure simultaneous task convergence. This paper presents CoBa, a new MTL approach designed to effectively manage task convergence balance with minimal computational overhead. Utilizing Relative Convergence Scores (RCS), Absolute Convergence Scores (ACS), and a Divergence Factor (DF), CoBa dynamically adjusts task weights during the training process, ensuring that the validation loss of all tasks progress towards convergence at an even pace while mitigating the issue of individual task divergence. The results of our experiments involving three disparate datasets underscore that this approach not only fosters equilibrium in task improvement but enhances the LLMs{'} performance by up to 13{\%} relative to the second-best baselines. Code is open-sourced at https://github.com/codefuse-ai/MFTCoder.",
}
| Multi-task learning (MTL) benefits the fine-tuning of large language models (LLMs) by providing a single model with improved performance and generalization ability across tasks, presenting a resource-efficient alternative to developing separate models for each task. Yet, existing MTL strategies for LLMs often fall short by either being computationally intensive or failing to ensure simultaneous task convergence. This paper presents CoBa, a new MTL approach designed to effectively manage task convergence balance with minimal computational overhead. Utilizing Relative Convergence Scores (RCS), Absolute Convergence Scores (ACS), and a Divergence Factor (DF), CoBa dynamically adjusts task weights during the training process, ensuring that the validation losses of all tasks progress towards convergence at an even pace while mitigating the issue of individual task divergence. The results of our experiments involving three disparate datasets underscore that this approach not only fosters equilibrium in task improvement but also enhances the LLMs{'} performance by up to 13{\%} relative to the second-best baselines. Code is open-sourced at https://github.com/codefuse-ai/MFTCoder. | [
"Gong, Zi",
"Yu, Hang",
"Liao, Cong",
"Liu, Bingchang",
"Chen, Chaoyu",
"Li, Jianguo"
] | CoBa: Convergence Balancer for Multitask Finetuning of Large Language Models | emnlp-main.459 | Poster | 2410.06741 | [
"https://github.com/codefuse-ai/mftcoder"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
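
To make the convergence-balancing idea behind CoBa concrete, here is a loose sketch that is deliberately simpler than the paper's RCS/ACS/DF formulation: tasks whose normalized validation loss is falling more slowly get a larger weight, so all tasks converge at a roughly even pace.

```python
import numpy as np

def task_weights(val_loss_history: dict, tau: float = 1.0):
    """Map each task's validation-loss history to a dynamic training weight."""
    names, slopes = list(val_loss_history), []
    for name in names:
        h = np.asarray(val_loss_history[name], dtype=float)
        steps = np.arange(len(h))
        slope = np.polyfit(steps, h / h[0], 1)[0]  # normalized loss trend
        slopes.append(slope)
    # a less negative slope (slower convergence) -> higher weight
    w = np.exp(np.asarray(slopes) / tau)
    w = w / w.sum() * len(names)                   # keep mean weight at 1
    return dict(zip(names, w))

# "qa" converges more slowly here, so it receives the larger weight.
print(task_weights({"code": [1.0, 0.8, 0.7], "qa": [1.0, 0.97, 0.95]}))
```
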
https://aclanthology.org/2024.emnlp-main.460.bib | https://aclanthology.org/2024.emnlp-main.460/ | @inproceedings{wang-etal-2024-mdpo,
title = "m{DPO}: Conditional Preference Optimization for Multimodal Large Language Models",
author = "Wang, Fei and
Zhou, Wenxuan and
Huang, James Y. and
Xu, Nan and
Zhang, Sheng and
Poon, Hoifung and
Chen, Muhao",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.460",
pages = "8078--8088",
abstract = "Direct preference optimization (DPO) has shown to be an effective method for large language model (LLM) alignment. Recent works have attempted to apply DPO to multimodal scenarios but have found it challenging to achieve consistent improvement. Through a comparative experiment, we identify the unconditional preference problem in multimodal preference optimization, where the model overlooks the image condition. To address this problem, we propose mDPO, a multimodal DPO objective that prevents the over-prioritization of language-only preferences by also optimizing image preference. Moreover, we introduce a reward anchor that forces the reward to be positive for chosen responses, thereby avoiding the decrease in their likelihood{---}an intrinsic problem of relative preference optimization. Experiments on two multimodal LLMs of different sizes and three widely used benchmarks demonstrate that mDPO effectively addresses the unconditional preference problem in multimodal preference optimization and significantly improves model performance, particularly in reducing hallucination.",
}
| Direct preference optimization (DPO) has been shown to be an effective method for large language model (LLM) alignment. Recent works have attempted to apply DPO to multimodal scenarios but have found it challenging to achieve consistent improvement. Through a comparative experiment, we identify the unconditional preference problem in multimodal preference optimization, where the model overlooks the image condition. To address this problem, we propose mDPO, a multimodal DPO objective that prevents the over-prioritization of language-only preferences by also optimizing image preference. Moreover, we introduce a reward anchor that forces the reward to be positive for chosen responses, thereby avoiding the decrease in their likelihood{---}an intrinsic problem of relative preference optimization. Experiments on two multimodal LLMs of different sizes and three widely used benchmarks demonstrate that mDPO effectively addresses the unconditional preference problem in multimodal preference optimization and significantly improves model performance, particularly in reducing hallucination. | [
"Wang, Fei",
"Zhou, Wenxuan",
"Huang, James Y.",
"Xu, Nan",
"Zhang, Sheng",
"Poon, Hoifung",
"Chen, Muhao"
] | mDPO: Conditional Preference Optimization for Multimodal Large Language Models | emnlp-main.460 | Poster | 2406.11839 | [
""
] | https://huggingface.co/papers/2406.11839 | 5 | 37 | 1 | 7 | [] | [] | [] | [] | [] | [] | 1 |
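
The loss structure described in the mDPO abstract can be sketched directly over precomputed per-example log-probabilities. The following is a hedged approximation, not the paper's code: the equal weighting of the three terms and the idea of conditioning the chosen response on a corrupted image are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def mdpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected,
              pi_chosen_bad_img, ref_chosen_bad_img, beta=0.1):
    """Conditional DPO sketch: language preference + image preference + anchor."""
    r_chosen = beta * (pi_chosen - ref_chosen)       # implicit rewards
    r_rejected = beta * (pi_rejected - ref_rejected)
    l_dpo = -F.logsigmoid(r_chosen - r_rejected)     # standard response preference
    r_bad_img = beta * (pi_chosen_bad_img - ref_chosen_bad_img)
    l_img = -F.logsigmoid(r_chosen - r_bad_img)      # prefer the true image condition
    l_anchor = -F.logsigmoid(r_chosen)               # keep chosen reward positive
    return (l_dpo + l_img + l_anchor).mean()

# Random log-prob stand-ins for a batch of 4 examples.
loss = mdpo_loss(*[torch.randn(4) for _ in range(6)])
print(loss)
```
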
https://aclanthology.org/2024.emnlp-main.461.bib | https://aclanthology.org/2024.emnlp-main.461/ | @inproceedings{wang-etal-2024-data,
title = "Data Advisor: Dynamic Data Curation for Safety Alignment of Large Language Models",
author = "Wang, Fei and
Mehrabi, Ninareh and
Goyal, Palash and
Gupta, Rahul and
Chang, Kai-Wei and
Galstyan, Aram",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.461",
pages = "8089--8100",
abstract = "Data are crucial element in large language model (LLM) alignment. Recent studies have explored using LLMs for efficient data collection. However, LLM-generated data often suffers from quality issues, with underrepresented or absent aspects and low-quality datapoints. To address these problems, we propose Data Advisor, an enhanced LLM-based method for generating data that takes into account the characteristics of the desired dataset. Starting from a set of pre-defined principles in hand, Data Advisor monitors the status of the generated data, identifies weaknesses in the current dataset, and advises the next iteration of data generation accordingly. Data Advisor can be easily integrated into existing data generation methods to enhance data quality and coverage. Experiments on safety alignment of three representative LLMs (i.e., Mistral, Llama2, and Falcon) demonstrate the effectiveness of Data Advisor in enhancing model safety against various fine-grained safety issues without sacrificing model utility.",
}
| Data are a crucial element in large language model (LLM) alignment. Recent studies have explored using LLMs for efficient data collection. However, LLM-generated data often suffers from quality issues, with underrepresented or absent aspects and low-quality datapoints. To address these problems, we propose Data Advisor, an enhanced LLM-based method for generating data that takes into account the characteristics of the desired dataset. Starting with a set of pre-defined principles, Data Advisor monitors the status of the generated data, identifies weaknesses in the current dataset, and advises the next iteration of data generation accordingly. Data Advisor can be easily integrated into existing data generation methods to enhance data quality and coverage. Experiments on safety alignment of three representative LLMs (i.e., Mistral, Llama2, and Falcon) demonstrate the effectiveness of Data Advisor in enhancing model safety against various fine-grained safety issues without sacrificing model utility. | [
"Wang, Fei",
"Mehrabi, Ninareh",
"Goyal, Palash",
"Gupta, Rahul",
"Chang, Kai-Wei",
"Galstyan, Aram"
] | Data Advisor: Dynamic Data Curation for Safety Alignment of Large Language Models | emnlp-main.461 | Poster | 2410.05269 | [
""
] | https://huggingface.co/papers/2410.05269 | 4 | 3 | 2 | 6 | [] | [
"fwnlp/data-advisor-safety-alignment",
"fwnlp/self-instruct-safety-alignment"
] | [] | [] | [
"fwnlp/data-advisor-safety-alignment",
"fwnlp/self-instruct-safety-alignment"
] | [] | 1 |
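
The monitor-and-steer loop described in the Data Advisor abstract is straightforward to picture. A minimal sketch follows, assuming hypothetical `generate` and `advise` callables that wrap LLM calls; the toy lambdas only demonstrate the data flow.

```python
def data_advisor_loop(generate, advise, principles, rounds=5):
    """Iterative data curation: generate, let an advisor inspect, refocus."""
    dataset, guidance = [], "cover the principles broadly"
    for _ in range(rounds):
        batch = generate(guidance)              # LLM-based data generation
        dataset.extend(batch)
        # the advisor monitors the dataset and names the weakest aspect next
        guidance = advise(dataset, principles)
    return dataset

demo = data_advisor_loop(
    generate=lambda g: [f"example targeting: {g}"],
    advise=lambda data, p: f"focus on '{p[len(data) % len(p)]}'",
    principles=["privacy", "self-harm", "hate speech"],
)
print(demo)
```
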
https://aclanthology.org/2024.emnlp-main.462.bib | https://aclanthology.org/2024.emnlp-main.462/ | @inproceedings{bostrom-etal-2024-language,
title = "Language-to-Code Translation with a Single Labeled Example",
author = "Bostrom, Kaj and
Jhamtani, Harsh and
Fang, Hao and
Thomson, Sam and
Shin, Richard and
Xia, Patrick and
Van Durme, Benjamin and
Eisner, Jason and
Andreas, Jacob",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.462",
pages = "8101--8112",
abstract = "Tools for translating natural language into code promise natural, open-ended interaction with databases, web APIs, and other software systems. However, this promise is complicated by the diversity and continual development of these systems, each with its own interface and distinct set of features. Building a new language-to-code translator, even starting with a large language model (LM), typically requires annotating a large set of natural language commands with their associated programs. In this paper, we describe ICIP (In-Context Inverse Programming), a method for bootstrapping a language-to-code system using mostly (or entirely) unlabeled programs written using a potentially unfamiliar (but human-readable) library or API. ICIP uses a pre-trained LM to assign candidate natural language descriptions to these programs, then iteratively refines the descriptions to ensure global consistency. Across nine different application domains from the Overnight and Spider benchmarks and text-davinci-003 and CodeLlama-7b-Instruct models, ICIP outperforms a number of prompting baselines. Indeed, in a {``}nearly unsupervised{''} setting with only a single annotated program and 100 unlabeled examples, it achieves up to 85{\%} of the performance of a fully supervised system.",
}
| Tools for translating natural language into code promise natural, open-ended interaction with databases, web APIs, and other software systems. However, this promise is complicated by the diversity and continual development of these systems, each with its own interface and distinct set of features. Building a new language-to-code translator, even starting with a large language model (LM), typically requires annotating a large set of natural language commands with their associated programs. In this paper, we describe ICIP (In-Context Inverse Programming), a method for bootstrapping a language-to-code system using mostly (or entirely) unlabeled programs written using a potentially unfamiliar (but human-readable) library or API. ICIP uses a pre-trained LM to assign candidate natural language descriptions to these programs, then iteratively refines the descriptions to ensure global consistency. Across nine different application domains from the Overnight and Spider benchmarks and text-davinci-003 and CodeLlama-7b-Instruct models, ICIP outperforms a number of prompting baselines. Indeed, in a {``}nearly unsupervised{''} setting with only a single annotated program and 100 unlabeled examples, it achieves up to 85{\%} of the performance of a fully supervised system. | [
"Bostrom, Kaj",
"Jhamtani, Harsh",
"Fang, Hao",
"Thomson, Sam",
"Shin, Richard",
"Xia, Patrick",
"Van Durme, Benjamin",
"Eisner, Jason",
"Andreas, Jacob"
] | Language-to-Code Translation with a Single Labeled Example | emnlp-main.462 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.463.bib | https://aclanthology.org/2024.emnlp-main.463/ | @inproceedings{buchmann-etal-2024-attribute,
title = "Attribute or Abstain: Large Language Models as Long Document Assistants",
author = "Buchmann, Jan and
Liu, Xiao and
Gurevych, Iryna",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.463",
pages = "8113--8140",
abstract = "LLMs can help humans working with long documents, but are known to hallucinate. *Attribution* can increase trust in LLM responses: The LLM provides evidence that supports its response, which enhances verifiability. Existing approaches to attribution have only been evaluated in RAG settings, where the initial retrieval confounds LLM performance. This is crucially different from the long document setting, where retrieval is not needed, but could help. Thus, a long document specific evaluation of attribution is missing. To fill this gap, we present LAB, a benchmark of 6 diverse long document tasks with attribution, and experiments with different approaches to attribution on 5 LLMs of different sizes. We find that *citation*, i.e. response generation and evidence extraction in one step, performs best for large and fine-tuned models, while additional retrieval can help for small, prompted models. We investigate whether the {``}Lost in the Middle{''} phenomenon exists for attribution, but do not find this. We also find that evidence quality can predict response quality on datasets with simple responses, but not so for complex responses, as models struggle with providing evidence for complex claims. We release code and data for further investigation. [Link](https://github.com/UKPLab/arxiv2024-attribute-or-abstain)",
}
| LLMs can help humans working with long documents, but are known to hallucinate. *Attribution* can increase trust in LLM responses: The LLM provides evidence that supports its response, which enhances verifiability. Existing approaches to attribution have only been evaluated in RAG settings, where the initial retrieval confounds LLM performance. This is crucially different from the long document setting, where retrieval is not needed, but could help. Thus, a long document specific evaluation of attribution is missing. To fill this gap, we present LAB, a benchmark of 6 diverse long document tasks with attribution, and experiments with different approaches to attribution on 5 LLMs of different sizes. We find that *citation*, i.e. response generation and evidence extraction in one step, performs best for large and fine-tuned models, while additional retrieval can help for small, prompted models. We investigate whether the {``}Lost in the Middle{''} phenomenon exists for attribution, but do not find this. We also find that evidence quality can predict response quality on datasets with simple responses, but not so for complex responses, as models struggle with providing evidence for complex claims. We release code and data for further investigation. [Link](https://github.com/UKPLab/arxiv2024-attribute-or-abstain) | [
"Buchmann, Jan",
"Liu, Xiao",
"Gurevych, Iryna"
] | Attribute or Abstain: Large Language Models as Long Document Assistants | emnlp-main.463 | Poster | 2407.07799 | [
"https://github.com/ukplab/arxiv2024-attribute-or-abstain"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.464.bib | https://aclanthology.org/2024.emnlp-main.464/ | @inproceedings{wang-etal-2024-fedkim,
title = "{FEDKIM}: Adaptive Federated Knowledge Injection into Medical Foundation Models",
author = "Wang, Xiaochen and
Wang, Jiaqi and
Xiao, Houping and
Chen, Jinghui and
Ma, Fenglong",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.464",
pages = "8141--8154",
abstract = "Foundation models have demonstrated remarkable capabilities in handling diverse modalities and tasks, outperforming conventional artificial intelligence (AI) approaches that are highly task-specific and modality-reliant. In the medical domain, however, the development of comprehensive foundation models is constrained by limited access to diverse modalities and stringent privacy regulations. To address these constraints, this study introduces a novel knowledge injection approach, FedKIM, designed to scale the medical foundation model within a federated learning framework. FedKIM leverages lightweight local models to extract healthcare knowledge from private data and integrates this knowledge into a centralized foundation model using a designed adaptive Multitask Multimodal Mixture Of Experts (M$^3$OE) module. This method not only preserves privacy but also enhances the model{'}s ability to handle complex medical tasks involving multiple modalities. Our extensive experiments across twelve tasks in seven modalities demonstrate the effectiveness of FedKIM in various settings, highlighting its potential to scale medical foundation models without direct access to sensitive data. Source codes are available at https://github.com/XiaochenWang-PSU/FedKIM.",
}
| Foundation models have demonstrated remarkable capabilities in handling diverse modalities and tasks, outperforming conventional artificial intelligence (AI) approaches that are highly task-specific and modality-reliant. In the medical domain, however, the development of comprehensive foundation models is constrained by limited access to diverse modalities and stringent privacy regulations. To address these constraints, this study introduces a novel knowledge injection approach, FedKIM, designed to scale the medical foundation model within a federated learning framework. FedKIM leverages lightweight local models to extract healthcare knowledge from private data and integrates this knowledge into a centralized foundation model using a designed adaptive Multitask Multimodal Mixture Of Experts (M$^3$OE) module. This method not only preserves privacy but also enhances the model{'}s ability to handle complex medical tasks involving multiple modalities. Our extensive experiments across twelve tasks in seven modalities demonstrate the effectiveness of FedKIM in various settings, highlighting its potential to scale medical foundation models without direct access to sensitive data. Source codes are available at https://github.com/XiaochenWang-PSU/FedKIM. | [
"Wang, Xiaochen",
"Wang, Jiaqi",
"Xiao, Houping",
"Chen, Jinghui",
"Ma, Fenglong"
] | FEDKIM: Adaptive Federated Knowledge Injection into Medical Foundation Models | emnlp-main.464 | Poster | 2408.10276 | [
"https://github.com/XiaochenWang-PSU/FedKIM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
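
One way to picture FedKIM's adaptive M$^3$OE routing is a gate conditioned on task and modality identities that softly combines a pool of experts. The sketch below is an illustrative guess at such a module, not FedKIM's released code: every dimension, layer choice, and the linear experts are assumptions.

```python
import torch
import torch.nn as nn

class M3OEGate(nn.Module):
    """Multitask-multimodal MoE sketch: task/modality embeddings pick experts."""
    def __init__(self, dim=256, n_experts=4, n_tasks=12, n_modalities=7):
        super().__init__()
        self.task_emb = nn.Embedding(n_tasks, dim)
        self.mod_emb = nn.Embedding(n_modalities, dim)
        self.gate = nn.Linear(2 * dim, n_experts)
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_experts)])

    def forward(self, x, task_id, modality_id):
        ctx = torch.cat([self.task_emb(task_id), self.mod_emb(modality_id)], dim=-1)
        w = torch.softmax(self.gate(ctx), dim=-1)                # expert weights
        out = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, dim, E)
        return (out * w.unsqueeze(1)).sum(-1)                    # weighted mix

y = M3OEGate()(torch.randn(2, 256), torch.tensor([0, 3]), torch.tensor([1, 2]))
print(y.shape)  # torch.Size([2, 256])
```
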
https://aclanthology.org/2024.emnlp-main.465.bib | https://aclanthology.org/2024.emnlp-main.465/ | @inproceedings{sun-etal-2024-retrieved,
title = "Retrieved In-Context Principles from Previous Mistakes",
author = "Sun, Hao and
Jiang, Yong and
Wang, Bo and
Hou, Yingyan and
Zhang, Yan and
Xie, Pengjun and
Huang, Fei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.465",
pages = "8155--8169",
abstract = "In-context learning (ICL) has been instrumental in adapting large language models (LLMs) to downstream tasks using correct input-output examples. Recent advances have attempted to improve model performance through principles derived from mistakes, yet these approaches suffer from lack of customization and inadequate error coverage. To address these limitations, we propose Retrieved In-Context Principles (RICP), a novel teacher-student framework. In RICP, the teacher model analyzes mistakes from the student model to generate reasons and insights for preventing similar mistakes. These mistakes are clustered based on their underlying reasons for developing task-level principles, enhancing the error coverage of principles. During inference, the most relevant mistakes for each question are retrieved to create question-level principles, improving the customization of the provided guidance. RICP is orthogonal to existing prompting methods and does not require intervention from the teacher model during inference. Experimental results across seven reasoning benchmarks reveal that RICP effectively enhances performance when applied to various prompting strategies.",
}
| In-context learning (ICL) has been instrumental in adapting large language models (LLMs) to downstream tasks using correct input-output examples. Recent advances have attempted to improve model performance through principles derived from mistakes, yet these approaches suffer from a lack of customization and inadequate error coverage. To address these limitations, we propose Retrieved In-Context Principles (RICP), a novel teacher-student framework. In RICP, the teacher model analyzes mistakes from the student model to generate reasons and insights for preventing similar mistakes. These mistakes are clustered based on their underlying reasons to develop task-level principles, enhancing the error coverage of principles. During inference, the most relevant mistakes for each question are retrieved to create question-level principles, improving the customization of the provided guidance. RICP is orthogonal to existing prompting methods and does not require intervention from the teacher model during inference. Experimental results across seven reasoning benchmarks reveal that RICP effectively enhances performance when applied to various prompting strategies. | [
"Sun, Hao",
"Jiang, Yong",
"Wang, Bo",
"Hou, Yingyan",
"Zhang, Yan",
"Xie, Pengjun",
"Huang, Fei"
] | Retrieved In-Context Principles from Previous Mistakes | emnlp-main.465 | Poster | 2407.05682 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
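
The retrieval step that produces RICP's question-level principles can be illustrated with plain cosine similarity over precomputed embeddings. This is a sketch only: the embedding model and how the retrieved mistakes are turned into principles are left abstract.

```python
import numpy as np

def retrieve_mistakes(question_vec, mistake_vecs, mistakes, k=3):
    """Return the k past mistakes most similar to the new question."""
    q = question_vec / np.linalg.norm(question_vec)
    m = mistake_vecs / np.linalg.norm(mistake_vecs, axis=1, keepdims=True)
    sims = m @ q                       # cosine similarity to each mistake
    top = np.argsort(-sims)[:k]
    return [mistakes[i] for i in top]

# Random embeddings stand in for a real encoder.
vecs = np.random.rand(10, 64)
print(retrieve_mistakes(np.random.rand(64), vecs,
                        [f"mistake {i}" for i in range(10)]))
```
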
https://aclanthology.org/2024.emnlp-main.466.bib | https://aclanthology.org/2024.emnlp-main.466/ | @inproceedings{chen-etal-2024-emoknob,
title = "{E}mo{K}nob: Enhance Voice Cloning with Fine-Grained Emotion Control",
author = "Chen, Haozhe and
Chen, Run and
Hirschberg, Julia",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.466",
pages = "8170--8180",
abstract = "While recent advances in Text-to-Speech (TTS) technology produce natural and expressive speech, they lack the option for users to select emotion and control intensity. We propose EmoKnob, a framework that allows fine-grained emotion control in speech synthesis with few-shot demonstrative samples of arbitrary emotion. Our framework leverages the expressive speaker representation space made possible by recent advances in foundation voice cloning models. Based on the few-shot capability of our emotion control framework, we propose two methods to apply emotion control on emotions described by open-ended text, enabling an intuitive interface for controlling a diverse array of nuanced emotions. To facilitate a more systematic emotional speech synthesis field, we introduce a set of evaluation metrics designed to rigorously assess the faithfulness and recognizability of emotion control frameworks. Through objective and subjective evaluations, we show that our emotion control framework effectively embeds emotions into speech and surpasses emotion expressiveness of commercial TTS services.",
}
| While recent advances in Text-to-Speech (TTS) technology produce natural and expressive speech, they lack the option for users to select emotion and control intensity. We propose EmoKnob, a framework that allows fine-grained emotion control in speech synthesis with few-shot demonstrative samples of arbitrary emotion. Our framework leverages the expressive speaker representation space made possible by recent advances in foundation voice cloning models. Based on the few-shot capability of our emotion control framework, we propose two methods to apply emotion control on emotions described by open-ended text, enabling an intuitive interface for controlling a diverse array of nuanced emotions. To facilitate a more systematic emotional speech synthesis field, we introduce a set of evaluation metrics designed to rigorously assess the faithfulness and recognizability of emotion control frameworks. Through objective and subjective evaluations, we show that our emotion control framework effectively embeds emotions into speech and surpasses emotion expressiveness of commercial TTS services. | [
"Chen, Haozhe",
"Chen, Run",
"Hirschberg, Julia"
] | EmoKnob: Enhance Voice Cloning with Fine-Grained Emotion Control | emnlp-main.466 | Poster | 2410.00316 | [
"https://github.com/tonychenxyz/emoknob"
] | https://huggingface.co/papers/2410.00316 | 1 | 4 | 2 | 3 | [] | [] | [
"tonychenxyz/emo-knob"
] | [] | [] | [
"tonychenxyz/emo-knob"
] | 1 |
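
The few-shot emotion control described in the EmoKnob abstract boils down to simple arithmetic in a speaker-embedding space. A hedged sketch follows, assuming access to embeddings from some voice-cloning encoder; the 256-dimensional size and the purely linear "knob" are assumptions, not the paper's exact formulation.

```python
import numpy as np

def emotion_direction(emotional_embs, neutral_embs):
    """Average embedding difference between emotional and neutral utterances."""
    d = np.mean(np.asarray(emotional_embs) - np.asarray(neutral_embs), axis=0)
    return d / np.linalg.norm(d)

def apply_knob(speaker_emb, direction, intensity):
    """Shift a cloned speaker embedding along the emotion direction."""
    return speaker_emb + intensity * direction   # intensity is the "knob"

# Random vectors stand in for real encoder outputs from paired utterances.
emo = [np.random.rand(256) for _ in range(4)]
neu = [np.random.rand(256) for _ in range(4)]
styled = apply_knob(np.random.rand(256), emotion_direction(emo, neu), 0.7)
print(styled.shape)
```
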
https://aclanthology.org/2024.emnlp-main.467.bib | https://aclanthology.org/2024.emnlp-main.467/ | @inproceedings{liu-etal-2024-vptq,
title = "{VPTQ}: Extreme Low-bit Vector Post-Training Quantization for Large Language Models",
author = "Liu, Yifei and
Wen, Jicheng and
Wang, Yang and
Ye, Shengyu and
Zhang, Li Lyna and
Cao, Ting and
Li, Cheng and
Yang, Mao",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.467",
pages = "8181--8196",
abstract = "Scaling model size significantly challenges the deployment and inference of Large Language Models (LLMs). Due to the redundancy in LLM weights, recent research has focused on pushing weight-only quantization to extremely low-bit (even down to 2 bits). It reduces memory requirements, optimizes storage costs, and decreases memory bandwidth needs during inference. However, due to numerical representation limitations, traditional scalar-based weight quantization struggles to achieve such extreme low-bit.Recent research on Vector Quantization (VQ) for LLMs has demonstrated the potential for extremely low-bit model quantization by compressing vectors into indices using lookup tables. In this paper, we introduce **Vector Post-Training Quantization (VPTQ)** for extremely low-bit quantization of LLMs. We use Second-Order Optimization to formulate the LLM VQ problem and guide our quantization algorithm design by solving the optimization.We further refine the weights using Channel-Independent Second-Order Optimization for a granular VQ.In addition, by decomposing the optimization problem, we propose a brief and effective codebook initialization algorithm. We also extend VPTQ to support residual and outlier quantization, which enhances model accuracy and further compresses the model.Our experimental results show that VPTQ reduces model quantization perplexity by 0.01-0.34 on LLaMA-2, 0.38-0.68 on Mistral-7B, 4.41-7.34 on LLaMA-3 over SOTA at 2-bit, with an average accuracy improvement of 0.79-1.5{\%} on LLaMA-2, 1{\%} on Mistral-7B, 11-22{\%} on LLaMA-3 on QA tasks on average. We only utilize 10.4-18.6{\%} of the quantization algorithm execution time, resulting in a 1.6-$1.8\times$ increase in inference throughput compared to SOTA.",
}
| Scaling model size significantly challenges the deployment and inference of Large Language Models (LLMs). Due to the redundancy in LLM weights, recent research has focused on pushing weight-only quantization to extremely low bit-widths (even down to 2 bits). It reduces memory requirements, optimizes storage costs, and decreases memory bandwidth needs during inference. However, due to numerical representation limitations, traditional scalar-based weight quantization struggles to achieve such extremely low bit-widths. Recent research on Vector Quantization (VQ) for LLMs has demonstrated the potential for extremely low-bit model quantization by compressing vectors into indices using lookup tables. In this paper, we introduce **Vector Post-Training Quantization (VPTQ)** for extremely low-bit quantization of LLMs. We use Second-Order Optimization to formulate the LLM VQ problem and guide our quantization algorithm design by solving the optimization. We further refine the weights using Channel-Independent Second-Order Optimization for a granular VQ. In addition, by decomposing the optimization problem, we propose a brief and effective codebook initialization algorithm. We also extend VPTQ to support residual and outlier quantization, which enhances model accuracy and further compresses the model. Our experimental results show that VPTQ reduces model quantization perplexity by 0.01-0.34 on LLaMA-2, 0.38-0.68 on Mistral-7B, 4.41-7.34 on LLaMA-3 over SOTA at 2-bit, with an average accuracy improvement of 0.79-1.5{\%} on LLaMA-2, 1{\%} on Mistral-7B, 11-22{\%} on LLaMA-3 on QA tasks on average. We only utilize 10.4-18.6{\%} of the quantization algorithm execution time, resulting in a 1.6-$1.8\times$ increase in inference throughput compared to SOTA. | [
"Liu, Yifei",
"Wen, Jicheng",
"Wang, Yang",
"Ye, Shengyu",
"Zhang, Li Lyna",
"Cao, Ting",
"Li, Cheng",
"Yang, Mao"
] | VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models | emnlp-main.467 | Poster | 2409.17066 | [
"https://github.com/microsoft/vptq"
] | https://huggingface.co/papers/2409.17066 | 6 | 27 | 3 | 8 | [
"VPTQ-community/Qwen2.5-72B-Instruct-v16-k65536-65536-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-256-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v16-k65536-32768-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-64-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-65536-woft",
"VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v8-k65536-65536-woft",
"VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-65536-woft",
"VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k16384-0-woft",
"VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-0-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-4-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-0-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v8-k1024-512-woft",
"VPTQ-community/Meta-Llama-3.1-8B-Instruct-v12-k65536-4096-woft",
"VPTQ-community/Qwen2.5-32B-Instruct-v8-k65536-256-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-16384-woft",
"VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-256-woft",
"VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k32768-0-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-4-woft-duplicated",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-1024-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v8-k4096-0-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k32768-32768-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-128-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v8-k512-512-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-256-woft",
"VPTQ-community/Qwen2.5-7B-Instruct-v16-k65536-65536-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-65536-woft",
"VPTQ-community/Qwen2.5-32B-Instruct-v16-k65536-65536-woft",
"VPTQ-community/Qwen2.5-32B-Instruct-v8-k65536-65536-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-65536-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v8-k65536-65536-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v8-k65536-256-woft",
"VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-256-woft",
"VPTQ-community/Meta-Llama-3.1-70B-Instruct-v16-k65536-32768-woft",
"VPTQ-community/Meta-Llama-3.1-8B-Instruct-v8-k65536-65536-woft",
"VPTQ-community/Meta-Llama-3.1-8B-Instruct-v8-k65536-4096-woft",
"VPTQ-community/Meta-Llama-3.1-8B-Instruct-v8-k65536-256-woft",
"VPTQ-community/Meta-Llama-3.1-70B-Instruct-v16-k65536-65536-woft",
"VPTQ-community/Qwen2.5-7B-Instruct-v8-k65536-256-woft",
"VPTQ-community/Qwen2.5-14B-Instruct-v8-k256-256-woft",
"VPTQ-community/Qwen2.5-14B-Instruct-v16-k65536-65536-woft",
"VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-256-woft",
"VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-0-woft",
"VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-65536-woft",
"VPTQ-community/Qwen2.5-7B-Instruct-v8-k65536-65536-woft",
"VPTQ-community/Qwen2.5-7B-Instruct-v8-k256-256-woft",
"VPTQ-community/Qwen2.5-7B-Instruct-v8-k65536-0-woft",
"VPTQ-community/Qwen2.5-32B-Instruct-v8-k65536-0-woft",
"VPTQ-community/Qwen2.5-32B-Instruct-v8-k256-256-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v8-k65536-0-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-256-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v8-k65536-256-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-1024-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-4096-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v8-k65536-65536-woft",
"VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v8-k65536-256-woft",
"VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-65536-woft",
"VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-1024-woft",
"VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v8-k65536-0-woft",
"VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-16384-woft"
] | [] | [
"VPTQ-community/VPTQ-Demo",
"OpenSourceRonin/VPTQ-demo",
"microsoft/VPTQ",
"OpenSourceRonin/VPTQ_demo"
] | [
"VPTQ-community/Qwen2.5-72B-Instruct-v16-k65536-65536-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-256-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v16-k65536-32768-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-64-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-65536-woft",
"VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v8-k65536-65536-woft",
"VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-65536-woft",
"VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k16384-0-woft",
"VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-0-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-4-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-0-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v8-k1024-512-woft",
"VPTQ-community/Meta-Llama-3.1-8B-Instruct-v12-k65536-4096-woft",
"VPTQ-community/Qwen2.5-32B-Instruct-v8-k65536-256-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-16384-woft",
"VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-256-woft",
"VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k32768-0-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-4-woft-duplicated",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-1024-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v8-k4096-0-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k32768-32768-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-128-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v8-k512-512-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v16-k65536-256-woft",
"VPTQ-community/Qwen2.5-7B-Instruct-v16-k65536-65536-woft",
"VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-65536-woft",
"VPTQ-community/Qwen2.5-32B-Instruct-v16-k65536-65536-woft",
"VPTQ-community/Qwen2.5-32B-Instruct-v8-k65536-65536-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-65536-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v8-k65536-65536-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v8-k65536-256-woft",
"VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-256-woft",
"VPTQ-community/Meta-Llama-3.1-70B-Instruct-v16-k65536-32768-woft",
"VPTQ-community/Meta-Llama-3.1-8B-Instruct-v8-k65536-65536-woft",
"VPTQ-community/Meta-Llama-3.1-8B-Instruct-v8-k65536-4096-woft",
"VPTQ-community/Meta-Llama-3.1-8B-Instruct-v8-k65536-256-woft",
"VPTQ-community/Meta-Llama-3.1-70B-Instruct-v16-k65536-65536-woft",
"VPTQ-community/Qwen2.5-7B-Instruct-v8-k65536-256-woft",
"VPTQ-community/Qwen2.5-14B-Instruct-v8-k256-256-woft",
"VPTQ-community/Qwen2.5-14B-Instruct-v16-k65536-65536-woft",
"VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-256-woft",
"VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-0-woft",
"VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-65536-woft",
"VPTQ-community/Qwen2.5-7B-Instruct-v8-k65536-65536-woft",
"VPTQ-community/Qwen2.5-7B-Instruct-v8-k256-256-woft",
"VPTQ-community/Qwen2.5-7B-Instruct-v8-k65536-0-woft",
"VPTQ-community/Qwen2.5-32B-Instruct-v8-k65536-0-woft",
"VPTQ-community/Qwen2.5-32B-Instruct-v8-k256-256-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v8-k65536-0-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-256-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v8-k65536-256-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-1024-woft",
"VPTQ-community/Mistral-Large-Instruct-2407-v16-k65536-4096-woft",
"VPTQ-community/Meta-Llama-3.1-405B-Instruct-v8-k65536-65536-woft",
"VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v8-k65536-256-woft",
"VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-65536-woft",
"VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-1024-woft",
"VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v8-k65536-0-woft",
"VPTQ-community/Llama-3.1-Nemotron-70B-Instruct-HF-v16-k65536-16384-woft"
] | [] | [
"VPTQ-community/VPTQ-Demo",
"OpenSourceRonin/VPTQ-demo",
"microsoft/VPTQ",
"OpenSourceRonin/VPTQ_demo"
] | 1 |
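
Stripped of the second-order machinery, the vector-quantization step underlying VPTQ looks like the toy below: weight rows are chopped into short vectors, a small codebook is fit with plain k-means, and each vector is stored as a codebook index. This is an illustration only; VPTQ's actual optimization, residual quantization, and outlier handling are not shown.

```python
import numpy as np

def vq_quantize(W, vec_dim=8, k=256, iters=10):
    """Quantize a weight matrix to codebook indices plus a lookup table."""
    vecs = W.reshape(-1, vec_dim)
    rng = np.random.default_rng(0)
    codebook = vecs[rng.choice(len(vecs), k, replace=False)]
    for _ in range(iters):                                  # plain k-means
        d = ((vecs[:, None, :] - codebook[None]) ** 2).sum(-1)
        idx = d.argmin(1)
        for c in range(k):
            if (idx == c).any():
                codebook[c] = vecs[idx == c].mean(0)
    return idx.reshape(W.shape[0], -1), codebook

W = np.random.randn(64, 64).astype(np.float32)
indices, codebook = vq_quantize(W)
W_hat = codebook[indices].reshape(W.shape)   # dequantize via lookup table
print(np.abs(W - W_hat).mean())              # mean reconstruction error
```
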
https://aclanthology.org/2024.emnlp-main.468.bib | https://aclanthology.org/2024.emnlp-main.468/ | @inproceedings{pasti-etal-2024-l,
title = "An {L}* Algorithm for Deterministic Weighted Regular Languages",
author = {Pasti, Clemente and
Karag{\"o}z, Talu and
Nowak, Franz and
Svete, Anej and
Boumasmoud, Reda and
Cotterell, Ryan},
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.468",
pages = "8197--8210",
abstract = "Extracting finite state automata (FSAs) fromblack-box models offers a powerful approachto gaining interpretable insights into complexmodel behaviors. To support this pursuit, wepresent a weighted variant of Angluin{'}s (1987)L* algorithm for learning FSAs. We stay faithful to the original formulation, devising a wayto exactly learn deterministic weighted FSAswhose weights support division. Furthermore,we formulate the learning process in a mannerthat highlights the connection with FSA minimization, showing how L* directly learns aminimal automaton for the target language.",
}
| Extracting finite state automata (FSAs) from black-box models offers a powerful approach to gaining interpretable insights into complex model behaviors. To support this pursuit, we present a weighted variant of Angluin{'}s (1987) L* algorithm for learning FSAs. We stay faithful to the original formulation, devising a way to exactly learn deterministic weighted FSAs whose weights support division. Furthermore, we formulate the learning process in a manner that highlights the connection with FSA minimization, showing how L* directly learns a minimal automaton for the target language. | [
"Pasti, Clemente",
"Karag{\\\"o}z, Talu",
"Nowak, Franz",
"Svete, Anej",
"Boumasmoud, Reda",
"Cotterell, Ryan"
] | An L* Algorithm for Deterministic Weighted Regular Languages | emnlp-main.468 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.469.bib | https://aclanthology.org/2024.emnlp-main.469/ | @inproceedings{sun-etal-2024-towards-verifiable,
title = "Towards Verifiable Text Generation with Evolving Memory and Self-Reflection",
author = "Sun, Hao and
Cai, Hengyi and
Wang, Bo and
Hou, Yingyan and
Wei, Xiaochi and
Wang, Shuaiqiang and
Zhang, Yan and
Yin, Dawei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.469",
pages = "8211--8227",
abstract = "Despite the remarkable ability of large language models (LLMs) in language comprehension and generation, they often suffer from producing factually incorrect information, also known as hallucination. A promising solution to this issue is verifiable text generation, which prompts LLMs to generate content with citations for accuracy verification. However, verifiable text generation is non-trivial due to the focus-shifting phenomenon, the intricate reasoning needed to align the claim with correct citations, and the dilemma between the precision and breadth of retrieved documents. In this paper, we present VTG, an innovative framework for Verifiable Text Generation with evolving memory and self-reflection. VTG introduces evolving long short-term memory to retain both valuable documents and recent documents. A two-tier verifier equipped with an evidence finder is proposed to rethink and reflect on the relationship between the claim and citations. Furthermore, active retrieval and diverse query generation are utilized to enhance both the precision and breadth of the retrieved documents. We conduct extensive experiments on five datasets across three knowledge-intensive tasks and the results reveal that VTG significantly outperforms baselines.",
}
| Despite the remarkable ability of large language models (LLMs) in language comprehension and generation, they often suffer from producing factually incorrect information, also known as hallucination. A promising solution to this issue is verifiable text generation, which prompts LLMs to generate content with citations for accuracy verification. However, verifiable text generation is non-trivial due to the focus-shifting phenomenon, the intricate reasoning needed to align the claim with correct citations, and the dilemma between the precision and breadth of retrieved documents. In this paper, we present VTG, an innovative framework for Verifiable Text Generation with evolving memory and self-reflection. VTG introduces evolving long short-term memory to retain both valuable documents and recent documents. A two-tier verifier equipped with an evidence finder is proposed to rethink and reflect on the relationship between the claim and citations. Furthermore, active retrieval and diverse query generation are utilized to enhance both the precision and breadth of the retrieved documents. We conduct extensive experiments on five datasets across three knowledge-intensive tasks and the results reveal that VTG significantly outperforms baselines. | [
"Sun, Hao",
"Cai, Hengyi",
"Wang, Bo",
"Hou, Yingyan",
"Wei, Xiaochi",
"Wang, Shuaiqiang",
"Zhang, Yan",
"Yin, Dawei"
] | Towards Verifiable Text Generation with Evolving Memory and Self-Reflection | emnlp-main.469 | Poster | 2312.09075 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.470.bib | https://aclanthology.org/2024.emnlp-main.470/ | @inproceedings{sahu-etal-2024-pelican,
title = "Pelican: Correcting Hallucination in Vision-{LLM}s via Claim Decomposition and Program of Thought Verification",
author = "Sahu, Pritish and
Sikka, Karan and
Divakaran, Ajay",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.470",
pages = "8228--8248",
abstract = "Large Visual Language Models (LVLMs) struggle with hallucinations in visual instruction following task(s). These issues hinder their trustworthiness and real-world applicability. We propose Pelican {--} a novel framework designed to detect and mitigate hallucinations through claim verification. Pelican first decomposes the visual claim into a chain of sub-claims based on first-order predicates. These sub-claims consists of (predicate, question) pairs and can be conceptualized as nodes of a computational graph. We then use use Program-of-Thought prompting to generate Python code for answering these questions through flexible composition of external tools. Pelican improves over prior work by introducing (1) intermediate variables for precise grounding of object instances, and (2) shared computation for answering the sub-question to enable adaptive corrections and inconsistency identification. We finally use reasoning abilities of LLM to verify the correctness of the the claim by considering the consistency and confidence of the (question, answer) pairs from each sub-claim. Our experiments demonstrate consistent performance improvements over various baseline LVLMs and existing hallucination mitigation approaches across several benchmarks.",
}
| Large Visual Language Models (LVLMs) struggle with hallucinations in visual instruction following task(s). These issues hinder their trustworthiness and real-world applicability. We propose Pelican {--} a novel framework designed to detect and mitigate hallucinations through claim verification. Pelican first decomposes the visual claim into a chain of sub-claims based on first-order predicates. These sub-claims consist of (predicate, question) pairs and can be conceptualized as nodes of a computational graph. We then use Program-of-Thought prompting to generate Python code for answering these questions through flexible composition of external tools. Pelican improves over prior work by introducing (1) intermediate variables for precise grounding of object instances, and (2) shared computation for answering the sub-questions to enable adaptive corrections and inconsistency identification. We finally use the reasoning abilities of an LLM to verify the correctness of the claim by considering the consistency and confidence of the (question, answer) pairs from each sub-claim. Our experiments demonstrate consistent performance improvements over various baseline LVLMs and existing hallucination mitigation approaches across several benchmarks. | [
"Sahu, Pritish",
"Sikka, Karan",
"Divakaran, Ajay"
] | Pelican: Correcting Hallucination in Vision-LLMs via Claim Decomposition and Program of Thought Verification | emnlp-main.470 | Poster | 2407.02352 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.471.bib | https://aclanthology.org/2024.emnlp-main.471/ | @inproceedings{hirota-etal-2024-resampled,
title = "Resampled Datasets Are Not Enough: Mitigating Societal Bias Beyond Single Attributes",
author = "Hirota, Yusuke and
Andrews, Jerone and
Zhao, Dora and
Papakyriakopoulos, Orestis and
Modas, Apostolos and
Nakashima, Yuta and
Xiang, Alice",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.471",
pages = "8249--8267",
abstract = "We tackle societal bias in image-text datasets by removing spurious correlations between protected groups and image attributes. Traditional methods only target labeled attributes, ignoring biases from unlabeled ones. Using text-guided inpainting models, our approach ensures protected group independence from all attributes and mitigates inpainting biases through data filtering. Evaluations on multi-label image classification and image captioning tasks show our method effectively reduces bias without compromising performance across various models. Specifically, we achieve an average societal bias reduction of 46.1{\%} in leakage-based bias metrics for multi-label classification and 74.8{\%} for image captioning.",
}
| We tackle societal bias in image-text datasets by removing spurious correlations between protected groups and image attributes. Traditional methods only target labeled attributes, ignoring biases from unlabeled ones. Using text-guided inpainting models, our approach ensures protected group independence from all attributes and mitigates inpainting biases through data filtering. Evaluations on multi-label image classification and image captioning tasks show our method effectively reduces bias without compromising performance across various models. Specifically, we achieve an average societal bias reduction of 46.1{\%} in leakage-based bias metrics for multi-label classification and 74.8{\%} for image captioning. | [
"Hirota, Yusuke",
"Andrews, Jerone",
"Zhao, Dora",
"Papakyriakopoulos, Orestis",
"Modas, Apostolos",
"Nakashima, Yuta",
"Xiang, Alice"
] | Resampled Datasets Are Not Enough: Mitigating Societal Bias Beyond Single Attributes | emnlp-main.471 | Poster | 2407.03623 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.472.bib | https://aclanthology.org/2024.emnlp-main.472/ | @inproceedings{cao-etal-2024-realvul,
title = "{R}eal{V}ul: Can We Detect Vulnerabilities in Web Applications with {LLM}?",
author = "Cao, Di and
Liao, Yong and
Shang, Xiuwei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.472",
pages = "8268--8282",
abstract = "The latest advancements in large language models (LLMs) have sparked interest in their potential for software vulnerability detection. However, there is currently a lack of research specifically focused on vulnerabilities in the PHP language, and challenges in data sampling and processing persist, hindering the model{'}s ability to effectively capture the characteristics of specific vulnerabilities. In this paper, we present RealVul, the first LLM-based framework designed for PHP vulnerability detection, addressing these issues. By improving code sampling methods and employing normalization techniques, we can isolate potential vulnerability triggers while streamlining the code and eliminating unnecessary semantic information, enabling the model to better understand and learn from the generated vulnerability samples. We also address the issue of insufficient PHP vulnerability samples by improving data synthesis methods. To evaluate RealVul{'}s performance, we conduct an extensive analysis using five distinct code LLMs on vulnerability data from 180 PHP projects. The results demonstrate a significant improvement in both effectiveness and generalization compared to existing methods, effectively boosting the vulnerability detection capabilities of these models.",
}
| The latest advancements in large language models (LLMs) have sparked interest in their potential for software vulnerability detection. However, there is currently a lack of research specifically focused on vulnerabilities in the PHP language, and challenges in data sampling and processing persist, hindering the model{'}s ability to effectively capture the characteristics of specific vulnerabilities. In this paper, we present RealVul, the first LLM-based framework designed for PHP vulnerability detection, addressing these issues. By improving code sampling methods and employing normalization techniques, we can isolate potential vulnerability triggers while streamlining the code and eliminating unnecessary semantic information, enabling the model to better understand and learn from the generated vulnerability samples. We also address the issue of insufficient PHP vulnerability samples by improving data synthesis methods. To evaluate RealVul{'}s performance, we conduct an extensive analysis using five distinct code LLMs on vulnerability data from 180 PHP projects. The results demonstrate a significant improvement in both effectiveness and generalization compared to existing methods, effectively boosting the vulnerability detection capabilities of these models. | [
"Cao, Di",
"Liao, Yong",
"Shang, Xiuwei"
] | RealVul: Can We Detect Vulnerabilities in Web Applications with LLM? | emnlp-main.472 | Poster | 2410.07573 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.473.bib | https://aclanthology.org/2024.emnlp-main.473/ | @inproceedings{king-flanigan-2024-unsupervised,
title = "Unsupervised End-to-End Task-Oriented Dialogue with {LLM}s: The Power of the Noisy Channel",
author = "King, Brendan and
Flanigan, Jeffrey",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.473",
pages = "8283--8300",
abstract = "Training task-oriented dialogue systems typically requires turn-level annotations for interacting with their APIs: e.g. a dialogue state and the system actions taken at each step. These annotations can be costly to produce, error-prone, and require both domain and annotation expertise. With advances in LLMs, we hypothesize that unlabeled data and a schema definition are sufficient for building a working task-oriented dialogue system, completely unsupervised. We consider a novel unsupervised setting of only (1) a well-defined API schema (2) a set of unlabeled dialogues between a user and agent. We propose an innovative approach using expectation-maximization (EM) that infers turn-level annotations as latent variables using a noisy channel model to build an end-to-end dialogue agent. Evaluating our approach on the MultiWOZ benchmark, our method more than doubles the dialogue success rate of a strong GPT-3.5 baseline.",
}
| Training task-oriented dialogue systems typically requires turn-level annotations for interacting with their APIs: e.g. a dialogue state and the system actions taken at each step. These annotations can be costly to produce, error-prone, and require both domain and annotation expertise. With advances in LLMs, we hypothesize that unlabeled data and a schema definition are sufficient for building a working task-oriented dialogue system, completely unsupervised. We consider a novel unsupervised setting of only (1) a well-defined API schema and (2) a set of unlabeled dialogues between a user and an agent. We propose an innovative approach using expectation-maximization (EM) that infers turn-level annotations as latent variables using a noisy channel model to build an end-to-end dialogue agent. Evaluating our approach on the MultiWOZ benchmark, our method more than doubles the dialogue success rate of a strong GPT-3.5 baseline. | [
"King, Brendan",
"Flanigan, Jeffrey"
] | Unsupervised End-to-End Task-Oriented Dialogue with LLMs: The Power of the Noisy Channel | emnlp-main.473 | Oral | 2404.15219 | [
"https://github.com/jlab-nlp/nc_latent_tod"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.474.bib | https://aclanthology.org/2024.emnlp-main.474/ | @inproceedings{chen-etal-2024-humans,
title = "Humans or {LLM}s as the Judge? A Study on Judgement Bias",
author = "Chen, Guiming Hardy and
Chen, Shunian and
Liu, Ziche and
Jiang, Feng and
Wang, Benyou",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.474",
pages = "8301--8327",
abstract = "Adopting human and large language models (LLM) as judges (*a.k.a* human- and LLM-as-a-judge) for evaluating the performance of LLMs has recently gained attention. Nonetheless, this approach concurrently introduces potential biases from human and LLMs, questioning the reliability of the evaluation results. In this paper, we propose a novel framework that is free from referencing groundtruth annotations for investigating **Misinformation Oversight Bias**, **Gender Bias**, **Authority Bias** and **Beauty Bias** on LLM and human judges. We curate a dataset referring to the revised Bloom{'}s Taxonomy and conduct thousands of evaluations. Results show that human and LLM judges are vulnerable to perturbations to various degrees, and that even the cutting-edge judges possess considerable biases. We further exploit these biases to conduct attacks on LLM judges. We hope that our work can notify the community of the bias and vulnerability of human- and LLM-as-a-judge, as well as the urgency of developing robust evaluation systems.",
}
| Adopting humans and large language models (LLMs) as judges (*a.k.a.* human- and LLM-as-a-judge) for evaluating the performance of LLMs has recently gained attention. Nonetheless, this approach concurrently introduces potential biases from humans and LLMs, questioning the reliability of the evaluation results. In this paper, we propose a novel framework that is free from referencing ground-truth annotations for investigating **Misinformation Oversight Bias**, **Gender Bias**, **Authority Bias** and **Beauty Bias** on LLM and human judges. We curate a dataset referring to the revised Bloom{'}s Taxonomy and conduct thousands of evaluations. Results show that human and LLM judges are vulnerable to perturbations to various degrees, and that even cutting-edge judges possess considerable biases. We further exploit these biases to conduct attacks on LLM judges. We hope that our work can notify the community of the bias and vulnerability of human- and LLM-as-a-judge, as well as the urgency of developing robust evaluation systems. | [
"Chen, Guiming Hardy",
"Chen, Shunian",
"Liu, Ziche",
"Jiang, Feng",
"Wang, Benyou"
] | Humans or LLMs as the Judge? A Study on Judgement Bias | emnlp-main.474 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.475.bib | https://aclanthology.org/2024.emnlp-main.475/ | @inproceedings{zhou-etal-2024-wpo,
title = "{WPO}: Enhancing {RLHF} with Weighted Preference Optimization",
author = "Zhou, Wenxuan and
Agrawal, Ravi and
Zhang, Shujian and
Indurthi, Sathish Reddy and
Zhao, Sanqiang and
Song, Kaiqiang and
Xu, Silei and
Zhu, Chenguang",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.475",
pages = "8328--8340",
abstract = "Reinforcement learning from human feedback (RLHF) is a promising solution to align large language models (LLMs) more closely with human values. Off-policy preference optimization, where the preference data is obtained from other models, is widely adopted due to its cost efficiency and scalability. However, off-policy preference optimization often suffers from a distributional gap between the policy used for data collection and the target policy, leading to suboptimal optimization. In this paper, we propose a novel strategy to mitigate this problem by simulating on-policy learning with off-policy preference data. Our Weighted Preference Optimization (WPO) method adapts off-policy data to resemble on-policy data more closely by reweighting preference pairs according to their probability under the current policy. This method not only addresses the distributional gap problem but also enhances the optimization process without incurring additional costs. We validate our method on instruction following benchmarks including Alpaca Eval 2 and MT-bench. WPO not only outperforms Direct Preference Optimization (DPO) by up to 5.6{\%} on Alpaca Eval 2 but also establishes a remarkable length-controlled winning rate against GPT-4-turbo of 76.7{\%} based on Gemma-2-9b-it. We release the code and models at https://github.com/wzhouad/WPO.",
}
| Reinforcement learning from human feedback (RLHF) is a promising solution to align large language models (LLMs) more closely with human values. Off-policy preference optimization, where the preference data is obtained from other models, is widely adopted due to its cost efficiency and scalability. However, off-policy preference optimization often suffers from a distributional gap between the policy used for data collection and the target policy, leading to suboptimal optimization. In this paper, we propose a novel strategy to mitigate this problem by simulating on-policy learning with off-policy preference data. Our Weighted Preference Optimization (WPO) method adapts off-policy data to resemble on-policy data more closely by reweighting preference pairs according to their probability under the current policy. This method not only addresses the distributional gap problem but also enhances the optimization process without incurring additional costs. We validate our method on instruction following benchmarks including Alpaca Eval 2 and MT-bench. WPO not only outperforms Direct Preference Optimization (DPO) by up to 5.6{\%} on Alpaca Eval 2 but also establishes a remarkable length-controlled winning rate against GPT-4-turbo of 76.7{\%} based on Gemma-2-9b-it. We release the code and models at https://github.com/wzhouad/WPO. | [
"Zhou, Wenxuan",
"Agrawal, Ravi",
"Zhang, Shujian",
"Indurthi, Sathish Reddy",
"Zhao, Sanqiang",
"Song, Kaiqiang",
"Xu, Silei",
"Zhu, Chenguang"
] | WPO: Enhancing RLHF with Weighted Preference Optimization | emnlp-main.475 | Poster | 2406.11827 | [
"https://github.com/wzhouad/wpo"
] | https://huggingface.co/papers/2406.11827 | 5 | 14 | 1 | 8 | [
"wzhouad/gemma-2-9b-it-WPO-HB",
"wzhouad/Llama3-Instruct-8B-WPO-HB-v2",
"QuantFactory/gemma-2-9b-it-WPO-HB-GGUF",
"wzhouad/Llama3-Instruct-8B-WPO-FP",
"wzhouad/Llama3-Instruct-8B-WPO-HB",
"wzhouad/zephyr-7B-WPO-FP",
"wzhouad/zephyr-7B-WPO-HB",
"wzhouad/gemma-2-9b-it-WPO-FP",
"RichardErkhov/wzhouad_-_gemma-2-9b-it-WPO-HB-gguf"
] | [] | [] | [
"wzhouad/gemma-2-9b-it-WPO-HB",
"wzhouad/Llama3-Instruct-8B-WPO-HB-v2",
"QuantFactory/gemma-2-9b-it-WPO-HB-GGUF",
"wzhouad/Llama3-Instruct-8B-WPO-FP",
"wzhouad/Llama3-Instruct-8B-WPO-HB",
"wzhouad/zephyr-7B-WPO-FP",
"wzhouad/zephyr-7B-WPO-HB",
"wzhouad/gemma-2-9b-it-WPO-FP",
"RichardErkhov/wzhouad_-_gemma-2-9b-it-WPO-HB-gguf"
] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.476.bib | https://aclanthology.org/2024.emnlp-main.476/ | @inproceedings{xu-etal-2024-walking,
title = "Walking in Others{'} Shoes: How Perspective-Taking Guides Large Language Models in Reducing Toxicity and Bias",
author = "Xu, Rongwu and
Zhou, Zian and
Zhang, Tianwei and
Qi, Zehan and
Yao, Su and
Xu, Ke and
Xu, Wei and
Qiu, Han",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.476",
pages = "8341--8368",
abstract = "The common toxicity and societal bias in contents generated by large language models (LLMs) necessitate strategies to reduce harm. Present solutions often demand white-box access to the model or substantial training, which is impractical for cutting-edge commercial LLMs. Moreover, prevailing prompting methods depend on external tool feedback and fail to simultaneously lessen toxicity and bias. Motivated by social psychology principles, we propose a novel strategy named perspective-taking prompting (PeT) that inspires LLMs to integrate diverse human perspectives and self-regulate their responses. This self-correction mechanism can significantly diminish toxicity (up to 89{\%}) and bias (up to 73{\%}) in LLMs{'} responses. Rigorous evaluations and ablation studies are conducted on two commercial LLMs (ChatGPT and GLM) and three open-source LLMs, revealing PeT{'}s superiority in producing less harmful responses, outperforming five strong baselines.",
}
| The common toxicity and societal bias in content generated by large language models (LLMs) necessitate strategies to reduce harm. Present solutions often demand white-box access to the model or substantial training, which is impractical for cutting-edge commercial LLMs. Moreover, prevailing prompting methods depend on external tool feedback and fail to simultaneously lessen toxicity and bias. Motivated by social psychology principles, we propose a novel strategy named perspective-taking prompting (PeT) that inspires LLMs to integrate diverse human perspectives and self-regulate their responses. This self-correction mechanism can significantly diminish toxicity (up to 89{\%}) and bias (up to 73{\%}) in LLMs{'} responses. Rigorous evaluations and ablation studies are conducted on two commercial LLMs (ChatGPT and GLM) and three open-source LLMs, revealing PeT{'}s superiority in producing less harmful responses, outperforming five strong baselines. | [
"Xu, Rongwu",
"Zhou, Zian",
"Zhang, Tianwei",
"Qi, Zehan",
"Yao, Su",
"Xu, Ke",
"Xu, Wei",
"Qiu, Han"
] | Walking in Others' Shoes: How Perspective-Taking Guides Large Language Models in Reducing Toxicity and Bias | emnlp-main.476 | Poster | 2407.15366 | [
""
] | https://huggingface.co/papers/2407.15366 | 1 | 0 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.477.bib | https://aclanthology.org/2024.emnlp-main.477/ | @inproceedings{gupta-etal-2024-metareflection,
title = "{M}eta{R}eflection: Learning Instructions for Language Agents using Past Reflections",
author = "Gupta, Priyanshu and
Kirtania, Shashank and
Singha, Ananya and
Gulwani, Sumit and
Radhakrishna, Arjun and
Soares, Gustavo and
Shi, Sherry",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.477",
pages = "8369--8385",
abstract = "The popularity of Large Language Models (LLMs) have unleashed a new age of Language Agents for solving a diverse range of tasks. While contemporary frontier LLMs are capable enough to power reasonably good Language agents, the closed-API model makes it hard to improve in cases they perform sub-optimally. To address this, recent works have explored techniques to improve their performance using self reflection and prompt optimization techniques. While techniques like self reflection work well in an online setup, contemporary prompt optimization techniques are designed to work on simpler tasks. To address this, we introduce METAREFLECTION, a novel offline reinforcement learning technique that enhances the performance of Language Agents by augmenting a semantic memory based on experiential learnings from past trials. We demonstrate the efficacy of METAREFLECTION by evaluating across multiple domains, including complex logical reasoning, biomedical semantic similarity, open world question answering, and vulnerability threat detection, in Infrastructure-as-Code, with different agent design. METAREFLECTION boosts Language agents{'} performance by 4 {\%} to 16.82 {\%} over the raw GPT-4 baseline and performs on par with existing state-of-the-art prompt optimization techniques while requiring fewer LLM calls.",
}
| The popularity of Large Language Models (LLMs) has unleashed a new age of Language Agents for solving a diverse range of tasks. While contemporary frontier LLMs are capable enough to power reasonably good Language agents, the closed-API model makes it hard to improve in cases where they perform sub-optimally. To address this, recent works have explored techniques to improve their performance using self-reflection and prompt optimization techniques. While techniques like self-reflection work well in an online setup, contemporary prompt optimization techniques are designed to work on simpler tasks. To address this, we introduce METAREFLECTION, a novel offline reinforcement learning technique that enhances the performance of Language Agents by augmenting a semantic memory based on experiential learnings from past trials. We demonstrate the efficacy of METAREFLECTION by evaluating across multiple domains, including complex logical reasoning, biomedical semantic similarity, open world question answering, and vulnerability threat detection in Infrastructure-as-Code, with different agent designs. METAREFLECTION boosts Language agents{'} performance by 4 {\%} to 16.82 {\%} over the raw GPT-4 baseline and performs on par with existing state-of-the-art prompt optimization techniques while requiring fewer LLM calls. | [
"Gupta, Priyanshu",
"Kirtania, Shashank",
"Singha, Ananya",
"Gulwani, Sumit",
"Radhakrishna, Arjun",
"Soares, Gustavo",
"Shi, Sherry"
] | MetaReflection: Learning Instructions for Language Agents using Past Reflections | emnlp-main.477 | Poster | 2405.13009 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.478.bib | https://aclanthology.org/2024.emnlp-main.478/ | @inproceedings{daheim-etal-2024-stepwise,
title = "Stepwise Verification and Remediation of Student Reasoning Errors with Large Language Model Tutors",
author = "Daheim, Nico and
Macina, Jakub and
Kapur, Manu and
Gurevych, Iryna and
Sachan, Mrinmaya",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.478",
pages = "8386--8411",
abstract = "Large language models (LLMs) offer many opportunities to scale high-quality personalized tutoring. A promising approach is to build dialog tutoring models to scaffold students{'} problem-solving. However, even though existing models perform well in solving reasoning questions, they can struggle to precisely detect student{'}s errors and tailor their feedback to these errors. Inspired by real-world teaching practice where teachers identify student errors and customize their response based on them, we focus on verifying student solutions and show how grounding to such verification improves the overall quality of tutor response generation. We collect a dataset of 1,002 stepwise math reasoning chains with the first error step annotated by teachers. We show empirically that finding the mistake in a student solution is challenging for current models. We propose and evaluate several verifiers for detecting these errors. Using both automatic and human evaluation we show that the student solution verifiers steer the generation model towards highly targeted responses to student error which are more often correct with less hallucinations compared to existing baselines. The benchmark dataset and code will be released openly.",
}
| Large language models (LLMs) offer many opportunities to scale high-quality personalized tutoring. A promising approach is to build dialog tutoring models to scaffold students{'} problem-solving. However, even though existing models perform well in solving reasoning questions, they can struggle to precisely detect students{'} errors and tailor their feedback to these errors. Inspired by real-world teaching practice where teachers identify student errors and customize their response based on them, we focus on verifying student solutions and show how grounding to such verification improves the overall quality of tutor response generation. We collect a dataset of 1,002 stepwise math reasoning chains with the first error step annotated by teachers. We show empirically that finding the mistake in a student solution is challenging for current models. We propose and evaluate several verifiers for detecting these errors. Using both automatic and human evaluation, we show that the student solution verifiers steer the generation model towards highly targeted responses to student errors, which are more often correct with fewer hallucinations compared to existing baselines. The benchmark dataset and code will be released openly. | [
"Daheim, Nico",
"Macina, Jakub",
"Kapur, Manu",
"Gurevych, Iryna",
"Sachan, Mrinmaya"
] | Stepwise Verification and Remediation of Student Reasoning Errors with Large Language Model Tutors | emnlp-main.478 | Poster | 2407.09136 | [
"https://github.com/eth-lre/verify-then-generate"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.479.bib | https://aclanthology.org/2024.emnlp-main.479/ | @inproceedings{wang-utiyama-2024-eliciting,
title = "On Eliciting Syntax from Language Models via Hashing",
author = "Wang, Yiran and
Utiyama, Masao",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.479",
pages = "8412--8427",
abstract = "Unsupervised parsing, also known as grammar induction, aims to infer syntactic structure from raw text. Recently, binary representation has exhibited remarkable information-preserving capabilities at both lexicon and syntax levels. In this paper, we explore the possibility of leveraging this capability to deduce parsing trees from raw text, relying solely on the implicitly induced grammars within models. To achieve this, we upgrade the bit-level CKY from zero-order to first-order to encode the lexicon and syntax in a unified binary representation space, switch training from supervised to unsupervised under the contrastive hashing framework, and introduce a novel loss function to impose stronger yet balanced alignment signals. Our model shows competitive performance on various datasets, therefore, we claim that our method is effective and efficient enough to acquire high-quality parsing trees from pre-trained language models at a low cost.",
}
| Unsupervised parsing, also known as grammar induction, aims to infer syntactic structure from raw text. Recently, binary representation has exhibited remarkable information-preserving capabilities at both lexicon and syntax levels. In this paper, we explore the possibility of leveraging this capability to deduce parsing trees from raw text, relying solely on the implicitly induced grammars within models. To achieve this, we upgrade the bit-level CKY from zero-order to first-order to encode the lexicon and syntax in a unified binary representation space, switch training from supervised to unsupervised under the contrastive hashing framework, and introduce a novel loss function to impose stronger yet balanced alignment signals. Our model shows competitive performance on various datasets; therefore, we claim that our method is effective and efficient enough to acquire high-quality parsing trees from pre-trained language models at a low cost. | [
"Wang, Yiran",
"Utiyama, Masao"
] | On Eliciting Syntax from Language Models via Hashing | emnlp-main.479 | Poster | 2410.04074 | [
"https://github.com/speedcell4/parserker"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.480.bib | https://aclanthology.org/2024.emnlp-main.480/ | @inproceedings{ouyang-etal-2024-climedbench,
title = "{C}li{M}ed{B}ench: A Large-Scale {C}hinese Benchmark for Evaluating Medical Large Language Models in Clinical Scenarios",
author = "Ouyang, Zetian and
Qiu, Yishuai and
Wang, Linlin and
De Melo, Gerard and
Zhang, Ya and
Wang, Yanfeng and
He, Liang",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.480",
pages = "8428--8438",
abstract = "With the proliferation of Large Language Models (LLMs) in diverse domains, there is a particular need for unified evaluation standards in clinical medical scenarios, where models need to be examined very thoroughly. We present CliMedBench, a comprehensive benchmark with 14 expert-guided core clinical scenarios specifically designed to assess the medical ability of LLMs across 7 pivot dimensions. It comprises 33,735 questions derived from real-world medical reports of top-tier tertiary hospitals and authentic examination exercises. The reliability of this benchmark has been confirmed in several ways. Subsequent experiments with existing LLMs have led to the following findings: (i) Chinese medical LLMs underperform on this benchmark, especially where medical reasoning and factual consistency are vital, underscoring the need for advances in clinical knowledge and diagnostic accuracy. (ii) Several general-domain LLMs demonstrate substantial potential in medical clinics, while the limited input capacity of many medical LLMs hinders their practical use. These findings reveal both the strengths and limitations of LLMs in clinical scenarios and offer critical insights for medical research.",
}
| With the proliferation of Large Language Models (LLMs) in diverse domains, there is a particular need for unified evaluation standards in clinical medical scenarios, where models need to be examined very thoroughly. We present CliMedBench, a comprehensive benchmark with 14 expert-guided core clinical scenarios specifically designed to assess the medical ability of LLMs across 7 pivot dimensions. It comprises 33,735 questions derived from real-world medical reports of top-tier tertiary hospitals and authentic examination exercises. The reliability of this benchmark has been confirmed in several ways. Subsequent experiments with existing LLMs have led to the following findings: (i) Chinese medical LLMs underperform on this benchmark, especially where medical reasoning and factual consistency are vital, underscoring the need for advances in clinical knowledge and diagnostic accuracy. (ii) Several general-domain LLMs demonstrate substantial potential in medical clinics, while the limited input capacity of many medical LLMs hinders their practical use. These findings reveal both the strengths and limitations of LLMs in clinical scenarios and offer critical insights for medical research. | [
"Ouyang, Zetian",
"Qiu, Yishuai",
"Wang, Linlin",
"De Melo, Gerard",
"Zhang, Ya",
"Wang, Yanfeng",
"He, Liang"
] | CliMedBench: A Large-Scale Chinese Benchmark for Evaluating Medical Large Language Models in Clinical Scenarios | emnlp-main.480 | Poster | 2410.03502 | [
"https://github.com/Optifine-TAT/CliMedBench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.481.bib | https://aclanthology.org/2024.emnlp-main.481/ | @inproceedings{yang-li-2024-best,
title = "The Best Defense is Attack: Repairing Semantics in Textual Adversarial Examples",
author = "Yang, Heng and
Li, Ke",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.481",
pages = "8439--8457",
abstract = "Recent studies have revealed the vulnerability of pre-trained language models to adversarial attacks. Adversarial defense techniques have been proposed to reconstruct adversarial examples within feature or text spaces. However, these methods struggle to effectively repair the semantics in adversarial examples, resulting in unsatisfactory defense performance. To repair the semantics in adversarial examples, we introduce a novel approach named Reactive Perturbation Defocusing (Rapid), which employs an adversarial detector to identify the fake labels of adversarial examples and leverages adversarial attackers to repair the semantics in adversarial examples. Our extensive experimental results, conducted on four public datasets, demonstrate the consistent effectiveness of Rapid in various adversarial attack scenarios. For easy evaluation, we provide a click-to-run demo of Rapid at https://tinyurl.com/22ercuf8.",
}
| Recent studies have revealed the vulnerability of pre-trained language models to adversarial attacks. Adversarial defense techniques have been proposed to reconstruct adversarial examples within feature or text spaces. However, these methods struggle to effectively repair the semantics in adversarial examples, resulting in unsatisfactory defense performance. To repair the semantics in adversarial examples, we introduce a novel approach named Reactive Perturbation Defocusing (Rapid), which employs an adversarial detector to identify the fake labels of adversarial examples and leverages adversarial attackers to repair the semantics in adversarial examples. Our extensive experimental results, conducted on four public datasets, demonstrate the consistent effectiveness of Rapid in various adversarial attack scenarios. For easy evaluation, we provide a click-to-run demo of Rapid at https://tinyurl.com/22ercuf8. | [
"Yang, Heng",
"Li, Ke"
] | The Best Defense is Attack: Repairing Semantics in Textual Adversarial Examples | emnlp-main.481 | Poster | 2305.04067 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.482.bib | https://aclanthology.org/2024.emnlp-main.482/ | @inproceedings{ray-etal-2024-cssl,
title = "{CSSL}: Contrastive Self-Supervised Learning for Dependency Parsing on Relatively Free Word Ordered and Morphologically Rich Low Resource Languages",
author = "Ray, Pretam and
Sandhan, Jivnesh and
Krishna, Amrith and
Goyal, Pawan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.482",
pages = "8458--8466",
abstract = "Neural dependency parsing has achieved remarkable performance for low resource morphologically rich languages. It has also been well-studied that morphologically rich languages exhibit relatively free word order. This prompts a fundamental investigation: Is there a way to enhance dependency parsing performance, making the model robust to word order variations utilizing the relatively free word order nature of morphologically rich languages? In this work, we examine the robustness of graph-based parsing architectures on 7 relatively free word order languages. We focus on scrutinizing essential modifications such as data augmentation and the removal of position encoding required to adapt these architectures accordingly. To this end, we propose a contrastive self-supervised learning method to make the model robust to word order variations. Furthermore, our proposed modification demonstrates a substantial average gain of 3.03/2.95 points in 7 relatively free word order languages, as measured by the UAS/LAS Score metric when compared to the best performing baseline.",
}
| Neural dependency parsing has achieved remarkable performance for low resource morphologically rich languages. It has also been well-studied that morphologically rich languages exhibit relatively free word order. This prompts a fundamental investigation: Is there a way to enhance dependency parsing performance, making the model robust to word order variations by utilizing the relatively free word order nature of morphologically rich languages? In this work, we examine the robustness of graph-based parsing architectures on 7 relatively free word order languages. We focus on scrutinizing essential modifications such as data augmentation and the removal of position encoding required to adapt these architectures accordingly. To this end, we propose a contrastive self-supervised learning method to make the model robust to word order variations. Furthermore, our proposed modification demonstrates a substantial average gain of 3.03/2.95 points in 7 relatively free word order languages, as measured by the UAS/LAS Score metric when compared to the best performing baseline. | [
"Ray, Pretam",
"S",
"han, Jivnesh",
"Krishna, Amrith",
"Goyal, Pawan"
] | CSSL: Contrastive Self-Supervised Learning for Dependency Parsing on Relatively Free Word Ordered and Morphologically Rich Low Resource Languages | emnlp-main.482 | Poster | 2410.06944 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.483.bib | https://aclanthology.org/2024.emnlp-main.483/ | @inproceedings{belem-etal-2024-perceptions,
title = "Perceptions of Linguistic Uncertainty by Language Models and Humans",
author = "Bel{\'e}m, Catarina G and
Kelly, Markelle and
Steyvers, Mark and
Singh, Sameer and
Smyth, Padhraic",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.483",
pages = "8467--8502",
abstract = "*Uncertainty expressions* such as {`}probably{'} or {`}highly unlikely{'} are pervasive in human language. While prior work has established that there is population-level agreement in terms of how humans quantitatively interpret these expressions, there has been little inquiry into the abilities of language models in the same context. In this paper, we investigate how language models map linguistic expressions of uncertainty to numerical responses. Our approach assesses whether language models can employ theory of mind in this setting: understanding the uncertainty of another agent about a particular statement, independently of the model{'}s own certainty about that statement. We find that 7 out of 10 models are able to map uncertainty expressions to probabilistic responses in a human-like manner. However, we observe systematically different behavior depending on whether a statement is actually true or false. This sensitivity indicates that language models are substantially more susceptible to bias based on their prior knowledge (as compared to humans). These findings raise important questions and have broad implications for human-AI and AI-AI communication.",
}
| *Uncertainty expressions* such as {`}probably{'} or {`}highly unlikely{'} are pervasive in human language. While prior work has established that there is population-level agreement in terms of how humans quantitatively interpret these expressions, there has been little inquiry into the abilities of language models in the same context. In this paper, we investigate how language models map linguistic expressions of uncertainty to numerical responses. Our approach assesses whether language models can employ theory of mind in this setting: understanding the uncertainty of another agent about a particular statement, independently of the model{'}s own certainty about that statement. We find that 7 out of 10 models are able to map uncertainty expressions to probabilistic responses in a human-like manner. However, we observe systematically different behavior depending on whether a statement is actually true or false. This sensitivity indicates that language models are substantially more susceptible to bias based on their prior knowledge (as compared to humans). These findings raise important questions and have broad implications for human-AI and AI-AI communication. | [
"Bel{\\'e}m, Catarina G",
"Kelly, Markelle",
"Steyvers, Mark",
"Singh, Sameer",
"Smyth, Padhraic"
] | Perceptions of Linguistic Uncertainty by Language Models and Humans | emnlp-main.483 | Poster | 2407.15814 | [
"https://github.com/ucidatalab/llm-uncertainty-perceptions"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.484.bib | https://aclanthology.org/2024.emnlp-main.484/ | @inproceedings{chang-etal-2024-explaining,
title = "Explaining and Improving Contrastive Decoding by Extrapolating the Probabilities of a Huge and Hypothetical {LM}",
author = "Chang, Haw-Shiuan and
Peng, Nanyun and
Bansal, Mohit and
Ramakrishna, Anil and
Chung, Tagyoung",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.484",
pages = "8503--8526",
abstract = "Contrastive decoding (CD) (Li et al., 2022) improves the next-token distribution of a large expert language model (LM) using a small amateur LM. Although CD is applied to various LMs and domains to enhance open-ended text generation, it is still unclear why CD often works well, when it could fail, and how we can make it better. To deepen our understanding of CD, we first theoretically prove that CD could be viewed as linearly extrapolating the next-token logits from a huge and hypothetical LM. We also highlight that the linear extrapolation could make CD unable to output the most obvious answers that have already been assigned high probabilities by the amateur LM.To overcome CD{'}s limitation, we propose a new unsupervised decoding method called Asymptotic Probability Decoding (APD). APD explicitly extrapolates the probability curves from the LMs of different sizes to infer the asymptotic probabilities from an infinitely large LM without inducing more inference costs than CD. In FactualityPrompts, an open-ended text generation benchmark, sampling using APD significantly boosts factuality in comparison to the CD sampling and its variants, and achieves state-of-the-art results for Pythia 6.9B and OPT 6.7B. Furthermore, in five commonsense QA datasets, APD is often significantly better than CD and achieves a similar effect of using a larger LLM. For example, the perplexity of APD on top of Pythia 6.9B is even lower than the perplexity of Pythia 12B in CommonsenseQA and LAMBADA.",
}
| Contrastive decoding (CD) (Li et al., 2022) improves the next-token distribution of a large expert language model (LM) using a small amateur LM. Although CD is applied to various LMs and domains to enhance open-ended text generation, it is still unclear why CD often works well, when it could fail, and how we can make it better. To deepen our understanding of CD, we first theoretically prove that CD could be viewed as linearly extrapolating the next-token logits from a huge and hypothetical LM. We also highlight that the linear extrapolation could make CD unable to output the most obvious answers that have already been assigned high probabilities by the amateur LM. To overcome CD{'}s limitation, we propose a new unsupervised decoding method called Asymptotic Probability Decoding (APD). APD explicitly extrapolates the probability curves from the LMs of different sizes to infer the asymptotic probabilities from an infinitely large LM without inducing more inference costs than CD. In FactualityPrompts, an open-ended text generation benchmark, sampling using APD significantly boosts factuality in comparison to the CD sampling and its variants, and achieves state-of-the-art results for Pythia 6.9B and OPT 6.7B. Furthermore, in five commonsense QA datasets, APD is often significantly better than CD and achieves a similar effect of using a larger LLM. For example, the perplexity of APD on top of Pythia 6.9B is even lower than the perplexity of Pythia 12B in CommonsenseQA and LAMBADA. | [
"Chang, Haw-Shiuan",
"Peng, Nanyun",
"Bansal, Mohit",
"Ramakrishna, Anil",
"Chung, Tagyoung"
] | Explaining and Improving Contrastive Decoding by Extrapolating the Probabilities of a Huge and Hypothetical LM | emnlp-main.484 | Oral | 2411.01610 | [
"https://github.com/amazon-science/llm-asymptotic-decoding"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.485.bib | https://aclanthology.org/2024.emnlp-main.485/ | @inproceedings{dong-etal-2024-zero,
title = "Zero-shot Cross-domain Dialogue State Tracking via Context-aware Auto-prompting and Instruction-following Contrastive Decoding",
author = "Dong, Xiaoyu and
Feng, Yujie and
Lu, Zexin and
Shi, Guangyuan and
Wu, Xiao-Ming",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.485",
pages = "8527--8540",
abstract = "Zero-shot cross-domain dialogue state tracking (DST) enables us to manage task-oriented dialogues in new, unseen domains without the cost of collecting in-domain data. Previous studies have implemented slot-based input improvements, such as schema-driven descriptions and question-answering formats, but still suffer from negative transfer for seen slots and inefficient transfer for unseen slots due to the significant source-target domain gap. To address these issues, we introduce a novel framework called Context-aware Auto-prompting and Instruction-following Contrastive Decoding (CAPID). This framework generates dynamic, context-aware slot queries, effectively improving the model{'}s transferability. Our context-aware auto-prompting approach tailors slot queries to the current dialogue context, increasing flexibility and reducing ambiguities. Additionally, an instruction-following contrastive decoding strategy helps reduce errors related to off-topic slots by penalizing deviations from the provided instructions. Extensive experiments on two datasets, with varying model sizes (from 60M to 7B), demonstrate the superior performance of CAPID. The source code is provided for reproducibility.",
}
| Zero-shot cross-domain dialogue state tracking (DST) enables us to manage task-oriented dialogues in new, unseen domains without the cost of collecting in-domain data. Previous studies have implemented slot-based input improvements, such as schema-driven descriptions and question-answering formats, but still suffer from negative transfer for seen slots and inefficient transfer for unseen slots due to the significant source-target domain gap. To address these issues, we introduce a novel framework called Context-aware Auto-prompting and Instruction-following Contrastive Decoding (CAPID). This framework generates dynamic, context-aware slot queries, effectively improving the model{'}s transferability. Our context-aware auto-prompting approach tailors slot queries to the current dialogue context, increasing flexibility and reducing ambiguities. Additionally, an instruction-following contrastive decoding strategy helps reduce errors related to off-topic slots by penalizing deviations from the provided instructions. Extensive experiments on two datasets, with varying model sizes (from 60M to 7B), demonstrate the superior performance of CAPID. The source code is provided for reproducibility. | [
"Dong, Xiaoyu",
"Feng, Yujie",
"Lu, Zexin",
"Shi, Guangyuan",
"Wu, Xiao-Ming"
] | Zero-shot Cross-domain Dialogue State Tracking via Context-aware Auto-prompting and Instruction-following Contrastive Decoding | emnlp-main.485 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.486.bib | https://aclanthology.org/2024.emnlp-main.486/ | @inproceedings{xu-etal-2024-knowledge-conflicts,
title = "Knowledge Conflicts for {LLM}s: A Survey",
author = "Xu, Rongwu and
Qi, Zehan and
Guo, Zhijiang and
Wang, Cunxiang and
Wang, Hongru and
Zhang, Yue and
Xu, Wei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.486",
pages = "8541--8565",
abstract = "This survey provides an in-depth analysis of knowledge conflicts for large language models (LLMs), highlighting the complex challenges they encounter when blending contextual and parametric knowledge. Our focus is on three categories of knowledge conflicts: context-memory, inter-context, and intra-memory conflict. These conflicts can significantly impact the trustworthiness and performance of LLMs, especially in real-world applications where noise and misinformation are common. By categorizing these conflicts, exploring the causes, examining the behaviors of LLMs under such conflicts, and reviewing available solutions, this survey aims to shed light on strategies for improving the robustness of LLMs, thereby serving as a valuable resource for advancing research in this evolving area.",
}
| This survey provides an in-depth analysis of knowledge conflicts for large language models (LLMs), highlighting the complex challenges they encounter when blending contextual and parametric knowledge. Our focus is on three categories of knowledge conflicts: context-memory, inter-context, and intra-memory conflict. These conflicts can significantly impact the trustworthiness and performance of LLMs, especially in real-world applications where noise and misinformation are common. By categorizing these conflicts, exploring the causes, examining the behaviors of LLMs under such conflicts, and reviewing available solutions, this survey aims to shed light on strategies for improving the robustness of LLMs, thereby serving as a valuable resource for advancing research in this evolving area. | [
"Xu, Rongwu",
"Qi, Zehan",
"Guo, Zhijiang",
"Wang, Cunxiang",
"Wang, Hongru",
"Zhang, Yue",
"Xu, Wei"
] | Knowledge Conflicts for LLMs: A Survey | emnlp-main.486 | Poster | 2403.08319 | [
"https://github.com/pillowsofwind/knowledge-conflicts-survey"
] | https://huggingface.co/papers/2403.08319 | 2 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.487.bib | https://aclanthology.org/2024.emnlp-main.487/ | @inproceedings{gabriel-etal-2024-misinfoeval,
title = "{M}isinfo{E}val: Generative {AI} in the Era of {``}Alternative Facts{''}",
author = "Gabriel, Saadia and
Lyu, Liang and
Siderius, James and
Ghassemi, Marzyeh and
Andreas, Jacob and
Ozdaglar, Asuman E.",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.487",
pages = "8566--8578",
abstract = "The spread of misinformation on social media platforms threatens democratic processes, contributes to massive economic losses, and endangers public health. Many efforts to address misinformation focus on a knowledge deficit model and propose interventions for improving users{'} critical thinking through access to facts. Such efforts are often hampered by challenges with scalability, and by platform users{'} personal biases. The emergence of generative AI presents promising opportunities for countering misinformation at scale across ideological barriers. In this paper, we introduce a framework (MisinfoEval) for generating and comprehensively evaluating large language model (LLM) based misinformation interventions. We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users with the goal of countering misinformation by appealing to their pre-existing values. Our findings confirm that LLM-based interventions are highly effective at correcting user behavior (improving overall user accuracy at reliability labeling by up to 41.72{\%}). Furthermore, we find that users favor more personalized interventions when making decisions about news reliability and users shown personalized interventions have significantly higher accuracy at identifying misinformation.",
}
| The spread of misinformation on social media platforms threatens democratic processes, contributes to massive economic losses, and endangers public health. Many efforts to address misinformation focus on a knowledge deficit model and propose interventions for improving users{'} critical thinking through access to facts. Such efforts are often hampered by challenges with scalability, and by platform users{'} personal biases. The emergence of generative AI presents promising opportunities for countering misinformation at scale across ideological barriers. In this paper, we introduce a framework (MisinfoEval) for generating and comprehensively evaluating large language model (LLM) based misinformation interventions. We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users with the goal of countering misinformation by appealing to their pre-existing values. Our findings confirm that LLM-based interventions are highly effective at correcting user behavior (improving overall user accuracy at reliability labeling by up to 41.72{\%}). Furthermore, we find that users favor more personalized interventions when making decisions about news reliability and users shown personalized interventions have significantly higher accuracy at identifying misinformation. | [
"Gabriel, Saadia",
"Lyu, Liang",
"Siderius, James",
"Ghassemi, Marzyeh",
"Andreas, Jacob",
"Ozdaglar, Asuman E."
] | MisinfoEval: Generative AI in the Era of “Alternative Facts” | emnlp-main.487 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.488.bib | https://aclanthology.org/2024.emnlp-main.488/ | @inproceedings{irving-schoene-2024-meant,
title = "{MEANT}: Multimodal Encoder for Antecedent Information",
author = "Irving, Benjamin and
Schoene, Annika Marie",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.488",
pages = "8579--8600",
abstract = "The stock market provides a rich well of information that can be split across modalities, making it an ideal candidate for multimodal evaluation. Multimodal data plays an increasingly important role in the development of machine learning and has shown to positively impact performance. But information can do more than exist across modes{---} it can exist across time. How should we attend to temporal data that consists of multiple information types? This work introduces (i) the MEANT model, a Multimodal Encoder for Antecedent information and (ii) a new dataset called TempStock, which consists of price, Tweets, and graphical data with over a million Tweets from all of the companies in the S{\&}P 500 Index. We find that MEANT improves performance on existing baselines by over 15{\%}, and that the textual information affects performance far more than the visual information on our time-dependent task from our ablation study. The code and dataset will be made available upon publication.",
}
| The stock market provides a rich well of information that can be split across modalities, making it an ideal candidate for multimodal evaluation. Multimodal data plays an increasingly important role in the development of machine learning and has been shown to positively impact performance. But information can do more than exist across modes{---} it can exist across time. How should we attend to temporal data that consists of multiple information types? This work introduces (i) the MEANT model, a Multimodal Encoder for Antecedent information and (ii) a new dataset called TempStock, which consists of price, Tweets, and graphical data with over a million Tweets from all of the companies in the S{\&}P 500 Index. We find that MEANT improves performance on existing baselines by over 15{\%}, and that the textual information affects performance far more than the visual information on our time-dependent task from our ablation study. The code and dataset will be made available upon publication. | [
"Irving, Benjamin",
"Schoene, Annika Marie"
] | MEANT: Multimodal Encoder for Antecedent Information | emnlp-main.488 | Poster | 2411.06616 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.489.bib | https://aclanthology.org/2024.emnlp-main.489/ | @inproceedings{shi-etal-2024-thorough,
title = "A Thorough Examination of Decoding Methods in the Era of {LLM}s",
author = "Shi, Chufan and
Yang, Haoran and
Cai, Deng and
Zhang, Zhisong and
Wang, Yifan and
Yang, Yujiu and
Lam, Wai",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.489",
pages = "8601--8629",
abstract = "Decoding methods play an indispensable role in converting language models from next-token predictors into practical task solvers. Prior research on decoding methods, primarily focusing on task-specific models, may not extend to the current era of general-purpose large language models (LLMs). Moreover, the recent influx of decoding strategies has further complicated this landscape. This paper provides a comprehensive and multifaceted analysis of various decoding methods within the context of LLMs, evaluating their performance, robustness to hyperparameter changes, and decoding speeds across a wide range of tasks, models, and deployment environments. Our findings reveal that decoding method performance is notably task-dependent and influenced by factors such as alignment, model size, and quantization. Intriguingly, sensitivity analysis exposes that certain methods achieve superior performance at the cost of extensive hyperparameter tuning, highlighting the trade-off between attaining optimal results and the practicality of implementation in varying contexts.",
}
| Decoding methods play an indispensable role in converting language models from next-token predictors into practical task solvers. Prior research on decoding methods, primarily focusing on task-specific models, may not extend to the current era of general-purpose large language models (LLMs). Moreover, the recent influx of decoding strategies has further complicated this landscape. This paper provides a comprehensive and multifaceted analysis of various decoding methods within the context of LLMs, evaluating their performance, robustness to hyperparameter changes, and decoding speeds across a wide range of tasks, models, and deployment environments. Our findings reveal that decoding method performance is notably task-dependent and influenced by factors such as alignment, model size, and quantization. Intriguingly, sensitivity analysis exposes that certain methods achieve superior performance at the cost of extensive hyperparameter tuning, highlighting the trade-off between attaining optimal results and the practicality of implementation in varying contexts. | [
"Shi, Chufan",
"Yang, Haoran",
"Cai, Deng",
"Zhang, Zhisong",
"Wang, Yifan",
"Yang, Yujiu",
"Lam, Wai"
] | A Thorough Examination of Decoding Methods in the Era of LLMs | emnlp-main.489 | Poster | 2402.06925 | [
"https://github.com/davidfanzz/llm_decoding"
] | https://huggingface.co/papers/2402.06925 | 0 | 1 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 |
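As a concrete reference point for the decoding strategies such a study compares, here is a minimal nucleus (top-p) sampling step with temperature; the hyperparameter values shown are illustrative, not the paper's tuned settings:

```python
import torch
import torch.nn.functional as F

def sample_top_p(logits: torch.Tensor, temperature: float = 0.7, top_p: float = 0.9) -> int:
    """One stochastic decoding step: temperature-scale the logits, keep the smallest
    set of tokens whose cumulative probability reaches top_p, then sample from it."""
    probs = F.softmax(logits / temperature, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # A token survives if the mass *before* it is still below top_p,
    # so the most likely token is always kept.
    keep = (cumulative - sorted_probs) < top_p
    kept = torch.where(keep, sorted_probs, torch.zeros_like(sorted_probs))
    choice = torch.multinomial(kept / kept.sum(), num_samples=1)
    return sorted_idx[choice].item()
```

Greedy decoding is the `temperature -> 0` limit of the same step, which is one reason such surveys treat these methods as points on a shared spectrum rather than unrelated algorithms.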
https://aclanthology.org/2024.emnlp-main.490.bib | https://aclanthology.org/2024.emnlp-main.490/ | @inproceedings{gangi-reddy-etal-2024-agrame,
title = "{AGR}a{ME}: Any-Granularity Ranking with Multi-Vector Embeddings",
author = "Gangi Reddy, Revanth and
Attia, Omar and
Li, Yunyao and
Ji, Heng and
Potdar, Saloni",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.490",
pages = "8630--8641",
abstract = "Ranking is a fundamental problem in search, however, existing ranking algorithms usually restrict the granularity of ranking to full passages or require a specific dense index for each desired level of granularity. Such lack of flexibility in granularity negatively affects many applications that can benefit from more granular ranking, such as sentence-level ranking for open-domain QA, or proposition-level ranking for attribution. In this work, we introduce the idea of any-granularity ranking which leverages multi-vector embeddings to rank at varying levels of granularity while maintaining encoding at a single (coarser) level of granularity. We propose a multi-granular contrastive loss for training multi-vector approaches and validate its utility with both sentences and propositions as ranking units. Finally, we demonstrate the application of proposition-level ranking to post-hoc citation addition in retrieval-augmented generation, surpassing the performance of prompt-driven citation generation.",
}
| Ranking is a fundamental problem in search; however, existing ranking algorithms usually restrict the granularity of ranking to full passages or require a specific dense index for each desired level of granularity. Such lack of flexibility in granularity negatively affects many applications that can benefit from more granular ranking, such as sentence-level ranking for open-domain QA, or proposition-level ranking for attribution. In this work, we introduce the idea of any-granularity ranking which leverages multi-vector embeddings to rank at varying levels of granularity while maintaining encoding at a single (coarser) level of granularity. We propose a multi-granular contrastive loss for training multi-vector approaches and validate its utility with both sentences and propositions as ranking units. Finally, we demonstrate the application of proposition-level ranking to post-hoc citation addition in retrieval-augmented generation, surpassing the performance of prompt-driven citation generation. | [
"Gangi Reddy, Revanth",
"Attia, Omar",
"Li, Yunyao",
"Ji, Heng",
"Potdar, Saloni"
] | AGRaME: Any-Granularity Ranking with Multi-Vector Embeddings | emnlp-main.490 | Poster | 2405.15028 | [
""
] | https://huggingface.co/papers/2405.15028 | 0 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 |
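To make the "rank finer units from a single coarse encoding" idea concrete, here is a hedged sketch of ColBERT-style MaxSim scoring restricted to sentence spans; the function names and `(start, end)` span representation are assumptions, not the authors' API:

```python
import torch

def maxsim(query_vecs: torch.Tensor, unit_vecs: torch.Tensor) -> torch.Tensor:
    """Late-interaction score: for each query token vector, take its maximum
    similarity over the unit's token vectors, then sum. Shapes: [nq, d], [nu, d]."""
    return (query_vecs @ unit_vecs.T).max(dim=-1).values.sum()

def rank_sentences(query_vecs, passage_token_vecs, sentence_spans):
    """The passage is encoded once into token-level vectors; each sentence is then
    scored by restricting MaxSim to the token vectors inside its span."""
    scores = torch.stack([maxsim(query_vecs, passage_token_vecs[s:e])
                          for s, e in sentence_spans])
    return torch.argsort(scores, descending=True).tolist()  # sentence indices, best first
```

Because scoring only slices an existing token-vector matrix, no extra dense index per granularity level is needed, which is the flexibility the abstract is after.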
https://aclanthology.org/2024.emnlp-main.491.bib | https://aclanthology.org/2024.emnlp-main.491/ | @inproceedings{gangi-reddy-etal-2024-first,
title = "{FIRST}: Faster Improved Listwise Reranking with Single Token Decoding",
author = "Gangi Reddy, Revanth and
Doo, JaeHyeok and
Xu, Yifei and
Sultan, Md Arafat and
Swain, Deevya and
Sil, Avirup and
Ji, Heng",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.491",
pages = "8642--8652",
abstract = "Large Language Models (LLMs) have significantly advanced the field of information retrieval, particularly for reranking. Listwise LLM rerankers have showcased superior performance and generalizability compared to existing supervised approaches. However, conventional listwise LLM reranking methods lack efficiency as they provide ranking output in the form of a generated ordered sequence of candidate passage identifiers. Further, they are trained with the typical language modeling objective, which treats all ranking errors uniformly{--}potentially at the cost of misranking highly relevant passages. Addressing these limitations, we introduce FIRST, a novel listwise LLM reranking approach leveraging the output logits of the first generated identifier to directly obtain a ranked ordering of the candidates. Further, we incorporate a learning-to-rank loss during training, prioritizing ranking accuracy for the more relevant passages. Empirical results demonstrate that FIRST accelerates inference by 50{\%} while maintaining a robust ranking performance with gains across the BEIR benchmark. Finally, to illustrate the practical effectiveness of listwise LLM rerankers, we investigate their application in providing relevance feedback for retrievers during inference. Our results show that LLM rerankers can provide a stronger distillation signal compared to cross-encoders, yielding substantial improvements in retriever recall after relevance feedback.",
}
| Large Language Models (LLMs) have significantly advanced the field of information retrieval, particularly for reranking. Listwise LLM rerankers have showcased superior performance and generalizability compared to existing supervised approaches. However, conventional listwise LLM reranking methods lack efficiency as they provide ranking output in the form of a generated ordered sequence of candidate passage identifiers. Further, they are trained with the typical language modeling objective, which treats all ranking errors uniformly{--}potentially at the cost of misranking highly relevant passages. Addressing these limitations, we introduce FIRST, a novel listwise LLM reranking approach leveraging the output logits of the first generated identifier to directly obtain a ranked ordering of the candidates. Further, we incorporate a learning-to-rank loss during training, prioritizing ranking accuracy for the more relevant passages. Empirical results demonstrate that FIRST accelerates inference by 50{\%} while maintaining a robust ranking performance with gains across the BEIR benchmark. Finally, to illustrate the practical effectiveness of listwise LLM rerankers, we investigate their application in providing relevance feedback for retrievers during inference. Our results show that LLM rerankers can provide a stronger distillation signal compared to cross-encoders, yielding substantial improvements in retriever recall after relevance feedback. | [
"Gangi Reddy, Revanth",
"Doo, JaeHyeok",
"Xu, Yifei",
"Sultan, Md Arafat",
"Swain, Deevya",
"Sil, Avirup",
"Ji, Heng"
] | FIRST: Faster Improved Listwise Reranking with Single Token Decoding | emnlp-main.491 | Poster | 2406.15657 | [
"https://github.com/gangiswag/llm-reranker"
] | https://huggingface.co/papers/2406.15657 | 0 | 0 | 0 | 7 | [
"rryisthebest/First_Model",
"castorini/first_mistral"
] | [] | [] | [
"rryisthebest/First_Model",
"castorini/first_mistral"
] | [] | [] | 1 |
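A hedged sketch of the single-token trick described above: rather than decoding a full permutation like "[2] > [1] > ...", the candidate order is read off the logits of the first generated identifier. `candidate_token_ids` (one vocabulary id per candidate label) is a hypothetical input, not the authors' exact interface:

```python
import torch

@torch.no_grad()
def rank_from_first_token(model, input_ids: torch.Tensor, candidate_token_ids: list[int]):
    """input_ids holds the listwise reranking prompt (query plus labeled candidates).
    The logits at the first generation position already induce a full ordering,
    so one forward pass replaces autoregressive generation of the whole ranking."""
    next_token_logits = model(input_ids).logits[0, -1]            # [vocab]
    scores = next_token_logits[torch.tensor(candidate_token_ids)]  # one logit per candidate
    order = torch.argsort(scores, descending=True)
    return order.tolist()                                          # candidate indices, best first
```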
https://aclanthology.org/2024.emnlp-main.492.bib | https://aclanthology.org/2024.emnlp-main.492/ | @inproceedings{kim-etal-2024-exploring,
title = "Exploring Nested Named Entity Recognition with Large Language Models: Methods, Challenges, and Insights",
author = "Kim, Hongjin and
Kim, Jai-Eun and
Kim, Harksoo",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.492",
pages = "8653--8670",
abstract = "Nested Named Entity Recognition (NER) poses a significant challenge in Natural Language Processing (NLP), demanding sophisticated techniques to identify entities within entities. This research investigates the application of Large Language Models (LLMs) to nested NER, exploring methodologies from prior work and introducing specific reasoning techniques and instructions to improve LLM efficacy. Through experiments conducted on the ACE 2004, ACE 2005, and GENIA datasets, we evaluate the impact of these approaches on nested NER performance. Results indicate that output format critically influences nested NER performance, methodologies from previous works are less effective, and our nested NER-tailored instructions significantly enhance performance. Additionally, we find that label information and descriptions of nested cases are crucial in eliciting the capabilities of LLMs for nested NER, especially in specific domains (i.e., the GENIA dataset). However, these methods still do not outperform BERT-based models, highlighting the ongoing need for innovative approaches in nested NER with LLMs.",
}
| Nested Named Entity Recognition (NER) poses a significant challenge in Natural Language Processing (NLP), demanding sophisticated techniques to identify entities within entities. This research investigates the application of Large Language Models (LLMs) to nested NER, exploring methodologies from prior work and introducing specific reasoning techniques and instructions to improve LLM efficacy. Through experiments conducted on the ACE 2004, ACE 2005, and GENIA datasets, we evaluate the impact of these approaches on nested NER performance. Results indicate that output format critically influences nested NER performance, methodologies from previous works are less effective, and our nested NER-tailored instructions significantly enhance performance. Additionally, we find that label information and descriptions of nested cases are crucial in eliciting the capabilities of LLMs for nested NER, especially in specific domains (i.e., the GENIA dataset). However, these methods still do not outperform BERT-based models, highlighting the ongoing need for innovative approaches in nested NER with LLMs. | [
"Kim, Hongjin",
"Kim, Jai-Eun",
"Kim, Harksoo"
] | Exploring Nested Named Entity Recognition with Large Language Models: Methods, Challenges, and Insights | emnlp-main.492 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
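Since the abstract reports that output format is decisive for nested NER with LLMs, a small sketch of one structured-output choice (JSON) follows; the label set and prompt wording here are hypothetical, not the paper's exact instructions:

```python
import json

def build_prompt(sentence: str) -> str:
    return (
        "Extract all named entities, including entities nested inside other entities.\n"
        'Labels: PER, ORG, GPE, LOC. Answer with a JSON list of {"text": ..., "label": ...} objects.\n\n'
        f"Sentence: {sentence}\nEntities:"
    )

def parse_entities(llm_output: str):
    """Parse the model's JSON answer, tolerating unparseable generations."""
    try:
        return [(e["text"], e["label"]) for e in json.loads(llm_output)]
    except (json.JSONDecodeError, KeyError, TypeError):
        return []
```

A JSON list naturally represents overlapping spans ("New York" inside "New York University"), which flat BIO tagging cannot, illustrating why format choice matters for the nested setting.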
https://aclanthology.org/2024.emnlp-main.493.bib | https://aclanthology.org/2024.emnlp-main.493/ | @inproceedings{xie-etal-2024-recall,
title = "{R}e{C}a{LL}: Membership Inference via Relative Conditional Log-Likelihoods",
author = "Xie, Roy and
Wang, Junlin and
Huang, Ruomin and
Zhang, Minxing and
Ge, Rong and
Pei, Jian and
Gong, Neil Zhenqiang and
Dhingra, Bhuwan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.493",
pages = "8671--8689",
abstract = "The rapid scaling of large language models (LLMs) has raised concerns about the transparency and fair use of the data used in their pretraining. Detecting such content is challenging due to the scale of the data and limited exposure of each instance during training. We propose ReCaLL (Relative Conditional Log-Likelihood), a novel membership inference attack (MIA) to detect LLMs{'} pretraining data by leveraging their conditional language modeling capabilities. ReCaLL examines the relative change in conditional log-likelihoods when prefixing target data points with non-member context. Our empirical findings show that conditioning member data on non-member prefixes induces a larger decrease in log-likelihood compared to non-member data. We conduct comprehensive experiments and show that ReCaLL achieves state-of-the-art performance on the WikiMIA dataset, even with random and synthetic prefixes, and can be further improved using an ensemble approach. Moreover, we conduct an in-depth analysis of LLMs{'} behavior with different membership contexts, providing insights into how LLMs leverage membership information for effective inference at both the sequence and token level.",
}
| The rapid scaling of large language models (LLMs) has raised concerns about the transparency and fair use of the data used in their pretraining. Detecting such content is challenging due to the scale of the data and limited exposure of each instance during training. We propose ReCaLL (Relative Conditional Log-Likelihood), a novel membership inference attack (MIA) to detect LLMs{'} pretraining data by leveraging their conditional language modeling capabilities. ReCaLL examines the relative change in conditional log-likelihoods when prefixing target data points with non-member context. Our empirical findings show that conditioning member data on non-member prefixes induces a larger decrease in log-likelihood compared to non-member data. We conduct comprehensive experiments and show that ReCaLL achieves state-of-the-art performance on the WikiMIA dataset, even with random and synthetic prefixes, and can be further improved using an ensemble approach. Moreover, we conduct an in-depth analysis of LLMs{'} behavior with different membership contexts, providing insights into how LLMs leverage membership information for effective inference at both the sequence and token level. | [
"Xie, Roy",
"Wang, Junlin",
"Huang, Ruomin",
"Zhang, Minxing",
"Ge, Rong",
"Pei, Jian",
"Gong, Neil Zhenqiang",
"Dhingra, Bhuwan"
] | ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods | emnlp-main.493 | Poster | 2406.15968 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
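The ReCaLL statistic is simple enough to sketch: compare the log-likelihood of a candidate text with and without a non-member prefix. The helper below is a hedged approximation (BOS-token handling, boundary tokenization, and batching are elided):

```python
import torch

@torch.no_grad()
def text_logprob(model, tokenizer, text: str, prefix: str = "") -> float:
    """Sum of token log-likelihoods of `text`, optionally conditioned on `prefix`."""
    ids = tokenizer(prefix + text, return_tensors="pt").input_ids
    n_text = tokenizer(text, return_tensors="pt", add_special_tokens=False).input_ids.shape[1]
    logprobs = torch.log_softmax(model(ids).logits[:, :-1], dim=-1)
    token_lls = logprobs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_lls[0, -n_text:].sum().item()   # score only the text's own tokens

def recall_score(model, tokenizer, x: str, nonmember_prefix: str) -> float:
    """Relative conditional log-likelihood: members tend to show a larger relative
    drop in likelihood under non-member context than non-members do."""
    return (text_logprob(model, tokenizer, x, prefix=nonmember_prefix)
            / text_logprob(model, tokenizer, x))
```

Thresholding this ratio over a dataset then yields the membership decision, as in standard MIA evaluation.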
https://aclanthology.org/2024.emnlp-main.494.bib | https://aclanthology.org/2024.emnlp-main.494/ | @inproceedings{halevy-etal-2024-flex,
title = "{``}Flex Tape Can{'}t Fix That{''}: Bias and Misinformation in Edited Language Models",
author = "Halevy, Karina H and
Sotnikova, Anna and
AlKhamissi, Badr and
Montariol, Syrielle and
Bosselut, Antoine",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.494",
pages = "8690--8707",
abstract = "Weight-based model editing methods update the parametric knowledge of language models post-training. However, these methods can unintentionally alter unrelated parametric knowledge representations, potentially increasing the risk of harm. In this work, we investigate how weight editing methods unexpectedly amplify model biases after edits. We introduce a novel benchmark dataset, Seesaw-CF, for measuring bias amplification of model editing methods for demographic traits such as race, geographic origin, and gender. We use Seesaw-CF to examine the impact of model editing on bias in five large language models. Our results demonstrate that edited models exhibit, to various degrees, more biased behavior for certain demographic groups than before they were edited, specifically becoming less confident in properties for Asian and African subjects. Additionally, editing facts about place of birth, country of citizenship, or gender has particularly negative effects on the model{'}s knowledge about unrelated properties, such as field of work, a pattern observed across multiple models.",
}
| Weight-based model editing methods update the parametric knowledge of language models post-training. However, these methods can unintentionally alter unrelated parametric knowledge representations, potentially increasing the risk of harm. In this work, we investigate how weight editing methods unexpectedly amplify model biases after edits. We introduce a novel benchmark dataset, Seesaw-CF, for measuring bias amplification of model editing methods for demographic traits such as race, geographic origin, and gender. We use Seesaw-CF to examine the impact of model editing on bias in five large language models. Our results demonstrate that edited models exhibit, to various degrees, more biased behavior for certain demographic groups than before they were edited, specifically becoming less confident in properties for Asian and African subjects. Additionally, editing facts about place of birth, country of citizenship, or gender has particularly negative effects on the model{'}s knowledge about unrelated properties, such as field of work, a pattern observed across multiple models. | [
"Halevy, Karina H",
"Sotnikova, Anna",
"AlKhamissi, Badr",
"Montariol, Syrielle",
"Bosselut, Antoine"
] | “Flex Tape Can't Fix That”: Bias and Misinformation in Edited Language Models | emnlp-main.494 | Poster | [
"https://github.com/ENSCMA2/flextape"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.495.bib | https://aclanthology.org/2024.emnlp-main.495/ | @inproceedings{liu-etal-2024-revisiting,
title = "Revisiting Who{'}s Harry Potter: Towards Targeted Unlearning from a Causal Intervention Perspective",
author = "Liu, Yujian and
Zhang, Yang and
Jaakkola, Tommi and
Chang, Shiyu",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.495",
pages = "8708--8731",
abstract = "This paper investigates Who{'}s Harry Potter (WHP), a pioneering yet insufficiently understood method for LLM unlearning. We explore it in two steps. First, we introduce a new task of LLM targeted unlearning, where given an unlearning target (e.g., a person) and some unlearning documents, we aim to unlearn only the information about the target, rather than everything in the unlearning documents. We further argue that a successful unlearning should satisfy criteria such as not outputting gibberish, not fabricating facts about the unlearning target, and not releasing factual information under jailbreak attacks. Second, we construct a causal intervention framework for targeted unlearning, where the knowledge of the unlearning target is modeled as a confounder between LLM input and output, and the unlearning process as a deconfounding process. This framework justifies and extends WHP, deriving a simple unlearning algorithm that includes WHP as a special case. Experiments on existing and new datasets show that our approach, without explicitly optimizing for the aforementioned criteria, achieves competitive performance in all of them.",
}
| This paper investigates Who{'}s Harry Potter (WHP), a pioneering yet insufficiently understood method for LLM unlearning. We explore it in two steps. First, we introduce a new task of LLM targeted unlearning, where given an unlearning target (e.g., a person) and some unlearning documents, we aim to unlearn only the information about the target, rather than everything in the unlearning documents. We further argue that a successful unlearning should satisfy criteria such as not outputting gibberish, not fabricating facts about the unlearning target, and not releasing factual information under jailbreak attacks. Second, we construct a causal intervention framework for targeted unlearning, where the knowledge of the unlearning target is modeled as a confounder between LLM input and output, and the unlearning process as a deconfounding process. This framework justifies and extends WHP, deriving a simple unlearning algorithm that includes WHP as a special case. Experiments on existing and new datasets show that our approach, without explicitly optimizing for the aforementioned criteria, achieves competitive performance in all of them. | [
"Liu, Yujian",
"Zhang, Yang",
"Jaakkola, Tommi",
"Chang, Shiyu"
] | Revisiting Who's Harry Potter: Towards Targeted Unlearning from a Causal Intervention Perspective | emnlp-main.495 | Poster | 2407.16997 | [
"https://github.com/ucsb-nlp-chang/causal_unlearn"
] | https://huggingface.co/papers/2407.16997 | 2 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 |
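For context on the method being revisited, here is a hedged sketch of the original WHP logit adjustment (Eldan and Russinovich, 2023), not the causal-intervention extension this paper derives. A "reinforced" copy of the model is first fine-tuned on the unlearning documents; tokens it boosts relative to the base model are treated as target-specific and suppressed when building relabeled fine-tuning targets:

```python
import torch

def whp_generic_logits(base_logits: torch.Tensor,
                       reinforced_logits: torch.Tensor,
                       alpha: float = 1.0) -> torch.Tensor:
    """Push down exactly the tokens whose logits the reinforced model increased;
    the result approximates what the model would predict without the target knowledge."""
    target_specific_boost = torch.relu(reinforced_logits - base_logits)
    return base_logits - alpha * target_specific_boost
```

Viewing the target knowledge as a confounder, as the paper does, is what justifies subtracting this boost: it estimates the output distribution with the confounder intervened away.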
https://aclanthology.org/2024.emnlp-main.496.bib | https://aclanthology.org/2024.emnlp-main.496/ | @inproceedings{yu-etal-2024-lions,
title = "{LION}s: An Empirically Optimized Approach to Align Language Models",
author = "Yu, Xiao and
Wu, Qingyang and
Li, Yu and
Yu, Zhou",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.496",
pages = "8732--8753",
abstract = "Alignment is a crucial step to enhance the instruction-following and conversational abilities of language models. Despite many recent works proposing new algorithms, datasets, and training pipelines, there is a lack of comprehensive studies measuring the impact of various design choices throughout the whole training process. We first conduct a rigorous analysis over a three-stage training pipeline consisting of supervised fine-tuning, offline preference learning, and online preference learning. We have found that using techniques like sequence packing, loss masking in SFT, increasing the preference dataset size in DPO, and online DPO training can significantly improve the performance of language models. We then train from Gemma-2b-base and LLama-3-8b-base, and find that our best models exceed the performance of the official instruct models tuned with closed-source data and algorithms. Our code and models can be found at https://github.com/Columbia-NLP-Lab/LionAlignment.",
}
| Alignment is a crucial step to enhance the instruction-following and conversational abilities of language models. Despite many recent works proposing new algorithms, datasets, and training pipelines, there is a lack of comprehensive studies measuring the impact of various design choices throughout the whole training process. We first conduct a rigorous analysis over a three-stage training pipeline consisting of supervised fine-tuning, offline preference learning, and online preference learning. We have found that using techniques like sequence packing, loss masking in SFT, increasing the preference dataset size in DPO, and online DPO training can significantly improve the performance of language models. We then train from Gemma-2b-base and LLama-3-8b-base, and find that our best models exceed the performance of the official instruct models tuned with closed-source data and algorithms. Our code and models can be found at https://github.com/Columbia-NLP-Lab/LionAlignment. | [
"Yu, Xiao",
"Wu, Qingyang",
"Li, Yu",
"Yu, Zhou"
] | LIONs: An Empirically Optimized Approach to Align Language Models | emnlp-main.496 | Poster | 2407.06542 | [
"https://github.com/columbia-nlp-lab/lionalignment"
] | https://huggingface.co/papers/2407.06542 | 1 | 0 | 0 | 4 | [
"Columbia-NLP/LION-Gemma-2b-odpo-v1.0",
"Columbia-NLP/LION-LLaMA-3-8b-dpo-v1.0",
"Columbia-NLP/LION-LLaMA-3-8b-odpo-v1.0",
"Columbia-NLP/LION-Gemma-2b-dpo-v1.0",
"Columbia-NLP/LION-LLaMA-3-8b-sft-v1.0",
"Columbia-NLP/LION-Gemma-2b-sft-v1.0",
"RichardErkhov/Columbia-NLP_-_LION-Gemma-2b-dpo-v1.0-gguf",
"RichardErkhov/Columbia-NLP_-_LION-LLaMA-3-8b-dpo-v1.0-gguf"
] | [
"Columbia-NLP/DPO-Nectar",
"Columbia-NLP/DPO-UltraFeedback_binarized",
"Columbia-NLP/DPO-tldr-summarisation-preferences",
"Columbia-NLP/DPO-distilabel-intel-orca-dpo-pairs_cleaned",
"Columbia-NLP/DPO-distilabel-capybara-dpo-7k-binarized",
"Columbia-NLP/DPO-hh-rlhf",
"Columbia-NLP/DPO-PKU-SafeRLHF",
"Columbia-NLP/DPO-py-dpo-v0.1",
"Columbia-NLP/DPO-HelpSteer"
] | [
"eduagarcia/open_pt_llm_leaderboard"
] | [
"Columbia-NLP/LION-Gemma-2b-odpo-v1.0",
"Columbia-NLP/LION-LLaMA-3-8b-dpo-v1.0",
"Columbia-NLP/LION-LLaMA-3-8b-odpo-v1.0",
"Columbia-NLP/LION-Gemma-2b-dpo-v1.0",
"Columbia-NLP/LION-LLaMA-3-8b-sft-v1.0",
"Columbia-NLP/LION-Gemma-2b-sft-v1.0",
"RichardErkhov/Columbia-NLP_-_LION-Gemma-2b-dpo-v1.0-gguf",
"RichardErkhov/Columbia-NLP_-_LION-LLaMA-3-8b-dpo-v1.0-gguf"
] | [
"Columbia-NLP/DPO-Nectar",
"Columbia-NLP/DPO-UltraFeedback_binarized",
"Columbia-NLP/DPO-tldr-summarisation-preferences",
"Columbia-NLP/DPO-distilabel-intel-orca-dpo-pairs_cleaned",
"Columbia-NLP/DPO-distilabel-capybara-dpo-7k-binarized",
"Columbia-NLP/DPO-hh-rlhf",
"Columbia-NLP/DPO-PKU-SafeRLHF",
"Columbia-NLP/DPO-py-dpo-v0.1",
"Columbia-NLP/DPO-HelpSteer"
] | [
"eduagarcia/open_pt_llm_leaderboard"
] | 1 |
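The offline preference stage referenced above is standard DPO; for readers who want the objective concretely, a minimal sketch follows (inputs are summed response log-probs under the policy and the frozen reference model; `beta` is an illustrative default):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_lp: torch.Tensor, policy_rejected_lp: torch.Tensor,
             ref_chosen_lp: torch.Tensor, ref_rejected_lp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization (Rafailov et al., 2023): maximize the margin
    between the implicit rewards of the chosen and rejected responses."""
    chosen_reward = beta * (policy_chosen_lp - ref_chosen_lp)
    rejected_reward = beta * (policy_rejected_lp - ref_rejected_lp)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```

"Online DPO" in pipelines like this one reuses the same loss but refreshes the preference pairs with samples from the current policy between updates.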
https://aclanthology.org/2024.emnlp-main.497.bib | https://aclanthology.org/2024.emnlp-main.497/ | @inproceedings{zhang-etal-2024-jellyfish,
title = "Jellyfish: Instruction-Tuning Local Large Language Models for Data Preprocessing",
author = "Zhang, Haochen and
Dong, Yuyang and
Xiao, Chuan and
Oyamada, Masafumi",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.497",
pages = "8754--8782",
abstract = "This paper explores the utilization of LLMs for data preprocessing (DP), a crucial step in the data mining pipeline that transforms raw data into a clean format. We instruction-tune local LLMs as universal DP task solvers that operate on a local, single, and low-priced GPU, ensuring data security and enabling further customization. We select a collection of datasets across four representative DP tasks and construct instruction data using data configuration, knowledge injection, and reasoning data distillation techniques tailored to DP. By tuning Mistral-7B, Llama 3-8B, and OpenOrca-Platypus2-13B, our models, Jellyfish-7B/8B/13B, deliver competitiveness compared to GPT-3.5/4 models and strong generalizability to unseen tasks while barely compromising the base models{'} abilities in NLP tasks. Meanwhile, Jellyfish offers enhanced reasoning capabilities compared to GPT-3.5. Our models are available at: https://huggingface.co/NECOUDBFM/JellyfishOur instruction dataset is available at: https://huggingface.co/datasets/NECOUDBFM/Jellyfish-Instruct",
}
| This paper explores the utilization of LLMs for data preprocessing (DP), a crucial step in the data mining pipeline that transforms raw data into a clean format. We instruction-tune local LLMs as universal DP task solvers that operate on a local, single, and low-priced GPU, ensuring data security and enabling further customization. We select a collection of datasets across four representative DP tasks and construct instruction data using data configuration, knowledge injection, and reasoning data distillation techniques tailored to DP. By tuning Mistral-7B, Llama 3-8B, and OpenOrca-Platypus2-13B, our models, Jellyfish-7B/8B/13B, deliver competitiveness compared to GPT-3.5/4 models and strong generalizability to unseen tasks while barely compromising the base models{'} abilities in NLP tasks. Meanwhile, Jellyfish offers enhanced reasoning capabilities compared to GPT-3.5. Our models are available at: https://huggingface.co/NECOUDBFM/Jellyfish. Our instruction dataset is available at: https://huggingface.co/datasets/NECOUDBFM/Jellyfish-Instruct | [
"Zhang, Haochen",
"Dong, Yuyang",
"Xiao, Chuan",
"Oyamada, Masafumi"
] | Jellyfish: Instruction-Tuning Local Large Language Models for Data Preprocessing | emnlp-main.497 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
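A hedged usage sketch against the released checkpoint named in the abstract (https://huggingface.co/NECOUDBFM/Jellyfish); the entity-matching prompt wording below is illustrative and may differ from the template the model was actually tuned on:

```python
from transformers import pipeline

# Loads the released Jellyfish checkpoint; a 13B model needs a suitably large GPU.
generator = pipeline("text-generation", model="NECOUDBFM/Jellyfish")

prompt = (
    "You are performing an entity matching task.\n"
    "Record A: [name: macbook pro 13, price: 1299]\n"
    "Record B: [name: apple macbook pro 13-inch, price: 1299.00]\n"
    "Do Record A and Record B refer to the same real-world entity? Answer Yes or No."
)
print(generator(prompt, max_new_tokens=8)[0]["generated_text"])
```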
https://aclanthology.org/2024.emnlp-main.498.bib | https://aclanthology.org/2024.emnlp-main.498/ | @inproceedings{zhang-etal-2024-comprehensive-survey,
title = "A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery",
author = "Zhang, Yu and
Chen, Xiusi and
Jin, Bowen and
Wang, Sheng and
Ji, Shuiwang and
Wang, Wei and
Han, Jiawei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.498",
pages = "8783--8817",
abstract = "In many scientific fields, large language models (LLMs) have revolutionized the way text and other modalities of data (e.g., molecules and proteins) are handled, achieving superior performance in various applications and augmenting the scientific discovery process. Nevertheless, previous surveys on scientific LLMs often concentrate on one or two fields or a single modality. In this paper, we aim to provide a more holistic view of the research landscape by unveiling cross-field and cross-modal connections between scientific LLMs regarding their architectures and pre-training techniques. To this end, we comprehensively survey over 260 scientific LLMs, discuss their commonalities and differences, as well as summarize pre-training datasets and evaluation tasks for each field and modality. Moreover, we investigate how LLMs have been deployed to benefit scientific discovery. Resources related to this survey are available at https://github.com/yuzhimanhua/Awesome-Scientific-Language-Models.",
}
| In many scientific fields, large language models (LLMs) have revolutionized the way text and other modalities of data (e.g., molecules and proteins) are handled, achieving superior performance in various applications and augmenting the scientific discovery process. Nevertheless, previous surveys on scientific LLMs often concentrate on one or two fields or a single modality. In this paper, we aim to provide a more holistic view of the research landscape by unveiling cross-field and cross-modal connections between scientific LLMs regarding their architectures and pre-training techniques. To this end, we comprehensively survey over 260 scientific LLMs, discuss their commonalities and differences, as well as summarize pre-training datasets and evaluation tasks for each field and modality. Moreover, we investigate how LLMs have been deployed to benefit scientific discovery. Resources related to this survey are available at https://github.com/yuzhimanhua/Awesome-Scientific-Language-Models. | [
"Zhang, Yu",
"Chen, Xiusi",
"Jin, Bowen",
"Wang, Sheng",
"Ji, Shuiwang",
"Wang, Wei",
"Han, Jiawei"
] | A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery | emnlp-main.498 | Oral | 2406.10833 | [
"https://github.com/yuzhimanhua/Awesome-Scientific-Language-Models"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.499.bib | https://aclanthology.org/2024.emnlp-main.499/ | @inproceedings{tang-etal-2024-minicheck,
title = "{M}ini{C}heck: Efficient Fact-Checking of {LLM}s on Grounding Documents",
author = "Tang, Liyan and
Laban, Philippe and
Durrett, Greg",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.499",
pages = "8818--8847",
abstract = "Recognizing if LLM output can be grounded in evidence is central to many tasks in NLP: retrieval-augmented generation, summarization, document-grounded dialogue, and more. Current approaches to this kind of fact-checking are based on verifying each piece of a model generation against potential evidence using an LLM. However, this process can be very computationally expensive, requiring many calls to a model to check a single response. In this work, we show how to build small fact-checking models that have GPT-4-level performance but for 400x lower cost. We do this by constructing synthetic training data with GPT-4, which involves creating realistic yet challenging instances of factual errors via a structured generation procedure. Training on this data teaches models to check each fact in the claim and recognize synthesis of information across sentences. For evaluation, we unify datasets from recent work on fact-checking and grounding LLM generations into a new benchmark, LLM-AggreFact. Our best system MiniCheck-FT5 (770M parameters) outperforms all systems of comparable size and reaches GPT-4 accuracy. We release LLM-AggreFact, code for data synthesis, and models.",
}
| Recognizing if LLM output can be grounded in evidence is central to many tasks in NLP: retrieval-augmented generation, summarization, document-grounded dialogue, and more. Current approaches to this kind of fact-checking are based on verifying each piece of a model generation against potential evidence using an LLM. However, this process can be very computationally expensive, requiring many calls to a model to check a single response. In this work, we show how to build small fact-checking models that have GPT-4-level performance but for 400x lower cost. We do this by constructing synthetic training data with GPT-4, which involves creating realistic yet challenging instances of factual errors via a structured generation procedure. Training on this data teaches models to check each fact in the claim and recognize synthesis of information across sentences. For evaluation, we unify datasets from recent work on fact-checking and grounding LLM generations into a new benchmark, LLM-AggreFact. Our best system MiniCheck-FT5 (770M parameters) outperforms all systems of comparable size and reaches GPT-4 accuracy. We release LLM-AggreFact, code for data synthesis, and models. | [
"Tang, Liyan",
"Laban, Philippe",
"Durrett, Greg"
] | MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents | emnlp-main.499 | Poster | 2404.10774 | [
"https://github.com/liyan06/minicheck"
] | https://huggingface.co/papers/2404.10774 | 2 | 2 | 0 | 3 | [
"bespokelabs/Bespoke-MiniCheck-7B",
"lytang/MiniCheck-Flan-T5-Large",
"lytang/MiniCheck-RoBERTa-Large",
"lytang/MiniCheck-DeBERTa-v3-Large"
] | [
"lytang/LLM-AggreFact",
"lytang/C2D-and-D2C-MiniCheck"
] | [] | [
"bespokelabs/Bespoke-MiniCheck-7B",
"lytang/MiniCheck-Flan-T5-Large",
"lytang/MiniCheck-RoBERTa-Large",
"lytang/MiniCheck-DeBERTa-v3-Large"
] | [
"lytang/LLM-AggreFact",
"lytang/C2D-and-D2C-MiniCheck"
] | [] | 1 |
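A hedged sketch of calling the released MiniCheck-FT5 checkpoint listed above through plain transformers; the input template shown is an assumption on my part — the canonical interface lives in the authors' repository (https://github.com/liyan06/minicheck):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "lytang/MiniCheck-Flan-T5-Large"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

document = "The Nile is about 6,650 km long and flows through eleven countries."
claim = "The Nile flows through eleven countries."
# Assumed template pairing the grounding document with the claim to verify.
inputs = tokenizer(f"predict: document: {document} claim: {claim}", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(output[0], skip_special_tokens=True))  # a supported/unsupported label
```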
https://aclanthology.org/2024.emnlp-main.500.bib | https://aclanthology.org/2024.emnlp-main.500/ | @inproceedings{wu-etal-2024-beyond,
title = "Beyond Label Attention: Transparency in Language Models for Automated Medical Coding via Dictionary Learning",
author = "Wu, John and
Wu, David and
Sun, Jimeng",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.500",
pages = "8848--8871",
abstract = "Medical coding, the translation of unstructured clinical text into standardized medical codes, is a crucial but time-consuming healthcare practice. Though large language models (LLM) could automate the coding process and improve the efficiency of such tasks, interpretability remains paramount for maintaining patient trust. Current efforts in interpretability of medical coding applications rely heavily on label attention mechanisms, which often leads to the highlighting of extraneous tokens irrelevant to the ICD code. To facilitate accurate interpretability in medical language models, this paper leverages dictionary learning that can efficiently extract sparsely activated representations from dense language model embeddings in superposition. Compared with common label attention mechanisms, our model goes beyond token-level representations by building an interpretable dictionary which enhances the mechanistic-based explanations for each ICD code prediction, even when the highlighted tokens are medically irrelevant. We show that dictionary features are human interpretable, can elucidate the hidden meanings of upwards of 90{\%} of medically irrelevant tokens, and steer model behavior.",
}
| Medical coding, the translation of unstructured clinical text into standardized medical codes, is a crucial but time-consuming healthcare practice. Though large language models (LLM) could automate the coding process and improve the efficiency of such tasks, interpretability remains paramount for maintaining patient trust. Current efforts in interpretability of medical coding applications rely heavily on label attention mechanisms, which often leads to the highlighting of extraneous tokens irrelevant to the ICD code. To facilitate accurate interpretability in medical language models, this paper leverages dictionary learning that can efficiently extract sparsely activated representations from dense language model embeddings in superposition. Compared with common label attention mechanisms, our model goes beyond token-level representations by building an interpretable dictionary which enhances the mechanistic-based explanations for each ICD code prediction, even when the highlighted tokens are medically irrelevant. We show that dictionary features are human interpretable, can elucidate the hidden meanings of upwards of 90{\%} of medically irrelevant tokens, and steer model behavior. | [
"Wu, John",
"Wu, David",
"Sun, Jimeng"
] | Beyond Label Attention: Transparency in Language Models for Automated Medical Coding via Dictionary Learning | emnlp-main.500 | Poster | 2411.00173 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
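To ground the dictionary-learning component described above, here is a minimal sparse-autoencoder sketch of the kind commonly used to decompose dense hidden states into sparse, inspectable features; the dimensions and L1 coefficient are illustrative, not the paper's settings:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete dictionary over model hidden states: each learned feature is a
    decoder column, and its sparse activation can be inspected as a candidate concept."""
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model, bias=False)

    def forward(self, hidden: torch.Tensor):
        code = torch.relu(self.encoder(hidden))  # sparse, non-negative feature activations
        return self.decoder(code), code

def sae_loss(recon, hidden, code, l1_coeff: float = 1e-3):
    # Reconstruction fidelity plus an L1 penalty that induces sparsity in the code.
    return ((recon - hidden) ** 2).mean() + l1_coeff * code.abs().mean()
```

Steering, as mentioned in the abstract, then amounts to editing `code` (amplifying or zeroing a feature) before decoding it back into the hidden-state space.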