Datasets:

Column schema (name, dtype, observed min/max):

  bibtex_url                  string     length 41–53
  proceedings                 string     length 38–50
  bibtext                     string     length 566–3.75k
  abstract                    string     length 4–3.1k
  authors                     sequence   length 1–66
  title                       string     length 12–172
  id                          string     length 7–19
  type                        string     2 classes
  arxiv_id                    string     length 0–10
  GitHub                      sequence   length 1–1
  paper_page                  string     length 0–40
  n_linked_authors            int64      -1 to 21
  upvotes                     int64      -1 to 116
  num_comments                int64      -1 to 11
  n_authors                   int64      -1 to 61
  Models                      sequence   length 0–100
  Datasets                    sequence   length 0–100
  Spaces                      sequence   length 0–100
  old_Models                  sequence   length 0–100
  old_Datasets                sequence   length 0–100
  old_Spaces                  sequence   length 0–100
  paper_page_exists_pre_conf  int64      0 to 1
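Each record below lists one field value per line, in the column order given above; fields whose value is an empty string (for example, a missing arxiv_id or paper_page) appear to be skipped rather than printed as blank lines.

A minimal sketch of how this dataset could be loaded and queried with the Hugging Face `datasets` library (the repository ID used here is a placeholder assumption, not the dataset's actual name):

```python
# Sketch only: assumes the dataset is hosted on the Hugging Face Hub under a
# hypothetical repository ID. Substitute the real dataset name before running.
from datasets import load_dataset

ds = load_dataset("your-namespace/emnlp-2024-papers", split="train")  # hypothetical ID

# The features object mirrors the column schema listed above.
print(ds.features)

# Papers that report both an arXiv ID and a non-empty GitHub link.
with_code = ds.filter(
    lambda row: row["arxiv_id"] != "" and any(url for url in row["GitHub"])
)
print(len(with_code), "papers have an arXiv ID and a GitHub repository")

# Papers whose Hugging Face paper page existed before the conference.
pre_conf = ds.filter(lambda row: row["paper_page_exists_pre_conf"] == 1)
for row in pre_conf.select(range(min(3, len(pre_conf)))):
    print(row["id"], row["title"])
```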
https://aclanthology.org/2024.emnlp-main.101.bib
https://aclanthology.org/2024.emnlp-main.101/
@inproceedings{zong-etal-2024-triad, title = "Triad: A Framework Leveraging a Multi-Role {LLM}-based Agent to Solve Knowledge Base Question Answering", author = "Zong, Chang and Yan, Yuchen and Lu, Weiming and Shao, Jian and Huang, Yongfeng and Chang, Heng and Zhuang, Yueting", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.101", pages = "1698--1710", abstract = "Recent progress with LLM-based agents has shown promising results across various tasks. However, their use in answering questions from knowledge bases remains largely unexplored. Implementing a KBQA system using traditional methods is challenging due to the shortage of task-specific training data and the complexity of creating task-focused model structures. In this paper, we present Triad, a unified framework that utilizes an LLM-based agent with multiple roles for KBQA tasks. The agent is assigned three roles to tackle different KBQA subtasks: agent as a generalist for mastering various subtasks, as a decision maker for the selection of candidates, and as an advisor for answering questions with knowledge. Our KBQA framework is executed in four phases, involving the collaboration of the agent{'}s multiple roles. We evaluated the performance of our framework using three benchmark datasets, and the results show that our framework outperforms state-of-the-art systems on the LC-QuAD and YAGO-QA benchmarks, yielding F1 scores of 11.8{\%} and 20.7{\%}, respectively.", }
Recent progress with LLM-based agents has shown promising results across various tasks. However, their use in answering questions from knowledge bases remains largely unexplored. Implementing a KBQA system using traditional methods is challenging due to the shortage of task-specific training data and the complexity of creating task-focused model structures. In this paper, we present Triad, a unified framework that utilizes an LLM-based agent with multiple roles for KBQA tasks. The agent is assigned three roles to tackle different KBQA subtasks: agent as a generalist for mastering various subtasks, as a decision maker for the selection of candidates, and as an advisor for answering questions with knowledge. Our KBQA framework is executed in four phases, involving the collaboration of the agent{'}s multiple roles. We evaluated the performance of our framework using three benchmark datasets, and the results show that our framework outperforms state-of-the-art systems on the LC-QuAD and YAGO-QA benchmarks, yielding F1 scores of 11.8{\%} and 20.7{\%}, respectively.
[ "Zong, Chang", "Yan, Yuchen", "Lu, Weiming", "Shao, Jian", "Huang, Yongfeng", "Chang, Heng", "Zhuang, Yueting" ]
Triad: A Framework Leveraging a Multi-Role LLM-based Agent to Solve Knowledge Base Question Answering
emnlp-main.101
Poster
2402.14320
[ "https://github.com/ZJU-DCDLab/Triad" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.102.bib
https://aclanthology.org/2024.emnlp-main.102/
@inproceedings{zhou-etal-2024-metagpt, title = "{M}eta{GPT}: Merging Large Language Models Using Model Exclusive Task Arithmetic", author = "Zhou, Yuyan and Song, Liang and Wang, Bingning and Chen, Weipeng", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.102", pages = "1711--1724", abstract = "The advent of large language models (LLMs) like GPT-4 has catalyzed the exploration of multi-task learning (MTL), in which a single model demonstrates proficiency across diverse tasks. Task arithmetic has emerged as a cost-effective approach for MTL. It enables performance enhancement across multiple tasks by adding their corresponding task vectors to a pre-trained model. However, the current lack of a method that can simultaneously achieve optimal performance, computational efficiency, and data privacy limits their application to LLMs. In this paper, we propose \textbf{M}odel \textbf{E}xclusive \textbf{T}ask \textbf{A}rithmetic for merging \textbf{GPT}-scale models (MetaGPT) which formalizes the objective of model merging into a multi-task learning framework, aiming to minimize the average loss difference between the merged model and each individual task model. Since data privacy limits the use of multi-task training data, we leverage LLMs{'} local linearity and task vectors{'} orthogonality to separate the data term and scaling coefficients term and derive a model-exclusive task arithmetic method. Our proposed MetaGPT is data-agnostic and bypasses the heavy search process, making it cost-effective and easy to implement for LLMs. Extensive experiments demonstrate that MetaGPT leads to improvement of task arithmetic and achieves state-of-the-art performance on multiple tasks.", }
The advent of large language models (LLMs) like GPT-4 has catalyzed the exploration of multi-task learning (MTL), in which a single model demonstrates proficiency across diverse tasks. Task arithmetic has emerged as a cost-effective approach for MTL. It enables performance enhancement across multiple tasks by adding their corresponding task vectors to a pre-trained model. However, the current lack of a method that can simultaneously achieve optimal performance, computational efficiency, and data privacy limits their application to LLMs. In this paper, we propose \textbf{M}odel \textbf{E}xclusive \textbf{T}ask \textbf{A}rithmetic for merging \textbf{GPT}-scale models (MetaGPT) which formalizes the objective of model merging into a multi-task learning framework, aiming to minimize the average loss difference between the merged model and each individual task model. Since data privacy limits the use of multi-task training data, we leverage LLMs{'} local linearity and task vectors{'} orthogonality to separate the data term and scaling coefficients term and derive a model-exclusive task arithmetic method. Our proposed MetaGPT is data-agnostic and bypasses the heavy search process, making it cost-effective and easy to implement for LLMs. Extensive experiments demonstrate that MetaGPT leads to improvement of task arithmetic and achieves state-of-the-art performance on multiple tasks.
[ "Zhou, Yuyan", "Song, Liang", "Wang, Bingning", "Chen, Weipeng" ]
MetaGPT: Merging Large Language Models Using Model Exclusive Task Arithmetic
emnlp-main.102
Poster
2406.11385
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.103.bib
https://aclanthology.org/2024.emnlp-main.103/
@inproceedings{wang-etal-2024-event-causality, title = "Event Causality Identification with Synthetic Control", author = "Wang, Haoyu and Liu, Fengze and Zhang, Jiayao and Roth, Dan and Richardson, Kyle", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.103", pages = "1725--1737", abstract = "Event causality identification (ECI), a process that extracts causal relations between events from text, is crucial for distinguishing causation from correlation. Traditional approaches to ECI have primarily utilized linguistic patterns and multi-hop relational inference, risking false causality identification due to informal usage of causality and specious graphical inference. In this paper, we adopt the Rubin Causal Model to identify event causality: given two temporally ordered events, we see the first event as the treatment and the second one as the observed outcome. Determining their causality involves manipulating the treatment and estimating the resultant change in the likelihood of the outcome. Given that it is only possible to implement manipulation conceptually in the text domain, as a work-around, we try to find a twin for the protagonist from existing corpora. This twin should have identical life experiences with the protagonist before the treatment but undergoes an intervention of treatment. However, the practical difficulty of locating such a match limits its feasibility. Addressing this issue, we use the synthetic control method to generate such a twin{'} from relevant historical data, leveraging text embedding synthesis and inversion techniques. This approach allows us to identify causal relations more robustly than previous methods, including GPT-4, which is demonstrated on a causality benchmark, COPES-hard.", }
Event causality identification (ECI), a process that extracts causal relations between events from text, is crucial for distinguishing causation from correlation. Traditional approaches to ECI have primarily utilized linguistic patterns and multi-hop relational inference, risking false causality identification due to informal usage of causality and specious graphical inference. In this paper, we adopt the Rubin Causal Model to identify event causality: given two temporally ordered events, we see the first event as the treatment and the second one as the observed outcome. Determining their causality involves manipulating the treatment and estimating the resultant change in the likelihood of the outcome. Given that it is only possible to implement manipulation conceptually in the text domain, as a work-around, we try to find a twin for the protagonist from existing corpora. This twin should have identical life experiences with the protagonist before the treatment but undergoes an intervention of treatment. However, the practical difficulty of locating such a match limits its feasibility. Addressing this issue, we use the synthetic control method to generate such a twin{'} from relevant historical data, leveraging text embedding synthesis and inversion techniques. This approach allows us to identify causal relations more robustly than previous methods, including GPT-4, which is demonstrated on a causality benchmark, COPES-hard.
[ "Wang, Haoyu", "Liu, Fengze", "Zhang, Jiayao", "Roth, Dan", "Richardson, Kyle" ]
Event Causality Identification with Synthetic Control
emnlp-main.103
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.104.bib
https://aclanthology.org/2024.emnlp-main.104/
@inproceedings{ma-etal-2024-retrieved, title = "Retrieved Sequence Augmentation for Protein Representation Learning", author = "Ma, Chang and Zhao, Haiteng and Zheng, Lin and Xin, Jiayi and Li, Qintong and Wu, Lijun and Deng, Zhihong and Lu, Yang Young and Liu, Qi and Wang, Sheng and Kong, Lingpeng", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.104", pages = "1738--1767", abstract = "Protein Language Models traditionally depend on Multiple Sequence Alignments (MSA) to incorporate evolutionary knowledge. However, MSA-based approaches suffer from substantial computational overhead and generally underperform in generalizing to de novo proteins. This study reevaluates the role of MSA, proposing it as a retrieval augmentation method and questioning the necessity of sequence alignment. We show that a simple alternative, Retrieved Sequence Augmentation (RSA), can enhance protein representation learning without the need for alignment and cumbersome preprocessing. RSA surpasses MSA Transformer by an average of 5{\%} in both structural and property prediction tasks while being 373 times faster. Additionally, RSA demonstrates enhanced transferability for predicting de novo proteins. This methodology addresses a critical need for efficiency in protein prediction and can be rapidly employed to identify homologous sequences, improve representation learning, and enhance the capacity of Large Language Models to interpret protein structures.", }
Protein Language Models traditionally depend on Multiple Sequence Alignments (MSA) to incorporate evolutionary knowledge. However, MSA-based approaches suffer from substantial computational overhead and generally underperform in generalizing to de novo proteins. This study reevaluates the role of MSA, proposing it as a retrieval augmentation method and questioning the necessity of sequence alignment. We show that a simple alternative, Retrieved Sequence Augmentation (RSA), can enhance protein representation learning without the need for alignment and cumbersome preprocessing. RSA surpasses MSA Transformer by an average of 5{\%} in both structural and property prediction tasks while being 373 times faster. Additionally, RSA demonstrates enhanced transferability for predicting de novo proteins. This methodology addresses a critical need for efficiency in protein prediction and can be rapidly employed to identify homologous sequences, improve representation learning, and enhance the capacity of Large Language Models to interpret protein structures.
[ "Ma, Chang", "Zhao, Haiteng", "Zheng, Lin", "Xin, Jiayi", "Li, Qintong", "Wu, Lijun", "Deng, Zhihong", "Lu, Yang Young", "Liu, Qi", "Wang, Sheng", "Kong, Lingpeng" ]
Retrieved Sequence Augmentation for Protein Representation Learning
emnlp-main.104
Poster
2302.12563
[ "https://github.com/hkunlp/rsa" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.105.bib
https://aclanthology.org/2024.emnlp-main.105/
@inproceedings{yuan-etal-2024-helpd, title = "{HELPD}: Mitigating Hallucination of {LVLM}s by Hierarchical Feedback Learning with Vision-enhanced Penalty Decoding", author = "Yuan, Fan and Qin, Chi and Xu, Xiaogang and Li, Piji", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.105", pages = "1768--1785", abstract = "Large Vision-Language Models (LVLMs) have shown remarkable performance on many visual-language tasks. However, these models still suffer from \textit{multimodal hallucination}, which means the generation of objects or content that violates the images. Many existing work detects hallucination by directly judging whether an object exists in an image, overlooking the association between the object and semantics. To address this issue, we propose Hierarchical Feedback Learning with Vision-enhanced Penalty Decoding (HELPD). This framework incorporates hallucination feedback at both object and sentence semantic levels. Remarkably, even with a marginal degree of training, this approach can alleviate over 15{\%} of hallucination. Simultaneously, HELPD penalizes the output logits according to the image attention window to avoid being overly affected by generated text. HELPD can be seamlessly integrated with any LVLMs. Our experiments demonstrate that the proposed framework yields favorable results across multiple hallucination benchmarks. It effectively mitigates hallucination for different LVLMs and concurrently improves their text generation quality.", }
Large Vision-Language Models (LVLMs) have shown remarkable performance on many visual-language tasks. However, these models still suffer from \textit{multimodal hallucination}, which means the generation of objects or content that violates the images. Many existing work detects hallucination by directly judging whether an object exists in an image, overlooking the association between the object and semantics. To address this issue, we propose Hierarchical Feedback Learning with Vision-enhanced Penalty Decoding (HELPD). This framework incorporates hallucination feedback at both object and sentence semantic levels. Remarkably, even with a marginal degree of training, this approach can alleviate over 15{\%} of hallucination. Simultaneously, HELPD penalizes the output logits according to the image attention window to avoid being overly affected by generated text. HELPD can be seamlessly integrated with any LVLMs. Our experiments demonstrate that the proposed framework yields favorable results across multiple hallucination benchmarks. It effectively mitigates hallucination for different LVLMs and concurrently improves their text generation quality.
[ "Yuan, Fan", "Qin, Chi", "Xu, Xiaogang", "Li, Piji" ]
HELPD: Mitigating Hallucination of LVLMs by Hierarchical Feedback Learning with Vision-enhanced Penalty Decoding
emnlp-main.105
Poster
2409.20429
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.106.bib
https://aclanthology.org/2024.emnlp-main.106/
@inproceedings{li-etal-2024-topviewrs, title = "{T}op{V}iew{RS}: Vision-Language Models as Top-View Spatial Reasoners", author = "Li, Chengzu and Zhang, Caiqi and Zhou, Han and Collier, Nigel and Korhonen, Anna and Vuli{\'c}, Ivan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.106", pages = "1786--1807", abstract = "Top-view perspective denotes a typical way in which humans read and reason over different types of maps, and it is vital for localization and navigation of humans as well as of {`}non-human{'} agents, such as the ones backed by large Vision-Language Models (VLMs). Nonetheless, spatial reasoning capabilities of modern VLMs in this setup remain unattested and underexplored. In this work, we study their capability to understand and reason over spatial relations from the top view. The focus on top view also enables controlled evaluations at different granularity of spatial reasoning; we clearly disentangle different abilities (e.g., recognizing particular objects versus understanding their relative positions). We introduce the TopViewRS (Top-View Reasoning in Space) dataset, consisting of 11,384 multiple-choice questions with either realistic or semantic top-view map as visual input. We then use it to study and evaluate VLMs across 4 perception and reasoning tasks with different levels of complexity. Evaluation of 10 representative open- and closed-source VLMs reveals the gap of more than 50{\%} compared to average human performance, and it is even lower than the random baseline in some cases. Although additional experiments show that Chain-of-Thought reasoning can boost model capabilities by 5.82{\%} on average, the overall performance of VLMs remains limited. Our findings underscore the critical need for enhanced model capability in top-view spatial reasoning and set a foundation for further research towards human-level proficiency of VLMs in real-world multimodal tasks.", }
Top-view perspective denotes a typical way in which humans read and reason over different types of maps, and it is vital for localization and navigation of humans as well as of {`}non-human{'} agents, such as the ones backed by large Vision-Language Models (VLMs). Nonetheless, spatial reasoning capabilities of modern VLMs in this setup remain unattested and underexplored. In this work, we study their capability to understand and reason over spatial relations from the top view. The focus on top view also enables controlled evaluations at different granularity of spatial reasoning; we clearly disentangle different abilities (e.g., recognizing particular objects versus understanding their relative positions). We introduce the TopViewRS (Top-View Reasoning in Space) dataset, consisting of 11,384 multiple-choice questions with either realistic or semantic top-view map as visual input. We then use it to study and evaluate VLMs across 4 perception and reasoning tasks with different levels of complexity. Evaluation of 10 representative open- and closed-source VLMs reveals the gap of more than 50{\%} compared to average human performance, and it is even lower than the random baseline in some cases. Although additional experiments show that Chain-of-Thought reasoning can boost model capabilities by 5.82{\%} on average, the overall performance of VLMs remains limited. Our findings underscore the critical need for enhanced model capability in top-view spatial reasoning and set a foundation for further research towards human-level proficiency of VLMs in real-world multimodal tasks.
[ "Li, Chengzu", "Zhang, Caiqi", "Zhou, Han", "Collier, Nigel", "Korhonen, Anna", "Vuli{\\'c}, Ivan" ]
TopViewRS: Vision-Language Models as Top-View Spatial Reasoners
emnlp-main.106
Poster
2406.02537
[ "https://github.com/cambridgeltl/topviewrs" ]
https://huggingface.co/papers/2406.02537
0
0
0
6
[]
[ "chengzu/topviewrs" ]
[]
[]
[ "chengzu/topviewrs" ]
[]
1
https://aclanthology.org/2024.emnlp-main.107.bib
https://aclanthology.org/2024.emnlp-main.107/
@inproceedings{wang-etal-2024-da3, title = "{DA}$^3$: A Distribution-Aware Adversarial Attack against Language Models", author = "Wang, Yibo and Dong, Xiangjue and Caverlee, James and Yu, Philip S.", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.107", pages = "1808--1825", abstract = "Language models can be manipulated by adversarial attacks, which introduce subtle perturbations to input data. While recent attack methods can achieve a relatively high attack success rate (ASR), we{'}ve observed that the generated adversarial examples have a different data distribution compared with the original examples. Specifically, these adversarial examples exhibit reduced confidence levels and greater divergence from the training data distribution. Consequently, they are easy to detect using straightforward detection methods, diminishing the efficacy of such attacks. To address this issue, we propose a Distribution-Aware Adversarial Attack (DA$^3$) method. DA$^3$ considers the distribution shifts of adversarial examples to improve attacks{'} effectiveness under detection methods. We further design a novel evaluation metric, the Non-detectable Attack Success Rate (NASR), which integrates both ASR and detectability for the attack task. We conduct experiments on four widely used datasets to validate the attack effectiveness and transferability of adversarial examples generated by DA$^3$ against both the white-box BERT-base and RoBERTa-base models and the black-box LLaMA2-7b model.", }
Language models can be manipulated by adversarial attacks, which introduce subtle perturbations to input data. While recent attack methods can achieve a relatively high attack success rate (ASR), we{'}ve observed that the generated adversarial examples have a different data distribution compared with the original examples. Specifically, these adversarial examples exhibit reduced confidence levels and greater divergence from the training data distribution. Consequently, they are easy to detect using straightforward detection methods, diminishing the efficacy of such attacks. To address this issue, we propose a Distribution-Aware Adversarial Attack (DA$^3$) method. DA$^3$ considers the distribution shifts of adversarial examples to improve attacks{'} effectiveness under detection methods. We further design a novel evaluation metric, the Non-detectable Attack Success Rate (NASR), which integrates both ASR and detectability for the attack task. We conduct experiments on four widely used datasets to validate the attack effectiveness and transferability of adversarial examples generated by DA$^3$ against both the white-box BERT-base and RoBERTa-base models and the black-box LLaMA2-7b model.
[ "Wang, Yibo", "Dong, Xiangjue", "Caverlee, James", "Yu, Philip S." ]
DA^3: A Distribution-Aware Adversarial Attack against Language Models
emnlp-main.107
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.108.bib
https://aclanthology.org/2024.emnlp-main.108/
@inproceedings{li-etal-2024-evaluating-psychological, title = "Evaluating Psychological Safety of Large Language Models", author = "Li, Xingxuan and Li, Yutong and Qiu, Lin and Joty, Shafiq and Bing, Lidong", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.108", pages = "1826--1843", abstract = "In this work, we designed unbiased prompts to systematically evaluate the psychological safety of large language models (LLMs). First, we tested five different LLMs by using two personality tests: Short Dark Triad (SD-3) and Big Five Inventory (BFI). All models scored higher than the human average on SD-3, suggesting a relatively darker personality pattern. Despite being instruction fine-tuned with safety metrics to reduce toxicity, InstructGPT, GPT-3.5, and GPT-4 still showed dark personality patterns; these models scored higher than self-supervised GPT-3 on the Machiavellianism and narcissism traits on SD-3. Then, we evaluated the LLMs in the GPT series by using well-being tests to study the impact of fine-tuning with more training data. We observed a continuous increase in the well-being scores of GPT models. Following these observations, we showed that fine-tuning Llama-2-chat-7B with responses from BFI using direct preference optimization could effectively reduce the psychological toxicity of the model. Based on the findings, we recommended the application of systematic and comprehensive psychological metrics to further evaluate and improve the safety of LLMs.", }
In this work, we designed unbiased prompts to systematically evaluate the psychological safety of large language models (LLMs). First, we tested five different LLMs by using two personality tests: Short Dark Triad (SD-3) and Big Five Inventory (BFI). All models scored higher than the human average on SD-3, suggesting a relatively darker personality pattern. Despite being instruction fine-tuned with safety metrics to reduce toxicity, InstructGPT, GPT-3.5, and GPT-4 still showed dark personality patterns; these models scored higher than self-supervised GPT-3 on the Machiavellianism and narcissism traits on SD-3. Then, we evaluated the LLMs in the GPT series by using well-being tests to study the impact of fine-tuning with more training data. We observed a continuous increase in the well-being scores of GPT models. Following these observations, we showed that fine-tuning Llama-2-chat-7B with responses from BFI using direct preference optimization could effectively reduce the psychological toxicity of the model. Based on the findings, we recommended the application of systematic and comprehensive psychological metrics to further evaluate and improve the safety of LLMs.
[ "Li, Xingxuan", "Li, Yutong", "Qiu, Lin", "Joty, Shafiq", "Bing, Lidong" ]
Evaluating Psychological Safety of Large Language Models
emnlp-main.108
Poster
2212.10529
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.109.bib
https://aclanthology.org/2024.emnlp-main.109/
@inproceedings{chen-etal-2024-effective, title = "An Effective Deployment of Diffusion {LM} for Data Augmentation in Low-Resource Sentiment Classification", author = "Chen, Zhuowei and Wang, Lianxi and Wu, Yuben and Liao, Xinfeng and Tian, Yujia and Zhong, Junyang", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.109", pages = "1844--1856", abstract = "Sentiment classification (SC) often suffers from low-resource challenges such as domain-specific contexts, imbalanced label distributions, and few-shot scenarios. The potential of the diffusion language model (LM) for textual data augmentation (DA) remains unexplored, moreover, textual DA methods struggle to balance the diversity and consistency of new samples. Most DA methods either perform logical modifications or rephrase less important tokens in the original sequence with the language model. In the context of SC, strong emotional tokens could act critically on the sentiment of the whole sequence. Therefore, contrary to rephrasing less important context, we propose DiffusionCLS to leverage a diffusion LM to capture in-domain knowledge and generate pseudo samples by reconstructing strong label-related tokens. This approach ensures a balance between consistency and diversity, avoiding the introduction of noise and augmenting crucial features of datasets. DiffusionCLS also comprises a Noise-Resistant Training objective to help the model generalize. Experiments demonstrate the effectiveness of our method in various low-resource scenarios including domain-specific and domain-general problems. Ablation studies confirm the effectiveness of our framework{'}s modules, and visualization studies highlight optimal deployment conditions, reinforcing our conclusions.", }
Sentiment classification (SC) often suffers from low-resource challenges such as domain-specific contexts, imbalanced label distributions, and few-shot scenarios. The potential of the diffusion language model (LM) for textual data augmentation (DA) remains unexplored, moreover, textual DA methods struggle to balance the diversity and consistency of new samples. Most DA methods either perform logical modifications or rephrase less important tokens in the original sequence with the language model. In the context of SC, strong emotional tokens could act critically on the sentiment of the whole sequence. Therefore, contrary to rephrasing less important context, we propose DiffusionCLS to leverage a diffusion LM to capture in-domain knowledge and generate pseudo samples by reconstructing strong label-related tokens. This approach ensures a balance between consistency and diversity, avoiding the introduction of noise and augmenting crucial features of datasets. DiffusionCLS also comprises a Noise-Resistant Training objective to help the model generalize. Experiments demonstrate the effectiveness of our method in various low-resource scenarios including domain-specific and domain-general problems. Ablation studies confirm the effectiveness of our framework{'}s modules, and visualization studies highlight optimal deployment conditions, reinforcing our conclusions.
[ "Chen, Zhuowei", "Wang, Lianxi", "Wu, Yuben", "Liao, Xinfeng", "Tian, Yujia", "Zhong, Junyang" ]
An Effective Deployment of Diffusion LM for Data Augmentation in Low-Resource Sentiment Classification
emnlp-main.109
Poster
2409.03203
[ "https://github.com/johnnychanv/diffusioncls" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.110.bib
https://aclanthology.org/2024.emnlp-main.110/
@inproceedings{hao-etal-2024-self, title = "Self-Bootstrapped Visual-Language Model for Knowledge Selection and Question Answering", author = "Hao, Dongze and Wang, Qunbo and Guo, Longteng and Jiang, Jie and Liu, Jing", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.110", pages = "1857--1868", abstract = "While large pre-trained visual-language models have shown promising results on traditional visual question answering benchmarks, it is still challenging for them to answer complex VQA problems which requires diverse world knowledge. Motivated by the research of retrieval-augmented generation in the field of natural language processing, we use Dense Passage Retrieval (DPR) to retrieve related knowledge to help the model answer questions. However, DPR conduct retrieving in natural language space, which may not ensure comprehensive acquisition of image information. Thus, the retrieved knowledge is not truly conducive to helping answer the question, affecting the performance of the overall system. To address this issue, we propose a novel framework that leverages the visual-language model to select the key knowledge retrieved by DPR and answer questions. The framework consists of two modules: Selector and Answerer, where both are initialized by the MLLM and parameter-efficiently finetuned by self-bootstrapping: find key knowledge in the retrieved knowledge documents using the Selector, and then use them to finetune the Answerer to predict answers; obtain the pseudo-labels of key knowledge documents based on the predictions of the Answerer and weak supervision labels, and then finetune the Selector to select key knowledge; repeat. Our framework significantly enhances the performance of the baseline on the challenging open-domain Knowledge-based VQA benchmark, OK-VQA, achieving a state-of-the-art accuracy of 62.83{\%}.", }
While large pre-trained visual-language models have shown promising results on traditional visual question answering benchmarks, it is still challenging for them to answer complex VQA problems which requires diverse world knowledge. Motivated by the research of retrieval-augmented generation in the field of natural language processing, we use Dense Passage Retrieval (DPR) to retrieve related knowledge to help the model answer questions. However, DPR conduct retrieving in natural language space, which may not ensure comprehensive acquisition of image information. Thus, the retrieved knowledge is not truly conducive to helping answer the question, affecting the performance of the overall system. To address this issue, we propose a novel framework that leverages the visual-language model to select the key knowledge retrieved by DPR and answer questions. The framework consists of two modules: Selector and Answerer, where both are initialized by the MLLM and parameter-efficiently finetuned by self-bootstrapping: find key knowledge in the retrieved knowledge documents using the Selector, and then use them to finetune the Answerer to predict answers; obtain the pseudo-labels of key knowledge documents based on the predictions of the Answerer and weak supervision labels, and then finetune the Selector to select key knowledge; repeat. Our framework significantly enhances the performance of the baseline on the challenging open-domain Knowledge-based VQA benchmark, OK-VQA, achieving a state-of-the-art accuracy of 62.83{\%}.
[ "Hao, Dongze", "Wang, Qunbo", "Guo, Longteng", "Jiang, Jie", "Liu, Jing" ]
Self-Bootstrapped Visual-Language Model for Knowledge Selection and Question Answering
emnlp-main.110
Poster
2404.13947
[ "https://github.com/haodongze/self-ksel-qans" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.111.bib
https://aclanthology.org/2024.emnlp-main.111/
@inproceedings{zhao-etal-2024-psfuture, title = "{P}s{F}uture: A Pseudo-Future-based Zero-Shot Adaptive Policy for Simultaneous Machine Translation", author = "Zhao, Libo and Li, Jing and Zeng, Ziqian", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.111", pages = "1869--1881", abstract = "Simultaneous Machine Translation (SiMT) requires target tokens to be generated in real-time as streaming source tokens are consumed. Traditional approaches to SiMT typically require sophisticated architectures and extensive parameter configurations for training adaptive read/write policies, which in turn demand considerable computational power and memory. We propose PsFuture, the first zero-shot adaptive read/write policy for SiMT, enabling the translation model to independently determine read/write actions without the necessity for additional training. Furthermore, we introduce a novel training strategy, Prefix-to-Full (P2F), specifically tailored to adjust offline translation models for SiMT applications, exploiting the advantages of the bidirectional attention mechanism inherent in offline models. Experiments across multiple benchmarks demonstrate that our zero-shot policy attains performance on par with strong baselines and the P2F method can further enhance performance, achieving an outstanding trade-off between translation quality and latency.", }
Simultaneous Machine Translation (SiMT) requires target tokens to be generated in real-time as streaming source tokens are consumed. Traditional approaches to SiMT typically require sophisticated architectures and extensive parameter configurations for training adaptive read/write policies, which in turn demand considerable computational power and memory. We propose PsFuture, the first zero-shot adaptive read/write policy for SiMT, enabling the translation model to independently determine read/write actions without the necessity for additional training. Furthermore, we introduce a novel training strategy, Prefix-to-Full (P2F), specifically tailored to adjust offline translation models for SiMT applications, exploiting the advantages of the bidirectional attention mechanism inherent in offline models. Experiments across multiple benchmarks demonstrate that our zero-shot policy attains performance on par with strong baselines and the P2F method can further enhance performance, achieving an outstanding trade-off between translation quality and latency.
[ "Zhao, Libo", "Li, Jing", "Zeng, Ziqian" ]
PsFuture: A Pseudo-Future-based Zero-Shot Adaptive Policy for Simultaneous Machine Translation
emnlp-main.111
Oral
2410.04075
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.112.bib
https://aclanthology.org/2024.emnlp-main.112/
@inproceedings{zhang-etal-2024-tinychart, title = "{T}iny{C}hart: Efficient Chart Understanding with Program-of-Thoughts Learning and Visual Token Merging", author = "Zhang, Liang and Hu, Anwen and Xu, Haiyang and Yan, Ming and Xu, Yichen and Jin, Qin and Zhang, Ji and Huang, Fei", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.112", pages = "1882--1898", abstract = "Charts are important for presenting and explaining complex data relationships. Recently, multimodal large language models (MLLMs) have shown remarkable capabilities in chart understanding. However, the sheer size of these models limits their use in resource-constrained environments. In this paper, we present TinyChart, an efficient MLLM for chart understanding with only 3B parameters. TinyChart overcomes two key challenges in efficient chart understanding: (1) reduce the burden of learning numerical computations through Program-of-Thoughts (PoT) learning, which trains the model to generate Python programs for numerical calculations, and (2) reduce lengthy vision feature sequences through Vision Token Merging, which gradually merges most similar vision tokens. Extensive experiments demonstrate that our 3B TinyChart achieves SOTA performance on various chart understanding benchmarks including ChartQA, Chart-to-Text, Chart-to-Table, OpenCQA, and ChartX. It outperforms several chart-understanding MLLMs with up to 13B parameters, and close-sourced MLLM GPT-4V on ChartQA, with higher throughput during inference due to a smaller model scale and more efficient vision encoding.", }
Charts are important for presenting and explaining complex data relationships. Recently, multimodal large language models (MLLMs) have shown remarkable capabilities in chart understanding. However, the sheer size of these models limits their use in resource-constrained environments. In this paper, we present TinyChart, an efficient MLLM for chart understanding with only 3B parameters. TinyChart overcomes two key challenges in efficient chart understanding: (1) reduce the burden of learning numerical computations through Program-of-Thoughts (PoT) learning, which trains the model to generate Python programs for numerical calculations, and (2) reduce lengthy vision feature sequences through Vision Token Merging, which gradually merges most similar vision tokens. Extensive experiments demonstrate that our 3B TinyChart achieves SOTA performance on various chart understanding benchmarks including ChartQA, Chart-to-Text, Chart-to-Table, OpenCQA, and ChartX. It outperforms several chart-understanding MLLMs with up to 13B parameters, and close-sourced MLLM GPT-4V on ChartQA, with higher throughput during inference due to a smaller model scale and more efficient vision encoding.
[ "Zhang, Liang", "Hu, Anwen", "Xu, Haiyang", "Yan, Ming", "Xu, Yichen", "Jin, Qin", "Zhang, Ji", "Huang, Fei" ]
TinyChart: Efficient Chart Understanding with Program-of-Thoughts Learning and Visual Token Merging
emnlp-main.112
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.113.bib
https://aclanthology.org/2024.emnlp-main.113/
@inproceedings{zhang-etal-2024-need, title = "Do We Need Language-Specific Fact-Checking Models? The Case of {C}hinese", author = "Zhang, Caiqi and Guo, Zhijiang and Vlachos, Andreas", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.113", pages = "1899--1914", abstract = "This paper investigates the potential benefits of language-specific fact-checking models, focusing on the case of Chinese using CHEF dataset. To better reflect real-world fact-checking, we first develop a novel Chinese document-level evidence retriever, achieving state-of-the-art performance. We then demonstrate the limitations of translation-based methods and multilingual language models, highlighting the need for language-specific systems. To better analyze token-level biases in different systems, we construct an adversarial dataset based on the CHEF dataset, where each instance has a large word overlap with the original one but holds the opposite veracity label. Experimental results on the CHEF dataset and our adversarial dataset show that our proposed method outperforms translation-based methods and multilingual language models and is more robust toward biases, emphasizing the importance of language-specific fact-checking systems.", }
This paper investigates the potential benefits of language-specific fact-checking models, focusing on the case of Chinese using CHEF dataset. To better reflect real-world fact-checking, we first develop a novel Chinese document-level evidence retriever, achieving state-of-the-art performance. We then demonstrate the limitations of translation-based methods and multilingual language models, highlighting the need for language-specific systems. To better analyze token-level biases in different systems, we construct an adversarial dataset based on the CHEF dataset, where each instance has a large word overlap with the original one but holds the opposite veracity label. Experimental results on the CHEF dataset and our adversarial dataset show that our proposed method outperforms translation-based methods and multilingual language models and is more robust toward biases, emphasizing the importance of language-specific fact-checking systems.
[ "Zhang, Caiqi", "Guo, Zhijiang", "Vlachos, Andreas" ]
Do We Need Language-Specific Fact-Checking Models? The Case of Chinese
emnlp-main.113
Poster
2401.15498
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.114.bib
https://aclanthology.org/2024.emnlp-main.114/
@inproceedings{li-etal-2024-enhancing-advanced, title = "Enhancing Advanced Visual Reasoning Ability of Large Language Models", author = "Li, Zhiyuan and Liu, Dongnan and Zhang, Chaoyi and Wang, Heng and Xue, Tengfei and Cai, Weidong", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.114", pages = "1915--1929", abstract = "Recent advancements in Vision-Language (VL) research have sparked new benchmarks for complex visual reasoning, challenging models{'} advanced reasoning ability. Traditional Vision-Language models (VLMs) perform well in visual perception tasks while struggling with complex reasoning scenarios. Conversely, Large Language Models (LLMs) demonstrate robust text reasoning capabilities; however, they lack visual acuity. To bridge this gap, we propose **C**omplex **V**isual **R**easoning **L**arge **L**anguage **M**odels (**CVR-LLM**), capitalizing on VLMs{'} visual perception proficiency and LLMs{'} extensive reasoning capability. Unlike recent multimodal large language models (MLLMs) that require a projection layer, our approach transforms images into detailed, context-aware descriptions using an iterative self-refinement loop and leverages LLMs{'} text knowledge for accurate predictions without extra training. We also introduce a novel multi-modal in-context learning (ICL) methodology to enhance LLMs{'} contextual understanding and reasoning. Additionally, we introduce Chain-of-Comparison (CoC), a step-by-step comparison technique enabling contrasting various aspects of predictions. Our CVR-LLM presents the first comprehensive study across a wide array of complex visual reasoning tasks and achieves SOTA performance among all.", }
Recent advancements in Vision-Language (VL) research have sparked new benchmarks for complex visual reasoning, challenging models{'} advanced reasoning ability. Traditional Vision-Language models (VLMs) perform well in visual perception tasks while struggling with complex reasoning scenarios. Conversely, Large Language Models (LLMs) demonstrate robust text reasoning capabilities; however, they lack visual acuity. To bridge this gap, we propose **C**omplex **V**isual **R**easoning **L**arge **L**anguage **M**odels (**CVR-LLM**), capitalizing on VLMs{'} visual perception proficiency and LLMs{'} extensive reasoning capability. Unlike recent multimodal large language models (MLLMs) that require a projection layer, our approach transforms images into detailed, context-aware descriptions using an iterative self-refinement loop and leverages LLMs{'} text knowledge for accurate predictions without extra training. We also introduce a novel multi-modal in-context learning (ICL) methodology to enhance LLMs{'} contextual understanding and reasoning. Additionally, we introduce Chain-of-Comparison (CoC), a step-by-step comparison technique enabling contrasting various aspects of predictions. Our CVR-LLM presents the first comprehensive study across a wide array of complex visual reasoning tasks and achieves SOTA performance among all.
[ "Li, Zhiyuan", "Liu, Dongnan", "Zhang, Chaoyi", "Wang, Heng", "Xue, Tengfei", "Cai, Weidong" ]
Enhancing Advanced Visual Reasoning Ability of Large Language Models
emnlp-main.114
Poster
2409.13980
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.115.bib
https://aclanthology.org/2024.emnlp-main.115/
@inproceedings{tang-etal-2024-cmd, title = "{CMD}: a framework for Context-aware Model self-Detoxification", author = "Tang, Zecheng and Zhou, Keyan and Li, Juntao and Ding, Yuyang and Wang, Pinzheng and Bowen, Yan and Hua, Renjie and Zhang, Min", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.115", pages = "1930--1949", abstract = "Text detoxification aims to minimize the risk of language models producing toxic content. Existing detoxification methods of directly constraining the model output or further training the model on the non-toxic corpus fail to achieve a decent balance between detoxification effectiveness and generation quality. This issue stems from the neglect of constrain imposed by the context since language models are designed to generate output that closely matches the context while detoxification methods endeavor to ensure the safety of the output even if it semantically deviates from the context. In view of this, we introduce a Context-aware Model self-Detoxification (CMD) framework that pays attention to both the context and the detoxification process, i.e., first detoxifying the context and then making the language model generate along the safe context. Specifically, CMD framework involves two phases: utilizing language models to synthesize data and applying these data for training. We also introduce a toxic contrastive loss that encourages the model generation away from the negative toxic samples. Experiments on various LLMs have verified the effectiveness of our MSD framework, which can yield the best performance compared to baselines.", }
Text detoxification aims to minimize the risk of language models producing toxic content. Existing detoxification methods of directly constraining the model output or further training the model on the non-toxic corpus fail to achieve a decent balance between detoxification effectiveness and generation quality. This issue stems from the neglect of constrain imposed by the context since language models are designed to generate output that closely matches the context while detoxification methods endeavor to ensure the safety of the output even if it semantically deviates from the context. In view of this, we introduce a Context-aware Model self-Detoxification (CMD) framework that pays attention to both the context and the detoxification process, i.e., first detoxifying the context and then making the language model generate along the safe context. Specifically, CMD framework involves two phases: utilizing language models to synthesize data and applying these data for training. We also introduce a toxic contrastive loss that encourages the model generation away from the negative toxic samples. Experiments on various LLMs have verified the effectiveness of our MSD framework, which can yield the best performance compared to baselines.
[ "Tang, Zecheng", "Zhou, Keyan", "Li, Juntao", "Ding, Yuyang", "Wang, Pinzheng", "Bowen, Yan", "Hua, Renjie", "Zhang, Min" ]
CMD: a framework for Context-aware Model self-Detoxification
emnlp-main.115
Poster
2308.08295
[ "https://github.com/codinnlg/detox-cot" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.116.bib
https://aclanthology.org/2024.emnlp-main.116/
@inproceedings{hu-etal-2024-embedding, title = "Embedding and Gradient Say Wrong: A White-Box Method for Hallucination Detection", author = "Hu, Xiaomeng and Zhang, Yiming and Peng, Ru and Zhang, Haozhe and Wu, Chenwei and Chen, Gang and Zhao, Junbo", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.116", pages = "1950--1959", abstract = "In recent years, large language models (LLMs) have achieved remarkable success in the field of natural language generation. Compared to previous small-scale models, they are capable of generating fluent output based on the provided prefix or prompt. However, one critical challenge {---} the *hallucination* problem {---} remains to be resolved. Generally, the community refers to the undetected hallucination scenario where the LLMs generate text unrelated to the input text or facts. In this study, we intend to model the distributional distance between the regular conditional output and the unconditional output, which is generated without a given input text. Based upon Taylor Expansion for this distance at the output probability space, our approach manages to leverage the embedding and first-order gradient information. The resulting approach is plug-and-play that can be easily adapted to any autoregressive LLM. On the hallucination benchmarks HADES and other datasets, our approach achieves state-of-the-art performance.", }
In recent years, large language models (LLMs) have achieved remarkable success in the field of natural language generation. Compared to previous small-scale models, they are capable of generating fluent output based on the provided prefix or prompt. However, one critical challenge {---} the *hallucination* problem {---} remains to be resolved. Generally, the community refers to the undetected hallucination scenario where the LLMs generate text unrelated to the input text or facts. In this study, we intend to model the distributional distance between the regular conditional output and the unconditional output, which is generated without a given input text. Based upon Taylor Expansion for this distance at the output probability space, our approach manages to leverage the embedding and first-order gradient information. The resulting approach is plug-and-play that can be easily adapted to any autoregressive LLM. On the hallucination benchmarks HADES and other datasets, our approach achieves state-of-the-art performance.
[ "Hu, Xiaomeng", "Zhang, Yiming", "Peng, Ru", "Zhang, Haozhe", "Wu, Chenwei", "Chen, Gang", "Zhao, Junbo" ]
Embedding and Gradient Say Wrong: A White-Box Method for Hallucination Detection
emnlp-main.116
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.117.bib
https://aclanthology.org/2024.emnlp-main.117/
@inproceedings{zhang-etal-2024-tcsinger, title = "{TCS}inger: Zero-Shot Singing Voice Synthesis with Style Transfer and Multi-Level Style Control", author = "Zhang, Yu and Jiang, Ziyue and Li, Ruiqi and Pan, Changhao and He, Jinzheng and Huang, Rongjie and Wang, Chuxin and Zhao, Zhou", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.117", pages = "1960--1975", abstract = "Zero-shot singing voice synthesis (SVS) with style transfer and style control aims to generate high-quality singing voices with unseen timbres and styles (including singing method, emotion, rhythm, technique, and pronunciation) from audio and text prompts. However, the multifaceted nature of singing styles poses a significant challenge for effective modeling, transfer, and control. Furthermore, current SVS models often fail to generate singing voices rich in stylistic nuances for unseen singers. To address these challenges, we introduce TCSinger, the first zero-shot SVS model for style transfer across cross-lingual speech and singing styles, along with multi-level style control. Specifically, TCSinger proposes three primary modules: 1) the clustering style encoder employs a clustering vector quantization model to stably condense style information into a compact latent space; 2) the Style and Duration Language Model (S{\&}D-LM) concurrently predicts style information and phoneme duration, which benefits both; 3) the style adaptive decoder uses a novel mel-style adaptive normalization method to generate singing voices with enhanced details. Experimental results show that TCSinger outperforms all baseline models in synthesis quality, singer similarity, and style controllability across various tasks, including zero-shot style transfer, multi-level style control, cross-lingual style transfer, and speech-to-singing style transfer.", }
Zero-shot singing voice synthesis (SVS) with style transfer and style control aims to generate high-quality singing voices with unseen timbres and styles (including singing method, emotion, rhythm, technique, and pronunciation) from audio and text prompts. However, the multifaceted nature of singing styles poses a significant challenge for effective modeling, transfer, and control. Furthermore, current SVS models often fail to generate singing voices rich in stylistic nuances for unseen singers. To address these challenges, we introduce TCSinger, the first zero-shot SVS model for style transfer across cross-lingual speech and singing styles, along with multi-level style control. Specifically, TCSinger proposes three primary modules: 1) the clustering style encoder employs a clustering vector quantization model to stably condense style information into a compact latent space; 2) the Style and Duration Language Model (S{\&}D-LM) concurrently predicts style information and phoneme duration, which benefits both; 3) the style adaptive decoder uses a novel mel-style adaptive normalization method to generate singing voices with enhanced details. Experimental results show that TCSinger outperforms all baseline models in synthesis quality, singer similarity, and style controllability across various tasks, including zero-shot style transfer, multi-level style control, cross-lingual style transfer, and speech-to-singing style transfer.
[ "Zhang, Yu", "Jiang, Ziyue", "Li, Ruiqi", "Pan, Changhao", "He, Jinzheng", "Huang, Rongjie", "Wang, Chuxin", "Zhao, Zhou" ]
TCSinger: Zero-Shot Singing Voice Synthesis with Style Transfer and Multi-Level Style Control
emnlp-main.117
Poster
2409.15977
[ "https://github.com/AaronZ345/TCSinger" ]
https://huggingface.co/papers/2409.15977
0
0
0
8
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.118.bib
https://aclanthology.org/2024.emnlp-main.118/
@inproceedings{li-etal-2024-helpful, title = "Be Helpful but Don{'}t Talk too Much - Enhancing Helpfulness in Conversations through Relevance in Multi-Turn Emotional Support", author = "Li, Junlin and Peng, Bo and Hsu, Yu-Yin and Huang, Chu-Ren", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.118", pages = "1976--1988", abstract = "For a conversation to help and support, speakers should maintain an {``}effect-effort{''} tradeoff. As outlined in the gist of {``}Cognitive Relevance Principle{''}, helpful speakers should optimize the {``}cognitive relevance{''} through maximizing the {``}cognitive effects{''} and minimizing the {``}processing effort{''} imposed on listeners. Although preference learning methods have given rise a boon of studies in pursuit of{``}effect-optimization{''}, none have delved into the critical {``}effort-optimiazation{''} to fully cultivate the awareness of {``}optimal relevance{''} into thecognition of conversation agents. To address this gap, we integrate the {``}Cognitive Relevance Principle{''} into emotional support agents in the environment of multi-turn conversation. The results demonstrate a significant and robust improvement against the baseline systems with respect to response quality, human-likedness and supportivenss. This study offers compelling evidence for the effectiveness of the {``}Relevance Principle{''} in generating human-like, helpful, and harmless emotional support conversations. The source code will be available at https://github.com/CN-Eyetk/VLESA-ORL.git", }
For a conversation to help and support, speakers should maintain an {``}effect-effort{''} tradeoff. As outlined in the gist of the {``}Cognitive Relevance Principle{''}, helpful speakers should optimize {``}cognitive relevance{''} by maximizing the {``}cognitive effects{''} and minimizing the {``}processing effort{''} imposed on listeners. Although preference learning methods have given rise to a wealth of studies in pursuit of {``}effect-optimization{''}, none have delved into the critical {``}effort-optimization{''} needed to fully cultivate an awareness of {``}optimal relevance{''} in the cognition of conversation agents. To address this gap, we integrate the {``}Cognitive Relevance Principle{''} into emotional support agents in the environment of multi-turn conversation. The results demonstrate a significant and robust improvement over the baseline systems with respect to response quality, human-likeness and supportiveness. This study offers compelling evidence for the effectiveness of the {``}Relevance Principle{''} in generating human-like, helpful, and harmless emotional support conversations. The source code will be available at https://github.com/CN-Eyetk/VLESA-ORL.git
[ "Li, Junlin", "Peng, Bo", "Hsu, Yu-Yin", "Huang, Chu-Ren" ]
Be Helpful but Don't Talk too Much - Enhancing Helpfulness in Conversations through Relevance in Multi-Turn Emotional Support
emnlp-main.118
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.119.bib
https://aclanthology.org/2024.emnlp-main.119/
@inproceedings{kim-etal-2024-aligning, title = "Aligning Language Models to Explicitly Handle Ambiguity", author = "Kim, Hyuhng Joon and Kim, Youna and Park, Cheonbok and Kim, Junyeob and Park, Choonghyun and Yoo, Kang Min and Lee, Sang-goo and Kim, Taeuk", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.119", pages = "1989--2007", abstract = "In interactions between users and language model agents, user utterances frequently exhibit ellipsis (omission of words or phrases) or imprecision (lack of exactness) to prioritize efficiency. This can lead to varying interpretations of the same input based on different assumptions or background knowledge. It is thus crucial for agents to adeptly handle the inherent ambiguity in queries to ensure reliability. However, even state-of-the-art large language models (LLMs) still face challenges in such scenarios, primarily due to the following hurdles: (1) LLMs are not explicitly trained to deal with ambiguous utterances; (2) the degree of ambiguity perceived by the LLMs may vary depending on the possessed knowledge. To address these issues, we propose Alignment with Perceived Ambiguity (APA), a novel pipeline that aligns LLMs to manage ambiguous queries by leveraging their own assessment of ambiguity (i.e., perceived ambiguity). Experimental results on question-answering datasets demonstrate that APA empowers LLMs to explicitly detect and manage ambiguous queries while retaining the ability to answer clear questions. Furthermore, our finding proves that APA excels beyond training with gold-standard labels, especially in out-of-distribution scenarios. The data and code are available at https://github.com/heyjoonkim/APA.", }
In interactions between users and language model agents, user utterances frequently exhibit ellipsis (omission of words or phrases) or imprecision (lack of exactness) to prioritize efficiency. This can lead to varying interpretations of the same input based on different assumptions or background knowledge. It is thus crucial for agents to adeptly handle the inherent ambiguity in queries to ensure reliability. However, even state-of-the-art large language models (LLMs) still face challenges in such scenarios, primarily due to the following hurdles: (1) LLMs are not explicitly trained to deal with ambiguous utterances; (2) the degree of ambiguity perceived by the LLMs may vary depending on the possessed knowledge. To address these issues, we propose Alignment with Perceived Ambiguity (APA), a novel pipeline that aligns LLMs to manage ambiguous queries by leveraging their own assessment of ambiguity (i.e., perceived ambiguity). Experimental results on question-answering datasets demonstrate that APA empowers LLMs to explicitly detect and manage ambiguous queries while retaining the ability to answer clear questions. Furthermore, our findings show that APA surpasses training with gold-standard labels, especially in out-of-distribution scenarios. The data and code are available at https://github.com/heyjoonkim/APA.
[ "Kim, Hyuhng Joon", "Kim, Youna", "Park, Cheonbok", "Kim, Junyeob", "Park, Choonghyun", "Yoo, Kang Min", "Lee, Sang-goo", "Kim, Taeuk" ]
Aligning Language Models to Explicitly Handle Ambiguity
emnlp-main.119
Poster
2404.11972
[ "https://github.com/heyjoonkim/apa" ]
https://huggingface.co/papers/2404.11972
1
1
0
8
[]
[]
[]
[]
[]
[]
1
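A minimal sketch of one reading of the entry above: estimating how ambiguous a model itself perceives a query to be by sampling several answers and measuring disagreement. The `perceived_ambiguity` helper, the `toy_llm` stub, and the disagreement score are illustrative assumptions, not the APA pipeline.

```python
import random
from collections import Counter

def perceived_ambiguity(query: str, sample_answer, n: int = 8) -> float:
    """Crude proxy: 1 minus the relative frequency of the most common sampled answer."""
    answers = [sample_answer(query) for _ in range(n)]
    return 1.0 - Counter(answers).most_common(1)[0][1] / n

# Toy stand-in for an LLM sampled at non-zero temperature (hypothetical).
def toy_llm(query: str) -> str:
    if "the movie" in query:                      # under-specified: which movie?
        return random.choice(["2009", "2013"])
    return "Paris"

print(perceived_ambiguity("What is the capital of France?", toy_llm))  # 0.0
print(perceived_ambiguity("When was the movie released?", toy_llm))    # around 0.5
```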
https://aclanthology.org/2024.emnlp-main.120.bib
https://aclanthology.org/2024.emnlp-main.120/
@inproceedings{qi-etal-2024-tag, title = "Tag-grounded Visual Instruction Tuning with Retrieval Augmentation", author = "Qi, Daiqing and Zhao, Handong and Wei, Zijun and Li, Sheng", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.120", pages = "2008--2026", abstract = "Despite recent advances in the general visual instruction-following ability of Multimodal Large Language Models (MLLMs), they still struggle with critical problems when required to provide a precise and detailed response to a visual instruction: (1) failure to identify novel objects or entities, (2) mention of non-existent objects, and (3) neglect of object{'}s attributed details. Intuitive solutions include improving the size and quality of data or using larger foundation models. They show effectiveness in mitigating these issues, but at an expensive cost of collecting a vast amount of new data and introducing a significantly larger model. Standing at the intersection of these approaches, we examine the three object-oriented problems from the perspective of the image-to-text mapping process by the multimodal connector. In this paper, we first identify the limitations of multimodal connectors stemming from insufficient training data. Driven by this, we propose to enhance the mapping with retrieval-augmented tag tokens, which contain rich object-aware information such as object names and attributes. With our Tag-grounded visual instruction tuning with retrieval Augmentation (TUNA), we outperform baselines that share the same language model and training data on 12 benchmarks. Furthermore, we show the zero-shot capability of TUNA when provided with specific datastores.", }
Despite recent advances in the general visual instruction-following ability of Multimodal Large Language Models (MLLMs), they still struggle with critical problems when required to provide a precise and detailed response to a visual instruction: (1) failure to identify novel objects or entities, (2) mention of non-existent objects, and (3) neglect of object{'}s attributed details. Intuitive solutions include improving the size and quality of data or using larger foundation models. They show effectiveness in mitigating these issues, but at an expensive cost of collecting a vast amount of new data and introducing a significantly larger model. Standing at the intersection of these approaches, we examine the three object-oriented problems from the perspective of the image-to-text mapping process by the multimodal connector. In this paper, we first identify the limitations of multimodal connectors stemming from insufficient training data. Driven by this, we propose to enhance the mapping with retrieval-augmented tag tokens, which contain rich object-aware information such as object names and attributes. With our Tag-grounded visual instruction tuning with retrieval Augmentation (TUNA), we outperform baselines that share the same language model and training data on 12 benchmarks. Furthermore, we show the zero-shot capability of TUNA when provided with specific datastores.
[ "Qi, Daiqing", "Zhao, H", "ong", "Wei, Zijun", "Li, Sheng" ]
Tag-grounded Visual Instruction Tuning with Retrieval Augmentation
emnlp-main.120
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.121.bib
https://aclanthology.org/2024.emnlp-main.121/
@inproceedings{zhang-etal-2024-glape, title = "{GL}a{PE}: Gold Label-agnostic Prompt Evaluation for Large Language Models", author = "Zhang, Xuanchang and Zhang, Zhuosheng and Zhao, Hai", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.121", pages = "2027--2039", abstract = "Despite the rapid progress of large language models (LLMs), their task performance remains sensitive to prompt design. Recent studies have explored leveraging the LLM itself as an optimizer to identify optimal prompts that maximize task accuracy. However, when evaluating prompts, such approaches heavily rely on elusive manually annotated gold labels to calculate task accuracy for each candidate prompt, which hinders its generality. To overcome the limitation, this work proposes GLaPE, a gold label-agnostic prompt evaluation method to alleviate dependence on gold labels. GLaPE is composed of two critical aspects: self-consistency evaluation of a single prompt and mutual-consistency refinement across multiple prompts. Experimental results on 8 widely-recognized reasoning tasks demonstrate that GLaPE can produce more effective prompts, achieving performance comparable to those derived from manually annotated gold labels. Analysis shows that GLaPE provides reliable evaluations aligned with accuracy, even in the absence of gold labels. Code is publicly available at **Anonymous**.", }
Despite the rapid progress of large language models (LLMs), their task performance remains sensitive to prompt design. Recent studies have explored leveraging the LLM itself as an optimizer to identify optimal prompts that maximize task accuracy. However, when evaluating prompts, such approaches heavily rely on elusive manually annotated gold labels to calculate task accuracy for each candidate prompt, which hinders its generality. To overcome the limitation, this work proposes GLaPE, a gold label-agnostic prompt evaluation method to alleviate dependence on gold labels. GLaPE is composed of two critical aspects: self-consistency evaluation of a single prompt and mutual-consistency refinement across multiple prompts. Experimental results on 8 widely-recognized reasoning tasks demonstrate that GLaPE can produce more effective prompts, achieving performance comparable to those derived from manually annotated gold labels. Analysis shows that GLaPE provides reliable evaluations aligned with accuracy, even in the absence of gold labels. Code is publicly available at **Anonymous**.
[ "Zhang, Xuanchang", "Zhang, Zhuosheng", "Zhao, Hai" ]
GLaPE: Gold Label-agnostic Prompt Evaluation for Large Language Models
emnlp-main.121
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
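The entry above evaluates prompts without gold labels via self-consistency; a minimal sketch of that component is given below. The `toy_ask` stand-in and the averaging scheme are assumptions for illustration, and the paper's mutual-consistency refinement across prompts is not reproduced.

```python
import random
from collections import Counter

def self_consistency(prompt: str, question: str, ask, n: int = 10) -> float:
    """Fraction of sampled answers agreeing with the majority answer (no gold label needed)."""
    answers = [ask(prompt, question) for _ in range(n)]
    return Counter(answers).most_common(1)[0][1] / n

def evaluate_prompt(prompt: str, questions, ask) -> float:
    # Average self-consistency over a question set stands in for labelled accuracy.
    return sum(self_consistency(prompt, q, ask) for q in questions) / len(questions)

# Toy stand-in: a "better" prompt makes the sampled answers less noisy.
def toy_ask(prompt: str, question: str) -> str:
    noise = 0.1 if "step by step" in prompt else 0.5
    return "42" if random.random() > noise else random.choice(["41", "43"])

questions = ["q1", "q2", "q3"]
for p in ["Answer:", "Let's think step by step."]:
    print(p, round(evaluate_prompt(p, questions, toy_ask), 2))
```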
https://aclanthology.org/2024.emnlp-main.122.bib
https://aclanthology.org/2024.emnlp-main.122/
@inproceedings{xia-etal-2024-decoding, title = "Decoding the Echoes of Vision from f{MRI}: Memory Disentangling for Past Semantic Information", author = "Xia, Runze and Yin, Congchi and Li, Piji", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.122", pages = "2040--2052", }
No abstract found
[ "Xia, Runze", "Yin, Congchi", "Li, Piji" ]
Decoding the Echoes of Vision from fMRI: Memory Disentangling for Past Semantic Information
emnlp-main.122
Oral
2409.20428
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.123.bib
https://aclanthology.org/2024.emnlp-main.123/
@inproceedings{li-etal-2024-optimizing, title = "Optimizing Code Retrieval: High-Quality and Scalable Dataset Annotation through Large Language Models", author = "Li, Rui and Liu, Qi and He, Liyang and Zhang, Zheng and Zhang, Hao and Ye, Shengyu and Lu, Junyu and Huang, Zhenya", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.123", pages = "2053--2065", abstract = "Code retrieval aims to identify code from extensive codebases that semantically aligns with a given query code snippet. Collecting a broad and high-quality set of query and code pairs is crucial to the success of this task. However, existing data collection methods struggle to effectively balance scalability and annotation quality. In this paper, we first analyze the factors influencing the quality of function annotations generated by Large Language Models (LLMs). We find that the invocation of intra-repository functions and third-party APIs plays a significant role. Building on this insight, we propose a novel annotation method that enhances the annotation context by incorporating the content of functions called within the repository and information on third-party API functionalities. Additionally, we integrate LLMs with a novel sorting method to address the multi-level function call relationships within repositories. Furthermore, by applying our proposed method across a range of repositories, we have developed the Query4Code dataset. The quality of this synthesized dataset is validated through both model training and human evaluation, demonstrating high-quality annotations. Moreover, cost analysis confirms the scalability of our annotation method.", }
Code retrieval aims to identify code from extensive codebases that semantically aligns with a given query code snippet. Collecting a broad and high-quality set of query and code pairs is crucial to the success of this task. However, existing data collection methods struggle to effectively balance scalability and annotation quality. In this paper, we first analyze the factors influencing the quality of function annotations generated by Large Language Models (LLMs). We find that the invocation of intra-repository functions and third-party APIs plays a significant role. Building on this insight, we propose a novel annotation method that enhances the annotation context by incorporating the content of functions called within the repository and information on third-party API functionalities. Additionally, we integrate LLMs with a novel sorting method to address the multi-level function call relationships within repositories. Furthermore, by applying our proposed method across a range of repositories, we have developed the Query4Code dataset. The quality of this synthesized dataset is validated through both model training and human evaluation, demonstrating high-quality annotations. Moreover, cost analysis confirms the scalability of our annotation method.
[ "Li, Rui", "Liu, Qi", "He, Liyang", "Zhang, Zheng", "Zhang, Hao", "Ye, Shengyu", "Lu, Junyu", "Huang, Zhenya" ]
Optimizing Code Retrieval: High-Quality and Scalable Dataset Annotation through Large Language Models
emnlp-main.123
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
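In the spirit of the entry above (enriching annotation context with intra-repository calls before asking an LLM to annotate), the sketch below collects the source of functions called by a target function using Python's `ast` module. The toy source snippet and the prompt wording are assumptions; the paper also incorporates third-party API information and a sorting method for multi-level call relationships, which are omitted here.

```python
import ast
import textwrap

SOURCE = textwrap.dedent("""
    def normalize(x):
        return [v / max(x) for v in x]

    def score(xs):
        return sum(normalize(xs)) / len(xs)
""")

def called_names(fn: ast.FunctionDef) -> set:
    # Names of functions invoked directly inside `fn`.
    return {n.func.id for n in ast.walk(fn)
            if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}

tree = ast.parse(SOURCE)
funcs = {f.name: f for f in tree.body if isinstance(f, ast.FunctionDef)}
target = funcs["score"]
context = [ast.get_source_segment(SOURCE, funcs[name])
           for name in called_names(target) if name in funcs]

prompt = ("Summarize what the following function does, using the helper "
          "definitions as context.\n\nHelpers:\n" + "\n".join(context) +
          "\n\nFunction:\n" + ast.get_source_segment(SOURCE, target))
print(prompt)
```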
https://aclanthology.org/2024.emnlp-main.124.bib
https://aclanthology.org/2024.emnlp-main.124/
@inproceedings{yang-etal-2024-towards, title = "Towards Difficulty-Agnostic Efficient Transfer Learning for Vision-Language Models", author = "Yang, Yongjin and Ko, Jongwoo and Yun, Se-Young", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.124", pages = "2066--2085", abstract = "Vision-language models (VLMs) like CLIP have demonstrated remarkable applicability across a variety of downstream tasks, including zero-shot image classification. Recently, the use of prompts or adapters for efficient transfer learning (ETL) has gained significant attention for effectively adapting to downstream tasks. However, previous studies have overlooked the challenge of varying transfer difficulty of downstream tasks. In this paper, we empirically analyze how each ETL method behaves with respect to transfer difficulty. Our observations indicate that utilizing vision prompts and text adapters is crucial for adaptability and generalizability in domains with high difficulty. Also, by applying an adaptive ensemble approach that integrates task-adapted VLMs with pre-trained VLMs and strategically leverages more general knowledge in low-difficulty and less in high-difficulty domains, we consistently enhance performance across both types of domains. Based on these observations, we propose an adaptive ensemble method that combines visual prompts and text adapters with pre-trained VLMs, tailored by transfer difficulty, to achieve optimal performance for any target domain. Upon experimenting with extensive benchmarks, our method consistently outperforms all baselines, particularly on unseen tasks, demonstrating its effectiveness.", }
Vision-language models (VLMs) like CLIP have demonstrated remarkable applicability across a variety of downstream tasks, including zero-shot image classification. Recently, the use of prompts or adapters for efficient transfer learning (ETL) has gained significant attention for effectively adapting to downstream tasks. However, previous studies have overlooked the challenge of varying transfer difficulty of downstream tasks. In this paper, we empirically analyze how each ETL method behaves with respect to transfer difficulty. Our observations indicate that utilizing vision prompts and text adapters is crucial for adaptability and generalizability in domains with high difficulty. Also, by applying an adaptive ensemble approach that integrates task-adapted VLMs with pre-trained VLMs and strategically leverages more general knowledge in low-difficulty and less in high-difficulty domains, we consistently enhance performance across both types of domains. Based on these observations, we propose an adaptive ensemble method that combines visual prompts and text adapters with pre-trained VLMs, tailored by transfer difficulty, to achieve optimal performance for any target domain. Upon experimenting with extensive benchmarks, our method consistently outperforms all baselines, particularly on unseen tasks, demonstrating its effectiveness.
[ "Yang, Yongjin", "Ko, Jongwoo", "Yun, Se-Young" ]
Towards Difficulty-Agnostic Efficient Transfer Learning for Vision-Language Models
emnlp-main.124
Poster
2311.15569
[ "https://github.com/YangYongJin/APEX" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
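One plausible reading of the adaptive ensemble described above is a difficulty-dependent blend of task-adapted and pre-trained predictions; a tiny sketch follows. The weighting rule and the example logits are assumptions, not the paper's formula.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def adaptive_ensemble(logits_adapted, logits_pretrained, difficulty: float) -> np.ndarray:
    """Blend task-adapted and pre-trained class probabilities.

    difficulty in [0, 1]: harder transfer leans on the adapted model,
    easier transfer keeps more of the general pre-trained knowledge."""
    return difficulty * softmax(logits_adapted) + (1 - difficulty) * softmax(logits_pretrained)

probs = adaptive_ensemble(np.array([2.0, 0.1, -1.0]), np.array([0.5, 0.4, 0.3]), difficulty=0.7)
print(probs.round(3))
```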
https://aclanthology.org/2024.emnlp-main.125.bib
https://aclanthology.org/2024.emnlp-main.125/
@inproceedings{he-etal-2024-advancing, title = "Advancing Process Verification for Large Language Models via Tree-Based Preference Learning", author = "He, Mingqian and Shen, Yongliang and Zhang, Wenqi and Tan, Zeqi and Lu, Weiming", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.125", pages = "2086--2099", abstract = "Large Language Models (LLMs) have demonstrated remarkable potential in handling complex reasoning tasks by generating step-by-step rationales. Some methods have proven effective in boosting accuracy by introducing extra verifiers to assess these paths. However, existing verifiers, typically trained on binary-labeled reasoning paths, fail to fully utilize the relative merits of intermediate steps, thereby limiting the effectiveness of the feedback provided. To overcome this limitation, we propose Tree-based Preference Learning Verifier (Tree-PLV), a novel approach that constructs reasoning trees via a best-first search algorithm and collects step-level paired data for preference training. Compared to traditional binary classification, step-level preferences more finely capture the nuances between reasoning steps, allowing for a more precise evaluation of the complete reasoning path. We empirically evaluate Tree-PLV across a range of arithmetic and commonsense reasoning tasks, where it significantly outperforms existing benchmarks. For instance, Tree-PLV achieved substantial performance gains over the Mistral-7B self-consistency baseline on GSM8K (67.55{\%} → 82.79{\%}), MATH (17.00{\%} → 26.80{\%}), CSQA (68.14{\%} → 72.97{\%}), and StrategyQA (82.86{\%} → 83.25{\%}). Additionally, our study explores the appropriate granularity for applying preference learning, revealing that step-level guidance provides feedback that better aligns with the evaluation of the reasoning process.", }
Large Language Models (LLMs) have demonstrated remarkable potential in handling complex reasoning tasks by generating step-by-step rationales. Some methods have proven effective in boosting accuracy by introducing extra verifiers to assess these paths. However, existing verifiers, typically trained on binary-labeled reasoning paths, fail to fully utilize the relative merits of intermediate steps, thereby limiting the effectiveness of the feedback provided. To overcome this limitation, we propose Tree-based Preference Learning Verifier (Tree-PLV), a novel approach that constructs reasoning trees via a best-first search algorithm and collects step-level paired data for preference training. Compared to traditional binary classification, step-level preferences more finely capture the nuances between reasoning steps, allowing for a more precise evaluation of the complete reasoning path. We empirically evaluate Tree-PLV across a range of arithmetic and commonsense reasoning tasks, where it significantly outperforms existing benchmarks. For instance, Tree-PLV achieved substantial performance gains over the Mistral-7B self-consistency baseline on GSM8K (67.55{\%} → 82.79{\%}), MATH (17.00{\%} → 26.80{\%}), CSQA (68.14{\%} → 72.97{\%}), and StrategyQA (82.86{\%} → 83.25{\%}). Additionally, our study explores the appropriate granularity for applying preference learning, revealing that step-level guidance provides feedback that better aligns with the evaluation of the reasoning process.
[ "He, Mingqian", "Shen, Yongliang", "Zhang, Wenqi", "Tan, Zeqi", "Lu, Weiming" ]
Advancing Process Verification for Large Language Models via Tree-Based Preference Learning
emnlp-main.125
Poster
2407.00390
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
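A schematic of the data-collection loop described above: best-first search over partial reasoning paths, emitting step-level preference pairs whenever sibling steps receive different scores. The `expand` and `score` callables are hypothetical stand-ins for an LLM step generator and a value estimate; the toy demo at the end only checks that the loop runs.

```python
import heapq
import itertools

def best_first_preferences(question, expand, score, budget=20):
    """Collect (prefix, better_step, worse_step) preference triples.

    expand(question, prefix) -> list of candidate next reasoning steps (strings)
    score(question, prefix + [step]) -> float, higher is better
    """
    counter = itertools.count()              # tie-breaker so heapq never compares lists
    frontier = [(0.0, next(counter), [])]    # (negative score, id, path prefix)
    preferences = []
    while frontier and budget > 0:
        _, _, prefix = heapq.heappop(frontier)
        steps = expand(question, prefix)
        if not steps:
            continue
        scored = sorted(((score(question, prefix + [s]), s) for s in steps), reverse=True)
        if len(scored) > 1 and scored[0][0] > scored[-1][0]:
            preferences.append((prefix, scored[0][1], scored[-1][1]))
        for sc, s in scored:
            heapq.heappush(frontier, (-sc, next(counter), prefix + [s]))
        budget -= 1
    return preferences

# Toy demo: steps are digits, the scorer prefers paths made of identical digits.
demo = best_first_preferences(
    "toy", lambda q, p: [] if len(p) >= 2 else ["1", "2"],
    lambda q, p: float(len(set(p)) == 1), budget=5)
print(demo)
```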
https://aclanthology.org/2024.emnlp-main.126.bib
https://aclanthology.org/2024.emnlp-main.126/
@inproceedings{lin-etal-2024-inversion, title = "An Inversion Attack Against Obfuscated Embedding Matrix in Language Model Inference", author = "Lin, Yu and Zhang, Qizhi and Cai, Quanwei and Hong, Jue and Ye, Wu and Liu, Huiqi and Duan, Bing", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.126", pages = "2100--2104", abstract = "With the rapidly-growing deployment of large language model (LLM) inference services, privacy concerns have arisen regarding to the user input data. Recent studies are exploring transforming user inputs to obfuscated embedded vectors, so that the data will not be eavesdropped by service provides. However, in this paper we show that again, without a solid and deliberate security design and analysis, such embedded vector obfuscation failed to protect users{'} privacy. We demonstrate the conclusion via conducting a novel inversion attack called Element-wise Differential Nearest Neighbor (EDNN) on the glide-reflection proposed in (CITATION), and the result showed that the original user input text can be 100{\%} recovered from the obfuscated embedded vectors. We further analyze security requirements on embedding obfuscation and present several remedies to our proposed attack.", }
With the rapidly growing deployment of large language model (LLM) inference services, privacy concerns have arisen regarding user input data. Recent studies explore transforming user inputs into obfuscated embedded vectors, so that the data cannot be eavesdropped by service providers. However, in this paper we show that, once again, without a solid and deliberate security design and analysis, such embedded vector obfuscation fails to protect users{'} privacy. We demonstrate this by conducting a novel inversion attack called Element-wise Differential Nearest Neighbor (EDNN) on the glide-reflection scheme proposed in (CITATION), and the results show that the original user input text can be 100{\%} recovered from the obfuscated embedded vectors. We further analyze security requirements on embedding obfuscation and present several remedies to our proposed attack.
[ "Lin, Yu", "Zhang, Qizhi", "Cai, Quanwei", "Hong, Jue", "Ye, Wu", "Liu, Huiqi", "Duan, Bing" ]
An Inversion Attack Against Obfuscated Embedding Matrix in Language Model Inference
emnlp-main.126
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
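A toy reconstruction of the differential nearest-neighbour idea behind the attack above, under a simplified additive-shift obfuscation: differences between observed vectors cancel the unknown shift and can be matched against differences of the public embedding matrix. The dimensions, the noise-free setting, and the brute-force search over the first token are assumptions; the paper targets a glide-reflection scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, n = 500, 64, 10
E = rng.normal(size=(V, d))            # public vocabulary embedding matrix
secret = rng.integers(0, V, size=n)    # user's private token ids
shift = rng.normal(size=d) * 3.0       # unknown obfuscation vector added to every embedding
observed = E[secret] + shift

# Differences between observed vectors cancel the shared shift, so they can be
# matched against differences of vocabulary embeddings. Brute-force the first
# token, then recover the rest by nearest neighbour on the de-shifted vectors.
best_err, best_ids = np.inf, None
for v0 in range(V):
    candidates = observed - observed[0] + E[v0]          # de-obfuscated under hypothesis v0
    dists = ((candidates[:, None, :] - E[None, :, :]) ** 2).sum(-1)
    ids = dists.argmin(axis=1)
    err = dists[np.arange(n), ids].sum()
    if err < best_err:
        best_err, best_ids = err, ids

print("recovered:", best_ids.tolist())
print("secret:   ", secret.tolist())
```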
https://aclanthology.org/2024.emnlp-main.127.bib
https://aclanthology.org/2024.emnlp-main.127/
@inproceedings{he-etal-2024-videoscore, title = "{V}ideo{S}core: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation", author = "He, Xuan and Jiang, Dongfu and Zhang, Ge and Ku, Max and Soni, Achint and Siu, Sherman and Chen, Haonan and Chandra, Abhranil and Jiang, Ziyan and Arulraj, Aaran and Wang, Kai and Do, Quy Duc and Ni, Yuansheng and Lyu, Bohan and Narsupalli, Yaswanth and Fan, Rongqi and Lyu, Zhiheng and Lin, Bill Yuchen and Chen, Wenhu", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.127", pages = "2105--2123", abstract = "The recent years have witnessed great advances in video generation. However, the development of automatic video metrics is lagging significantly behind. None of the existing metric is able to provide reliable scores over generated videos. The main barrier is the lack of large-scale human-annotated dataset. In this paper, we release VideoFeedback, the first large-scale dataset containing human-provided multi-aspect score over 37.6K synthesized videos from 11 existing video generative models. We train VideoScore (initialized from Mantis)based on VideoFeedback to enable automatic video quality assessment. Experiments show that the Spearman{'}s correlation betweenVideoScore and humans can reach 77.1 on VideoFeedback-test, beating the prior best metrics by about 50 points. Further result onother held-out EvalCrafter, GenAI-Bench, and VBench show that VideoScore has consistently much higher correlation with humanjudges than other metrics. Due to these results, we believe VideoScore can serve as a great proxy for human raters to (1) rate different video models to track progress (2) simulate fine-grained human feedback in Reinforcement Learning with Human Feedback (RLHF) to improve current video generation models.", }
Recent years have witnessed great advances in video generation. However, the development of automatic video metrics is lagging significantly behind. None of the existing metrics is able to provide reliable scores over generated videos. The main barrier is the lack of a large-scale human-annotated dataset. In this paper, we release VideoFeedback, the first large-scale dataset containing human-provided multi-aspect scores over 37.6K synthesized videos from 11 existing video generative models. We train VideoScore (initialized from Mantis) based on VideoFeedback to enable automatic video quality assessment. Experiments show that the Spearman{'}s correlation between VideoScore and humans can reach 77.1 on VideoFeedback-test, beating the prior best metrics by about 50 points. Further results on the other held-out sets EvalCrafter, GenAI-Bench, and VBench show that VideoScore has consistently much higher correlation with human judges than other metrics. Due to these results, we believe VideoScore can serve as a great proxy for human raters to (1) rate different video models to track progress and (2) simulate fine-grained human feedback in Reinforcement Learning with Human Feedback (RLHF) to improve current video generation models.
[ "He, Xuan", "Jiang, Dongfu", "Zhang, Ge", "Ku, Max", "Soni, Achint", "Siu, Sherman", "Chen, Haonan", "Ch", "ra, Abhranil", "Jiang, Ziyan", "Arulraj, Aaran", "Wang, Kai", "Do, Quy Duc", "Ni, Yuansheng", "Lyu, Bohan", "Narsupalli, Yaswanth", "Fan, Rongqi", "Lyu, Zhiheng", "Lin, Bill Yuchen", "Chen, Wenhu" ]
VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation
emnlp-main.127
Poster
2406.15252
[ "" ]
https://huggingface.co/papers/2406.15252
14
14
1
19
[ "TIGER-Lab/VideoScore", "TIGER-Lab/VideoScore-anno-only" ]
[ "TIGER-Lab/VideoFeedback", "TIGER-Lab/VideoScore-Bench" ]
[ "TIGER-Lab/VideoScore", "TIGER-Lab/VideoScore-Leaderboard", "Mantis-VL/VideoScore" ]
[ "TIGER-Lab/VideoScore", "TIGER-Lab/VideoScore-anno-only" ]
[ "TIGER-Lab/VideoFeedback", "TIGER-Lab/VideoScore-Bench" ]
[ "TIGER-Lab/VideoScore", "TIGER-Lab/VideoScore-Leaderboard", "Mantis-VL/VideoScore" ]
1
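The headline numbers in the entry above are Spearman correlations between automatic scores and human ratings; a minimal sketch of that computation is shown below with made-up placeholder scores.

```python
from scipy.stats import spearmanr

# Placeholder values: substitute per-video metric outputs and human ratings.
metric_scores = [3.1, 2.4, 4.0, 1.2, 3.6, 2.9]
human_scores  = [3.0, 2.0, 4.5, 1.0, 3.5, 3.2]

rho, pvalue = spearmanr(metric_scores, human_scores)
print(f"Spearman correlation: {rho:.3f} (p = {pvalue:.3f})")
```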
https://aclanthology.org/2024.emnlp-main.128.bib
https://aclanthology.org/2024.emnlp-main.128/
@inproceedings{wan-etal-2024-logicasker, title = "{L}ogic{A}sker: Evaluating and Improving the Logical Reasoning Ability of Large Language Models", author = "Wan, Yuxuan and Wang, Wenxuan and Yang, Yiliu and Yuan, Youliang and Huang, Jen-tse and He, Pinjia and Jiao, Wenxiang and Lyu, Michael", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.128", pages = "2124--2155", abstract = "We introduce LogicAsker, a novel approach for evaluating and enhancing the logical reasoning capabilities of large language models (LLMs) such as ChatGPT and GPT-4. Despite LLMs{'} prowess in tasks like writing assistance, code generation, and machine translation, assessing their ability to reason has been challenging. Traditional evaluations often prioritize accuracy on downstream tasks over direct assessments of reasoning processes. LogicAsker addresses this gap by employing a set of atomic reasoning skills grounded in propositional and predicate logic to systematically examine and improve the reasoning prowess of LLMs. Our methodology reveals significant gaps in LLMs{'} learning of logical rules, with identified reasoning failures ranging from 29{\%} to 90{\%} across different models. Moreover, we leverage these findings to construct targeted demonstration examples and fine-tune data, notably enhancing logical reasoning in models like GPT-4o by up to 5{\%}. To our knowledge, this is the first effort to utilize test case outcomes to effectively refine LLMs{'} formal reasoning capabilities. We make our code, data, and results publicly available(https://github.com/yxwan123/LogicAsker) to facilitate further research and replication of our findings.", }
We introduce LogicAsker, a novel approach for evaluating and enhancing the logical reasoning capabilities of large language models (LLMs) such as ChatGPT and GPT-4. Despite LLMs{'} prowess in tasks like writing assistance, code generation, and machine translation, assessing their ability to reason has been challenging. Traditional evaluations often prioritize accuracy on downstream tasks over direct assessments of reasoning processes. LogicAsker addresses this gap by employing a set of atomic reasoning skills grounded in propositional and predicate logic to systematically examine and improve the reasoning prowess of LLMs. Our methodology reveals significant gaps in LLMs{'} learning of logical rules, with identified reasoning failures ranging from 29{\%} to 90{\%} across different models. Moreover, we leverage these findings to construct targeted demonstration examples and fine-tune data, notably enhancing logical reasoning in models like GPT-4o by up to 5{\%}. To our knowledge, this is the first effort to utilize test case outcomes to effectively refine LLMs{'} formal reasoning capabilities. We make our code, data, and results publicly available(https://github.com/yxwan123/LogicAsker) to facilitate further research and replication of our findings.
[ "Wan, Yuxuan", "Wang, Wenxuan", "Yang, Yiliu", "Yuan, Youliang", "Huang, Jen-tse", "He, Pinjia", "Jiao, Wenxiang", "Lyu, Michael" ]
LogicAsker: Evaluating and Improving the Logical Reasoning Ability of Large Language Models
emnlp-main.128
Poster
2401.00757
[ "https://github.com/yxwan123/logicasker" ]
https://huggingface.co/papers/2401.00757
1
0
0
8
[]
[]
[]
[]
[]
[]
1
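In the spirit of the skill-based probing described above, the sketch below generates an atomic propositional-logic test case (modus ponens, plus a denying-the-antecedent skill) and grades a free-form answer. The templates, fact pairs, and grading rule are assumptions for illustration, not LogicAsker's actual test suite.

```python
import random

SKILLS = {
    "modus ponens": ("If {p}, then {q}. {p}. Does it follow that {q}?", "yes"),
    "denying antecedent": ("If {p}, then {q}. It is not the case that {p}. "
                           "Does it follow that it is not the case that {q}?", "no"),
}
FACTS = [("it rains", "the ground gets wet"),
         ("the alarm rings", "people leave the building")]

def make_case(skill: str):
    p, q = random.choice(FACTS)
    template, gold = SKILLS[skill]
    return template.format(p=p, q=q), gold

def grade(model_answer: str, gold: str) -> bool:
    # Crude check: the answer should start with the expected yes/no verdict.
    return model_answer.strip().lower().startswith(gold)

question, gold = make_case("modus ponens")
print(question)
print("correct" if grade("Yes, it follows.", gold) else "incorrect")
```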
https://aclanthology.org/2024.emnlp-main.129.bib
https://aclanthology.org/2024.emnlp-main.129/
@inproceedings{yi-etal-2024-integrating, title = "Integrating Structural Semantic Knowledge for Enhanced Information Extraction Pre-training", author = "Yi, Xiaoyang and Bao, Yuru and Zhang, Jian and Qin, Yifang and Lin, Faxin", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.129", pages = "2156--2171", abstract = "Information Extraction (IE), aiming to extract structured information from unstructured natural language texts, can significantly benefit from pre-trained language models. However, existing pre-training methods solely focus on exploiting the textual knowledge, relying extensively on annotated large-scale datasets, which is labor-intensive and thus limits the scalability and versatility of the resulting models. To address these issues, we propose SKIE, a novel pre-training framework tailored for IE that integrates structural semantic knowledge via contrastive learning, effectively alleviating the annotation burden. Specifically, SKIE utilizes Abstract Meaning Representation (AMR) as a low-cost supervision source to boost model performance without human intervention. By enhancing the topology of AMR graphs, SKIE derives high-quality cohesive subgraphs as additional training samples, providing diverse multi-level structural semantic knowledge. Furthermore, SKIE refines the graph encoder to better capture cohesive information and edge relation information, thereby improving the pre-training efficacy. Extensive experimental results demonstrate that SKIE outperforms state-of-the-art baselines across multiple IE tasks and showcases exceptional performance in few-shot and zero-shot settings.", }
Information Extraction (IE), aiming to extract structured information from unstructured natural language texts, can significantly benefit from pre-trained language models. However, existing pre-training methods solely focus on exploiting the textual knowledge, relying extensively on annotated large-scale datasets, which is labor-intensive and thus limits the scalability and versatility of the resulting models. To address these issues, we propose SKIE, a novel pre-training framework tailored for IE that integrates structural semantic knowledge via contrastive learning, effectively alleviating the annotation burden. Specifically, SKIE utilizes Abstract Meaning Representation (AMR) as a low-cost supervision source to boost model performance without human intervention. By enhancing the topology of AMR graphs, SKIE derives high-quality cohesive subgraphs as additional training samples, providing diverse multi-level structural semantic knowledge. Furthermore, SKIE refines the graph encoder to better capture cohesive information and edge relation information, thereby improving the pre-training efficacy. Extensive experimental results demonstrate that SKIE outperforms state-of-the-art baselines across multiple IE tasks and showcases exceptional performance in few-shot and zero-shot settings.
[ "Yi, Xiaoyang", "Bao, Yuru", "Zhang, Jian", "Qin, Yifang", "Lin, Faxin" ]
Integrating Structural Semantic Knowledge for Enhanced Information Extraction Pre-training
emnlp-main.129
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.130.bib
https://aclanthology.org/2024.emnlp-main.130/
@inproceedings{zou-etal-2024-fusegen, title = "{F}use{G}en: {PLM} Fusion for Data-generation based Zero-shot Learning", author = "Zou, Tianyuan and Liu, Yang and Li, Peng and Zhang, Jianqing and Liu, Jingjing and Zhang, Ya-Qin", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.130", pages = "2172--2190", abstract = "Data-generation based zero-shot learning, although effective in training Small Task-specific Models (STMs) via synthetic datasets generated by Pre-trained Language Models (PLMs), is often limited by the low quality of such synthetic datasets. Previous solutions have primarily focused on single PLM settings, where synthetic datasets are typically restricted to specific sub-spaces and often deviate from real-world distributions, leading to severe distribution bias. To mitigate such bias, we propose FuseGen, a novel data-generation based zero-shot learning framework that introduces a new criteria for subset selection from synthetic datasets via utilizing multiple PLMs and trained STMs. The chosen subset provides in-context feedback to each PLM, enhancing dataset quality through iterative data generation. Trained STMs are then used for sample re-weighting as well, further improving data quality. Extensive experiments across diverse tasks demonstrate that FuseGen substantially outperforms existing methods, highly effective in boosting STM performance in a PLM-agnostic way. The code is available at https://github.com/LindaLydia/FuseGen.", }
Data-generation based zero-shot learning, although effective in training Small Task-specific Models (STMs) via synthetic datasets generated by Pre-trained Language Models (PLMs), is often limited by the low quality of such synthetic datasets. Previous solutions have primarily focused on single PLM settings, where synthetic datasets are typically restricted to specific sub-spaces and often deviate from real-world distributions, leading to severe distribution bias. To mitigate such bias, we propose FuseGen, a novel data-generation based zero-shot learning framework that introduces a new criterion for subset selection from synthetic datasets, utilizing multiple PLMs and trained STMs. The chosen subset provides in-context feedback to each PLM, enhancing dataset quality through iterative data generation. Trained STMs are then used for sample re-weighting as well, further improving data quality. Extensive experiments across diverse tasks demonstrate that FuseGen substantially outperforms existing methods and is highly effective in boosting STM performance in a PLM-agnostic way. The code is available at https://github.com/LindaLydia/FuseGen.
[ "Zou, Tianyuan", "Liu, Yang", "Li, Peng", "Zhang, Jianqing", "Liu, Jingjing", "Zhang, Ya-Qin" ]
FuseGen: PLM Fusion for Data-generation based Zero-shot Learning
emnlp-main.130
Oral
2406.12527
[ "https://github.com/LindaLydia/FuseGen" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.131.bib
https://aclanthology.org/2024.emnlp-main.131/
@inproceedings{wu-etal-2024-need, title = "{I} Need Help! Evaluating {LLM}{'}s Ability to Ask for Users{'} Support: A Case Study on Text-to-{SQL} Generation", author = "Wu, Cheng-Kuang and Tam, Zhi Rui and Wu, Chao-Chung and Lin, Chieh-Yen and Lee, Hung-yi and Chen, Yun-Nung", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.131", pages = "2191--2199", abstract = "This study explores the proactive ability of LLMs to seek user support. We propose metrics to evaluate the trade-off between performance improvements and user burden, and investigate whether LLMs can determine when to request help under varying information availability. Our experiments show that without external feedback, many LLMs struggle to recognize their need for user support. The findings highlight the importance of external signals and provide insights for future research on improving support-seeking strategies. Source code: https://github.com/appier-research/i-need-help", }
This study explores the proactive ability of LLMs to seek user support. We propose metrics to evaluate the trade-off between performance improvements and user burden, and investigate whether LLMs can determine when to request help under varying information availability. Our experiments show that without external feedback, many LLMs struggle to recognize their need for user support. The findings highlight the importance of external signals and provide insights for future research on improving support-seeking strategies. Source code: https://github.com/appier-research/i-need-help
[ "Wu, Cheng-Kuang", "Tam, Zhi Rui", "Wu, Chao-Chung", "Lin, Chieh-Yen", "Lee, Hung-yi", "Chen, Yun-Nung" ]
I Need Help! Evaluating LLM's Ability to Ask for Users' Support: A Case Study on Text-to-SQL Generation
emnlp-main.131
Poster
2407.14767
[ "https://github.com/appier-research/i-need-help" ]
https://huggingface.co/papers/2407.14767
0
1
0
6
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.132.bib
https://aclanthology.org/2024.emnlp-main.132/
@inproceedings{wiegand-ruppenhofer-2024-oddballs, title = "Oddballs and Misfits: Detecting Implicit Abuse in Which Identity Groups are Depicted as Deviating from the Norm", author = "Wiegand, Michael and Ruppenhofer, Josef", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.132", pages = "2200--2218", abstract = "We address the task of detecting abusive sentences in which identity groups are depicted as deviating from the norm (e.g. Gays sprinkle flour over their gardens for good luck). These abusive utterances need not be stereotypes or negative in sentiment. We introduce the first dataset for this task. It is created via crowdsourcing and includes 7 identity groups. We also report on classification experiments.", }
We address the task of detecting abusive sentences in which identity groups are depicted as deviating from the norm (e.g. Gays sprinkle flour over their gardens for good luck). These abusive utterances need not be stereotypes or negative in sentiment. We introduce the first dataset for this task. It is created via crowdsourcing and includes 7 identity groups. We also report on classification experiments.
[ "Wieg", ", Michael", "Ruppenhofer, Josef" ]
Oddballs and Misfits: Detecting Implicit Abuse in Which Identity Groups are Depicted as Deviating from the Norm
emnlp-main.132
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.133.bib
https://aclanthology.org/2024.emnlp-main.133/
@inproceedings{yoon-etal-2024-eyes, title = "By My Eyes: Grounding Multimodal Large Language Models with Sensor Data via Visual Prompting", author = "Yoon, Hyungjun and Tolera, Biniyam Aschalew and Gong, Taesik and Lee, Kimin and Lee, Sung-Ju", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.133", pages = "2219--2241", abstract = "Large language models (LLMs) have demonstrated exceptional abilities across various domains. However, utilizing LLMs for ubiquitous sensing applications remains challenging as existing text-prompt methods show significant performance degradation when handling long sensor data sequences. In this paper, we propose a visual prompting approach for sensor data using multimodal LLMs (MLLMs). Specifically, we design a visual prompt that directs MLLMs to utilize visualized sensor data alongside descriptions of the target sensory task. Additionally, we introduce a visualization generator that automates the creation of optimal visualizations tailored to a given sensory task, eliminating the need for prior task-specific knowledge. We evaluated our approach on nine sensory tasks involving four sensing modalities, achieving an average of 10{\%} higher accuracy compared to text-based prompts and reducing token costs by 15.8 times. Our findings highlight the effectiveness and cost-efficiency of using visual prompts with MLLMs for various sensory tasks. The source code is available at https://github.com/diamond264/ByMyEyes.", }
Large language models (LLMs) have demonstrated exceptional abilities across various domains. However, utilizing LLMs for ubiquitous sensing applications remains challenging as existing text-prompt methods show significant performance degradation when handling long sensor data sequences. In this paper, we propose a visual prompting approach for sensor data using multimodal LLMs (MLLMs). Specifically, we design a visual prompt that directs MLLMs to utilize visualized sensor data alongside descriptions of the target sensory task. Additionally, we introduce a visualization generator that automates the creation of optimal visualizations tailored to a given sensory task, eliminating the need for prior task-specific knowledge. We evaluated our approach on nine sensory tasks involving four sensing modalities, achieving an average of 10{\%} higher accuracy compared to text-based prompts and reducing token costs by 15.8 times. Our findings highlight the effectiveness and cost-efficiency of using visual prompts with MLLMs for various sensory tasks. The source code is available at https://github.com/diamond264/ByMyEyes.
[ "Yoon, Hyungjun", "Tolera, Biniyam Aschalew", "Gong, Taesik", "Lee, Kimin", "Lee, Sung-Ju" ]
By My Eyes: Grounding Multimodal Large Language Models with Sensor Data via Visual Prompting
emnlp-main.133
Poster
2407.10385
[ "https://github.com/diamond264/ByMyEyes" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
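A rough sketch of the general recipe in the entry above: render a raw sensor sequence as a chart and pair the image with a textual task description for a multimodal LLM. The synthetic accelerometer trace, the plot styling, and the prompt wording are assumptions; the paper's visualization generator and the actual MLLM call are not reproduced, so the final API call is left as a comment.

```python
import base64
import io
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

# Synthetic tri-axial accelerometer trace standing in for real sensor data.
t = np.linspace(0, 5, 500)
signal = np.stack([np.sin(2 * np.pi * 2 * t),
                   0.5 * np.sin(2 * np.pi * 4 * t + 1.0),
                   0.1 * np.random.default_rng(0).normal(size=t.size)])

fig, ax = plt.subplots(figsize=(6, 2.5))
for axis, name in zip(signal, ["x", "y", "z"]):
    ax.plot(t, axis, label=name)
ax.set_xlabel("time (s)"); ax.set_ylabel("acceleration (g)"); ax.legend()

buf = io.BytesIO()
fig.savefig(buf, format="png", dpi=120)
image_b64 = base64.b64encode(buf.getvalue()).decode()

prompt = ("The attached chart shows a 5-second tri-axial accelerometer recording. "
          "Which activity (walking, running, sitting) does it most likely show?")
# `image_b64` and `prompt` would then be sent to a multimodal LLM API of choice.
print(len(image_b64), "bytes of encoded image;", prompt)
```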
https://aclanthology.org/2024.emnlp-main.134.bib
https://aclanthology.org/2024.emnlp-main.134/
@inproceedings{son-etal-2024-prefixing, title = "Prefixing Attention Sinks can Mitigate Activation Outliers for Large Language Model Quantization", author = "Son, Seungwoo and Park, Wonpyo and Han, Woohyun and Kim, Kyuyeun and Lee, Jaeho", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.134", pages = "2242--2252", abstract = "Despite recent advances in LLM quantization, activation quantization remains to be challenging due to the activation outliers. Conventional remedies, e.g., mixing precisions for different channels, introduce extra overhead and reduce the speedup. In this work, we develop a simple yet effective strategy to facilitate per-tensor activation quantization by preventing the generation of problematic tokens. Precisely, we propose a method to find a set of key-value cache, coined {\_}CushionCache{\_}, which mitigates outliers in subsequent tokens when inserted as a prefix. CushionCache works in two steps: First, we greedily search for a prompt token sequence that minimizes the maximum activation values in subsequent tokens. Then, we further tune the token cache to regularize the activations of subsequent tokens to be more quantization-friendly. The proposed method successfully addresses activation outliers of LLMs, providing a substantial performance boost for per-tensor activation quantization methods. We thoroughly evaluate our method over a wide range of models and benchmarks and find that it significantly surpasses the established baseline of per-tensor W8A8 quantization and can be seamlessly integrated with the recent activation quantization method.", }
Despite recent advances in LLM quantization, activation quantization remains challenging due to activation outliers. Conventional remedies, e.g., mixing precisions for different channels, introduce extra overhead and reduce the speedup. In this work, we develop a simple yet effective strategy to facilitate per-tensor activation quantization by preventing the generation of problematic tokens. Precisely, we propose a method to find a set of key-value caches, coined {\_}CushionCache{\_}, which mitigates outliers in subsequent tokens when inserted as a prefix. CushionCache works in two steps: First, we greedily search for a prompt token sequence that minimizes the maximum activation values in subsequent tokens. Then, we further tune the token cache to regularize the activations of subsequent tokens to be more quantization-friendly. The proposed method successfully addresses activation outliers of LLMs, providing a substantial performance boost for per-tensor activation quantization methods. We thoroughly evaluate our method over a wide range of models and benchmarks and find that it significantly surpasses the established baseline of per-tensor W8A8 quantization and can be seamlessly integrated with the recent activation quantization method.
[ "Son, Seungwoo", "Park, Wonpyo", "Han, Woohyun", "Kim, Kyuyeun", "Lee, Jaeho" ]
Prefixing Attention Sinks can Mitigate Activation Outliers for Large Language Model Quantization
emnlp-main.134
Poster
2406.12016
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
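A rough sketch of the greedy search step described above: candidate prefix tokens are scored by the largest hidden-activation magnitude they induce on a calibration sentence, and the lowest-scoring token is appended to the prefix. The GPT-2 checkpoint, the candidate pool, the two-token prefix length, and the calibration text are illustrative assumptions, and the paper's subsequent cache tuning is omitted.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def max_activation(prefix_ids: torch.Tensor, text_ids: torch.Tensor) -> float:
    ids = torch.cat([prefix_ids, text_ids], dim=-1).unsqueeze(0)
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)
    # Largest activation magnitude over the non-prefix positions, across all layers.
    h = torch.stack(out.hidden_states)[:, :, prefix_ids.numel():, :]
    return h.abs().max().item()

calib = tok("The quick brown fox jumps over the lazy dog.", return_tensors="pt").input_ids[0]
candidates = tok(" the . , and of a", return_tensors="pt").input_ids[0]

prefix = torch.empty(0, dtype=torch.long)
for _ in range(2):  # greedily grow a 2-token prefix
    scores = [(max_activation(torch.cat([prefix, c.view(1)]), calib), c) for c in candidates]
    best = min(scores, key=lambda s: s[0])
    prefix = torch.cat([prefix, best[1].view(1)])

print("chosen prefix:", repr(tok.decode(prefix)), "| max activation:", best[0])
```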
https://aclanthology.org/2024.emnlp-main.135.bib
https://aclanthology.org/2024.emnlp-main.135/
@inproceedings{mo-etal-2024-chiq, title = "{CHIQ}: Contextual History Enhancement for Improving Query Rewriting in Conversational Search", author = "Mo, Fengran and Ghaddar, Abbas and Mao, Kelong and Rezagholizadeh, Mehdi and Chen, Boxing and Liu, Qun and Nie, Jian-Yun", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.135", pages = "2253--2268", abstract = "In this paper, we study how open-source large language models (LLMs) can be effectively deployed for improving query rewriting in conversational search, especially for ambiguous queries. We introduce CHIQ, a two-step method that leverages the capabilities of LLMs to resolve ambiguities in the conversation history before query rewriting. This approach contrasts with prior studies that predominantly use closed-source LLMs to directly generate search queries from conversation history. We demonstrate on five well-established benchmarks that CHIQ leads to state-of-the-art results across most settings, showing highly competitive performances with systems leveraging closed-source LLMs. Our study provides a first step towards leveraging open-source LLMs in conversational search, as a competitive alternative to the prevailing reliance on commercial LLMs. Data, models, and source code will be publicly available upon acceptance at https://github.com/fengranMark/CHIQ.", }
In this paper, we study how open-source large language models (LLMs) can be effectively deployed for improving query rewriting in conversational search, especially for ambiguous queries. We introduce CHIQ, a two-step method that leverages the capabilities of LLMs to resolve ambiguities in the conversation history before query rewriting. This approach contrasts with prior studies that predominantly use closed-source LLMs to directly generate search queries from conversation history. We demonstrate on five well-established benchmarks that CHIQ leads to state-of-the-art results across most settings, showing highly competitive performances with systems leveraging closed-source LLMs. Our study provides a first step towards leveraging open-source LLMs in conversational search, as a competitive alternative to the prevailing reliance on commercial LLMs. Data, models, and source code will be publicly available upon acceptance at https://github.com/fengranMark/CHIQ.
[ "Mo, Fengran", "Ghaddar, Abbas", "Mao, Kelong", "Rezagholizadeh, Mehdi", "Chen, Boxing", "Liu, Qun", "Nie, Jian-Yun" ]
CHIQ: Contextual History Enhancement for Improving Query Rewriting in Conversational Search
emnlp-main.135
Poster
2406.05013
[ "https://github.com/fengranmark/chiq" ]
https://huggingface.co/papers/2406.05013
0
0
1
7
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.136.bib
https://aclanthology.org/2024.emnlp-main.136/
@inproceedings{huang-etal-2024-towards, title = "Towards Low-Resource Harmful Meme Detection with {LMM} Agents", author = "Huang, Jianzhao and Lin, Hongzhan and Ziyan, Liu and Luo, Ziyang and Chen, Guang and Ma, Jing", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.136", pages = "2269--2293", abstract = "The proliferation of Internet memes in the age of social media necessitates effective identification of harmful ones. Due to the dynamic nature of memes, existing data-driven models may struggle in low-resource scenarios where only a few labeled examples are available. In this paper, we propose an agency-driven framework for low-resource harmful meme detection, employing both outward and inward analysis with few-shot annotated samples. Inspired by the powerful capacity of Large Multimodal Models (LMMs) on multimodal reasoning, we first retrieve relative memes with annotations to leverage label information as auxiliary signals for the LMM agent. Then, we elicit knowledge-revising behavior within the LMM agent to derive well-generalized insights into meme harmfulness. By combining these strategies, our approach enables dialectical reasoning over intricate and implicit harm-indicative patterns. Extensive experiments conducted on three meme datasets demonstrate that our proposed approach achieves superior performance than state-of-the-art methods on the low-resource harmful meme detection task.", }
The proliferation of Internet memes in the age of social media necessitates effective identification of harmful ones. Due to the dynamic nature of memes, existing data-driven models may struggle in low-resource scenarios where only a few labeled examples are available. In this paper, we propose an agency-driven framework for low-resource harmful meme detection, employing both outward and inward analysis with few-shot annotated samples. Inspired by the powerful capacity of Large Multimodal Models (LMMs) on multimodal reasoning, we first retrieve relevant memes with annotations to leverage label information as auxiliary signals for the LMM agent. Then, we elicit knowledge-revising behavior within the LMM agent to derive well-generalized insights into meme harmfulness. By combining these strategies, our approach enables dialectical reasoning over intricate and implicit harm-indicative patterns. Extensive experiments conducted on three meme datasets demonstrate that our proposed approach achieves superior performance to state-of-the-art methods on the low-resource harmful meme detection task.
[ "Huang, Jianzhao", "Lin, Hongzhan", "Ziyan, Liu", "Luo, Ziyang", "Chen, Guang", "Ma, Jing" ]
Towards Low-Resource Harmful Meme Detection with LMM Agents
emnlp-main.136
Poster
2411.05383
[ "https://github.com/jianzhao-huang/lorehm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.137.bib
https://aclanthology.org/2024.emnlp-main.137/
@inproceedings{hu-etal-2024-viva, title = "{VIVA}: A Benchmark for Vision-Grounded Decision-Making with Human Values", author = "Hu, Zhe and Ren, Yixiao and Li, Jing and Yin, Yu", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.137", pages = "2294--2311", abstract = "This paper introduces VIVA, a benchmark for VIsion-grounded decision-making driven by human VA. While most large vision-language models (VLMs) focus on physical-level skills, our work is the first to examine their multimodal capabilities in leveraging human values to make decisions under a vision-depicted situation. VIVA contains 1,062 images depicting diverse real-world situations and the manually annotated decisions grounded in them. Given an image there, the model should select the most appropriate action to address the situation and provide the relevant human values and reason underlying the decision. Extensive experiments based on VIVA show the limitation of VLMs in using human values to make multimodal decisions. Further analyses indicate the potential benefits of exploiting action consequences and predicted human values.", }
This paper introduces VIVA, a benchmark for VIsion-grounded decision-making driven by human VAlues. While most large vision-language models (VLMs) focus on physical-level skills, our work is the first to examine their multimodal capabilities in leveraging human values to make decisions under a vision-depicted situation. VIVA contains 1,062 images depicting diverse real-world situations and the manually annotated decisions grounded in them. Given an image, the model should select the most appropriate action to address the situation and provide the relevant human values and reasoning underlying the decision. Extensive experiments based on VIVA show the limitations of VLMs in using human values to make multimodal decisions. Further analyses indicate the potential benefits of exploiting action consequences and predicted human values.
[ "Hu, Zhe", "Ren, Yixiao", "Li, Jing", "Yin, Yu" ]
VIVA: A Benchmark for Vision-Grounded Decision-Making with Human Values
emnlp-main.137
Poster
2407.03000
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.138.bib
https://aclanthology.org/2024.emnlp-main.138/
@inproceedings{shi-etal-2024-direct, title = "Direct Multi-Turn Preference Optimization for Language Agents", author = "Shi, Wentao and Yuan, Mengqi and Wu, Junkang and Wang, Qifan and Feng, Fuli", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.138", pages = "2312--2324", abstract = "Adapting Large Language Models (LLMs) for agent tasks is critical in developing language agents. Direct Preference Optimization (DPO) is a promising technique for this adaptation with the alleviation of compounding errors, offering a means to directly optimize Reinforcement Learning (RL) objectives. However, applying DPO to multi-turn tasks presents challenges due to the inability to cancel the partition function. Overcoming this obstacle involves making the partition function independent of the current state and addressing length disparities between preferred and dis-preferred trajectories. In this light, we replace the policy constraint with the state-action occupancy measure constraint in the RL objective and add length normalization to the Bradley-Terry model, yielding a novel loss function named DMPO for multi-turn agent tasks with theoretical explanations. Extensive experiments on three multi-turn agent task datasets confirm the effectiveness and superiority of the DMPO loss.", }
Adapting Large Language Models (LLMs) for agent tasks is critical in developing language agents. Direct Preference Optimization (DPO) is a promising technique for this adaptation with the alleviation of compounding errors, offering a means to directly optimize Reinforcement Learning (RL) objectives. However, applying DPO to multi-turn tasks presents challenges due to the inability to cancel the partition function. Overcoming this obstacle involves making the partition function independent of the current state and addressing length disparities between preferred and dis-preferred trajectories. In this light, we replace the policy constraint with the state-action occupancy measure constraint in the RL objective and add length normalization to the Bradley-Terry model, yielding a novel loss function named DMPO for multi-turn agent tasks with theoretical explanations. Extensive experiments on three multi-turn agent task datasets confirm the effectiveness and superiority of the DMPO loss.
[ "Shi, Wentao", "Yuan, Mengqi", "Wu, Junkang", "Wang, Qifan", "Feng, Fuli" ]
Direct Multi-Turn Preference Optimization for Language Agents
emnlp-main.138
Poster
2406.14868
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.139.bib
https://aclanthology.org/2024.emnlp-main.139/
@inproceedings{ranaldi-freitas-2024-self, title = "Self-Refine Instruction-Tuning for Aligning Reasoning in Language Models", author = "Ranaldi, Leonardo and Freitas, Andre", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.139", pages = "2325--2347", abstract = "The alignment of reasoning abilities between smaller and larger Language Models are largely conducted via supervised fine-tuning using demonstrations generated from robust Large Language Models (LLMs). Although these approaches deliver more performant models, they do not show sufficiently strong generalization ability as the training only relies on the provided demonstrations.In this paper, we propose the Self-refine Instruction-tuning method that elicits Smaller Language Models to self-improve their abilities.Our approach is based on a two-stage process, where reasoning abilities are first transferred between LLMs and Small Language Models (SLMs) via Instruction-tuning on synthetic demonstrations provided by LLMs, and then the instructed models self-improve their abilities through preference optimization strategies.In particular, the second phase operates refinement heuristics based on Direct Preference Optimization, where the SLMs are elicited to deliver a series of reasoning paths by automatically sampling the generated responses and providing rewards using ground truths from the LLMs.Results obtained on commonsense and math reasoning tasks show that this approach consistently outperforms Instruction-tuning in both in-domain and out-domain scenarios, aligning the reasoning abilities of Smaller and Larger language models.", }
The alignment of reasoning abilities between smaller and larger Language Models is largely conducted via supervised fine-tuning using demonstrations generated from robust Large Language Models (LLMs). Although these approaches deliver more performant models, they do not show sufficiently strong generalization ability, as the training only relies on the provided demonstrations. In this paper, we propose the Self-refine Instruction-tuning method that elicits Smaller Language Models to self-improve their abilities. Our approach is based on a two-stage process, where reasoning abilities are first transferred between LLMs and Small Language Models (SLMs) via Instruction-tuning on synthetic demonstrations provided by LLMs, and then the instructed models self-improve their abilities through preference optimization strategies. In particular, the second phase operates refinement heuristics based on Direct Preference Optimization, where the SLMs are elicited to deliver a series of reasoning paths by automatically sampling the generated responses and providing rewards using ground truths from the LLMs. Results obtained on commonsense and math reasoning tasks show that this approach consistently outperforms Instruction-tuning in both in-domain and out-of-domain scenarios, aligning the reasoning abilities of Smaller and Larger language models.
[ "Ranaldi, Leonardo", "Freitas, Andre" ]
Self-Refine Instruction-Tuning for Aligning Reasoning in Language Models
emnlp-main.139
Poster
2405.00402
[ "" ]
https://huggingface.co/papers/2405.00402
0
0
0
2
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.140.bib
https://aclanthology.org/2024.emnlp-main.140/
@inproceedings{li-etal-2024-search, title = "In Search of the Long-Tail: Systematic Generation of Long-Tail Inferential Knowledge via Logical Rule Guided Search", author = "Li, Huihan and Ning, Yuting and Liao, Zeyi and Wang, Siyuan and Li, Xiang Lorraine and Lu, Ximing and Zhao, Wenting and Brahman, Faeze and Choi, Yejin and Ren, Xiang", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.140", pages = "2348--2370", abstract = "To effectively use large language models (LLMs) for real-world queries, it is imperative that they generalize to the long-tail distribution, i.e. rare examples where models exhibit low confidence. In this work, we take the first step towards evaluating LLMs in the long-tail distribution of inferential knowledge. We exemplify long-tail evaluation on the Natural Language Inference task. First, we introduce Logic-Induced-Knowledge-Search (LINK), a systematic long-tail data generation framework, to obtain factually-correct yet long-tail inferential statements. LINK uses variable-wise prompting grounded on symbolic rules to seek low-confidence statements while ensuring factual correctness. We then use LINK to curate Logic-Induced-Long-Tail (LINT), a large-scale long-tail inferential knowledge dataset that contains 108K statements spanning four domains. We evaluate popular LLMs on LINT; we find that state-of-the-art LLMs show significant performance drop (21{\%} relative drop for GPT4) on long-tail data as compared to on head distribution data, and smaller models show even more generalization weakness. These results further underscore the necessity of long-tail evaluation in developing generalizable LLMs.", }
To effectively use large language models (LLMs) for real-world queries, it is imperative that they generalize to the long-tail distribution, i.e., rare examples where models exhibit low confidence. In this work, we take the first step towards evaluating LLMs in the long-tail distribution of inferential knowledge. We exemplify long-tail evaluation on the Natural Language Inference task. First, we introduce Logic-Induced-Knowledge-Search (LINK), a systematic long-tail data generation framework, to obtain factually correct yet long-tail inferential statements. LINK uses variable-wise prompting grounded on symbolic rules to seek low-confidence statements while ensuring factual correctness. We then use LINK to curate Logic-Induced-Long-Tail (LINT), a large-scale long-tail inferential knowledge dataset that contains 108K statements spanning four domains. We evaluate popular LLMs on LINT; we find that state-of-the-art LLMs show a significant performance drop (21% relative drop for GPT-4) on long-tail data compared to head-distribution data, and smaller models show even more generalization weakness. These results further underscore the necessity of long-tail evaluation in developing generalizable LLMs.
[ "Li, Huihan", "Ning, Yuting", "Liao, Zeyi", "Wang, Siyuan", "Li, Xiang Lorraine", "Lu, Ximing", "Zhao, Wenting", "Brahman, Faeze", "Choi, Yejin", "Ren, Xiang" ]
In Search of the Long-Tail: Systematic Generation of Long-Tail Inferential Knowledge via Logical Rule Guided Search
emnlp-main.140
Poster
2311.07237
[ "https://github.com/ink-usc/link" ]
https://huggingface.co/papers/2311.07237
2
0
0
10
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.141.bib
https://aclanthology.org/2024.emnlp-main.141/
@inproceedings{huang-etal-2024-autoscraper, title = "{A}uto{S}craper: A Progressive Understanding Web Agent for Web Scraper Generation", author = "Huang, Wenhao and Gu, Zhouhong and Peng, Chenghao and Liang, Jiaqing and Li, Zhixu and Xiao, Yanghua and Wen, Liqian and Chen, Zulong", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.141", pages = "2371--2389", abstract = "Web scraping is a powerful technique that extracts data from websites, enabling automated data collection, enhancing data analysis capabilities, and minimizing manual data entry efforts. Existing methods, wrappers-based methods suffer from limited adaptability and scalability when faced with a new website, while language agents, empowered by large language models (LLMs), exhibit poor reusability in diverse web environments. In this work, we introduce the paradigm of generating web scrapers with LLMs and propose AutoScraper, a two-stage framework that can handle diverse and changing web environments more efficiently. AutoScraper leverages the hierarchical structure of HTML and similarity across different web pages for generating web scrapers. Besides, we propose a new executability metric for better measuring the performance of web scraper generation tasks. We conduct comprehensive experiments with multiple LLMs and demonstrate the effectiveness of our framework. Our work is now open-source.", }
Web scraping is a powerful technique that extracts data from websites, enabling automated data collection, enhancing data analysis capabilities, and minimizing manual data entry efforts. Existing wrapper-based methods suffer from limited adaptability and scalability when faced with a new website, while language agents, empowered by large language models (LLMs), exhibit poor reusability in diverse web environments. In this work, we introduce the paradigm of generating web scrapers with LLMs and propose AutoScraper, a two-stage framework that can handle diverse and changing web environments more efficiently. AutoScraper leverages the hierarchical structure of HTML and similarity across different web pages for generating web scrapers. In addition, we propose a new executability metric to better measure the performance of web scraper generation tasks. We conduct comprehensive experiments with multiple LLMs and demonstrate the effectiveness of our framework. Our work is now open-source.
[ "Huang, Wenhao", "Gu, Zhouhong", "Peng, Chenghao", "Liang, Jiaqing", "Li, Zhixu", "Xiao, Yanghua", "Wen, Liqian", "Chen, Zulong" ]
AutoScraper: A Progressive Understanding Web Agent for Web Scraper Generation
emnlp-main.141
Poster
2404.12753
[ "https://github.com/ez-hwh/autocrawler" ]
https://huggingface.co/papers/2404.12753
5
41
1
7
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.142.bib
https://aclanthology.org/2024.emnlp-main.142/
@inproceedings{katz-etal-2024-backward, title = "Backward Lens: Projecting Language Model Gradients into the Vocabulary Space", author = "Katz, Shahar and Belinkov, Yonatan and Geva, Mor and Wolf, Lior", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.142", pages = "2390--2422", abstract = "Understanding how Transformer-based Language Models (LMs) learn and recall information is a key goal of the deep learning community. Recent interpretability methods project weights and hidden states obtained from the forward pass to the models{'} vocabularies, helping to uncover how information flows within LMs. In this work, we extend this methodology to LMs{'} backward pass and gradients. We first prove that a gradient matrix can be cast as a low-rank linear combination of its forward and backward passes{'} inputs. We then develop methods to project these gradients into vocabulary items and explore the mechanics of how new information is stored in the LMs{'} neurons.", }
Understanding how Transformer-based Language Models (LMs) learn and recall information is a key goal of the deep learning community. Recent interpretability methods project weights and hidden states obtained from the forward pass to the models' vocabularies, helping to uncover how information flows within LMs. In this work, we extend this methodology to LMs' backward pass and gradients. We first prove that a gradient matrix can be cast as a low-rank linear combination of its forward and backward passes' inputs. We then develop methods to project these gradients into vocabulary items and explore the mechanics of how new information is stored in the LMs' neurons.
[ "Katz, Shahar", "Belinkov, Yonatan", "Geva, Mor", "Wolf, Lior" ]
Backward Lens: Projecting Language Model Gradients into the Vocabulary Space
emnlp-main.142
Oral
2402.12865
[ "" ]
https://huggingface.co/papers/2402.12865
0
1
0
4
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.143.bib
https://aclanthology.org/2024.emnlp-main.143/
@inproceedings{chung-etal-2024-selective, title = "Selective Vision is the Challenge for Visual Reasoning: A Benchmark for Visual Argument Understanding", author = "Chung, Jiwan and Lee, Sungjae and Kim, Minseo and Han, Seungju and Yousefpour, Ashkan and Hessel, Jack and Yu, Youngjae", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.143", pages = "2423--2451", abstract = "Visual arguments, often used in advertising or social causes, rely on images to persuade viewers to do or believe something. Understanding these arguments requires selective vision: only specific visual stimuli within an image are relevant to the argument, and relevance can only be understood within the context of a broader argumentative structure. While visual arguments are readily appreciated by human audiences, we ask: are today{'}s AI capable of similar understanding?We present VisArgs, a dataset of 1,611 images annotated with 5,112 visual premises (with regions), 5,574 commonsense premises, and reasoning trees connecting them into structured arguments. We propose three tasks for evaluating visual argument understanding: premise localization, premise identification, and conclusion deduction.Experiments show that 1) machines struggle to capture visual cues: GPT-4-O achieved 78.5{\%} accuracy, while humans reached 98.0{\%}. Models also performed 19.5{\%} worse when distinguishing between irrelevant objects within the image compared to external objects. 2) Providing relevant visual premises improved model performance significantly.", }
Visual arguments, often used in advertising or social causes, rely on images to persuade viewers to do or believe something. Understanding these arguments requires selective vision: only specific visual stimuli within an image are relevant to the argument, and relevance can only be understood within the context of a broader argumentative structure. While visual arguments are readily appreciated by human audiences, we ask: is today's AI capable of similar understanding? We present VisArgs, a dataset of 1,611 images annotated with 5,112 visual premises (with regions), 5,574 commonsense premises, and reasoning trees connecting them into structured arguments. We propose three tasks for evaluating visual argument understanding: premise localization, premise identification, and conclusion deduction. Experiments show that 1) machines struggle to capture visual cues: GPT-4-O achieved 78.5% accuracy, while humans reached 98.0%. Models also performed 19.5% worse when distinguishing between irrelevant objects within the image compared to external objects. 2) Providing relevant visual premises improved model performance significantly.
[ "Chung, Jiwan", "Lee, Sungjae", "Kim, Minseo", "Han, Seungju", "Yousefpour, Ashkan", "Hessel, Jack", "Yu, Youngjae" ]
Selective Vision is the Challenge for Visual Reasoning: A Benchmark for Visual Argument Understanding
emnlp-main.143
Oral
2406.18925
[ "https://github.com/jiwanchung/visargs" ]
https://huggingface.co/papers/2406.18925
1
1
0
7
[]
[ "jiwan-chung/visargs" ]
[]
[]
[ "jiwan-chung/visargs" ]
[]
1
https://aclanthology.org/2024.emnlp-main.144.bib
https://aclanthology.org/2024.emnlp-main.144/
@inproceedings{chung-etal-2024-visual, title = "Can visual language models resolve textual ambiguity with visual cues? Let visual puns tell you!", author = "Chung, Jiwan and Lim, Seungwon and Jeon, Jaehyun and Lee, Seungbeen and Yu, Youngjae", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.144", pages = "2452--2469", abstract = "Humans possess multimodal literacy, allowing them to actively integrate information from various modalities to form reasoning. Faced with challenges like lexical ambiguity in text, we supplement this with other modalities, such as thumbnail images or textbook illustrations. Is it possible for machines to achieve a similar multimodal understanding capability?In response, we present Understanding Pun with Image Explanations (UNPIE), a novel benchmark designed to assess the impact of multimodal inputs in resolving lexical ambiguities. Puns serve as the ideal subject for this evaluation due to their intrinsic ambiguity. Our dataset includes 1,000 puns, each accompanied by an image that explains both meanings. We pose three multimodal challenges with the annotations to assess different aspects of multimodal literacy; Pun Grounding, Disambiguation, and Reconstruction. The results indicate that various Socratic Models and Visual-Language Models improve over the text-only models when given visual context, particularly as the complexity of the tasks increases.", }
Humans possess multimodal literacy, allowing them to actively integrate information from various modalities to form reasoning. Faced with challenges like lexical ambiguity in text, we supplement this with other modalities, such as thumbnail images or textbook illustrations. Is it possible for machines to achieve a similar multimodal understanding capability? In response, we present Understanding Pun with Image Explanations (UNPIE), a novel benchmark designed to assess the impact of multimodal inputs in resolving lexical ambiguities. Puns serve as the ideal subject for this evaluation due to their intrinsic ambiguity. Our dataset includes 1,000 puns, each accompanied by an image that explains both meanings. We pose three multimodal challenges with the annotations to assess different aspects of multimodal literacy: Pun Grounding, Disambiguation, and Reconstruction. The results indicate that various Socratic Models and Visual-Language Models improve over the text-only models when given visual context, particularly as the complexity of the tasks increases.
[ "Chung, Jiwan", "Lim, Seungwon", "Jeon, Jaehyun", "Lee, Seungbeen", "Yu, Youngjae" ]
Can visual language models resolve textual ambiguity with visual cues? Let visual puns tell you!
emnlp-main.144
Poster
2410.01023
[ "https://github.com/jiwanchung/visualpun_unpie" ]
https://huggingface.co/papers/2410.01023
0
1
0
5
[]
[ "jiwan-chung/VisualPun_UNPIE" ]
[]
[]
[ "jiwan-chung/VisualPun_UNPIE" ]
[]
1
https://aclanthology.org/2024.emnlp-main.145.bib
https://aclanthology.org/2024.emnlp-main.145/
@inproceedings{jin-etal-2024-reusing, title = "Reusing Transferable Weight Increments for Low-resource Style Generation", author = "Jin, Chunzhen and Huang, Eliot and Chang, Heng and Wang, Yaqi and Cao, Peng and Zaiane, Osmar", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.145", pages = "2470--2488", abstract = "Text style transfer (TST) is crucial in natural language processing, aiming to endow text with a new style without altering its meaning. In real-world scenarios, not all styles have abundant resources. This work introduces TWIST (reusing Transferable Weight Increments for Style Text generation), a novel framework to mitigate data scarcity by utilizing style features in weight increments to transfer low-resource styles effectively. During target style learning, we derive knowledge via a specially designed weight pool and initialize the parameters for the unseen style. To enhance the effectiveness of merging, the target style weight increments are often merged from multiple source style weight increments through singular vectors. Considering the diversity of styles, we also designed a multi-key memory network that simultaneously focuses on task- and instance-level information to derive the most relevant weight increments. Results from multiple style transfer datasets show that TWIST demonstrates remarkable performance across different backbones, achieving particularly effective results in low-resource scenarios.", }
Text style transfer (TST) is crucial in natural language processing, aiming to endow text with a new style without altering its meaning. In real-world scenarios, not all styles have abundant resources. This work introduces TWIST (reusing Transferable Weight Increments for Style Text generation), a novel framework to mitigate data scarcity by utilizing style features in weight increments to transfer low-resource styles effectively. During target style learning, we derive knowledge via a specially designed weight pool and initialize the parameters for the unseen style. To enhance the effectiveness of merging, the target style weight increments are often merged from multiple source style weight increments through singular vectors. Considering the diversity of styles, we also designed a multi-key memory network that simultaneously focuses on task- and instance-level information to derive the most relevant weight increments. Results from multiple style transfer datasets show that TWIST demonstrates remarkable performance across different backbones, achieving particularly effective results in low-resource scenarios.
[ "Jin, Chunzhen", "Huang, Eliot", "Chang, Heng", "Wang, Yaqi", "Cao, Peng", "Zaiane, Osmar" ]
Reusing Transferable Weight Increments for Low-resource Style Generation
emnlp-main.145
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.146.bib
https://aclanthology.org/2024.emnlp-main.146/
@inproceedings{chiang-etal-2024-large, title = "Large Language Model as an Assignment Evaluator: Insights, Feedback, and Challenges in a 1000+ Student Course", author = "Chiang, Cheng-Han and Chen, Wei-Chih and Kuan, Chun-Yi and Yang, Chienchou and Lee, Hung-yi", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.146", pages = "2489--2513", abstract = "Using large language models (LLMs) for automatic evaluation has become an important evaluation method in NLP research. However, it is unclear whether these LLM-based evaluators can be effectively applied in real-world classrooms to assess student assignments. This empirical report shares how we use GPT-4 as an automatic assignment evaluator in a university course with over 1000 students. Based on student responses, we found that LLM-based assignment evaluators are generally acceptable to students when they have free access to these tools. However, students also noted that the LLM sometimes fails to adhere to the evaluation instructions, resulting in unreasonable assessments. Additionally, we observed that students can easily manipulate the LLM to output specific strings, allowing them to achieve high scores without meeting the assignment rubric. Based on student feedback and our experience, we offer several recommendations for effectively integrating LLMs into future classroom evaluations. Our observation also highlights potential directions for improving LLM-based evaluators, including their instruction-following ability and vulnerability to prompt hacking.", }
Using large language models (LLMs) for automatic evaluation has become an important evaluation method in NLP research. However, it is unclear whether these LLM-based evaluators can be effectively applied in real-world classrooms to assess student assignments. This empirical report shares how we use GPT-4 as an automatic assignment evaluator in a university course with over 1000 students. Based on student responses, we found that LLM-based assignment evaluators are generally acceptable to students when they have free access to these tools. However, students also noted that the LLM sometimes fails to adhere to the evaluation instructions, resulting in unreasonable assessments. Additionally, we observed that students can easily manipulate the LLM to output specific strings, allowing them to achieve high scores without meeting the assignment rubric. Based on student feedback and our experience, we offer several recommendations for effectively integrating LLMs into future classroom evaluations. Our observation also highlights potential directions for improving LLM-based evaluators, including their instruction-following ability and vulnerability to prompt hacking.
[ "Chiang, Cheng-Han", "Chen, Wei-Chih", "Kuan, Chun-Yi", "Yang, Chienchou", "Lee, Hung-yi" ]
Large Language Model as an Assignment Evaluator: Insights, Feedback, and Challenges in a 1000+ Student Course
emnlp-main.146
Poster
2407.05216
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.147.bib
https://aclanthology.org/2024.emnlp-main.147/
@inproceedings{bhuiya-etal-2024-seemingly, title = "Seemingly Plausible Distractors in Multi-Hop Reasoning: Are Large Language Models Attentive Readers?", author = "Bhuiya, Neeladri and Schlegel, Viktor and Winkler, Stefan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.147", pages = "2514--2528", abstract = "State-of-the-art Large Language Models (LLMs) are accredited with an increasing number of different capabilities, ranging from reading comprehension over advanced mathematical and reasoning skills to possessing scientific knowledge. In this paper we focus on multi-hop reasoning{---}the ability to identify and integrate information from multiple textual sources.Given the concerns with the presence of simplifying cues in existing multi-hop reasoning benchmarks, which allow models to circumvent the reasoning requirement, we set out to investigate whether LLMs are prone to exploiting such simplifying cues. We find evidence that they indeed circumvent the requirement to perform multi-hop reasoning, but they do so in more subtle ways than what was reported about their fine-tuned pre-trained language model (PLM) predecessors. We propose a challenging multi-hop reasoning benchmark by generating seemingly plausible multi-hop reasoning chains that ultimately lead to incorrect answers. We evaluate multiple open and proprietary state-of-the-art LLMs and show that their multi-hop reasoning performance is affected, as indicated by up to 45{\%} relative decrease in F1 score when presented with such seemingly plausible alternatives. We also find that{---}while LLMs tend to ignore misleading lexical cues{---}misleading reasoning paths indeed present a significant challenge. The code and data are made available at https://github.com/zawedcvg/Are-Large-Language-Models-Attentive-Readers", }
State-of-the-art Large Language Models (LLMs) are credited with an increasing number of different capabilities, ranging from reading comprehension and advanced mathematical and reasoning skills to scientific knowledge. In this paper we focus on multi-hop reasoning: the ability to identify and integrate information from multiple textual sources. Given the concerns with the presence of simplifying cues in existing multi-hop reasoning benchmarks, which allow models to circumvent the reasoning requirement, we set out to investigate whether LLMs are prone to exploiting such simplifying cues. We find evidence that they indeed circumvent the requirement to perform multi-hop reasoning, but they do so in more subtle ways than what was reported about their fine-tuned pre-trained language model (PLM) predecessors. We propose a challenging multi-hop reasoning benchmark by generating seemingly plausible multi-hop reasoning chains that ultimately lead to incorrect answers. We evaluate multiple open and proprietary state-of-the-art LLMs and show that their multi-hop reasoning performance is affected, as indicated by up to a 45% relative decrease in F1 score when presented with such seemingly plausible alternatives. We also find that, while LLMs tend to ignore misleading lexical cues, misleading reasoning paths indeed present a significant challenge. The code and data are made available at https://github.com/zawedcvg/Are-Large-Language-Models-Attentive-Readers
[ "Bhuiya, Neeladri", "Schlegel, Viktor", "Winkler, Stefan" ]
Seemingly Plausible Distractors in Multi-Hop Reasoning: Are Large Language Models Attentive Readers?
emnlp-main.147
Poster
2409.05197
[ "https://github.com/zawedcvg/are-large-language-models-attentive-readers" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.148.bib
https://aclanthology.org/2024.emnlp-main.148/
@inproceedings{cheng-etal-2024-instruction, title = "Instruction Pre-Training: Language Models are Supervised Multitask Learners", author = "Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.148", pages = "2529--2550", abstract = "Unsupervised multitask pre-training has been the critical method behind the recent success of language models (LMs). However, supervised multitask learning still holds significant promise, as scaling it in the post-training stage trends towards better generalization. In this paper, we explore supervised multitask pre-training by proposing Instruction Pre-training, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train LMs. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction response pairs covering 40+ task categories to verify the effectiveness of Instruction Pre-training. In pre-training from scratch, Instruction Pre-training not only consistently enhances pre-trained base models but also benefits more from further instruction tuning. In continual pre-training, Instruction Pre-training enables Llama3-8B to be comparable to or even outperform Llama3-70B. Our model, code, and data are available at https://github.com/microsoft/LMOps.", }
Unsupervised multitask pre-training has been the critical method behind the recent success of language models (LMs). However, supervised multitask learning still holds significant promise, as scaling it in the post-training stage trends towards better generalization. In this paper, we explore supervised multitask pre-training by proposing Instruction Pre-training, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train LMs. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction response pairs covering 40+ task categories to verify the effectiveness of Instruction Pre-training. In pre-training from scratch, Instruction Pre-training not only consistently enhances pre-trained base models but also benefits more from further instruction tuning. In continual pre-training, Instruction Pre-training enables Llama3-8B to be comparable to or even outperform Llama3-70B. Our model, code, and data are available at https://github.com/microsoft/LMOps.
[ "Cheng, Daixuan", "Gu, Yuxian", "Huang, Shaohan", "Bi, Junyu", "Huang, Minlie", "Wei, Furu" ]
Instruction Pre-Training: Language Models are Supervised Multitask Learners
emnlp-main.148
Poster
2406.14491
[ "https://github.com/microsoft/lmops" ]
https://huggingface.co/papers/2406.14491
4
85
11
6
[ "AdaptLLM/finance-LLM", "AdaptLLM/finance-chat", "instruction-pretrain/instruction-synthesizer", "AdaptLLM/law-LLM", "instruction-pretrain/finance-Llama3-8B", "AdaptLLM/medicine-chat", "AdaptLLM/finance-LLM-13B", "instruction-pretrain/InstructLM-1.3B", "AdaptLLM/medicine-LLM", "instruction-pretrain/medicine-Llama3-8B", "instruction-pretrain/InstructLM-500M", "AdaptLLM/law-chat", "AdaptLLM/law-LLM-13B", "AdaptLLM/medicine-LLM-13B", "QuantFactory/finance-Llama3-8B-GGUF", "QuantFactory/medicine-Llama3-8B-GGUF", "QuantFactory/instruction-synthesizer-GGUF", "mlx-community/instruction-pretrain-instruction-synthesizer", "QuantFactory/InstructLM-1.3B-GGUF", "QuantFactory/InstructLM-500M-GGUF", "grimjim/Kitsunebi-v1-Gemma2-8k-9B", "mlx-community/instruction-pretrain-finance-Llama3-8B-4bit", "RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf", "grimjim/Kitsunebi-v1-Gemma2-8k-9B-GGUF", "Apel-sin/medicine-llama3-8B-exl2", "MurtazaNasir/instruction-pretrain_finance-Llama3-8B-exl2", "RichardErkhov/instruction-pretrain_-_finance-Llama3-8B-gguf", "RichardErkhov/instruction-pretrain_-_medicine-Llama3-8B-gguf", "RichardErkhov/grimjim_-_Kitsunebi-v1-Gemma2-8k-9B-gguf", "mav23/finance-Llama3-8B-GGUF", "NewEden/grimjim_Kitsunebi-v1-Gemma2-8k-9B-exl2", "RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf" ]
[ "AdaptLLM/finance-tasks", "instruction-pretrain/ft-instruction-synthesizer-collection", "AdaptLLM/medicine-tasks", "AdaptLLM/law-tasks", "instruction-pretrain/general-instruction-augmented-corpora", "instruction-pretrain/medicine-instruction-augmented-corpora", "AdaptLLM/med_knowledge_prob", "AdaptLLM/law_knowledge_prob" ]
[ "eduagarcia/open_pt_llm_leaderboard", "davanstrien/instruction-synthesizer", "thePhenom21/AdaptLLM-medicine-LLM", "ngrigg/test", "Ondra18cz/AdaptLLM-finance-LLM", "JosuQ/AdaptLLM-finance-LLMS", "Abdullah5775/AdaptLLM-law-LLM", "christianbaluti/AdaptLLM-law-LLM", "SebastianP23/AdaptLLM-law-chat", "munish2024shah/AdaptLLM-finance-LLM", "JosuQ/AdaptLLM-finance-LLMl", "Tongyang/AdaptLLM-finance-LLM", "greenarcade/AdaptLLM-finance-LLM-13B", "blueberrr/Test1", "aumkar/finance", "greenarcade/AdaptLLM-finance-chat", "gmanojkumar/AdaptLLM-finance-chat", "munish2024shah/AdaptLLM-finance-chat", "SriramGaddam/AdaptLLM-finance-chat", "antonioneto11/msft-finchat", "aforss/vine", "xonack/suplex", "geored/gtmio", "vikaswr/law", "JosuQ/AdaptLLM-law-LLM", "casteelconstructionse/AdaptLLM-law-LLM-13B", "griged/AdaptLLM-law-chat", "yooi/AdaptLLM-medicine-chat", "jonathanjordan21/medical_chatbot", "John6666/votepurchase-crash" ]
[ "AdaptLLM/finance-LLM", "AdaptLLM/finance-chat", "instruction-pretrain/instruction-synthesizer", "AdaptLLM/law-LLM", "instruction-pretrain/finance-Llama3-8B", "AdaptLLM/medicine-chat", "AdaptLLM/finance-LLM-13B", "instruction-pretrain/InstructLM-1.3B", "AdaptLLM/medicine-LLM", "instruction-pretrain/medicine-Llama3-8B", "instruction-pretrain/InstructLM-500M", "AdaptLLM/law-chat", "AdaptLLM/law-LLM-13B", "AdaptLLM/medicine-LLM-13B", "QuantFactory/finance-Llama3-8B-GGUF", "QuantFactory/medicine-Llama3-8B-GGUF", "QuantFactory/instruction-synthesizer-GGUF", "mlx-community/instruction-pretrain-instruction-synthesizer", "QuantFactory/InstructLM-1.3B-GGUF", "QuantFactory/InstructLM-500M-GGUF", "grimjim/Kitsunebi-v1-Gemma2-8k-9B", "mlx-community/instruction-pretrain-finance-Llama3-8B-4bit", "RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf", "grimjim/Kitsunebi-v1-Gemma2-8k-9B-GGUF", "Apel-sin/medicine-llama3-8B-exl2", "MurtazaNasir/instruction-pretrain_finance-Llama3-8B-exl2", "RichardErkhov/instruction-pretrain_-_finance-Llama3-8B-gguf", "RichardErkhov/instruction-pretrain_-_medicine-Llama3-8B-gguf", "RichardErkhov/grimjim_-_Kitsunebi-v1-Gemma2-8k-9B-gguf", "mav23/finance-Llama3-8B-GGUF", "NewEden/grimjim_Kitsunebi-v1-Gemma2-8k-9B-exl2", "RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf" ]
[ "AdaptLLM/finance-tasks", "instruction-pretrain/ft-instruction-synthesizer-collection", "AdaptLLM/medicine-tasks", "AdaptLLM/law-tasks", "instruction-pretrain/general-instruction-augmented-corpora", "instruction-pretrain/medicine-instruction-augmented-corpora", "AdaptLLM/med_knowledge_prob", "AdaptLLM/law_knowledge_prob" ]
[ "eduagarcia/open_pt_llm_leaderboard", "davanstrien/instruction-synthesizer", "thePhenom21/AdaptLLM-medicine-LLM", "ngrigg/test", "Ondra18cz/AdaptLLM-finance-LLM", "JosuQ/AdaptLLM-finance-LLMS", "Abdullah5775/AdaptLLM-law-LLM", "christianbaluti/AdaptLLM-law-LLM", "SebastianP23/AdaptLLM-law-chat", "munish2024shah/AdaptLLM-finance-LLM", "JosuQ/AdaptLLM-finance-LLMl", "Tongyang/AdaptLLM-finance-LLM", "greenarcade/AdaptLLM-finance-LLM-13B", "blueberrr/Test1", "aumkar/finance", "greenarcade/AdaptLLM-finance-chat", "gmanojkumar/AdaptLLM-finance-chat", "munish2024shah/AdaptLLM-finance-chat", "SriramGaddam/AdaptLLM-finance-chat", "antonioneto11/msft-finchat", "aforss/vine", "xonack/suplex", "geored/gtmio", "vikaswr/law", "JosuQ/AdaptLLM-law-LLM", "casteelconstructionse/AdaptLLM-law-LLM-13B", "griged/AdaptLLM-law-chat", "yooi/AdaptLLM-medicine-chat", "jonathanjordan21/medical_chatbot", "John6666/votepurchase-crash" ]
1
https://aclanthology.org/2024.emnlp-main.149.bib
https://aclanthology.org/2024.emnlp-main.149/
@inproceedings{wang-li-2024-lemoe, title = "{LEM}o{E}: Advanced Mixture of Experts Adaptor for Lifelong Model Editing of Large Language Models", author = "Wang, Renzhi and Li, Piji", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.149", pages = "2551--2575", abstract = "Large language models (LLMs) require continual knowledge updates to stay abreast of the ever-changing world facts, prompting the formulation of lifelong model editing task. While recent years have witnessed the development of various techniques for single and batch editing, these methods either fail to apply or perform sub-optimally when faced with lifelong editing. In this paper, we introduce LEMoE, an advanced Mixture of Experts (MoE) adaptor for lifelong model editing. We first analyze the factors influencing the effectiveness of conventional MoE adaptor in lifelong editing, including catastrophic forgetting, inconsistent routing and order sensitivity. Based on these insights, we propose a tailored module insertion method to achieve lifelong editing, incorporating a novel KV anchor routing to enhance routing consistency between training and inference stage, along with a concise yet effective clustering-based editing order planning. Experimental results demonstrate the effectiveness of our method in lifelong editing, surpassing previous model editing techniques while maintaining outstanding performance in batch editing task. Our code will be available.", }
Large language models (LLMs) require continual knowledge updates to stay abreast of ever-changing world facts, prompting the formulation of the lifelong model editing task. While recent years have witnessed the development of various techniques for single and batch editing, these methods either fail to apply or perform sub-optimally when faced with lifelong editing. In this paper, we introduce LEMoE, an advanced Mixture of Experts (MoE) adaptor for lifelong model editing. We first analyze the factors influencing the effectiveness of a conventional MoE adaptor in lifelong editing, including catastrophic forgetting, inconsistent routing, and order sensitivity. Based on these insights, we propose a tailored module insertion method to achieve lifelong editing, incorporating a novel KV anchor routing to enhance routing consistency between the training and inference stages, along with a concise yet effective clustering-based editing order planning. Experimental results demonstrate the effectiveness of our method in lifelong editing, surpassing previous model editing techniques while maintaining outstanding performance in the batch editing task. Our code will be available.
[ "Wang, Renzhi", "Li, Piji" ]
LEMoE: Advanced Mixture of Experts Adaptor for Lifelong Model Editing of Large Language Models
emnlp-main.149
Poster
2406.20030
[ "" ]
https://huggingface.co/papers/2406.20030
0
1
0
2
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.150.bib
https://aclanthology.org/2024.emnlp-main.150/
@inproceedings{zhang-etal-2024-collaborative, title = "Collaborative Performance Prediction for Large Language Models", author = "Zhang, Qiyuan and Lyu, Fuyuan and Liu, Xue and Ma, Chen", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.150", pages = "2576--2596", abstract = "Comprehensively understanding and accurately predicting the performance of large language models across diverse downstream tasks has emerged as a pivotal challenge in NLP research. The pioneering scaling law on downstream works demonstrated intrinsic similarities within model families and utilized such similarities for performance prediction. However, they tend to overlook the similarities between model families and only consider design factors listed in the original scaling law. To overcome these limitations, we introduce a novel framework, Collaborative Performance Prediction (CPP), which significantly enhances prediction accuracy by leveraging the historical performance of various models on downstream tasks and other design factors for both model and task. We also collect a collaborative data sourced from online platforms containing both historical performance and additional design factors. With the support of the collaborative data, CPP not only surpasses traditional scaling laws in predicting the performance of scaled LLMs but also facilitates a detailed analysis of factor importance, an area previously overlooked.", }
Comprehensively understanding and accurately predicting the performance of large language models across diverse downstream tasks has emerged as a pivotal challenge in NLP research. The pioneering scaling law on downstream tasks demonstrated intrinsic similarities within model families and utilized such similarities for performance prediction. However, these approaches tend to overlook the similarities between model families and only consider design factors listed in the original scaling law. To overcome these limitations, we introduce a novel framework, Collaborative Performance Prediction (CPP), which significantly enhances prediction accuracy by leveraging the historical performance of various models on downstream tasks and other design factors for both model and task. We also collect collaborative data sourced from online platforms containing both historical performance and additional design factors. With the support of the collaborative data, CPP not only surpasses traditional scaling laws in predicting the performance of scaled LLMs but also facilitates a detailed analysis of factor importance, an area previously overlooked.
[ "Zhang, Qiyuan", "Lyu, Fuyuan", "Liu, Xue", "Ma, Chen" ]
Collaborative Performance Prediction for Large Language Models
emnlp-main.150
Poster
2407.01300
[ "https://github.com/Don-Joey/CPP_LLM" ]
https://huggingface.co/papers/2407.01300
2
2
0
4
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.151.bib
https://aclanthology.org/2024.emnlp-main.151/
@inproceedings{chen-etal-2024-surveying, title = "Surveying the Dead Minds: Historical-Psychological Text Analysis with Contextualized Construct Representation ({CCR}) for Classical {C}hinese", author = "Chen, Yuqi and Li, Sixuan and Li, Ying and Atari, Mohammad", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.151", pages = "2597--2615", abstract = "In this work, we develop a pipeline for historical-psychological text analysis in classical Chinese. Humans have produced texts in various languages for thousands of years; however, most of the computational literature is focused on contemporary languages and corpora. The emerging field of historical psychology relies on computational techniques to extract aspects of psychology from historical corpora using new methods developed in natural language processing (NLP). The present pipeline, called Contextualized Construct Representations (CCR), combines expert knowledge in psychometrics (i.e., psychological surveys) with text representations generated via Transformer-based language models to measure psychological constructs such as traditionalism, norm strength, and collectivism in classical Chinese corpora. Considering the scarcity of available data, we propose an indirect supervised contrastive learning approach and build the first Chinese historical psychology corpus (C-HI-PSY) to fine-tune pre-trained models. We evaluate the pipeline to demonstrate its superior performance compared with other approaches. The CCR method outperforms word-embedding-based approaches across all of our tasks and exceeds prompting with GPT-4 in most tasks. Finally, we benchmark the pipeline against objective, external data to further verify its validity.", }
In this work, we develop a pipeline for historical-psychological text analysis in classical Chinese. Humans have produced texts in various languages for thousands of years; however, most of the computational literature is focused on contemporary languages and corpora. The emerging field of historical psychology relies on computational techniques to extract aspects of psychology from historical corpora using new methods developed in natural language processing (NLP). The present pipeline, called Contextualized Construct Representations (CCR), combines expert knowledge in psychometrics (i.e., psychological surveys) with text representations generated via Transformer-based language models to measure psychological constructs such as traditionalism, norm strength, and collectivism in classical Chinese corpora. Considering the scarcity of available data, we propose an indirect supervised contrastive learning approach and build the first Chinese historical psychology corpus (C-HI-PSY) to fine-tune pre-trained models. We evaluate the pipeline to demonstrate its superior performance compared with other approaches. The CCR method outperforms word-embedding-based approaches across all of our tasks and exceeds prompting with GPT-4 in most tasks. Finally, we benchmark the pipeline against objective, external data to further verify its validity.
[ "Chen, Yuqi", "Li, Sixuan", "Li, Ying", "Atari, Mohammad" ]
Surveying the Dead Minds: Historical-Psychological Text Analysis with Contextualized Construct Representation (CCR) for Classical Chinese
emnlp-main.151
Poster
2403.00509
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.152.bib
https://aclanthology.org/2024.emnlp-main.152/
@inproceedings{wan-etal-2024-knowledge, title = "Knowledge Verification to Nip Hallucination in the Bud", author = "Wan, Fanqi and Huang, Xinting and Cui, Leyang and Quan, Xiaojun and Bi, Wei and Shi, Shuming", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.152", pages = "2616--2633", abstract = "While large language models (LLMs) have demonstrated exceptional performance across various tasks following human alignment, they may still generate responses that sound plausible but contradict factual knowledge, a phenomenon known as hallucination. In this paper, we demonstrate the feasibility of mitigating hallucinations by verifying and minimizing the inconsistency between external knowledge present in the alignment data and the intrinsic knowledge embedded within foundation LLMs. Specifically, we propose a novel approach called Knowledge Consistent Alignment (KCA), which employs a well-aligned LLM to automatically formulate assessments based on external knowledge to evaluate the knowledge boundaries of foundation LLMs. To address knowledge inconsistencies in the alignment data, KCA implements several specific strategies to deal with these data instances. We demonstrate the superior efficacy of KCA in reducing hallucinations across six benchmarks, utilizing foundation LLMs of varying backbones and scales. This confirms the effectiveness of mitigating hallucinations by reducing knowledge inconsistency. Our code, model weights, and data are openly accessible at \url{https://github.com/fanqiwan/KCA}.", }
While large language models (LLMs) have demonstrated exceptional performance across various tasks following human alignment, they may still generate responses that sound plausible but contradict factual knowledge, a phenomenon known as hallucination. In this paper, we demonstrate the feasibility of mitigating hallucinations by verifying and minimizing the inconsistency between external knowledge present in the alignment data and the intrinsic knowledge embedded within foundation LLMs. Specifically, we propose a novel approach called Knowledge Consistent Alignment (KCA), which employs a well-aligned LLM to automatically formulate assessments based on external knowledge to evaluate the knowledge boundaries of foundation LLMs. To address knowledge inconsistencies in the alignment data, KCA implements several specific strategies to deal with these data instances. We demonstrate the superior efficacy of KCA in reducing hallucinations across six benchmarks, utilizing foundation LLMs of varying backbones and scales. This confirms the effectiveness of mitigating hallucinations by reducing knowledge inconsistency. Our code, model weights, and data are openly accessible at \url{https://github.com/fanqiwan/KCA}.
[ "Wan, Fanqi", "Huang, Xinting", "Cui, Leyang", "Quan, Xiaojun", "Bi, Wei", "Shi, Shuming" ]
Knowledge Verification to Nip Hallucination in the Bud
emnlp-main.152
Poster
2401.10768
[ "https://github.com/fanqiwan/kca" ]
https://huggingface.co/papers/2401.10768
2
2
0
6
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.153.bib
https://aclanthology.org/2024.emnlp-main.153/
@inproceedings{schrader-etal-2024-quite, title = "{QUITE}: Quantifying Uncertainty in Natural Language Text in {B}ayesian Reasoning Scenarios", author = "Schrader, Timo Pierre and Lange, Lukas and Razniewski, Simon and Friedrich, Annemarie", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.153", pages = "2634--2652", abstract = "Reasoning is key to many decision making processes. It requires consolidating a set of rule-like premises that are often associated with degrees of uncertainty and observations to draw conclusions. In this work, we address both the case where premises are specified as numeric probabilistic rules and situations in which humans state their estimates using words expressing degrees of certainty. Existing probabilistic reasoning datasets simplify the task, e.g., by requiring the model to only rank textual alternatives, by including only binary random variables, or by making use of a limited set of templates that result in less varied text.In this work, we present QUITE, a question answering dataset of real-world Bayesian reasoning scenarios with categorical random variables and complex relationships. QUITE provides high-quality natural language verbalizations of premises together with evidence statements and expects the answer to a question in the form of an estimated probability. We conduct an extensive set of experiments, finding that logic-based models outperform out-of-the-box large language models on all reasoning types (causal, evidential, and explaining-away). Our results provide evidence that neuro-symbolic models are a promising direction for improving complex reasoning. We release QUITE and code for training and experiments on Github.", }
Reasoning is key to many decision making processes. It requires consolidating a set of rule-like premises that are often associated with degrees of uncertainty and observations to draw conclusions. In this work, we address both the case where premises are specified as numeric probabilistic rules and situations in which humans state their estimates using words expressing degrees of certainty. Existing probabilistic reasoning datasets simplify the task, e.g., by requiring the model to only rank textual alternatives, by including only binary random variables, or by making use of a limited set of templates that result in less varied text. In this work, we present QUITE, a question answering dataset of real-world Bayesian reasoning scenarios with categorical random variables and complex relationships. QUITE provides high-quality natural language verbalizations of premises together with evidence statements and expects the answer to a question in the form of an estimated probability. We conduct an extensive set of experiments, finding that logic-based models outperform out-of-the-box large language models on all reasoning types (causal, evidential, and explaining-away). Our results provide evidence that neuro-symbolic models are a promising direction for improving complex reasoning. We release QUITE and code for training and experiments on GitHub.
[ "Schrader, Timo Pierre", "Lange, Lukas", "Razniewski, Simon", "Friedrich, Annemarie" ]
QUITE: Quantifying Uncertainty in Natural Language Text in Bayesian Reasoning Scenarios
emnlp-main.153
Oral
2410.10449
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.154.bib
https://aclanthology.org/2024.emnlp-main.154/
@inproceedings{geigle-etal-2024-african, title = "{A}frican or {E}uropean Swallow? Benchmarking Large Vision-Language Models for Fine-Grained Object Classification", author = "Geigle, Gregor and Timofte, Radu and Glava{\v{s}}, Goran", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.154", pages = "2653--2669", abstract = "Recent Large Vision-Language Models (LVLMs) demonstrate impressive abilities on numerous image understanding and reasoning tasks. The task of fine-grained object classification (e.g., distinction between \textit{animal species}), however, has been probed insufficiently, despite its downstream importance. We fill this evaluation gap by creating FOCI (\textbf{F}ine-grained \textbf{O}bject \textbf{C}lass\textbf{I}fication), a difficult multiple-choice benchmark for fine-grained object classification, from existing object classification datasets: (1) multiple-choice avoids ambiguous answers associated with casting classification as open-ended QA task; (2) we retain classification difficulty by mining negative labels with a CLIP model. FOCI complements five popular classification datasets with four domain-specific subsets from ImageNet-21k. We benchmark 12 public LVLMs on and show that it tests for a \textit{complementary skill} to established image understanding and reasoning benchmarks. Crucially, CLIP models exhibit dramatically better performance than LVLMs. Since the image encoders of LVLMs come from these CLIP models, this points to inadequate alignment for fine-grained object distinction between the encoder and the LLM and warrants (pre)training data with more fine-grained annotation. We release our code at \url{ANONYMIZED}.", }
Recent Large Vision-Language Models (LVLMs) demonstrate impressive abilities on numerous image understanding and reasoning tasks. The task of fine-grained object classification (e.g., distinction between \textit{animal species}), however, has been probed insufficiently, despite its downstream importance. We fill this evaluation gap by creating FOCI (\textbf{F}ine-grained \textbf{O}bject \textbf{C}lass\textbf{I}fication), a difficult multiple-choice benchmark for fine-grained object classification, from existing object classification datasets: (1) multiple-choice avoids ambiguous answers associated with casting classification as an open-ended QA task; (2) we retain classification difficulty by mining negative labels with a CLIP model. FOCI complements five popular classification datasets with four domain-specific subsets from ImageNet-21k. We benchmark 12 public LVLMs on FOCI and show that it tests for a \textit{complementary skill} to established image understanding and reasoning benchmarks. Crucially, CLIP models exhibit dramatically better performance than LVLMs. Since the image encoders of LVLMs come from these CLIP models, this points to inadequate alignment for fine-grained object distinction between the encoder and the LLM and warrants (pre)training data with more fine-grained annotation. We release our code at \url{ANONYMIZED}.
[ "Geigle, Gregor", "Timofte, Radu", "Glava{\\v{s}}, Goran" ]
African or European Swallow? Benchmarking Large Vision-Language Models for Fine-Grained Object Classification
emnlp-main.154
Poster
2406.14496
[ "https://github.com/gregor-ge/foci-benchmark" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.155.bib
https://aclanthology.org/2024.emnlp-main.155/
@inproceedings{yuan-etal-2024-whispers, title = "Whispers that Shake Foundations: Analyzing and Mitigating False Premise Hallucinations in Large Language Models", author = "Yuan, Hongbang and Cao, Pengfei and Jin, Zhuoran and Chen, Yubo and Zeng, Daojian and Liu, Kang and Zhao, Jun", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.155", pages = "2670--2683", abstract = "Large Language Models (LLMs) have shown impressive capabilities but still suffer from the issue of hallucinations. A significant type of this issue is the false premise hallucination, which we define as the phenomenon when LLMs generate hallucinated text when confronted with false premise questions. In this paper, we perform a comprehensive analysis of the false premise hallucination and elucidate its internal working mechanism: a small subset of attention heads (which we designate as false premise heads) disturb the knowledge extraction process, leading to the occurrence of false premise hallucination. Based on our analysis, we propose \textbf{FAITH} (\textbf{F}alse premise \textbf{A}ttention head constra\textbf{I}ining for mi\textbf{T}igating \textbf{H}allucinations), a novel and effective method to mitigate false premise hallucinations. It constrains the false premise attention heads during the model inference process. Impressively, extensive experiments demonstrate that constraining only approximately 1{\%} of the attention heads in the model yields a notable increase of nearly 20{\%} of model performance.", }
Large Language Models (LLMs) have shown impressive capabilities but still suffer from the issue of hallucinations. A significant type of this issue is the false premise hallucination, which we define as the phenomenon in which LLMs generate hallucinated text when confronted with false premise questions. In this paper, we perform a comprehensive analysis of the false premise hallucination and elucidate its internal working mechanism: a small subset of attention heads (which we designate as false premise heads) disturbs the knowledge extraction process, leading to the occurrence of false premise hallucination. Based on our analysis, we propose \textbf{FAITH} (\textbf{F}alse premise \textbf{A}ttention head constra\textbf{I}ning for mi\textbf{T}igating \textbf{H}allucinations), a novel and effective method to mitigate false premise hallucinations. It constrains the false premise attention heads during the model inference process. Impressively, extensive experiments demonstrate that constraining only approximately 1{\%} of the attention heads in the model yields a notable improvement of nearly 20{\%} in model performance.
[ "Yuan, Hongbang", "Cao, Pengfei", "Jin, Zhuoran", "Chen, Yubo", "Zeng, Daojian", "Liu, Kang", "Zhao, Jun" ]
Whispers that Shake Foundations: Analyzing and Mitigating False Premise Hallucinations in Large Language Models
emnlp-main.155
Poster
2402.19103
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.156.bib
https://aclanthology.org/2024.emnlp-main.156/
@inproceedings{lietard-etal-2024-word, title = "To Word Senses and Beyond: Inducing Concepts with Contextualized Language Models", author = "Li{\'e}tard, Bastien and Denis, Pascal and Keller, Mikaela", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.156", pages = "2684--2696", abstract = "Polysemy and synonymy are two crucial interrelated facets of lexicalambiguity. While both phenomena are widely documented in lexical resources and have been studied extensively in NLP,leading to dedicated systems, they are often being consideredindependently in practictal problems. While many tasks dealing with polysemy (e.g. Word SenseDisambiguiation or Induction) highlight the role of word{'}s senses,the study of synonymy is rooted in the study of concepts, i.e. meaningsshared across the lexicon. In this paper, we introduce ConceptInduction, the unsupervised task of learning a soft clustering amongwords that defines a set of concepts directly from data. This taskgeneralizes Word Sense Induction. We propose a bi-levelapproach to Concept Induction that leverages both a locallemma-centric view and a global cross-lexicon view to induceconcepts. We evaluate the obtained clustering on SemCor{'}s annotateddata and obtain good performance (BCubed F1 above0.60). We find that the local and the global levels are mutuallybeneficial to induce concepts and also senses in our setting. Finally,we create static embeddings representing our induced concepts and usethem on the Word-in-Context task, obtaining competitive performancewith the State-of-the-Art.", }
Polysemy and synonymy are two crucial interrelated facets of lexical ambiguity. While both phenomena are widely documented in lexical resources and have been studied extensively in NLP, leading to dedicated systems, they are often considered independently in practical problems. While many tasks dealing with polysemy (e.g. Word Sense Disambiguation or Induction) highlight the role of a word{'}s senses, the study of synonymy is rooted in the study of concepts, i.e. meanings shared across the lexicon. In this paper, we introduce Concept Induction, the unsupervised task of learning a soft clustering among words that defines a set of concepts directly from data. This task generalizes Word Sense Induction. We propose a bi-level approach to Concept Induction that leverages both a local lemma-centric view and a global cross-lexicon view to induce concepts. We evaluate the obtained clustering on SemCor{'}s annotated data and obtain good performance (BCubed F1 above 0.60). We find that the local and the global levels are mutually beneficial to induce concepts and also senses in our setting. Finally, we create static embeddings representing our induced concepts and use them on the Word-in-Context task, obtaining competitive performance with the State-of-the-Art.
[ "Li{\\'e}tard, Bastien", "Denis, Pascal", "Keller, Mikaela" ]
To Word Senses and Beyond: Inducing Concepts with Contextualized Language Models
emnlp-main.156
Poster
2406.20054
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.157.bib
https://aclanthology.org/2024.emnlp-main.157/
@inproceedings{wang-etal-2024-asetf, title = "{ASETF}: A Novel Method for Jailbreak Attack on {LLM}s through Translate Suffix Embeddings", author = "Wang, Hao and Li, Hao and Huang, Minlie and Sha, Lei", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.157", pages = "2697--2711", abstract = "The safety defense methods of Large language models (LLMs) stays limited because the dangerous prompts are manually curated to just few known attack types, which fails to keep pace with emerging varieties. Recent studies found that attaching suffixes to harmful instructions can hack the defense of LLMs and lead to dangerous outputs. However, similar to traditional text adversarial attacks, this approach, while effective, is limited by the challenge of the discrete tokens. This gradient based discrete optimization attack requires over 100,000 LLM calls, and due to the unreadable of adversarial suffixes, it can be relatively easily penetrated by common defense methods such as perplexity filters.To cope with this challenge, in this paper, we propose an Adversarial Suffix Embedding Translation Framework (ASETF), aimed at transforming continuous adversarial suffix embeddings into coherent and understandable text. This method greatly reduces the computational overhead during the attack process and helps to automatically generate multiple adversarial samples, which can be used as data to strengthen LLM{'}s security defense. Experimental evaluations were conducted on Llama2, Vicuna, and other prominent LLMs, employing harmful directives sourced from the Advbench dataset.The results indicate that our method significantly reduces the computation time of adversarial suffixes and achieves a much better attack success rate than existing techniques, while significantly enhancing the textual fluency of the prompts. In addition, our approach can be generalized into a broader method for generating transferable adversarial suffixes that can successfully attack multiple LLMs, even black-box LLMs, such as ChatGPT and Gemini.", }
The safety defense methods of large language models (LLMs) remain limited because dangerous prompts are manually curated for only a few known attack types, which fails to keep pace with emerging varieties. Recent studies found that attaching suffixes to harmful instructions can hack the defense of LLMs and lead to dangerous outputs. However, similar to traditional text adversarial attacks, this approach, while effective, is limited by the challenge of discrete tokens. This gradient-based discrete optimization attack requires over 100,000 LLM calls, and due to the unreadability of adversarial suffixes, it can be relatively easily penetrated by common defense methods such as perplexity filters. To cope with this challenge, in this paper, we propose an Adversarial Suffix Embedding Translation Framework (ASETF), aimed at transforming continuous adversarial suffix embeddings into coherent and understandable text. This method greatly reduces the computational overhead during the attack process and helps to automatically generate multiple adversarial samples, which can be used as data to strengthen the LLM{'}s security defense. Experimental evaluations were conducted on Llama2, Vicuna, and other prominent LLMs, employing harmful directives sourced from the Advbench dataset. The results indicate that our method significantly reduces the computation time of adversarial suffixes and achieves a much better attack success rate than existing techniques, while significantly enhancing the textual fluency of the prompts. In addition, our approach can be generalized into a broader method for generating transferable adversarial suffixes that can successfully attack multiple LLMs, even black-box LLMs, such as ChatGPT and Gemini.
[ "Wang, Hao", "Li, Hao", "Huang, Minlie", "Sha, Lei" ]
ASETF: A Novel Method for Jailbreak Attack on LLMs through Translate Suffix Embeddings
emnlp-main.157
Oral
2402.16006
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.158.bib
https://aclanthology.org/2024.emnlp-main.158/
@inproceedings{zhao-etal-2024-electoral, title = "An Electoral Approach to Diversify {LLM}-based Multi-Agent Collective Decision-Making", author = "Zhao, Xiutian and Wang, Ke and Peng, Wei", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.158", pages = "2712--2727", abstract = "Modern large language models (LLMs) have exhibited cooperative synergy on complex task-solving, and collective decision-making (CDM) is a pivotal component in LLM-based multi-agent collaboration frameworks. Our survey on 52 recent such systems uncovers a severe lack of diversity, with a heavy reliance on dictatorial and plurality voting for CDM. Through the lens of social choice theory, we scrutinize widely-adopted CDM methods and identify their limitations. To enrich current landscape of LLM-based CDM, we present GEDI, an electoral CDM module that incorporates various ordinal preferential voting mechanisms. Our empirical case study across three benchmarks shows that the integration of certain CDM methods can markedly improve the reasoning capabilities and robustness of some leading LLMs, all without requiring intricate system designs. Additionally, we find that some CDM mechanisms generate positive synergies even with as few as three agents. The voting-based methods also demonstrate robustness against single points of failure, as well as diversity in terms of hit-rate@k and subject-wise impacts.", }
Modern large language models (LLMs) have exhibited cooperative synergy on complex task-solving, and collective decision-making (CDM) is a pivotal component in LLM-based multi-agent collaboration frameworks. Our survey of 52 recent such systems uncovers a severe lack of diversity, with a heavy reliance on dictatorial and plurality voting for CDM. Through the lens of social choice theory, we scrutinize widely-adopted CDM methods and identify their limitations. To enrich the current landscape of LLM-based CDM, we present GEDI, an electoral CDM module that incorporates various ordinal preferential voting mechanisms. Our empirical case study across three benchmarks shows that the integration of certain CDM methods can markedly improve the reasoning capabilities and robustness of some leading LLMs, all without requiring intricate system designs. Additionally, we find that some CDM mechanisms generate positive synergies even with as few as three agents. The voting-based methods also demonstrate robustness against single points of failure, as well as diversity in terms of hit-rate@k and subject-wise impacts.
[ "Zhao, Xiutian", "Wang, Ke", "Peng, Wei" ]
An Electoral Approach to Diversify LLM-based Multi-Agent Collective Decision-Making
emnlp-main.158
Poster
2410.15168
[ "https://github.com/xiutian/GEDI" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.159.bib
https://aclanthology.org/2024.emnlp-main.159/
@inproceedings{geigle-etal-2024-object, title = "Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models?", author = "Geigle, Gregor and Timofte, Radu and Glava{\v{s}}, Goran", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.159", pages = "2728--2742", abstract = "Large vision-language models (LVLMs) have recently dramatically pushed the state of the art in image captioning and many image understanding tasks (e.g., visual question answering). LVLMs, however, often \textit{hallucinate} and produce captions that mention concepts that cannot be found in the image. These hallucinations erode the trustworthiness of LVLMs and are arguably among the main obstacles to their ubiquitous adoption. Recent work suggests that addition of grounding objectives{---}those that explicitly align image regions or objects to text spans{---}reduces the amount of LVLM hallucination. Although intuitive, this claim is not empirically justified as the reduction effects have been established, we argue, with flawed evaluation protocols that (i) rely on data (i.e., MSCOCO) that has been extensively used in LVLM training and (ii) measure hallucination via question answering rather than open-ended caption generation.In this work, in contrast, we offer the first systematic analysis of the effect of fine-grained object grounding on LVLM hallucination under an evaluation protocol that more realistically captures LVLM hallucination in open generation. Our extensive experiments over three backbone LLMs reveal that grounding objectives have little to no effect on object hallucination in open caption generation.", }
Large vision-language models (LVLMs) have recently dramatically pushed the state of the art in image captioning and many image understanding tasks (e.g., visual question answering). LVLMs, however, often \textit{hallucinate} and produce captions that mention concepts that cannot be found in the image. These hallucinations erode the trustworthiness of LVLMs and are arguably among the main obstacles to their ubiquitous adoption. Recent work suggests that the addition of grounding objectives{---}those that explicitly align image regions or objects to text spans{---}reduces the amount of LVLM hallucination. Although intuitive, this claim is not empirically justified as the reduction effects have been established, we argue, with flawed evaluation protocols that (i) rely on data (i.e., MSCOCO) that has been extensively used in LVLM training and (ii) measure hallucination via question answering rather than open-ended caption generation. In this work, in contrast, we offer the first systematic analysis of the effect of fine-grained object grounding on LVLM hallucination under an evaluation protocol that more realistically captures LVLM hallucination in open generation. Our extensive experiments over three backbone LLMs reveal that grounding objectives have little to no effect on object hallucination in open caption generation.
[ "Geigle, Gregor", "Timofte, Radu", "Glava{\\v{s}}, Goran" ]
Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models?
emnlp-main.159
Poster
2406.14492
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.160.bib
https://aclanthology.org/2024.emnlp-main.160/
@inproceedings{liu-etal-2024-take, title = "Take Off the Training Wheels! Progressive In-Context Learning for Effective Alignment", author = "Liu, Zhenyu and Li, Dongfang and Hu, Xinshuo and Zhao, Xinping and Chen, Yibin and Hu, Baotian and Zhang, Min", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.160", pages = "2743--2757", abstract = "Recent studies have explored the working mechanisms of In-Context Learning (ICL). However, they mainly focus on classification and simple generation tasks, limiting their broader application to more complex generation tasks in practice. To address this gap, we investigate the impact of demonstrations on token representations within the practical alignment tasks. We find that the transformer embeds the task function learned from demonstrations into the separator token representation, which plays an important role in the generation of prior response tokens. Once the prior response tokens are determined, the demonstrations become redundant. Motivated by this finding, we propose an efficient Progressive In-Context Alignment (PICA) method consisting of two stages. In the first few-shot stage, the model generates several prior response tokens via standard ICL while concurrently extracting the ICL vector that stores the task function from the separator token representation. In the following zero-shot stage, this ICL vector guides the model to generate responses without further demonstrations. Extensive experiments demonstrate that our PICA not only surpasses vanilla ICL but also achieves comparable performance to other alignment tuning methods. The proposed training-free method reduces the time cost (e.g., 5.45{\mbox{$\times$}}) with improved alignment performance (e.g., 6.57+). Consequently, our work highlights the application of ICL for alignment and calls for a deeper understanding of ICL for complex generations. The code will be available at https://github.com/HITsz-TMG/PICA.", }
Recent studies have explored the working mechanisms of In-Context Learning (ICL). However, they mainly focus on classification and simple generation tasks, limiting their broader application to more complex generation tasks in practice. To address this gap, we investigate the impact of demonstrations on token representations within practical alignment tasks. We find that the transformer embeds the task function learned from demonstrations into the separator token representation, which plays an important role in the generation of prior response tokens. Once the prior response tokens are determined, the demonstrations become redundant. Motivated by this finding, we propose an efficient Progressive In-Context Alignment (PICA) method consisting of two stages. In the first few-shot stage, the model generates several prior response tokens via standard ICL while concurrently extracting the ICL vector that stores the task function from the separator token representation. In the following zero-shot stage, this ICL vector guides the model to generate responses without further demonstrations. Extensive experiments demonstrate that our PICA not only surpasses vanilla ICL but also achieves comparable performance to other alignment tuning methods. The proposed training-free method reduces the time cost (e.g., 5.45{\mbox{$\times$}}) with improved alignment performance (e.g., 6.57+). Consequently, our work highlights the application of ICL for alignment and calls for a deeper understanding of ICL for complex generations. The code will be available at https://github.com/HITsz-TMG/PICA.
[ "Liu, Zhenyu", "Li, Dongfang", "Hu, Xinshuo", "Zhao, Xinping", "Chen, Yibin", "Hu, Baotian", "Zhang, Min" ]
Take Off the Training Wheels! Progressive In-Context Learning for Effective Alignment
emnlp-main.160
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.161.bib
https://aclanthology.org/2024.emnlp-main.161/
@inproceedings{ma-etal-2024-modula, title = "{M}o{DULA}: Mixture of Domain-Specific and Universal {L}o{RA} for Multi-Task Learning", author = "Ma, Yufei and Liang, Zihan and Dai, Huangyu and Chen, Ben and Gao, Dehong and Ran, Zhuoran and Zihan, Wang and Jin, Linbo and Jiang, Wen and Zhang, Guannan and Cai, Xiaoyan and Yang, Libin", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.161", pages = "2758--2770", abstract = "The growing demand for larger-scale models in the development of Large Language Models (LLMs) poses challenges for efficient training within limited computational resources. Traditional fine-tuning methods often exhibit instability in multi-task learning and rely heavily on extensive training resources. Here, we propose MoDULA (Mixture of Domain-Specific and Universal LoRA), a novel Parameter Efficient Fine-Tuning (PEFT) Mixture-of-Expert (MoE) paradigm for improved fine-tuning and parameter efficiency in multi-task learning. The paradigm effectively improves the multi-task capability of the model by training universal experts, domain-specific experts, and routers separately. MoDULA-Res is a new method within the MoDULA paradigm, which maintains the model{'}s general capability by connecting universal and task-specific experts through residual connections. The experimental results demonstrate that the overall performance of the MoDULA-Flan and MoDULA-Res methods surpasses that of existing fine-tuning methods on various LLMs. Notably, MoDULA-Res achieves more significant performance improvements in multiple tasks while reducing training costs by over 80{\%} without losing general capability. Moreover, MoDULA displays flexible pluggability, allowing for the efficient addition of new tasks without retraining existing experts from scratch. This progressive training paradigm circumvents data balancing issues, enhancing training efficiency and model stability. Overall, MoDULA provides a scalable, cost-effective solution for fine-tuning LLMs with enhanced parameter efficiency and generalization capability.", }
The growing demand for larger-scale models in the development of Large Language Models (LLMs) poses challenges for efficient training within limited computational resources. Traditional fine-tuning methods often exhibit instability in multi-task learning and rely heavily on extensive training resources. Here, we propose MoDULA (Mixture of Domain-Specific and Universal LoRA), a novel Parameter Efficient Fine-Tuning (PEFT) Mixture-of-Expert (MoE) paradigm for improved fine-tuning and parameter efficiency in multi-task learning. The paradigm effectively improves the multi-task capability of the model by training universal experts, domain-specific experts, and routers separately. MoDULA-Res is a new method within the MoDULA paradigm, which maintains the model{'}s general capability by connecting universal and task-specific experts through residual connections. The experimental results demonstrate that the overall performance of the MoDULA-Flan and MoDULA-Res methods surpasses that of existing fine-tuning methods on various LLMs. Notably, MoDULA-Res achieves more significant performance improvements in multiple tasks while reducing training costs by over 80{\%} without losing general capability. Moreover, MoDULA displays flexible pluggability, allowing for the efficient addition of new tasks without retraining existing experts from scratch. This progressive training paradigm circumvents data balancing issues, enhancing training efficiency and model stability. Overall, MoDULA provides a scalable, cost-effective solution for fine-tuning LLMs with enhanced parameter efficiency and generalization capability.
[ "Ma, Yufei", "Liang, Zihan", "Dai, Huangyu", "Chen, Ben", "Gao, Dehong", "Ran, Zhuoran", "Zihan, Wang", "Jin, Linbo", "Jiang, Wen", "Zhang, Guannan", "Cai, Xiaoyan", "Yang, Libin" ]
MoDULA: Mixture of Domain-Specific and Universal LoRA for Multi-Task Learning
emnlp-main.161
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.162.bib
https://aclanthology.org/2024.emnlp-main.162/
@inproceedings{zhang-etal-2024-message, title = "Message Passing on Semantic-Anchor-Graphs for Fine-grained Emotion Representation Learning and Classification", author = "Zhang, Pinyi and Chen, Jingyang and Shen, Junchen and Zhai, Zijie and Li, Ping and Zhang, Jie and Zhang, Kai", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.162", pages = "2771--2783", abstract = "Emotion classification has wide applications in education, robotics, virtual reality, etc. However, identifying subtle differences between fine-grained emotion categories remains challenging. Current methods typically aggregate numerous token embeddings of a sentence into a single vector, which, while being an efficient compressor, may not fully capture complex semantic and temporal distributions. To solve this problem, we propose SEmantic ANchor Graph Neural Networks (SEAN-GNN) for fine-grained emotion classification. It learns a group of representative, multi-faceted semantic anchors in the token embedding space: using these anchors as a global reference, any sentence can be projected onto them to form a {``}semantic-anchor graph{''}, with node attributes and edge weights quantifying the semantic and temporal information respectively. The graph structure is well aligned across sentences and, importantly, allows for generating comprehensive emotion representations regarding $K$ different anchors. Message passing on this graph can further integrate and refine the learned features. Empirically, SEAN-GNN can generate meaningful semantic anchors and discriminative graph patterns for different emotion, with promising classification results on 6 popular benchmark datasets against state-of-the-arts.", }
Emotion classification has wide applications in education, robotics, virtual reality, etc. However, identifying subtle differences between fine-grained emotion categories remains challenging. Current methods typically aggregate numerous token embeddings of a sentence into a single vector, which, while being an efficient compressor, may not fully capture complex semantic and temporal distributions. To solve this problem, we propose SEmantic ANchor Graph Neural Networks (SEAN-GNN) for fine-grained emotion classification. It learns a group of representative, multi-faceted semantic anchors in the token embedding space: using these anchors as a global reference, any sentence can be projected onto them to form a {``}semantic-anchor graph{''}, with node attributes and edge weights quantifying the semantic and temporal information respectively. The graph structure is well aligned across sentences and, importantly, allows for generating comprehensive emotion representations regarding $K$ different anchors. Message passing on this graph can further integrate and refine the learned features. Empirically, SEAN-GNN can generate meaningful semantic anchors and discriminative graph patterns for different emotions, with promising classification results on 6 popular benchmark datasets against state-of-the-art methods.
[ "Zhang, Pinyi", "Chen, Jingyang", "Shen, Junchen", "Zhai, Zijie", "Li, Ping", "Zhang, Jie", "Zhang, Kai" ]
Message Passing on Semantic-Anchor-Graphs for Fine-grained Emotion Representation Learning and Classification
emnlp-main.162
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.163.bib
https://aclanthology.org/2024.emnlp-main.163/
@inproceedings{zhang-etal-2024-philogpt, title = "{P}hilo{GPT}: A Philology-Oriented Large Language Model for {A}ncient {C}hinese Manuscripts with Dunhuang as Case Study", author = "Zhang, Yuqing and He, Baoyi and Chen, Yihan and Li, Hangqi and Yue, Han and Zhang, Shengyu and Dou, Huaiyong and Yan, Junchi and Liu, Zemin and Zhang, Yongquan and Wu, Fei", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.163", pages = "2784--2801", abstract = "Philology, the study of ancient manuscripts, demands years of professional training in ex-tensive knowledge memorization and manual textual retrieval. Despite these requirements align closely with strengths of recent successful Large Language Models (LLMs), the scarcity of high-quality, specialized training data has hindered direct applications. To bridge this gap, we curated the PhiloCorpus-ZH, a rich collec-tion of ancient Chinese texts spanning a millen-nium with 30 diverse topics, including firsthand folk copies. This corpus facilitated the develop-ment of PhiloGPT, the first LLM tailored for discovering ancient Chinese manuscripts. To effectively tackle complex philological tasks like restoration, attribution, and linguistic anal-ysis, we introduced the PhiloCoP framework. Modeled on the analytical patterns of philol-ogists, PhiloCoP enhances LLM{'}s handling of historical linguistic peculiarities such as phonetic loans, polysemy, and syntactic inver-sions. We further integrated these tasks into the PhiloBenchmark, establishing a new standard for evaluating ancient Chinese LLMs address-ing philology tasks. Deploying PhiloGPT in practical scenarios has enabled Dunhuang spe-cialists to resolve philology tasks, such as iden-tifying duplication of copied text and assisting archaeologists with text completion, demon-strating its potential in real-world applications.", }
Philology, the study of ancient manuscripts, demands years of professional training in extensive knowledge memorization and manual textual retrieval. Although these requirements align closely with the strengths of recent successful Large Language Models (LLMs), the scarcity of high-quality, specialized training data has hindered direct applications. To bridge this gap, we curated the PhiloCorpus-ZH, a rich collection of ancient Chinese texts spanning a millennium with 30 diverse topics, including firsthand folk copies. This corpus facilitated the development of PhiloGPT, the first LLM tailored for discovering ancient Chinese manuscripts. To effectively tackle complex philological tasks like restoration, attribution, and linguistic analysis, we introduced the PhiloCoP framework. Modeled on the analytical patterns of philologists, PhiloCoP enhances the LLM{'}s handling of historical linguistic peculiarities such as phonetic loans, polysemy, and syntactic inversions. We further integrated these tasks into the PhiloBenchmark, establishing a new standard for evaluating ancient Chinese LLMs addressing philology tasks. Deploying PhiloGPT in practical scenarios has enabled Dunhuang specialists to resolve philology tasks, such as identifying duplication of copied text and assisting archaeologists with text completion, demonstrating its potential in real-world applications.
[ "Zhang, Yuqing", "He, Baoyi", "Chen, Yihan", "Li, Hangqi", "Yue, Han", "Zhang, Shengyu", "Dou, Huaiyong", "Yan, Junchi", "Liu, Zemin", "Zhang, Yongquan", "Wu, Fei" ]
PhiloGPT: A Philology-Oriented Large Language Model for Ancient Chinese Manuscripts with Dunhuang as Case Study
emnlp-main.163
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.164.bib
https://aclanthology.org/2024.emnlp-main.164/
@inproceedings{liu-etal-2024-alignment, title = "Alignment-Enhanced Decoding: Defending Jailbreaks via Token-Level Adaptive Refining of Probability Distributions", author = "Liu, Quan and Zhou, Zhenhong and He, Longzhu and Liu, Yi and Zhang, Wei and Su, Sen", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.164", pages = "2802--2816", abstract = "Large language models are susceptible to jailbreak attacks, which can result in the generation of harmful content. While prior defenses mitigate these risks by perturbing or inspecting inputs, they ignore competing objectives, the underlying cause of alignment failures. In this paper, we propose Alignment-Enhanced Decoding (AED), a novel defense that employs adaptive decoding to address the root causes of jailbreak issues. We first define the Competitive Index to quantify alignment failures and utilize feedback from self-evaluation to compute post-alignment logits. Then, AED adaptively combines Competitive Index and post-alignment logits with the original logits to obtain harmless and helpful distributions. Consequently, our method enhances safety alignment while maintaining helpfulness. We conduct experiments across five models and four common jailbreaks, with the results validating the effectiveness of our approach.", }
Large language models are susceptible to jailbreak attacks, which can result in the generation of harmful content. While prior defenses mitigate these risks by perturbing or inspecting inputs, they ignore competing objectives, the underlying cause of alignment failures. In this paper, we propose Alignment-Enhanced Decoding (AED), a novel defense that employs adaptive decoding to address the root causes of jailbreak issues. We first define the Competitive Index to quantify alignment failures and utilize feedback from self-evaluation to compute post-alignment logits. Then, AED adaptively combines the Competitive Index and post-alignment logits with the original logits to obtain harmless and helpful distributions. Consequently, our method enhances safety alignment while maintaining helpfulness. We conduct experiments across five models and four common jailbreaks, with the results validating the effectiveness of our approach.
[ "Liu, Quan", "Zhou, Zhenhong", "He, Longzhu", "Liu, Yi", "Zhang, Wei", "Su, Sen" ]
Alignment-Enhanced Decoding: Defending Jailbreaks via Token-Level Adaptive Refining of Probability Distributions
emnlp-main.164
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.165.bib
https://aclanthology.org/2024.emnlp-main.165/
@inproceedings{sun-etal-2024-minicongts, title = "{M}ini{C}on{GTS}: A Near Ultimate Minimalist Contrastive Grid Tagging Scheme for Aspect Sentiment Triplet Extraction", author = "Sun, Qiao and Yang, Liujia and Ma, Minghao and Ye, Nanyang and Gu, Qinying", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.165", pages = "2817--2834", abstract = "Aspect Sentiment Triplet Extraction (ASTE) aims to co-extract the sentiment triplets in a given corpus. Existing approaches within the pretraining-finetuning paradigm tend to either meticulously craft complex tagging schemes and classification heads, or incorporate external semantic augmentation to enhance performance. In this study, we, for the first time, re-evaluate the redundancy in tagging schemes and the internal enhancement in pretrained representations. We propose a method to improve and utilize pretrained representations by integrating a minimalist tagging scheme and a novel token-level contrastive learning strategy. The proposed approach demonstrates comparable or superior performance compared to state-of-the-art techniques while featuring a more compact design and reduced computational overhead. Additionally, we are the first to formally evaluate GPT-4{'}s performance in few-shot learning and Chain-of-Thought scenarios for this task. The results demonstrate that the pretraining-finetuning paradigm remains highly effective even in the era of large language models.", }
Aspect Sentiment Triplet Extraction (ASTE) aims to co-extract the sentiment triplets in a given corpus. Existing approaches within the pretraining-finetuning paradigm tend to either meticulously craft complex tagging schemes and classification heads, or incorporate external semantic augmentation to enhance performance. In this study, we, for the first time, re-evaluate the redundancy in tagging schemes and the internal enhancement in pretrained representations. We propose a method to improve and utilize pretrained representations by integrating a minimalist tagging scheme and a novel token-level contrastive learning strategy. The proposed approach demonstrates comparable or superior performance compared to state-of-the-art techniques while featuring a more compact design and reduced computational overhead. Additionally, we are the first to formally evaluate GPT-4{'}s performance in few-shot learning and Chain-of-Thought scenarios for this task. The results demonstrate that the pretraining-finetuning paradigm remains highly effective even in the era of large language models.
[ "Sun, Qiao", "Yang, Liujia", "Ma, Minghao", "Ye, Nanyang", "Gu, Qinying" ]
MiniConGTS: A Near Ultimate Minimalist Contrastive Grid Tagging Scheme for Aspect Sentiment Triplet Extraction
emnlp-main.165
Poster
2406.11234
[ "https://github.com/qiaosun22/MiniConGTS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.166.bib
https://aclanthology.org/2024.emnlp-main.166/
@inproceedings{miaschi-etal-2024-evaluating, title = "Evaluating Large Language Models via Linguistic Profiling", author = "Miaschi, Alessio and Dell{'}Orletta, Felice and Venturi, Giulia", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.166", pages = "2835--2848", abstract = "Large Language Models (LLMs) undergo extensive evaluation against various benchmarks collected in established leaderboards to assess their performance across multiple tasks. However, to the best of our knowledge, there is a lack of comprehensive studies evaluating these models{'} linguistic abilities independent of specific tasks. In this paper, we introduce a novel evaluation methodology designed to test LLMs{'} sentence generation abilities under specific linguistic constraints. Drawing on the {`}linguistic profiling{'} approach, we rigorously investigate the extent to which five LLMs of varying sizes, tested in both zero- and few-shot scenarios, effectively adhere to (morpho)syntactic constraints. Our findings shed light on the linguistic proficiency of LLMs, revealing both their capabilities and limitations in generating linguistically-constrained sentences.", }
Large Language Models (LLMs) undergo extensive evaluation against various benchmarks collected in established leaderboards to assess their performance across multiple tasks. However, to the best of our knowledge, there is a lack of comprehensive studies evaluating these models{'} linguistic abilities independent of specific tasks. In this paper, we introduce a novel evaluation methodology designed to test LLMs{'} sentence generation abilities under specific linguistic constraints. Drawing on the {`}linguistic profiling{'} approach, we rigorously investigate the extent to which five LLMs of varying sizes, tested in both zero- and few-shot scenarios, effectively adhere to (morpho)syntactic constraints. Our findings shed light on the linguistic proficiency of LLMs, revealing both their capabilities and limitations in generating linguistically-constrained sentences.
[ "Miaschi, Alessio", "Dell{'}Orletta, Felice", "Venturi, Giulia" ]
Evaluating Large Language Models via Linguistic Profiling
emnlp-main.166
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.167.bib
https://aclanthology.org/2024.emnlp-main.167/
@inproceedings{loakman-etal-2024-ears, title = "With Ears to See and Eyes to Hear: Sound Symbolism Experiments with Multimodal Large Language Models", author = "Loakman, Tyler and Li, Yucheng and Lin, Chenghua", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.167", pages = "2849--2867", abstract = "Recently, Large Language Models (LLMs) and Vision Language Models (VLMs) have demonstrated aptitude as potential substitutes for human participants in experiments testing psycholinguistic phenomena. However, an understudied question is to what extent models that only have access to vision and text modalities are able to implicitly understand sound-based phenomena via abstract reasoning from orthography and imagery alone. To investigate this, we analyse the ability of VLMs and LLMs to demonstrate sound symbolism (i.e., to recognise a non-arbitrary link between sounds and concepts) as well as their ability to {``}hear{''} via the interplay of the language and vision modules of open and closed-source multimodal models. We perform multiple experiments, including replicating the classic Kiki-Bouba and Mil-Mal shape and magnitude symbolism tasks and comparing human judgements of linguistic iconicity with that of LLMs. Our results show that VLMs demonstrate varying levels of agreement with human labels, and more task information may be required for VLMs versus their human counterparts for \textit{in silico} experimentation. We additionally see through higher maximum agreement levels that Magnitude Symbolism is an easier pattern for VLMs to identify than Shape Symbolism, and that an understanding of linguistic iconicity is highly dependent on model size.", }
Recently, Large Language Models (LLMs) and Vision Language Models (VLMs) have demonstrated aptitude as potential substitutes for human participants in experiments testing psycholinguistic phenomena. However, an understudied question is to what extent models that only have access to vision and text modalities are able to implicitly understand sound-based phenomena via abstract reasoning from orthography and imagery alone. To investigate this, we analyse the ability of VLMs and LLMs to demonstrate sound symbolism (i.e., to recognise a non-arbitrary link between sounds and concepts) as well as their ability to {``}hear{''} via the interplay of the language and vision modules of open and closed-source multimodal models. We perform multiple experiments, including replicating the classic Kiki-Bouba and Mil-Mal shape and magnitude symbolism tasks and comparing human judgements of linguistic iconicity with that of LLMs. Our results show that VLMs demonstrate varying levels of agreement with human labels, and more task information may be required for VLMs versus their human counterparts for \textit{in silico} experimentation. We additionally see through higher maximum agreement levels that Magnitude Symbolism is an easier pattern for VLMs to identify than Shape Symbolism, and that an understanding of linguistic iconicity is highly dependent on model size.
[ "Loakman, Tyler", "Li, Yucheng", "Lin, Chenghua" ]
With Ears to See and Eyes to Hear: Sound Symbolism Experiments with Multimodal Large Language Models
emnlp-main.167
Poster
2409.14917
[ "https://github.com/tylerL404/WETSAETH" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.168.bib
https://aclanthology.org/2024.emnlp-main.168/
@inproceedings{zhang-etal-2024-kb, title = "{KB}-Plugin: A Plug-and-play Framework for Large Language Models to Induce Programs over Low-resourced Knowledge Bases", author = "Zhang, Jiajie and Cao, Shulin and Hu, Linmei and Feng, Ling and Hou, Lei and Li, Juanzi", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.168", pages = "2868--2882", abstract = "Program induction (PI) has become a promising paradigm for using knowledge bases (KBs) to help large language models (LLMs) answer complex knowledge-intensive questions. Nonetheless, PI typically relies on a large number of parallel question-program pairs to make the LLM aware of the schema of a given KB, and is thus challenging for many low-resourced KBs that lack annotated data. To this end, we propose KB-Plugin, a plug-and-play framework that enables LLMs to induce programs over any low-resourced KB. Firstly, KB-Plugin adopts self-supervised learning to encode the detailed schema information of a given KB into a pluggable module, namely schema plugin. Secondly, KB-Plugin utilizes abundant annotated data from a rich-resourced KB to train another pluggable module, namely PI plugin, which can help the LLM extract question-relevant schema information from the schema plugin of any KB and utilize the information to induce programs over this KB. Experiments show that KB-Plugin outperforms SoTA low-resourced PI methods with 25x smaller backbone LLM on both large-scale and domain-specific KBs, and even approaches the performance of supervised methods.", }
Program induction (PI) has become a promising paradigm for using knowledge bases (KBs) to help large language models (LLMs) answer complex knowledge-intensive questions. Nonetheless, PI typically relies on a large number of parallel question-program pairs to make the LLM aware of the schema of a given KB, and is thus challenging for many low-resourced KBs that lack annotated data. To this end, we propose KB-Plugin, a plug-and-play framework that enables LLMs to induce programs over any low-resourced KB. Firstly, KB-Plugin adopts self-supervised learning to encode the detailed schema information of a given KB into a pluggable module, namely schema plugin. Secondly, KB-Plugin utilizes abundant annotated data from a rich-resourced KB to train another pluggable module, namely PI plugin, which can help the LLM extract question-relevant schema information from the schema plugin of any KB and utilize the information to induce programs over this KB. Experiments show that KB-Plugin outperforms SoTA low-resourced PI methods with 25x smaller backbone LLM on both large-scale and domain-specific KBs, and even approaches the performance of supervised methods.
[ "Zhang, Jiajie", "Cao, Shulin", "Hu, Linmei", "Feng, Ling", "Hou, Lei", "Li, Juanzi" ]
KB-Plugin: A Plug-and-play Framework for Large Language Models to Induce Programs over Low-resourced Knowledge Bases
emnlp-main.168
Poster
2402.01619
[ "https://github.com/thu-keg/kb-plugin" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.169.bib
https://aclanthology.org/2024.emnlp-main.169/
@inproceedings{oyama-etal-2024-understanding, title = "Understanding Higher-Order Correlations Among Semantic Components in Embeddings", author = "Oyama, Momose and Yamagiwa, Hiroaki and Shimodaira, Hidetoshi", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.169", pages = "2883--2899", abstract = "Independent Component Analysis (ICA) offers interpretable semantic components of embeddings.While ICA theory assumes that embeddings can be linearly decomposed into independent components, real-world data often do not satisfy this assumption. Consequently, non-independencies remain between the estimated components, which ICA cannot eliminate. We quantified these non-independencies using higher-order correlations and demonstrated that when the higher-order correlation between two components is large, it indicates a strong semantic association between them, along with many words sharing common meanings with both components. The entire structure of non-independencies was visualized using a maximum spanning tree of semantic components. These findings provide deeper insights into embeddings through ICA.", }
Independent Component Analysis (ICA) offers interpretable semantic components of embeddings. While ICA theory assumes that embeddings can be linearly decomposed into independent components, real-world data often do not satisfy this assumption. Consequently, non-independencies remain between the estimated components, which ICA cannot eliminate. We quantified these non-independencies using higher-order correlations and demonstrated that when the higher-order correlation between two components is large, it indicates a strong semantic association between them, along with many words sharing common meanings with both components. The entire structure of non-independencies was visualized using a maximum spanning tree of semantic components. These findings provide deeper insights into embeddings through ICA.
[ "Oyama, Momose", "Yamagiwa, Hiroaki", "Shimodaira, Hidetoshi" ]
Understanding Higher-Order Correlations Among Semantic Components in Embeddings
emnlp-main.169
Poster
2409.19919
[ "https://github.com/momoseoyama/hoc" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.170.bib
https://aclanthology.org/2024.emnlp-main.170/
@inproceedings{zhu-etal-2024-dglf, title = "{DGLF}: A Dual Graph-based Learning Framework for Multi-modal Sarcasm Detection", author = "Zhu, Zhihong and Shen, Kefan and Chen, Zhaorun and Zhang, Yunyan and Chen, Yuyan and Jiao, Xiaoqi and Wan, Zhongwei and Xie, Shaorong and Liu, Wei and Wu, Xian and Zheng, Yefeng", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.170", pages = "2900--2912", }
No abstract found
[ "Zhu, Zhihong", "Shen, Kefan", "Chen, Zhaorun", "Zhang, Yunyan", "Chen, Yuyan", "Jiao, Xiaoqi", "Wan, Zhongwei", "Xie, Shaorong", "Liu, Wei", "Wu, Xian", "Zheng, Yefeng" ]
DGLF: A Dual Graph-based Learning Framework for Multi-modal Sarcasm Detection
emnlp-main.170
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.171.bib
https://aclanthology.org/2024.emnlp-main.171/
@inproceedings{rassin-etal-2024-evaluating, title = "Evaluating {D}-{MERIT} of Partial-annotation on Information Retrieval", author = "Rassin, Royi and Fairstein, Yaron and Kalinsky, Oren and Kushilevitz, Guy and Cohen, Nachshon and Libov, Alexander and Goldberg, Yoav", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.171", pages = "2913--2932", }
No abstract found
[ "Rassin, Royi", "Fairstein, Yaron", "Kalinsky, Oren", "Kushilevitz, Guy", "Cohen, Nachshon", "Libov, Alex", "er", "Goldberg, Yoav" ]
Evaluating D-MERIT of Partial-annotation on Information Retrieval
emnlp-main.171
Poster
2406.16048
[ "" ]
https://huggingface.co/papers/2406.16048
5
34
2
7
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.172.bib
https://aclanthology.org/2024.emnlp-main.172/
@inproceedings{quan-etal-2024-verification, title = "Verification and Refinement of Natural Language Explanations through {LLM}-Symbolic Theorem Proving", author = "Quan, Xin and Valentino, Marco and Dennis, Louise A. and Freitas, Andre", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.172", pages = "2933--2958", abstract = "Natural language explanations represent a proxy for evaluating explanation-based and multi-step Natural Language Inference (NLI) models. However, assessing the validity of explanations for NLI is challenging as it typically involves the crowd-sourcing of apposite datasets, a process that is time-consuming and prone to logical errors. To address existing limitations, this paper investigates the verification and refinement of natural language explanations through the integration of Large Language Models (LLMs) and Theorem Provers (TPs). Specifically, we present a neuro-symbolic framework, named Explanation-Refiner, that integrates TPs with LLMs to generate and formalise explanatory sentences and suggest potential inference strategies for NLI. In turn, the TP is employed to provide formal guarantees on the logical validity of the explanations and to generate feedback for subsequent improvements. We demonstrate how Explanation-Refiner can be jointly used to evaluate explanatory reasoning, autoformalisation, and error correction mechanisms of state-of-the-art LLMs as well as to automatically enhance the quality of explanations of variable complexity in different domains.", }
Natural language explanations represent a proxy for evaluating explanation-based and multi-step Natural Language Inference (NLI) models. However, assessing the validity of explanations for NLI is challenging as it typically involves the crowd-sourcing of apposite datasets, a process that is time-consuming and prone to logical errors. To address existing limitations, this paper investigates the verification and refinement of natural language explanations through the integration of Large Language Models (LLMs) and Theorem Provers (TPs). Specifically, we present a neuro-symbolic framework, named Explanation-Refiner, that integrates TPs with LLMs to generate and formalise explanatory sentences and suggest potential inference strategies for NLI. In turn, the TP is employed to provide formal guarantees on the logical validity of the explanations and to generate feedback for subsequent improvements. We demonstrate how Explanation-Refiner can be jointly used to evaluate explanatory reasoning, autoformalisation, and error correction mechanisms of state-of-the-art LLMs as well as to automatically enhance the quality of explanations of variable complexity in different domains.
[ "Quan, Xin", "Valentino, Marco", "Dennis, Louise A.", "Freitas, Andre" ]
Verification and Refinement of Natural Language Explanations through LLM-Symbolic Theorem Proving
emnlp-main.172
Oral
2405.01379
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.173.bib
https://aclanthology.org/2024.emnlp-main.173/
@inproceedings{zhang-etal-2024-calibrating, title = "Calibrating the Confidence of Large Language Models by Eliciting Fidelity", author = "Zhang, Mozhi and Huang, Mianqiu and Shi, Rundong and Guo, Linsen and Peng, Chong and Yan, Peng and Zhou, Yaqian and Qiu, Xipeng", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.173", pages = "2959--2979", abstract = "Large language models optimized with techniques like RLHF have achieved good alignment in being helpful and harmless. However, post-alignment, these language models often exhibit overconfidence, where the expressed confidence does not accurately calibrate with their correctness rate. In this paper, we decompose the language model confidence into the \textit{Uncertainty} about the question and the \textit{Fidelity} to the answer generated by language models. Then, we propose a plug-and-play method, \textit{UF Calibration}, to estimate the confidence of language models. Our method has shown good calibration performance by conducting experiments with 6 RLHF-LMs on four MCQA datasets. Moreover, we propose two novel metrics, IPR and CE, to evaluate the calibration of the model, and we have conducted a detailed discussion on \textit{Truly Well-Calibrated Confidence} for large language models. Our method could serve as a strong baseline, and we hope that this work will provide some insights into the model confidence calibration.", }
Large language models optimized with techniques like RLHF have achieved good alignment in being helpful and harmless. However, post-alignment, these language models often exhibit overconfidence, where the expressed confidence does not accurately calibrate with their correctness rate. In this paper, we decompose the language model confidence into the \textit{Uncertainty} about the question and the \textit{Fidelity} to the answer generated by language models. Then, we propose a plug-and-play method, \textit{UF Calibration}, to estimate the confidence of language models. Our method has shown good calibration performance by conducting experiments with 6 RLHF-LMs on four MCQA datasets. Moreover, we propose two novel metrics, IPR and CE, to evaluate the calibration of the model, and we have conducted a detailed discussion on \textit{Truly Well-Calibrated Confidence} for large language models. Our method could serve as a strong baseline, and we hope that this work will provide some insights into the model confidence calibration.
[ "Zhang, Mozhi", "Huang, Mianqiu", "Shi, Rundong", "Guo, Linsen", "Peng, Chong", "Yan, Peng", "Zhou, Yaqian", "Qiu, Xipeng" ]
Calibrating the Confidence of Large Language Models by Eliciting Fidelity
emnlp-main.173
Poster
2404.02655
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.174.bib
https://aclanthology.org/2024.emnlp-main.174/
@inproceedings{chen-etal-2024-accuracy, title = "The Accuracy Paradox in {RLHF}: When Better Reward Models Don{'}t Yield Better Language Models", author = "Chen, Yanjun and Zhu, Dawei and Sun, Yirong and Chen, Xinghao and Zhang, Wei and Shen, Xiaoyu", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.174", pages = "2980--2989", abstract = "Reinforcement Learning from Human Feedback significantly enhances Natural Language Processing by aligning language models with human expectations. A critical factor in this alignment is the strength of reward models used during training. This study explores whether stronger reward models invariably lead to better language models. In this paper, through experiments on relevance, factuality, and completeness tasks using the QA-FEEDBACK dataset and reward models based on Longformer, we uncover a surprising paradox: language models trained with moderately accurate reward models outperform those guided by highly accurate ones. This challenges the widely held belief that stronger reward models always lead to better language models, and opens up new avenues for future research into the key factors driving model performance and how to choose the most suitable reward models.", }
Reinforcement Learning from Human Feedback significantly enhances Natural Language Processing by aligning language models with human expectations. A critical factor in this alignment is the strength of reward models used during training. This study explores whether stronger reward models invariably lead to better language models. In this paper, through experiments on relevance, factuality, and completeness tasks using the QA-FEEDBACK dataset and reward models based on Longformer, we uncover a surprising paradox: language models trained with moderately accurate reward models outperform those guided by highly accurate ones. This challenges the widely held belief that stronger reward models always lead to better language models, and opens up new avenues for future research into the key factors driving model performance and how to choose the most suitable reward models.
[ "Chen, Yanjun", "Zhu, Dawei", "Sun, Yirong", "Chen, Xinghao", "Zhang, Wei", "Shen, Xiaoyu" ]
The Accuracy Paradox in RLHF: When Better Reward Models Don't Yield Better Language Models
emnlp-main.174
Poster
2410.06554
[ "https://github.com/EIT-NLP/AccuracyParadox-RLHF" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.175.bib
https://aclanthology.org/2024.emnlp-main.175/
@inproceedings{cosma-etal-2024-hard, title = "How Hard is this Test Set? {NLI} Characterization by Exploiting Training Dynamics", author = "Cosma, Adrian and Ruseti, Stefan and Dascalu, Mihai and Caragea, Cornelia", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.175", pages = "2990--3001", abstract = "Natural Language Inference (NLI) evaluation is crucial for assessing language understanding models; however, popular datasets suffer from systematic spurious correlations that artificially inflate actual model performance. To address this, we propose a method for the automated creation of a challenging test set without relying on the manual construction of artificial and unrealistic examples. We categorize the test set of popular NLI datasets into three difficulty levels by leveraging methods that exploit training dynamics. This categorization significantly reduces spurious correlation measures, with examples labeled as having the highest difficulty showing markedly decreased performance and encompassing more realistic and diverse linguistic phenomena. When our characterization method is applied to the training set, models trained with only a fraction of the data achieve comparable performance to those trained on the full dataset, surpassing other dataset characterization techniques. Our research addresses limitations in NLI dataset construction, providing a more authentic evaluation of model performance with implications for diverse NLU applications.", }
Natural Language Inference (NLI) evaluation is crucial for assessing language understanding models; however, popular datasets suffer from systematic spurious correlations that artificially inflate actual model performance. To address this, we propose a method for the automated creation of a challenging test set without relying on the manual construction of artificial and unrealistic examples. We categorize the test set of popular NLI datasets into three difficulty levels by leveraging methods that exploit training dynamics. This categorization significantly reduces spurious correlation measures, with examples labeled as having the highest difficulty showing markedly decreased performance and encompassing more realistic and diverse linguistic phenomena. When our characterization method is applied to the training set, models trained with only a fraction of the data achieve comparable performance to those trained on the full dataset, surpassing other dataset characterization techniques. Our research addresses limitations in NLI dataset construction, providing a more authentic evaluation of model performance with implications for diverse NLU applications.
[ "Cosma, Adrian", "Ruseti, Stefan", "Dascalu, Mihai", "Caragea, Cornelia" ]
How Hard is this Test Set? NLI Characterization by Exploiting Training Dynamics
emnlp-main.175
Poster
2410.03429
[ "https://github.com/cosmaadrian/nli-stress-test" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.176.bib
https://aclanthology.org/2024.emnlp-main.176/
@inproceedings{latouche-etal-2024-zero, title = "Zero-shot Cross-Lingual Transfer for Synthetic Data Generation in Grammatical Error Detection", author = "Latouche, Gaetan Lopez and Carbonneau, Marc-Andr{\'e} and Swanson, Benjamin", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.176", pages = "3002--3016", abstract = "Grammatical Error Detection (GED) methods rely heavily on human annotated error corpora. However, these annotations are unavailable in many low-resource languages. In this paper, we investigate GED in this context. Leveraging the zero-shot cross-lingual transfer capabilities of multilingual pre-trained language models, we train a model using data from a diverse set of languages to generate synthetic errors in other languages. These synthetic error corpora are then used to train a GED model. Specifically we propose a two-stage fine-tuning pipeline where the GED model is first fine-tuned on multilingual synthetic data from target languages followed by fine-tuning on human-annotated GED corpora from source languages. This approach outperforms current state-of-the-art annotation-free GED methods. We also analyse the errors produced by our method and other strong baselines, finding that our approach produces errors that are more diverse and more similar to human errors.", }
Grammatical Error Detection (GED) methods rely heavily on human-annotated error corpora. However, these annotations are unavailable in many low-resource languages. In this paper, we investigate GED in this context. Leveraging the zero-shot cross-lingual transfer capabilities of multilingual pre-trained language models, we train a model using data from a diverse set of languages to generate synthetic errors in other languages. These synthetic error corpora are then used to train a GED model. Specifically, we propose a two-stage fine-tuning pipeline where the GED model is first fine-tuned on multilingual synthetic data from target languages followed by fine-tuning on human-annotated GED corpora from source languages. This approach outperforms current state-of-the-art annotation-free GED methods. We also analyse the errors produced by our method and other strong baselines, finding that our approach produces errors that are more diverse and more similar to human errors.
[ "Latouche, Gaetan Lopez", "Carbonneau, Marc-Andr{\\'e}", "Swanson, Benjamin" ]
Zero-shot Cross-Lingual Transfer for Synthetic Data Generation in Grammatical Error Detection
emnlp-main.176
Poster
2407.11854
[ "" ]
https://huggingface.co/papers/2407.11854
1
2
2
3
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.177.bib
https://aclanthology.org/2024.emnlp-main.177/
@inproceedings{edman-etal-2024-cute, title = "{CUTE}: Measuring {LLM}s{'} Understanding of Their Tokens", author = "Edman, Lukas and Schmid, Helmut and Fraser, Alexander", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.177", pages = "3017--3026", abstract = "Large Language Models (LLMs) show remarkable performance on a wide variety of tasks. Most LLMs split text into multi-character tokens and process them as atomic units without direct access to individual characters. This raises the question: To what extent can LLMs learn orthographic information? To answer this, we propose a new benchmark, CUTE, which features a collection of tasks designed to test the orthographic knowledge of LLMs. We evaluate popular LLMs on CUTE, finding that most of them seem to know the spelling of their tokens, yet fail to use this information effectively to manipulate text, calling into question how much of this knowledge is generalizable.", }
Large Language Models (LLMs) show remarkable performance on a wide variety of tasks. Most LLMs split text into multi-character tokens and process them as atomic units without direct access to individual characters. This raises the question: To what extent can LLMs learn orthographic information? To answer this, we propose a new benchmark, CUTE, which features a collection of tasks designed to test the orthographic knowledge of LLMs. We evaluate popular LLMs on CUTE, finding that most of them seem to know the spelling of their tokens, yet fail to use this information effectively to manipulate text, calling into question how much of this knowledge is generalizable.
[ "Edman, Lukas", "Schmid, Helmut", "Fraser, Alex", "er" ]
CUTE: Measuring LLMs' Understanding of Their Tokens
emnlp-main.177
Poster
2409.15452
[ "https://github.com/leukas/cute" ]
https://huggingface.co/papers/2409.15452
0
1
0
3
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.178.bib
https://aclanthology.org/2024.emnlp-main.178/
@inproceedings{zhao-etal-2024-seer, title = "{SEER}: Self-Aligned Evidence Extraction for Retrieval-Augmented Generation", author = "Zhao, Xinping and Li, Dongfang and Zhong, Yan and Hu, Boren and Chen, Yibin and Hu, Baotian and Zhang, Min", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.178", pages = "3027--3041", abstract = "Recent studies in Retrieval-Augmented Generation (RAG) have investigated extracting evidence from retrieved passages to reduce computational costs and enhance the final RAG performance, yet it remains challenging. Existing methods heavily rely on heuristic-based augmentation, encountering several issues: (1) Poor generalization due to hand-crafted context filtering; (2) Semantics deficiency due to rule-based context chunking; (3) Skewed length due to sentence-wise filter learning. To address these issues, we propose a model-based evidence extraction learning framework, SEER, optimizing a vanilla model as an evidence extractor with desired properties through self-aligned learning. Extensive experiments show that our method largely improves the final RAG performance, enhances the faithfulness, helpfulness, and conciseness of the extracted evidence, and reduces the evidence length by 9.25 times. The code will be available at https://github.com/HITsz-TMG/SEER.", }
Recent studies in Retrieval-Augmented Generation (RAG) have investigated extracting evidence from retrieved passages to reduce computational costs and enhance the final RAG performance, yet it remains challenging. Existing methods heavily rely on heuristic-based augmentation, encountering several issues: (1) Poor generalization due to hand-crafted context filtering; (2) Semantics deficiency due to rule-based context chunking; (3) Skewed length due to sentence-wise filter learning. To address these issues, we propose a model-based evidence extraction learning framework, SEER, optimizing a vanilla model as an evidence extractor with desired properties through self-aligned learning. Extensive experiments show that our method largely improves the final RAG performance, enhances the faithfulness, helpfulness, and conciseness of the extracted evidence, and reduces the evidence length by 9.25 times. The code will be available at https://github.com/HITsz-TMG/SEER.
[ "Zhao, Xinping", "Li, Dongfang", "Zhong, Yan", "Hu, Boren", "Chen, Yibin", "Hu, Baotian", "Zhang, Min" ]
SEER: Self-Aligned Evidence Extraction for Retrieval-Augmented Generation
emnlp-main.178
Poster
2410.11315
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.179.bib
https://aclanthology.org/2024.emnlp-main.179/
@inproceedings{opedal-etal-2024-role, title = "On the Role of Context in Reading Time Prediction", author = "Opedal, Andreas and Chodroff, Eleanor and Cotterell, Ryan and Wilcox, Ethan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.179", pages = "3042--3058", abstract = "We present a new perspective on how readers integrate context during real-time language comprehension. Our proposals build on surprisal theory, which posits that the processing effort of a linguistic unit (e.g., a word) is an affine function of its in-context information content. We first observe that surprisal is only one out of many potential ways that a contextual predictor can be derived from a language model. Another one is the pointwise mutual information (PMI) between a unit and its context, which turns out to yield the same predictive power as surprisal when controlling for unigram frequency. Moreover, both PMI and surprisal are correlated with frequency. This means that neither PMI nor surprisal contains information about context alone. In response to this, we propose a technique where we project surprisal onto the orthogonal complement of frequency, yielding a new contextual predictor that is uncorrelated with frequency. Our experiments show that the proportion of variance in reading times explained by context is a lot smaller when context is represented by the orthogonalized predictor. From an interpretability standpoint, this indicates that previous studies may have overstated the role that context has in predicting reading times.", }
We present a new perspective on how readers integrate context during real-time language comprehension. Our proposals build on surprisal theory, which posits that the processing effort of a linguistic unit (e.g., a word) is an affine function of its in-context information content. We first observe that surprisal is only one out of many potential ways that a contextual predictor can be derived from a language model. Another one is the pointwise mutual information (PMI) between a unit and its context, which turns out to yield the same predictive power as surprisal when controlling for unigram frequency. Moreover, both PMI and surprisal are correlated with frequency. This means that neither PMI nor surprisal contains information about context alone. In response to this, we propose a technique where we project surprisal onto the orthogonal complement of frequency, yielding a new contextual predictor that is uncorrelated with frequency. Our experiments show that the proportion of variance in reading times explained by context is a lot smaller when context is represented by the orthogonalized predictor. From an interpretability standpoint, this indicates that previous studies may have overstated the role that context has in predicting reading times.
[ "Opedal, Andreas", "Chodroff, Eleanor", "Cotterell, Ryan", "Wilcox, Ethan" ]
On the Role of Context in Reading Time Prediction
emnlp-main.179
Poster
2409.08160
[ "https://github.com/rycolab/context-reading-time" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.180.bib
https://aclanthology.org/2024.emnlp-main.180/
@inproceedings{he-etal-2024-bc, title = "{BC}-Prover: Backward Chaining Prover for Formal Theorem Proving", author = "He, Yuhang and Zhang, Jihai and Bao, Jianzhu and Lin, Fangquan and Yang, Cheng and Qin, Bing and Xu, Ruifeng and Yin, Wotao", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.180", pages = "3059--3077", abstract = "Despite the remarkable progress made by large language models in mathematical reasoning, interactive theorem proving in formal logic still remains a prominent challenge. Previous methods resort to neural models for proofstep generation and search. However, they suffer from exploring possible proofsteps empirically in a large search space. Moreover, they directly use a less rigorous informal proof for proofstep generation, neglecting the incomplete reasoning within. In this paper, we propose BC-Prover, a backward chaining framework guided by pseudo steps. Specifically, BC-Prover prioritizes pseudo steps to proofstep generation. The pseudo steps boost the proof construction in two aspects: (1) Backward Chaining that decomposes the proof into sub-goals for goal-oriented exploration. (2) Step Planning that makes a fine-grained planning to bridge the gap between informal and formal proofs. Experiments on the miniF2F benchmark show significant performance gains by our framework over the state-of-the-art approaches. Our framework is also compatible with existing provers and further improves their performance with the backward chaining technique.", }
Despite the remarkable progress made by large language models in mathematical reasoning, interactive theorem proving in formal logic still remains a prominent challenge. Previous methods resort to neural models for proofstep generation and search. However, they suffer from exploring possible proofsteps empirically in a large search space. Moreover, they directly use a less rigorous informal proof for proofstep generation, neglecting the incomplete reasoning within. In this paper, we propose BC-Prover, a backward chaining framework guided by pseudo steps. Specifically, BC-Prover prioritizes pseudo steps to proofstep generation. The pseudo steps boost the proof construction in two aspects: (1) Backward Chaining that decomposes the proof into sub-goals for goal-oriented exploration. (2) Step Planning that makes a fine-grained planning to bridge the gap between informal and formal proofs. Experiments on the miniF2F benchmark show significant performance gains by our framework over the state-of-the-art approaches. Our framework is also compatible with existing provers and further improves their performance with the backward chaining technique.
[ "He, Yuhang", "Zhang, Jihai", "Bao, Jianzhu", "Lin, Fangquan", "Yang, Cheng", "Qin, Bing", "Xu, Ruifeng", "Yin, Wotao" ]
BC-Prover: Backward Chaining Prover for Formal Theorem Proving
emnlp-main.180
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.181.bib
https://aclanthology.org/2024.emnlp-main.181/
@inproceedings{mosbach-etal-2024-insights, title = "From Insights to Actions: The Impact of Interpretability and Analysis Research on {NLP}", author = "Mosbach, Marius and Gautam, Vagrant and Vergara Browne, Tom{\'a}s and Klakow, Dietrich and Geva, Mor", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.181", pages = "3078--3105", abstract = "Interpretability and analysis (IA) research is a growing subfield within NLP with the goal of developing a deeper understanding of the behavior or inner workings of NLP systems and methods. Despite growing interest in the subfield, a criticism of this work is that it lacks actionable insights and therefore has little impact on NLP. In this paper, we seek to quantify the impact of IA research on the broader field of NLP. We approach this with a mixed-methods analysis of: (1) a citation graph of 185K+ papers built from all papers published at ACL and EMNLP conferences from 2018 to 2023, and their references and citations, and (2) a survey of 138 members of the NLP community. Our quantitative results show that IA work is well-cited outside of IA, and central in the NLP citation graph. Through qualitative analysis of survey responses and manual annotation of 556 papers, we find that NLP researchers build on findings from IA work and perceive it as important for progress in NLP, multiple subfields, and rely on its findings and terminology for their own work. Many novel methods are proposed based on IA findings and highly influenced by them, but highly influential non-IA work cites IA findings without being driven by them. We end by summarizing what is missing in IA work today and provide a call to action, to pave the way for a more impactful future of IA research.", }
Interpretability and analysis (IA) research is a growing subfield within NLP with the goal of developing a deeper understanding of the behavior or inner workings of NLP systems and methods. Despite growing interest in the subfield, a criticism of this work is that it lacks actionable insights and therefore has little impact on NLP. In this paper, we seek to quantify the impact of IA research on the broader field of NLP. We approach this with a mixed-methods analysis of: (1) a citation graph of 185K+ papers built from all papers published at ACL and EMNLP conferences from 2018 to 2023, and their references and citations, and (2) a survey of 138 members of the NLP community. Our quantitative results show that IA work is well-cited outside of IA, and central in the NLP citation graph. Through qualitative analysis of survey responses and manual annotation of 556 papers, we find that NLP researchers build on findings from IA work and perceive it as important for progress in NLP, multiple subfields, and rely on its findings and terminology for their own work. Many novel methods are proposed based on IA findings and highly influenced by them, but highly influential non-IA work cites IA findings without being driven by them. We end by summarizing what is missing in IA work today and provide a call to action, to pave the way for a more impactful future of IA research.
[ "Mosbach, Marius", "Gautam, Vagrant", "Vergara Browne, Tom{\\'a}s", "Klakow, Dietrich", "Geva, Mor" ]
From Insights to Actions: The Impact of Interpretability and Analysis Research on NLP
emnlp-main.181
Poster
2406.12618
[ "https://github.com/mmarius/interpretability-impact" ]
https://huggingface.co/papers/2406.12618
2
5
1
5
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.182.bib
https://aclanthology.org/2024.emnlp-main.182/
@inproceedings{chai-etal-2024-autoregressive, title = "Autoregressive Pre-Training on Pixels and Texts", author = "Chai, Yekun and Liu, Qingyi and Xiao, Jingwu and Wang, Shuohuan and Sun, Yu and Wu, Hua", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.182", pages = "3106--3125", abstract = "The integration of visual and textual information represents a promising direction in the advancement of language models. In this paper, we explore the dual modality of language{---}both visual and textual{---}within an autoregressive framework, pre-trained on both document images and texts. Our method employs a multimodal training strategy, utilizing visual data through next patch prediction with a regression head and/or textual data through next token prediction with a classification head. We focus on understanding the interaction between these two modalities and their combined impact on model performance. Our extensive evaluation across a wide range of benchmarks shows that incorporating both visual and textual data significantly improves the performance of pixel-based language models. Remarkably, we find that a unidirectional pixel-based model trained solely on visual data can achieve comparable results to state-of-the-art bidirectional models on several language understanding tasks. This work uncovers the untapped potential of integrating visual and textual modalities for more effective language modeling. We release our code, data, and model checkpoints at https://github.com/ernie-research/pixelgpt.", }
The integration of visual and textual information represents a promising direction in the advancement of language models. In this paper, we explore the dual modality of language{---}both visual and textual{---}within an autoregressive framework, pre-trained on both document images and texts. Our method employs a multimodal training strategy, utilizing visual data through next patch prediction with a regression head and/or textual data through next token prediction with a classification head. We focus on understanding the interaction between these two modalities and their combined impact on model performance. Our extensive evaluation across a wide range of benchmarks shows that incorporating both visual and textual data significantly improves the performance of pixel-based language models. Remarkably, we find that a unidirectional pixel-based model trained solely on visual data can achieve comparable results to state-of-the-art bidirectional models on several language understanding tasks. This work uncovers the untapped potential of integrating visual and textual modalities for more effective language modeling. We release our code, data, and model checkpoints at https://github.com/ernie-research/pixelgpt.
[ "Chai, Yekun", "Liu, Qingyi", "Xiao, Jingwu", "Wang, Shuohuan", "Sun, Yu", "Wu, Hua" ]
Autoregressive Pre-Training on Pixels and Texts
emnlp-main.182
Poster
2404.10710
[ "https://github.com/ernie-research/pixelgpt" ]
https://huggingface.co/papers/2404.10710
0
0
0
6
[ "baidu/PixelGPT", "baidu/MonoGPT", "baidu/DualGPT" ]
[ "baidu/rendered_xnli", "baidu/rendered_GLUE" ]
[]
[ "baidu/PixelGPT", "baidu/MonoGPT", "baidu/DualGPT" ]
[ "baidu/rendered_xnli", "baidu/rendered_GLUE" ]
[]
1
https://aclanthology.org/2024.emnlp-main.183.bib
https://aclanthology.org/2024.emnlp-main.183/
@inproceedings{chai-etal-2024-training, title = "On Training Data Influence of {GPT} Models", author = "Chai, Yekun and Liu, Qingyi and Wang, Shuohuan and Sun, Yu and Peng, Qiwei and Wu, Hua", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.183", pages = "3126--3150", abstract = "Amidst the rapid advancements in generative language models, the investigation of how training data shapes the performance of GPT models is still emerging. This paper presents GPTfluence, a novel approach that leverages a featurized simulation to assess the impact of training examples on the training dynamics of GPT models. Our approach not only traces the influence of individual training instances on performance trajectories, such as loss and other key metrics, on targeted test points but also enables a comprehensive comparison with existing methods across various training scenarios in GPT models, ranging from 14 million to 2.8 billion parameters, across a range of downstream tasks. Contrary to earlier methods that struggle with generalization to new data, GPTfluence introduces a parameterized simulation of training dynamics, demonstrating robust generalization capabilities to unseen training data. This adaptability is evident across both fine-tuning and instruction-tuning scenarios, spanning tasks in natural language understanding and generation. We make our code and data publicly available at https://github.com/ernie-research/gptfluence.", }
Amidst the rapid advancements in generative language models, the investigation of how training data shapes the performance of GPT models is still emerging. This paper presents GPTfluence, a novel approach that leverages a featurized simulation to assess the impact of training examples on the training dynamics of GPT models. Our approach not only traces the influence of individual training instances on performance trajectories, such as loss and other key metrics, on targeted test points but also enables a comprehensive comparison with existing methods across various training scenarios in GPT models, ranging from 14 million to 2.8 billion parameters, across a range of downstream tasks. Contrary to earlier methods that struggle with generalization to new data, GPTfluence introduces a parameterized simulation of training dynamics, demonstrating robust generalization capabilities to unseen training data. This adaptability is evident across both fine-tuning and instruction-tuning scenarios, spanning tasks in natural language understanding and generation. We make our code and data publicly available at https://github.com/ernie-research/gptfluence.
[ "Chai, Yekun", "Liu, Qingyi", "Wang, Shuohuan", "Sun, Yu", "Peng, Qiwei", "Wu, Hua" ]
On Training Data Influence of GPT Models
emnlp-main.183
Poster
2404.07840
[ "https://github.com/eleutherai/pythia" ]
https://huggingface.co/papers/2404.07840
1
0
0
7
[]
[ "baidu/GPTDynamics" ]
[]
[]
[ "baidu/GPTDynamics" ]
[]
1
https://aclanthology.org/2024.emnlp-main.184.bib
https://aclanthology.org/2024.emnlp-main.184/
@inproceedings{subramonian-etal-2024-understanding, title = "Understanding {``}Democratization{''} in {NLP} and {ML} Research", author = "Subramonian, Arjun and Gautam, Vagrant and Klakow, Dietrich and Talat, Zeerak", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.184", pages = "3151--3166", abstract = "Recent improvements in natural language processing (NLP) and machine learning (ML) and increased mainstream adoption have led to researchers frequently discussing the {``}democratization{''} of artificial intelligence. In this paper, we seek to clarify how democratization is understood in NLP and ML publications, through large-scale mixed-methods analyses of papers using the keyword {``}democra*{''} published in NLP and adjacent venues. We find that democratization is most frequently used to convey (ease of) access to or use of technologies, without meaningfully engaging with theories of democratization, while research using other invocations of {``}democra*{''} tends to be grounded in theories of deliberation and debate. Based on our findings, we call for researchers to enrich their use of the term democratization with appropriate theory, towards democratic technologies beyond superficial access.", }
Recent improvements in natural language processing (NLP) and machine learning (ML) and increased mainstream adoption have led to researchers frequently discussing the {``}democratization{''} of artificial intelligence. In this paper, we seek to clarify how democratization is understood in NLP and ML publications, through large-scale mixed-methods analyses of papers using the keyword {``}democra*{''} published in NLP and adjacent venues. We find that democratization is most frequently used to convey (ease of) access to or use of technologies, without meaningfully engaging with theories of democratization, while research using other invocations of {``}democra*{''} tends to be grounded in theories of deliberation and debate. Based on our findings, we call for researchers to enrich their use of the term democratization with appropriate theory, towards democratic technologies beyond superficial access.
[ "Subramonian, Arjun", "Gautam, Vagrant", "Klakow, Dietrich", "Talat, Zeerak" ]
Understanding “Democratization” in NLP and ML Research
emnlp-main.184
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.185.bib
https://aclanthology.org/2024.emnlp-main.185/
@inproceedings{kim-etal-2024-dockd, title = "{D}oc{KD}: Knowledge Distillation from {LLM}s for Open-World Document Understanding Models", author = "Kim, Sungnyun and Liao, Haofu and Appalaraju, Srikar and Tang, Peng and Tu, Zhuowen and Satzoda, Ravi Kumar and Manmatha, R. and Mahadevan, Vijay and Soatto, Stefano", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.185", pages = "3167--3193", abstract = "Visual document understanding (VDU) is a challenging task that involves understanding documents across various modalities (text and image) and layouts (forms, tables, etc.). This study aims to enhance generalizability of small VDU models by distilling knowledge from LLMs. We identify that directly prompting LLMs often fails to generate informative and useful data. In response, we present a new framework (called DocKD) that enriches the data generation process by integrating external document knowledge. Specifically, we provide an LLM with various document elements like key-value pairs, layouts, and descriptions, to elicit open-ended answers. Our experiments show that DocKD produces high-quality document annotations and surpasses the direct knowledge distillation approach that does not leverage external document knowledge. Moreover, student VDU models trained with solely DocKD-generated data is not only comparable to those trained with human-annotated data on in-domain tasks but also significantly excel them on out-of-domain tasks.", }
Visual document understanding (VDU) is a challenging task that involves understanding documents across various modalities (text and image) and layouts (forms, tables, etc.). This study aims to enhance the generalizability of small VDU models by distilling knowledge from LLMs. We identify that directly prompting LLMs often fails to generate informative and useful data. In response, we present a new framework (called DocKD) that enriches the data generation process by integrating external document knowledge. Specifically, we provide an LLM with various document elements like key-value pairs, layouts, and descriptions, to elicit open-ended answers. Our experiments show that DocKD produces high-quality document annotations and surpasses the direct knowledge distillation approach that does not leverage external document knowledge. Moreover, student VDU models trained solely with DocKD-generated data are not only comparable to those trained with human-annotated data on in-domain tasks but also significantly exceed them on out-of-domain tasks.
[ "Kim, Sungnyun", "Liao, Haofu", "Appalaraju, Srikar", "Tang, Peng", "Tu, Zhuowen", "Satzoda, Ravi Kumar", "Manmatha, R.", "Mahadevan, Vijay", "Soatto, Stefano" ]
DocKD: Knowledge Distillation from LLMs for Open-World Document Understanding Models
emnlp-main.185
Poster
2410.03061
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.186.bib
https://aclanthology.org/2024.emnlp-main.186/
@inproceedings{hwang-etal-2024-cross, title = "Cross-lingual Transfer for Automatic Question Generation by Learning Interrogative Structures in Target Languages", author = "Hwang, Seonjeong and Kim, Yunsu and Lee, Gary", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.186", pages = "3194--3208", abstract = "Automatic question generation (QG) serves a wide range of purposes, such as augmenting question-answering (QA) corpora, enhancing chatbot systems, and developing educational materials. Despite its importance, most existing datasets predominantly focus on English, resulting in a considerable gap in data availability for other languages. Cross-lingual transfer for QG (XLT-QG) addresses this limitation by allowing models trained on high-resource language datasets to generate questions in low-resource languages. In this paper, we propose a simple and efficient XLT-QG method that operates without the need for monolingual, parallel, or labeled data in the target language, utilizing a small language model. Our model, trained solely on English QA datasets, learns interrogative structures from a limited set of question exemplars, which are then applied to generate questions in the target language. Experimental results show that our method outperforms several XLT-QG baselines and achieves performance comparable to GPT-3.5-turbo across different languages. Additionally, the synthetic data generated by our model proves beneficial for training multilingual QA models. With significantly fewer parameters than large language models and without requiring additional training for target languages, our approach offers an effective solution for QG and QA tasks across various languages.", }
Automatic question generation (QG) serves a wide range of purposes, such as augmenting question-answering (QA) corpora, enhancing chatbot systems, and developing educational materials. Despite its importance, most existing datasets predominantly focus on English, resulting in a considerable gap in data availability for other languages. Cross-lingual transfer for QG (XLT-QG) addresses this limitation by allowing models trained on high-resource language datasets to generate questions in low-resource languages. In this paper, we propose a simple and efficient XLT-QG method that operates without the need for monolingual, parallel, or labeled data in the target language, utilizing a small language model. Our model, trained solely on English QA datasets, learns interrogative structures from a limited set of question exemplars, which are then applied to generate questions in the target language. Experimental results show that our method outperforms several XLT-QG baselines and achieves performance comparable to GPT-3.5-turbo across different languages. Additionally, the synthetic data generated by our model proves beneficial for training multilingual QA models. With significantly fewer parameters than large language models and without requiring additional training for target languages, our approach offers an effective solution for QG and QA tasks across various languages.
[ "Hwang, Seonjeong", "Kim, Yunsu", "Lee, Gary" ]
Cross-lingual Transfer for Automatic Question Generation by Learning Interrogative Structures in Target Languages
emnlp-main.186
Poster
2410.03197
[ "" ]
https://huggingface.co/papers/2410.03197
1
0
0
3
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.187.bib
https://aclanthology.org/2024.emnlp-main.187/
@inproceedings{li-etal-2024-scalingfilter, title = "{S}caling{F}ilter: Assessing Data Quality through Inverse Utilization of Scaling Laws", author = "Li, Ruihang and Wei, Yixuan and Zhang, Miaosen and Yu, Nenghai and Hu, Han and Peng, Houwen", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.187", pages = "3209--3222", abstract = "High-quality data is crucial for the pre-training performance of large language models. Unfortunately, existing quality filtering methods rely on a known high-quality dataset as reference, which can introduce potential bias and compromise diversity. In this paper, we propose ScalingFilter, a novel approach that evaluates text quality based on the perplexity difference between two language models trained on the same data, thereby eliminating the influence of the reference dataset in the filtering process. An theoretical analysis shows that ScalingFilter is equivalent to an inverse utilization of scaling laws. Through training models with 1.3B parameters on the same data source processed by various quality filters, we find ScalingFilter can improve zero-shot performance of pre-trained models in downstream tasks. To assess the bias introduced by quality filtering, we introduce semantic diversity, a metric of utilizing text embedding models for semantic representations. Extensive experiments reveal that semantic diversity is a reliable indicator of dataset diversity, and ScalingFilter achieves an optimal balance between downstream performance and semantic diversity.", }
High-quality data is crucial for the pre-training performance of large language models. Unfortunately, existing quality filtering methods rely on a known high-quality dataset as reference, which can introduce potential bias and compromise diversity. In this paper, we propose ScalingFilter, a novel approach that evaluates text quality based on the perplexity difference between two language models trained on the same data, thereby eliminating the influence of the reference dataset in the filtering process. A theoretical analysis shows that ScalingFilter is equivalent to an inverse utilization of scaling laws. Through training models with 1.3B parameters on the same data source processed by various quality filters, we find ScalingFilter can improve zero-shot performance of pre-trained models in downstream tasks. To assess the bias introduced by quality filtering, we introduce semantic diversity, a metric that utilizes text embedding models for semantic representations. Extensive experiments reveal that semantic diversity is a reliable indicator of dataset diversity, and ScalingFilter achieves an optimal balance between downstream performance and semantic diversity.
[ "Li, Ruihang", "Wei, Yixuan", "Zhang, Miaosen", "Yu, Nenghai", "Hu, Han", "Peng, Houwen" ]
ScalingFilter: Assessing Data Quality through Inverse Utilization of Scaling Laws
emnlp-main.187
Poster
2408.08310
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
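As a rough illustration of the ScalingFilter idea above, the sketch below scores a document by the ratio of its perplexities under a smaller and a larger causal LM, treating a larger drop in perplexity as a sign of higher quality. The public GPT-2 checkpoints, the truncation length, and the exact scoring ratio are stand-in assumptions; the paper trains its own pair of models on the same data and derives the quality factor from scaling laws.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, text):
    # Token-level perplexity of `text` under a causal LM (teacher-forced loss).
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

# Stand-in checkpoints: the paper trains its own pair of "meta-models" of different
# sizes on the same data; public GPT-2 sizes only approximate that setup.
small = AutoModelForCausalLM.from_pretrained("gpt2").eval()
large = AutoModelForCausalLM.from_pretrained("gpt2-medium").eval()
tok = AutoTokenizer.from_pretrained("gpt2")

def quality_score(text):
    # Assumed scoring rule: high-quality text should benefit more from extra model
    # capacity, so a larger small-to-large perplexity ratio suggests higher quality.
    return perplexity(small, tok, text) / perplexity(large, tok, text)

docs = [
    "The mitochondrion is the organelle that produces most of a cell's ATP.",
    "buy cheap cheap buy now now click click here here best price",
]
for doc in sorted(docs, key=quality_score, reverse=True):
    print(round(quality_score(doc), 3), doc[:50])
```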
https://aclanthology.org/2024.emnlp-main.188.bib
https://aclanthology.org/2024.emnlp-main.188/
@inproceedings{wu-etal-2024-word, title = "Word Alignment as Preference for Machine Translation", author = "Wu, Qiyu and Nagata, Masaaki and Miao, Zhongtao and Tsuruoka, Yoshimasa", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.188", pages = "3223--3239", abstract = "The problem of hallucination and omission, a long-standing problem in machine translation (MT), is more pronounced when a large language model (LLM) is used in MT because an LLM itself is susceptible to these phenomena. In this work, we mitigate the problem in an LLM-based MT model by guiding it to better word alignment. We first study the correlation between word alignment and the phenomena of hallucination and omission in MT. Then we propose to utilize word alignment as preference to optimize the LLM-based MT model. The preference data are constructed by selecting chosen and rejected translations from multiple MT tools. Subsequently, direct preference optimization is used to optimize the LLM-based model towards the preference signal. Given the absence of evaluators specifically designed for hallucination and omission in MT, we further propose selecting hard instances and utilizing GPT-4 to directly evaluate the performance of the models in mitigating these issues. We verify the rationality of these designed evaluation methods by experiments, followed by extensive results demonstrating the effectiveness of word alignment-based preference optimization to mitigate hallucination and omission. On the other hand, although it shows promise in mitigating hallucination and omission, the overall performance of MT in different language directions remains mixed, with slight increases in BLEU and decreases in COMET.", }
The problem of hallucination and omission, a long-standing problem in machine translation (MT), is more pronounced when a large language model (LLM) is used in MT because an LLM itself is susceptible to these phenomena. In this work, we mitigate the problem in an LLM-based MT model by guiding it to better word alignment. We first study the correlation between word alignment and the phenomena of hallucination and omission in MT. Then we propose to utilize word alignment as preference to optimize the LLM-based MT model. The preference data are constructed by selecting chosen and rejected translations from multiple MT tools. Subsequently, direct preference optimization is used to optimize the LLM-based model towards the preference signal. Given the absence of evaluators specifically designed for hallucination and omission in MT, we further propose selecting hard instances and utilizing GPT-4 to directly evaluate the performance of the models in mitigating these issues. We verify the rationality of these designed evaluation methods by experiments, followed by extensive results demonstrating the effectiveness of word alignment-based preference optimization to mitigate hallucination and omission. On the other hand, although it shows promise in mitigating hallucination and omission, the overall performance of MT in different language directions remains mixed, with slight increases in BLEU and decreases in COMET.
[ "Wu, Qiyu", "Nagata, Masaaki", "Miao, Zhongtao", "Tsuruoka, Yoshimasa" ]
Word Alignment as Preference for Machine Translation
emnlp-main.188
Oral
2405.09223
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
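The record above constructs preference data by choosing and rejecting translations from multiple MT systems based on word alignment, then applies direct preference optimization. The sketch below shows only the data-construction half with a toy dictionary aligner; the `LEXICON`, the coverage score, and the prompt/chosen/rejected record layout are illustrative assumptions. A real pipeline would use a trained word aligner and feed the resulting pairs to a DPO trainer.

```python
# Toy word aligner: a real system would use an alignment tool; the lexicon below
# is a stand-in so the sketch runs end to end.
LEXICON = {"die": "the", "katze": "cat", "sitzt": "sits", "auf": "on", "der": "the", "matte": "mat"}

def coverage(src: str, hyp: str) -> float:
    # Fraction of source words that align to some word in the hypothesis; low
    # coverage is used here as a crude proxy for omission.
    src_words = src.lower().split()
    hyp_words = set(hyp.lower().split())
    aligned = sum(1 for w in src_words if LEXICON.get(w) in hyp_words)
    return aligned / max(len(src_words), 1)

def build_preference_pair(src: str, candidates: list[str]) -> dict:
    # Rank candidate translations from several MT systems by alignment coverage and
    # keep the best/worst as the chosen/rejected pair for DPO-style training.
    ranked = sorted(candidates, key=lambda c: coverage(src, c), reverse=True)
    return {"prompt": f"Translate to English: {src}", "chosen": ranked[0], "rejected": ranked[-1]}

pair = build_preference_pair(
    "die katze sitzt auf der matte",
    ["The cat sits on the mat.", "The cat.", "A dog runs in the park."],
)
print(pair)
```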
https://aclanthology.org/2024.emnlp-main.189.bib
https://aclanthology.org/2024.emnlp-main.189/
@inproceedings{fan-etal-2024-improving, title = "Improving Multi-party Dialogue Generation via Topic and Rhetorical Coherence", author = "Fan, Yaxin and Li, Peifeng and Zhu, Qiaoming", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.189", pages = "3240--3253", abstract = "Previous studies on multi-party dialogue generation predominantly concentrated on modeling the reply-to structure of dialogue histories, always overlooking the coherence between generated responses and target utterances. To address this issue, we propose a Reinforcement Learning approach emphasizing both Topic and Rhetorical Coherence (RL-TRC). In particular, the topic- and rhetorical-coherence tasks are designed to enhance the model{'}s perception of coherence with the target utterance. Subsequently, an agent is employed to learn a coherence policy, which guides the generation of responses that are topically and rhetorically aligned with the target utterance. Furthermore, three discourse-aware rewards are developed to assess the coherence between the generated response and the target utterance, with the objective of optimizing the policy. The experimental results and in-depth analyses on two popular datasets demonstrate that our RL-TRC significantly outperforms the state-of-the-art baselines, particularly in generating responses that are more coherent with the target utterances.", }
Previous studies on multi-party dialogue generation predominantly concentrated on modeling the reply-to structure of dialogue histories, while overlooking the coherence between generated responses and target utterances. To address this issue, we propose a Reinforcement Learning approach emphasizing both Topic and Rhetorical Coherence (RL-TRC). In particular, the topic- and rhetorical-coherence tasks are designed to enhance the model{'}s perception of coherence with the target utterance. Subsequently, an agent is employed to learn a coherence policy, which guides the generation of responses that are topically and rhetorically aligned with the target utterance. Furthermore, three discourse-aware rewards are developed to assess the coherence between the generated response and the target utterance, with the objective of optimizing the policy. The experimental results and in-depth analyses on two popular datasets demonstrate that our RL-TRC significantly outperforms the state-of-the-art baselines, particularly in generating responses that are more coherent with the target utterances.
[ "Fan, Yaxin", "Li, Peifeng", "Zhu, Qiaoming" ]
Improving Multi-party Dialogue Generation via Topic and Rhetorical Coherence
emnlp-main.189
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.190.bib
https://aclanthology.org/2024.emnlp-main.190/
@inproceedings{he-etal-2024-seekr, title = "{SEEKR}: Selective Attention-Guided Knowledge Retention for Continual Learning of Large Language Models", author = "He, Jinghan and Guo, Haiyun and Zhu, Kuan and Zhao, Zihan and Tang, Ming and Wang, Jinqiao", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.190", pages = "3254--3266", abstract = "Continual learning (CL) is crucial for language models to dynamically adapt to the evolving real-world demands. To mitigate the catastrophic forgetting problem in CL, data replay has been proven a simple and effective strategy, and the subsequent data-replay-based distillation can further enhance the performance. However, existing methods fail to fully exploit the knowledge embedded in models from previous tasks, resulting in the need for a relatively large number of replay samples to achieve good results. In this work, we first explore and emphasize the importance of attention weights in knowledge retention, and then propose a SElective attEntion-guided Knowledge Retention method (SEEKR) for data-efficient replay-based continual learning of large language models (LLMs). Specifically, SEEKR performs attention distillation on the selected attention heads for finer-grained knowledge retention, where the proposed forgettability-based and task-sensitivity-based measures are used to identify the most valuable attention heads. Experimental results on two continual learning benchmarks for LLMs demonstrate the superiority of SEEKR over the existing methods on both performance and efficiency. Explicitly, SEEKR achieves comparable or even better performance with only 1/10 of the replayed data used by other methods, and reduces the proportion of replayed data to 1{\%}. The code is available at https://github.com/jinghan1he/SEEKR.", }
Continual learning (CL) is crucial for language models to dynamically adapt to the evolving real-world demands. To mitigate the catastrophic forgetting problem in CL, data replay has been proven a simple and effective strategy, and the subsequent data-replay-based distillation can further enhance the performance. However, existing methods fail to fully exploit the knowledge embedded in models from previous tasks, resulting in the need for a relatively large number of replay samples to achieve good results. In this work, we first explore and emphasize the importance of attention weights in knowledge retention, and then propose a SElective attEntion-guided Knowledge Retention method (SEEKR) for data-efficient replay-based continual learning of large language models (LLMs). Specifically, SEEKR performs attention distillation on the selected attention heads for finer-grained knowledge retention, where the proposed forgettability-based and task-sensitivity-based measures are used to identify the most valuable attention heads. Experimental results on two continual learning benchmarks for LLMs demonstrate the superiority of SEEKR over the existing methods on both performance and efficiency. Explicitly, SEEKR achieves comparable or even better performance with only 1/10 of the replayed data used by other methods, and reduces the proportion of replayed data to 1{\%}. The code is available at https://github.com/jinghan1he/SEEKR.
[ "He, Jinghan", "Guo, Haiyun", "Zhu, Kuan", "Zhao, Zihan", "Tang, Ming", "Wang, Jinqiao" ]
SEEKR: Selective Attention-Guided Knowledge Retention for Continual Learning of Large Language Models
emnlp-main.190
Poster
2411.06171
[ "https://github.com/jinghan1he/seekr" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
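A minimal sketch of the attention-distillation step described in the SEEKR record above: given attention maps from the previous-task (teacher) model and the current (student) model, keep only the top-k most valuable heads and penalize divergence between their attention distributions. The random tensors, the KL formulation, and the placeholder head scores stand in for the paper's forgettability- and task-sensitivity-based measures.

```python
import torch
import torch.nn.functional as F

def attention_distillation_loss(student_attn, teacher_attn, head_scores, k=4):
    # student_attn / teacher_attn: [layers, heads, seq, seq] attention probabilities.
    # head_scores: [layers, heads] importance scores; the real method combines
    # forgettability- and task-sensitivity-based measures, faked here as random values.
    L, H, S, _ = student_attn.shape
    topk = torch.topk(head_scores.flatten(), k).indices      # k most valuable heads
    layer_idx, head_idx = topk // H, topk % H
    s = student_attn[layer_idx, head_idx]                    # [k, seq, seq]
    t = teacher_attn[layer_idx, head_idx]
    # KL divergence between teacher and student attention rows on the selected heads.
    return F.kl_div(torch.log(s + 1e-9), t, reduction="batchmean")

layers, heads, seq = 2, 8, 16
teacher = torch.softmax(torch.randn(layers, heads, seq, seq), dim=-1)
student = torch.softmax(torch.randn(layers, heads, seq, seq), dim=-1)
scores = torch.rand(layers, heads)   # placeholder importance scores
print(attention_distillation_loss(student, teacher, scores, k=4).item())
```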
https://aclanthology.org/2024.emnlp-main.191.bib
https://aclanthology.org/2024.emnlp-main.191/
@inproceedings{yu-ananiadou-2024-neuron, title = "Neuron-Level Knowledge Attribution in Large Language Models", author = "Yu, Zeping and Ananiadou, Sophia", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.191", pages = "3267--3280", abstract = "Identifying important neurons for final predictions is essential for understanding the mechanisms of large language models. Due to computational constraints, current attribution techniques struggle to operate at neuron level. In this paper, we propose a static method for pinpointing significant neurons. Compared to seven other methods, our approach demonstrates superior performance across three metrics. Additionally, since most static methods typically only identify {``}value neurons{''} directly contributing to the final prediction, we propose a method for identifying {``}query neurons{''} which activate these {``}value neurons{''}. Finally, we apply our methods to analyze six types of knowledge across both attention and feed-forward network (FFN) layers. Our method and analysis are helpful for understanding the mechanisms of knowledge storage and set the stage for future research in knowledge editing. The code is available on https://github.com/zepingyu0512/neuron-attribution.", }
Identifying important neurons for final predictions is essential for understanding the mechanisms of large language models. Due to computational constraints, current attribution techniques struggle to operate at neuron level. In this paper, we propose a static method for pinpointing significant neurons. Compared to seven other methods, our approach demonstrates superior performance across three metrics. Additionally, since most static methods typically only identify {``}value neurons{''} directly contributing to the final prediction, we propose a method for identifying {``}query neurons{''} which activate these {``}value neurons{''}. Finally, we apply our methods to analyze six types of knowledge across both attention and feed-forward network (FFN) layers. Our method and analysis are helpful for understanding the mechanisms of knowledge storage and set the stage for future research in knowledge editing. The code is available on https://github.com/zepingyu0512/neuron-attribution.
[ "Yu, Zeping", "Ananiadou, Sophia" ]
Neuron-Level Knowledge Attribution in Large Language Models
emnlp-main.191
Poster
2312.12141
[ "https://github.com/zepingyu0512/neuron-attribution" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
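The sketch below illustrates one common static way to score FFN "value neurons" for a specific prediction: multiply each neuron's activation by the projection of its value vector onto the target token's unembedding, so no extra forward passes or gradients are needed. This is a generic logit-lens-style approximation for GPT-2, not the paper's exact scoring function, and the prompt/target pair is an arbitrary example.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt, target = "The capital of France is", " Paris"
ids = tok(prompt, return_tensors="pt").input_ids
target_id = tok(target).input_ids[0]

acts = {}
def save_act(layer):
    def hook(module, inputs, output):
        acts[layer] = output[0, -1]          # FFN inner activations at the last position
    return hook

# Hook the activation module inside each block's MLP to capture neuron activations.
hooks = [blk.mlp.act.register_forward_hook(save_act(i)) for i, blk in enumerate(model.transformer.h)]
with torch.no_grad():
    model(ids)
for h in hooks:
    h.remove()

with torch.no_grad():
    unembed_target = model.lm_head.weight[target_id]          # [hidden]
    scores = []
    for i, blk in enumerate(model.transformer.h):
        value_vectors = blk.mlp.c_proj.weight                 # [inner, hidden] (GPT-2 Conv1D)
        # Per-neuron contribution to the target token's logit at the last position.
        scores.append(acts[i] * (value_vectors @ unembed_target))
    top = torch.topk(torch.cat(scores), 5).indices
print("top neurons (flattened layer*inner index):", top.tolist())
```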
https://aclanthology.org/2024.emnlp-main.192.bib
https://aclanthology.org/2024.emnlp-main.192/
@inproceedings{yu-ananiadou-2024-large, title = "How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for Metric Learning", author = "Yu, Zeping and Ananiadou, Sophia", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.192", pages = "3281--3292", abstract = "We investigate the mechanism of in-context learning (ICL) on sentence classification tasks with semantically-unrelated labels ({``}foo{''}/{``}bar{''}). We find intervening in only 1{\%} heads (named {``}in-context heads{''}) significantly affects ICL accuracy from 87.6{\%} to 24.4{\%}. To understand this phenomenon, we analyze the value-output vectors in these heads and discover that the vectors at each label position contain substantial information about the corresponding labels. Furthermore, we observe that the prediction shift from {``}foo{''} to {``}bar{''} is due to the respective reduction and increase in these heads{'} attention scores at {``}foo{''} and {``}bar{''} positions. Therefore, we propose a hypothesis for ICL: in in-context heads, the value-output matrices extract label features, while the query-key matrices compute the similarity between the features at the last position and those at each label position. The query and key matrices can be considered as two towers that learn the similarity metric between the last position{'}s features and each demonstration at label positions. Using this hypothesis, we explain the majority label bias and recency bias in ICL and propose two methods to reduce these biases by 22{\%} and 17{\%}, respectively.", }
We investigate the mechanism of in-context learning (ICL) on sentence classification tasks with semantically-unrelated labels ({``}foo{''}/{``}bar{''}). We find intervening in only 1{\%} of heads (named {``}in-context heads{''}) significantly reduces ICL accuracy from 87.6{\%} to 24.4{\%}. To understand this phenomenon, we analyze the value-output vectors in these heads and discover that the vectors at each label position contain substantial information about the corresponding labels. Furthermore, we observe that the prediction shift from {``}foo{''} to {``}bar{''} is due to the respective reduction and increase in these heads{'} attention scores at {``}foo{''} and {``}bar{''} positions. Therefore, we propose a hypothesis for ICL: in in-context heads, the value-output matrices extract label features, while the query-key matrices compute the similarity between the features at the last position and those at each label position. The query and key matrices can be considered as two towers that learn the similarity metric between the last position{'}s features and each demonstration at label positions. Using this hypothesis, we explain the majority label bias and recency bias in ICL and propose two methods to reduce these biases by 22{\%} and 17{\%}, respectively.
[ "Yu, Zeping", "Ananiadou, Sophia" ]
How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for Metric Learning
emnlp-main.192
Poster
2402.02872
[ "https://github.com/zepingyu0512/in-context-mechanism" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
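The sketch below plays out the two-tower reading of an in-context head described above: the query matrix maps the last position's features, the key matrix maps each demonstration's label-position features, and their scaled dot products act as a learned similarity metric that decides which label position receives attention. The dimensions and random hidden states are toy values, and tying W_q and W_k is purely so the toy metric behaves like a similarity kernel; it is not a claim about the actual trained weights.

```python
import torch

torch.manual_seed(0)
d = 16
W = torch.randn(d, d)
W_q, W_k = W, W            # tied here only so the bilinear form acts like a similarity kernel

# Toy hidden states at the "foo"/"bar" label positions of two demonstrations,
# and at the last position of the query sentence (made to resemble the "foo" demo).
h_foo, h_bar = torch.randn(d), torch.randn(d)
h_last = h_foo + 0.1 * torch.randn(d)

def attention_over_labels(h_last, label_states):
    # Two-tower metric: q = W_q h_last, k_i = W_k h_i, score_i = q . k_i / sqrt(d).
    q = W_q @ h_last
    ks = torch.stack([W_k @ h for h in label_states])
    return torch.softmax(ks @ q / d ** 0.5, dim=-1)

attn = attention_over_labels(h_last, [h_foo, h_bar])
print({"foo": round(attn[0].item(), 3), "bar": round(attn[1].item(), 3)})
# Under the paper's hypothesis, the value-output matrices copy label information, so
# more attention on the "foo" position shifts the prediction toward "foo".
```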
https://aclanthology.org/2024.emnlp-main.193.bib
https://aclanthology.org/2024.emnlp-main.193/
@inproceedings{yu-ananiadou-2024-interpreting, title = "Interpreting Arithmetic Mechanism in Large Language Models through Comparative Neuron Analysis", author = "Yu, Zeping and Ananiadou, Sophia", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.193", pages = "3293--3306", abstract = "We find arithmetic ability resides within a limited number of attention heads, with each head specializing in distinct operations. To delve into the reason, we introduce the Comparative Neuron Analysis (CNA) method, which identifies an internal logic chain consisting of four distinct stages from input to prediction: feature enhancing with shallow FFN neurons, feature transferring by shallow attention layers, feature predicting by arithmetic heads, and prediction enhancing among deep FFN neurons. Moreover, we identify the human-interpretable FFN neurons within both feature-enhancing and feature-predicting stages. These findings lead us to investigate the mechanism of LoRA, revealing that it enhances prediction probabilities by amplifying the coefficient scores of FFN neurons related to predictions. Finally, we apply our method in model pruning for arithmetic tasks and model editing for reducing gender bias. Code is on https://github.com/zepingyu0512/arithmetic-mechanism.", }
We find arithmetic ability resides within a limited number of attention heads, with each head specializing in distinct operations. To delve into the reason, we introduce the Comparative Neuron Analysis (CNA) method, which identifies an internal logic chain consisting of four distinct stages from input to prediction: feature enhancing with shallow FFN neurons, feature transferring by shallow attention layers, feature predicting by arithmetic heads, and prediction enhancing among deep FFN neurons. Moreover, we identify the human-interpretable FFN neurons within both feature-enhancing and feature-predicting stages. These findings lead us to investigate the mechanism of LoRA, revealing that it enhances prediction probabilities by amplifying the coefficient scores of FFN neurons related to predictions. Finally, we apply our method in model pruning for arithmetic tasks and model editing for reducing gender bias. Code is on https://github.com/zepingyu0512/arithmetic-mechanism.
[ "Yu, Zeping", "Ananiadou, Sophia" ]
Interpreting Arithmetic Mechanism in Large Language Models through Comparative Neuron Analysis
emnlp-main.193
Oral
2409.14144
[ "https://github.com/zepingyu0512/arithmetic-mechanism" ]
https://huggingface.co/papers/2409.14144
0
0
0
2
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.194.bib
https://aclanthology.org/2024.emnlp-main.194/
@inproceedings{tatariya-etal-2024-pixology, title = "Pixology: Probing the Linguistic and Visual Capabilities of Pixel-based Language Models", author = "Tatariya, Kushal and Araujo, Vladimir and Bauwens, Thomas and de Lhoneux, Miryam", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.194", pages = "3307--3320", abstract = "Pixel-based language models have emerged as a compelling alternative to subword-based language modelling, particularly because they can represent virtually any script. PIXEL, a canonical example of such a model, is a vision transformer that has been pre-trained on rendered text. While PIXEL has shown promising cross-script transfer abilities and robustness to orthographic perturbations, it falls short of outperforming monolingual subword counterparts like BERT in most other contexts. This discrepancy raises questions about the amount of linguistic knowledge learnt by these models and whether their performance in language tasks stems more from their visual capabilities than their linguistic ones. To explore this, we probe PIXEL using a variety of linguistic and visual tasks to assess its position on the vision-to-language spectrum. Our findings reveal a substantial gap between the model{'}s visual and linguistic understanding. The lower layers of PIXEL predominantly capture superficial visual features, whereas the higher layers gradually learn more syntactic and semantic abstractions. Additionally, we examine variants of PIXEL trained with different text rendering strategies, discovering that introducing certain orthographic constraints at the input level can facilitate earlier learning of surface-level features. With this study, we hope to provide insights that aid the further development of pixel-based language models.", }
Pixel-based language models have emerged as a compelling alternative to subword-based language modelling, particularly because they can represent virtually any script. PIXEL, a canonical example of such a model, is a vision transformer that has been pre-trained on rendered text. While PIXEL has shown promising cross-script transfer abilities and robustness to orthographic perturbations, it falls short of outperforming monolingual subword counterparts like BERT in most other contexts. This discrepancy raises questions about the amount of linguistic knowledge learnt by these models and whether their performance in language tasks stems more from their visual capabilities than their linguistic ones. To explore this, we probe PIXEL using a variety of linguistic and visual tasks to assess its position on the vision-to-language spectrum. Our findings reveal a substantial gap between the model{'}s visual and linguistic understanding. The lower layers of PIXEL predominantly capture superficial visual features, whereas the higher layers gradually learn more syntactic and semantic abstractions. Additionally, we examine variants of PIXEL trained with different text rendering strategies, discovering that introducing certain orthographic constraints at the input level can facilitate earlier learning of surface-level features. With this study, we hope to provide insights that aid the further development of pixel-based language models.
[ "Tatariya, Kushal", "Araujo, Vladimir", "Bauwens, Thomas", "de Lhoneux, Miryam" ]
Pixology: Probing the Linguistic and Visual Capabilities of Pixel-based Language Models
emnlp-main.194
Poster
2410.12011
[ "https://github.com/kushaltatariya/Pixology" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
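A generic layer-wise probing recipe of the kind used for this sort of analysis is sketched below: collect hidden states from every layer, train a linear probe per layer on a labelled property, and compare accuracies across depth. The random features and binary labels are placeholders for real PIXEL activations and real linguistic or visual annotations, and logistic regression is just one reasonable probe choice.

```python
import torch
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)
n_layers, n_examples, hidden = 12, 200, 64
# Placeholder activations: in practice these would be hidden states extracted from
# each layer of the probed model for every example.
hidden_states = torch.randn(n_layers, n_examples, hidden)   # [layer, example, dim]
labels = torch.randint(0, 2, (n_examples,))                  # e.g. a POS tag or a visual property

train, test = slice(0, 150), slice(150, 200)
y = labels.numpy()
for layer in range(n_layers):
    X = hidden_states[layer].numpy()
    probe = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    print(f"layer {layer:2d}: probe accuracy {probe.score(X[test], y[test]):.2f}")
```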
https://aclanthology.org/2024.emnlp-main.195.bib
https://aclanthology.org/2024.emnlp-main.195/
@inproceedings{fan-etal-2024-goldcoin, title = "{G}old{C}oin: Grounding Large Language Models in Privacy Laws via Contextual Integrity Theory", author = "Fan, Wei and Li, Haoran and Deng, Zheye and Wang, Weiqi and Song, Yangqiu", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.195", pages = "3321--3343", abstract = "Privacy issues arise prominently during the inappropriate transmission of information between entities. Existing research primarily studies privacy by exploring various privacy attacks, defenses, and evaluations within narrowly predefined patterns, while neglecting that privacy is not an isolated, context-free concept limited to traditionally sensitive data (e.g., social security numbers), but intertwined with intricate social contexts that complicate the identification and analysis of potential privacy violations. The advent of Large Language Models (LLMs) offers unprecedented opportunities for incorporating the nuanced scenarios outlined in privacy laws to tackle these complex privacy issues. However, the scarcity of open-source relevant case studies restricts the efficiency of LLMs in aligning with specific legal statutes. To address this challenge, we introduce a novel framework, GoldCoin, designed to efficiently ground LLMs in privacy laws for judicial assessing privacy violations. Our framework leverages the theory of contextual integrity as a bridge, creating numerous synthetic scenarios grounded in relevant privacy statutes (e.g., HIPAA), to assist LLMs in comprehending the complex contexts for identifying privacy risks in the real world. Extensive experimental results demonstrate that GoldCoin markedly enhances LLMs{'} capabilities in recognizing privacy risks across real court cases, surpassing the baselines on different judicial tasks.", }
Privacy issues arise prominently during the inappropriate transmission of information between entities. Existing research primarily studies privacy by exploring various privacy attacks, defenses, and evaluations within narrowly predefined patterns, while neglecting that privacy is not an isolated, context-free concept limited to traditionally sensitive data (e.g., social security numbers), but intertwined with intricate social contexts that complicate the identification and analysis of potential privacy violations. The advent of Large Language Models (LLMs) offers unprecedented opportunities for incorporating the nuanced scenarios outlined in privacy laws to tackle these complex privacy issues. However, the scarcity of open-source relevant case studies restricts the efficiency of LLMs in aligning with specific legal statutes. To address this challenge, we introduce a novel framework, GoldCoin, designed to efficiently ground LLMs in privacy laws for the judicial assessment of privacy violations. Our framework leverages the theory of contextual integrity as a bridge, creating numerous synthetic scenarios grounded in relevant privacy statutes (e.g., HIPAA), to assist LLMs in comprehending the complex contexts for identifying privacy risks in the real world. Extensive experimental results demonstrate that GoldCoin markedly enhances LLMs{'} capabilities in recognizing privacy risks across real court cases, surpassing the baselines on different judicial tasks.
[ "Fan, Wei", "Li, Haoran", "Deng, Zheye", "Wang, Weiqi", "Song, Yangqiu" ]
GoldCoin: Grounding Large Language Models in Privacy Laws via Contextual Integrity Theory
emnlp-main.195
Poster
2406.11149
[ "https://github.com/hkust-knowcomp/goldcoin" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.emnlp-main.196.bib
https://aclanthology.org/2024.emnlp-main.196/
@inproceedings{al-laith-etal-2024-noise, title = "Noise, Novels, Numbers. A Framework for Detecting and Categorizing Noise in {D}anish and {N}orwegian Literature", author = "Al-Laith, Ali and Hershcovich, Daniel and Bjerring-Hansen, Jens and Parby, Jakob Ingemann and Conroy, Alexander and Tangherlini, Timothy R", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.196", pages = "3344--3354", abstract = "We present a framework for detecting and categorizing noise in literary texts, demonstrated through its application to Danish and Norwegian literature from the late 19-th century. Noise, understood as {``}aberrant sonic behaviour,{''} is not only an auditory phenomenon but also a cultural construct tied to the processes of civilization and urbanization.We begin by utilizing topic modeling techniques to identify noise-related documents, followed by fine-tuning BERT-based language models trained on Danish and Norwegian texts to analyze a corpus of over 800 novels.We identify and track the prevalence of noise in these texts, offering insights into the literary perceptions of noise during the Scandinavian {``}Modern Breakthrough{''} period (1870-1899). Our contributions include the development of a comprehensive dataset annotated for noise-related segments and their categorization into human-made, non-human-made, and musical noises. This study illustrates the framework{'}s potential for enhancing the understanding of the relationship between noise and its literary representations, providing a deeper appreciation of the auditory elements in literary works, including as sources for cultural history.", }
We present a framework for detecting and categorizing noise in literary texts, demonstrated through its application to Danish and Norwegian literature from the late 19th century. Noise, understood as {``}aberrant sonic behaviour,{''} is not only an auditory phenomenon but also a cultural construct tied to the processes of civilization and urbanization. We begin by utilizing topic modeling techniques to identify noise-related documents, followed by fine-tuning BERT-based language models trained on Danish and Norwegian texts to analyze a corpus of over 800 novels. We identify and track the prevalence of noise in these texts, offering insights into the literary perceptions of noise during the Scandinavian {``}Modern Breakthrough{''} period (1870-1899). Our contributions include the development of a comprehensive dataset annotated for noise-related segments and their categorization into human-made, non-human-made, and musical noises. This study illustrates the framework{'}s potential for enhancing the understanding of the relationship between noise and its literary representations, providing a deeper appreciation of the auditory elements in literary works, including as sources for cultural history.
[ "Al-Laith, Ali", "Hershcovich, Daniel", "Bjerring-Hansen, Jens", "Parby, Jakob Ingemann", "Conroy, Alex", "er", "Tangherlini, Timothy R" ]
Noise, Novels, Numbers. A Framework for Detecting and Categorizing Noise in Danish and Norwegian Literature
emnlp-main.196
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.197.bib
https://aclanthology.org/2024.emnlp-main.197/
@inproceedings{ashkboos-etal-2024-quik, title = "{QUIK}: Towards End-to-end 4-Bit Inference on Generative Large Language Models", author = "Ashkboos, Saleh and Markov, Ilia and Frantar, Elias and Zhong, Tingxuan and Wang, Xincheng and Ren, Jie and Hoefler, Torsten and Alistarh, Dan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.197", pages = "3355--3371", abstract = "Large Language Models (LLMs) from the GPT family have become extremely popular, leading to a race towards reducing their inference costs to allow for efficient local computation. However, the vast majority of existing work focuses on weight-only quantization, which can reduce runtime costs in the memory-bound one-token-at-a-time generative setting, but does not address costs in compute-bound scenarios, such as batched inference or prompt processing.In this paper, we address the general quantization problem, where \textit{both weights and activations} should be quantized, which leads to computational improvements in general. We show that the majority of inference computations for large generative models can be performed with both weights and activations being cast to 4 bits, while at the same time maintaining good accuracy. We achieve this via a hybrid quantization strategy called QUIK that compresses most of the weights and activations to 4-bit, while keeping a small fraction of {``}outlier{''} weights and activations in higher-precision. QUIK is that it is designed with computational efficiency in mind: we provide GPU kernels matching the QUIK format with highly-efficient layer-wise runtimes, which lead to practical end-to-end throughput improvements of up to 3.4x relative to FP16 execution. We provide detailed studies for models from the OPT, LLaMA-2 and Falcon families, as well as a first instance of accurate inference using quantization plus 2:4 sparsity.Anonymized code is available.", }
Large Language Models (LLMs) from the GPT family have become extremely popular, leading to a race towards reducing their inference costs to allow for efficient local computation. However, the vast majority of existing work focuses on weight-only quantization, which can reduce runtime costs in the memory-bound one-token-at-a-time generative setting, but does not address costs in compute-bound scenarios, such as batched inference or prompt processing. In this paper, we address the general quantization problem, where \textit{both weights and activations} should be quantized, which leads to computational improvements in general. We show that the majority of inference computations for large generative models can be performed with both weights and activations being cast to 4 bits, while at the same time maintaining good accuracy. We achieve this via a hybrid quantization strategy called QUIK that compresses most of the weights and activations to 4-bit, while keeping a small fraction of {``}outlier{''} weights and activations in higher-precision. QUIK is designed with computational efficiency in mind: we provide GPU kernels matching the QUIK format with highly-efficient layer-wise runtimes, which lead to practical end-to-end throughput improvements of up to 3.4x relative to FP16 execution. We provide detailed studies for models from the OPT, LLaMA-2 and Falcon families, as well as a first instance of accurate inference using quantization plus 2:4 sparsity. Anonymized code is available.
[ "Ashkboos, Saleh", "Markov, Ilia", "Frantar, Elias", "Zhong, Tingxuan", "Wang, Xincheng", "Ren, Jie", "Hoefler, Torsten", "Alistarh, Dan" ]
QUIK: Towards End-to-end 4-Bit Inference on Generative Large Language Models
emnlp-main.197
Poster
2310.09259
[ "https://github.com/ist-daslab/quik" ]
https://huggingface.co/papers/2310.09259
1
1
0
8
[]
[]
[]
[]
[]
[]
1
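The sketch below illustrates the hybrid weight-splitting idea in the QUIK record above: a small set of outlier input features stays in full precision while the rest of the weight matrix is quantized to 4-bit integers, and the matmul combines both parts. Selecting outliers by weight-row magnitude, the 5% outlier fraction, and dequantizing before the matmul are simplifications; a real kernel would pick outliers from activation statistics, quantize activations as well, and run the base product in low-precision integer arithmetic.

```python
import torch

def quantize_int4_sym(w):
    # Symmetric 4-bit quantization per output column: integers in [-7, 7] plus a scale.
    scale = w.abs().amax(dim=0, keepdim=True) / 7.0
    scale = torch.where(scale == 0, torch.ones_like(scale), scale)
    return torch.clamp(torch.round(w / scale), -7, 7), scale

def split_and_quantize(weight, outlier_frac=0.05):
    # weight: [in_features, out_features] for y = x @ weight. Keep a small fraction
    # of input features ("outliers") in full precision, quantize the rest to 4 bits.
    row_mag = weight.abs().amax(dim=1)
    n_outl = max(1, int(outlier_frac * weight.shape[0]))
    mask = torch.zeros(weight.shape[0], dtype=torch.bool)
    mask[torch.topk(row_mag, n_outl).indices] = True
    q, scale = quantize_int4_sym(weight[~mask])
    return {"q": q, "scale": scale, "w_outl": weight[mask], "mask": mask}

def hybrid_matmul(x, packed):
    # y = x_base @ dequant(W_base) + x_outl @ W_outl; a real implementation keeps the
    # first product in integer arithmetic instead of dequantizing.
    mask = packed["mask"]
    w_base = packed["q"] * packed["scale"]
    return x[:, ~mask] @ w_base + x[:, mask] @ packed["w_outl"]

torch.manual_seed(0)
W, x = torch.randn(64, 32), torch.randn(4, 64)
packed = split_and_quantize(W)
err = (hybrid_matmul(x, packed) - x @ W).abs().mean()
print(f"mean abs error vs full precision: {err:.4f}")
```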
https://aclanthology.org/2024.emnlp-main.198.bib
https://aclanthology.org/2024.emnlp-main.198/
@inproceedings{shubi-etal-2024-fine, title = "Fine-Grained Prediction of Reading Comprehension from Eye Movements", author = "Shubi, Omer and Meiri, Yoav and Hadar, Cfir Avraham and Berzak, Yevgeni", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.198", pages = "3372--3391", abstract = "Can human reading comprehension be assessed from eye movements in reading? In this work, we address this longstanding question using large-scale eyetracking data. We focus on a cardinal and largely unaddressed variant of this question: predicting reading comprehension of a single participant for a single question from their eye movements over a single paragraph. We tackle this task using a battery of recent models from the literature, and three new multimodal language models. We evaluate the models in two different reading regimes: ordinary reading and information seeking, and examine their generalization to new textual items, new participants, and the combination of both. The evaluations suggest that the task is highly challenging, and highlight the importance of benchmarking against a strong text-only baseline. While in some cases eye movements provide improvements over such a baseline, they tend to be small. This could be due to limitations of current modelling approaches, limitations of the data, or because eye movement behavior does not sufficiently pertain to fine-grained aspects of reading comprehension processes. Our study provides an infrastructure for making further progress on this question.", }
Can human reading comprehension be assessed from eye movements in reading? In this work, we address this longstanding question using large-scale eyetracking data. We focus on a cardinal and largely unaddressed variant of this question: predicting reading comprehension of a single participant for a single question from their eye movements over a single paragraph. We tackle this task using a battery of recent models from the literature, and three new multimodal language models. We evaluate the models in two different reading regimes: ordinary reading and information seeking, and examine their generalization to new textual items, new participants, and the combination of both. The evaluations suggest that the task is highly challenging, and highlight the importance of benchmarking against a strong text-only baseline. While in some cases eye movements provide improvements over such a baseline, they tend to be small. This could be due to limitations of current modelling approaches, limitations of the data, or because eye movement behavior does not sufficiently pertain to fine-grained aspects of reading comprehension processes. Our study provides an infrastructure for making further progress on this question.
[ "Shubi, Omer", "Meiri, Yoav", "Hadar, Cfir Avraham", "Berzak, Yevgeni" ]
Fine-Grained Prediction of Reading Comprehension from Eye Movements
emnlp-main.198
Oral
2410.04484
[ "https://github.com/lacclab/Reading-Comprehension-Prediction" ]
https://huggingface.co/papers/2410.04484
1
1
1
4
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.emnlp-main.199.bib
https://aclanthology.org/2024.emnlp-main.199/
@inproceedings{zhuang-etal-2024-efficientrag, title = "{E}fficient{RAG}: Efficient Retriever for Multi-Hop Question Answering", author = "Zhuang, Ziyuan and Zhang, Zhiyang and Cheng, Sitao and Yang, Fangkai and Liu, Jia and Huang, Shujian and Lin, Qingwei and Rajmohan, Saravan and Zhang, Dongmei and Zhang, Qi", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.199", pages = "3392--3411", abstract = "Retrieval-augmented generation (RAG) methods encounter difficulties when addressing complex questions like multi-hop queries.While iterative retrieval methods improve performance by gathering additional information, current approaches often rely on multiple calls of large language models (LLMs).In this paper, we introduce EfficientRAG, an efficient retriever for multi-hop question answering.EfficientRAG iteratively generates new queries without the need for LLM calls at each iteration and filters out irrelevant information.Experimental results demonstrate that EfficientRAG surpasses existing RAG methods on three open-domain multi-hop question-answering datasets.The code is available in [aka.ms/efficientrag](https://github.com/NIL-zhuang/EfficientRAG-official).", }
Retrieval-augmented generation (RAG) methods encounter difficulties when addressing complex questions like multi-hop queries. While iterative retrieval methods improve performance by gathering additional information, current approaches often rely on multiple calls of large language models (LLMs). In this paper, we introduce EfficientRAG, an efficient retriever for multi-hop question answering. EfficientRAG iteratively generates new queries without the need for LLM calls at each iteration and filters out irrelevant information. Experimental results demonstrate that EfficientRAG surpasses existing RAG methods on three open-domain multi-hop question-answering datasets. The code is available at [aka.ms/efficientrag](https://github.com/NIL-zhuang/EfficientRAG-official).
[ "Zhuang, Ziyuan", "Zhang, Zhiyang", "Cheng, Sitao", "Yang, Fangkai", "Liu, Jia", "Huang, Shujian", "Lin, Qingwei", "Rajmohan, Saravan", "Zhang, Dongmei", "Zhang, Qi" ]
EfficientRAG: Efficient Retriever for Multi-Hop Question Answering
emnlp-main.199
Poster
2408.04259
[ "https://github.com/nil-zhuang/efficientrag-official" ]
https://huggingface.co/papers/2408.04259
1
0
0
10
[]
[]
[]
[]
[]
[]
1
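A minimal sketch of the iterative loop described above: a lightweight next-query generator and chunk filter (stand-ins for EfficientRAG's trained modules) drive multi-hop retrieval, and the large model would only be called once at the end with the accumulated evidence. The toy corpus, word-overlap retriever, and heuristics are all illustrative assumptions.

```python
CORPUS = [
    "Alan Turing was born in London.",
    "London is the capital of the United Kingdom.",
    "The United Kingdom's currency is the pound sterling.",
]

def retrieve(query, k=1):
    # Toy lexical retriever: rank documents by word overlap with the query.
    words = set(query.lower().split())
    return sorted(CORPUS, key=lambda d: len(words & set(d.lower().split())), reverse=True)[:k]

def filter_relevant(query, chunks):
    # Stand-in for the learned filter: keep chunks sharing at least one word with the query.
    return [c for c in chunks if set(query.lower().split()) & set(c.lower().split())]

def next_query(question, evidence):
    # Stand-in for the learned query generator: fold the newest evidence into the
    # query so the next hop can reach documents the question alone would miss.
    return question + " " + evidence[-1] if evidence else question

def multi_hop_retrieve(question, hops=2):
    evidence, query = [], question
    for _ in range(hops):
        evidence.extend(filter_relevant(query, retrieve(query, k=1)))
        query = next_query(question, evidence)
    return evidence

context = multi_hop_retrieve("In which country was Alan Turing born?")
print(context)   # this collected evidence would go to the LLM in a single final call
```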
https://aclanthology.org/2024.emnlp-main.200.bib
https://aclanthology.org/2024.emnlp-main.200/
@inproceedings{shashidhar-etal-2024-unsupervised, title = "Unsupervised Human Preference Learning", author = "Shashidhar, Sumuk and Chinta, Abhinav and Sahai, Vaibhav and Tur, Dilek Hakkani", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.200", pages = "3412--3445", abstract = "Large language models demonstrate impressive reasoning abilities but struggle to provide personalized content due to their lack of individual user preference information. Existing methods, such as in-context learning and parameter-efficient fine-tuning, fall short in capturing the complexity of human preferences, especially given the small, personal datasets individuals possess. In this paper, we propose a novel approach utilizing small parameter models as preference agents to generate natural language rules that guide a larger, pre-trained model, enabling efficient personalization. Our method involves a small, local {``}steering wheel{''} model that directs the outputs of a much larger foundation model, producing content tailored to an individual{'}s preferences while leveraging the extensive knowledge and capabilities of the large model. Importantly, this personalization is achieved without the need to fine-tune the large model. Experimental results on email and article datasets, demonstrate that our technique significantly outperforms baseline personalization methods. By allowing foundation models to adapt to individual preferences in a data and compute-efficient manner, our approach paves the way for highly personalized language model applications.", }
Large language models demonstrate impressive reasoning abilities but struggle to provide personalized content due to their lack of individual user preference information. Existing methods, such as in-context learning and parameter-efficient fine-tuning, fall short in capturing the complexity of human preferences, especially given the small, personal datasets individuals possess. In this paper, we propose a novel approach utilizing small parameter models as preference agents to generate natural language rules that guide a larger, pre-trained model, enabling efficient personalization. Our method involves a small, local {``}steering wheel{''} model that directs the outputs of a much larger foundation model, producing content tailored to an individual{'}s preferences while leveraging the extensive knowledge and capabilities of the large model. Importantly, this personalization is achieved without the need to fine-tune the large model. Experimental results on email and article datasets demonstrate that our technique significantly outperforms baseline personalization methods. By allowing foundation models to adapt to individual preferences in a data and compute-efficient manner, our approach paves the way for highly personalized language model applications.
[ "Shashidhar, Sumuk", "Chinta, Abhinav", "Sahai, Vaibhav", "Tur, Dilek Hakkani" ]
Unsupervised Human Preference Learning
emnlp-main.200
Poster
2410.03731
[ "" ]
https://huggingface.co/papers/2410.03731
2
2
0
4
[]
[]
[]
[]
[]
[]
1
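The {``}steering wheel{''} pattern in the record above can be sketched as follows: a small local model turns a user's own examples into natural-language preference rules, which are simply prepended to the frozen large model's prompt; no fine-tuning of the large model is involved. Both generate_* functions below are hypothetical placeholders rather than real APIs, and the rule format is an assumption.

```python
def generate_rules_with_small_model(user_examples: list[str]) -> str:
    # Placeholder: a real system would prompt a small, locally fine-tuned model with the
    # user's past emails/articles and ask it to summarize the user's stylistic preferences.
    return "- Keep sentences short.\n- Use a friendly, informal tone.\n- Sign off with 'Cheers'."

def generate_with_large_model(prompt: str) -> str:
    # Placeholder for a call to a frozen foundation model (no fine-tuning involved).
    return f"[large-model completion for a prompt of {len(prompt)} characters]"

def personalized_generate(task: str, user_examples: list[str]) -> str:
    # Rules from the small "steering wheel" model are prepended to steer the large model.
    rules = generate_rules_with_small_model(user_examples)
    prompt = f"Follow these user preferences:\n{rules}\n\nTask: {task}"
    return generate_with_large_model(prompt)

print(personalized_generate("Write a reply declining the meeting.",
                            ["example email 1", "example email 2"]))
```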