bibtex_url | proceedings | bibtext | abstract | authors | title | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2024.emnlp-main.801.bib | https://aclanthology.org/2024.emnlp-main.801/ | @inproceedings{yin-etal-2024-asl,
title = "{ASL} {STEM} {W}iki: Dataset and Benchmark for Interpreting {STEM} Articles",
author = "Yin, Kayo and
Singh, Chinmay and
Minakov, Fyodor O and
Milan, Vanessa and
Daum{\'e} Iii, Hal and
Zhang, Cyril and
Lu, Alex Xijie and
Bragg, Danielle",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.801",
pages = "14474--14490",
abstract = "Deaf and hard-of-hearing (DHH) students face significant barriers in accessing science, technology, engineering, and mathematics (STEM) education, notably due to the scarcity of STEM resources in signed languages. To help address this, we introduce ASL STEM Wiki: a parallel corpus of 254 Wikipedia articles on STEM topics in English, interpreted into over 300 hours of American Sign Language (ASL). ASL STEM Wiki is the first continuous signing dataset focused on STEM, facilitating the development of AI resources for STEM education in ASL.We identify several use cases of ASL STEM Wiki with human-centered applications. For example, because this dataset highlights the frequent use of fingerspelling for technical concepts, which inhibits DHH students{'} ability to learn,we develop models to identify fingerspelled words{---}which can later be used to query for appropriate ASL signs to suggest to interpreters.",
}
| Deaf and hard-of-hearing (DHH) students face significant barriers in accessing science, technology, engineering, and mathematics (STEM) education, notably due to the scarcity of STEM resources in signed languages. To help address this, we introduce ASL STEM Wiki: a parallel corpus of 254 Wikipedia articles on STEM topics in English, interpreted into over 300 hours of American Sign Language (ASL). ASL STEM Wiki is the first continuous signing dataset focused on STEM, facilitating the development of AI resources for STEM education in ASL. We identify several use cases of ASL STEM Wiki with human-centered applications. For example, because this dataset highlights the frequent use of fingerspelling for technical concepts, which inhibits DHH students{'} ability to learn, we develop models to identify fingerspelled words{---}which can later be used to query for appropriate ASL signs to suggest to interpreters. | [
"Yin, Kayo",
"Singh, Chinmay",
"Minakov, Fyodor O",
"Milan, Vanessa",
"Daum{\\'e} Iii, Hal",
"Zhang, Cyril",
"Lu, Alex Xijie",
"Bragg, Danielle"
] | ASL STEM Wiki: Dataset and Benchmark for Interpreting STEM Articles | emnlp-main.801 | Poster | 2411.05783 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.802.bib | https://aclanthology.org/2024.emnlp-main.802/ | @inproceedings{agrawal-etal-2024-automatic-metrics,
title = "Can Automatic Metrics Assess High-Quality Translations?",
author = "Agrawal, Sweta and
Farinhas, Ant{\'o}nio and
Rei, Ricardo and
Martins, Andre",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.802",
pages = "14491--14502",
abstract = "Automatic metrics for evaluating translation quality are typically validated by measuring how well they correlate with human assessments. However, correlation methods tend to capture only the ability of metrics to differentiate between good and bad source-translation pairs, overlooking their reliability in distinguishing alternative translations for the same source. In this paper, we confirm that this is indeed the case by showing that current metrics are insensitive to nuanced differences in translation quality. This effect is most pronounced when the quality is high and the variance among alternatives is low. Given this finding, we shift towards detecting high-quality correct translations, an important problem in practical decision-making scenarios where a binary check of correctness is prioritized over a nuanced evaluation of quality. Using the MQM framework as the gold standard, we systematically stress-test the ability of current metrics to identify translations with no errors as marked by humans. Our findings reveal that current metrics often over or underestimate translation quality, indicating significant room for improvement in machine translation evaluation.",
}
| Automatic metrics for evaluating translation quality are typically validated by measuring how well they correlate with human assessments. However, correlation methods tend to capture only the ability of metrics to differentiate between good and bad source-translation pairs, overlooking their reliability in distinguishing alternative translations for the same source. In this paper, we confirm that this is indeed the case by showing that current metrics are insensitive to nuanced differences in translation quality. This effect is most pronounced when the quality is high and the variance among alternatives is low. Given this finding, we shift towards detecting high-quality correct translations, an important problem in practical decision-making scenarios where a binary check of correctness is prioritized over a nuanced evaluation of quality. Using the MQM framework as the gold standard, we systematically stress-test the ability of current metrics to identify translations with no errors as marked by humans. Our findings reveal that current metrics often over or underestimate translation quality, indicating significant room for improvement in machine translation evaluation. | [
"Agrawal, Sweta",
"Farinhas, Ant{\\'o}nio",
"Rei, Ricardo",
"Martins, Andre"
] | Can Automatic Metrics Assess High-Quality Translations? | emnlp-main.802 | Poster | 2405.18348 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.803.bib | https://aclanthology.org/2024.emnlp-main.803/ | @inproceedings{agrawal-etal-2024-modeling,
title = "Modeling User Preferences with Automatic Metrics: Creating a High-Quality Preference Dataset for Machine Translation",
author = "Agrawal, Sweta and
De Souza, Jos{\'e} G. C. and
Rei, Ricardo and
Farinhas, Ant{\'o}nio and
Faria, Gon{\c{c}}alo and
Fernandes, Patrick and
Guerreiro, Nuno M and
Martins, Andre",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.803",
pages = "14503--14519",
abstract = "Alignment with human preferences is an important step in developing accurate and safe large language models. This is no exception in machine translation (MT), where better handling of language nuances and context-specific variations leads to improved quality. However, preference data based on human feedback can be very expensive to obtain and curate at a large scale. Automatic metrics, on the other hand, can induce preferences, but they might not match human expectations perfectly. In this paper, we propose an approach that leverages the best of both worlds. We first collect sentence-level quality assessments from professional linguists on translations generated by multiple high-quality MT systems and evaluate the ability of current automatic metrics to recover these preferences. We then use this analysis to curate a new dataset, MT-Pref (metric induced translation preference) dataset, which comprises 18k instances covering 18 language directions, using texts sourced from multiple domains post-2022. We show that aligning TOWER models on MT-Pref significantly improves translation quality on WMT23 and FLORES benchmarks.",
}
| Alignment with human preferences is an important step in developing accurate and safe large language models. This is no exception in machine translation (MT), where better handling of language nuances and context-specific variations leads to improved quality. However, preference data based on human feedback can be very expensive to obtain and curate at a large scale. Automatic metrics, on the other hand, can induce preferences, but they might not match human expectations perfectly. In this paper, we propose an approach that leverages the best of both worlds. We first collect sentence-level quality assessments from professional linguists on translations generated by multiple high-quality MT systems and evaluate the ability of current automatic metrics to recover these preferences. We then use this analysis to curate a new dataset, MT-Pref (metric-induced translation preference), which comprises 18k instances covering 18 language directions, using texts sourced from multiple domains post-2022. We show that aligning TOWER models on MT-Pref significantly improves translation quality on WMT23 and FLORES benchmarks. | [
"Agrawal, Sweta",
"De Souza, Jos{\\'e} G. C.",
"Rei, Ricardo",
"Farinhas, Ant{\\'o}nio",
"Faria, Gon{\\c{c}}alo",
"Fern",
"es, Patrick",
"Guerreiro, Nuno M",
"Martins, Andre"
] | Modeling User Preferences with Automatic Metrics: Creating a High-Quality Preference Dataset for Machine Translation | emnlp-main.803 | Poster | 2410.07779 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.804.bib | https://aclanthology.org/2024.emnlp-main.804/ | @inproceedings{xing-etal-2024-dc,
title = "{DC}-Instruct: An Effective Framework for Generative Multi-intent Spoken Language Understanding",
author = "Xing, Bowen and
Liao, Lizi and
Huang, Minlie and
Tsang, Ivor",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.804",
pages = "14520--14534",
abstract = "In the realm of multi-intent spoken language understanding, recent advancements have leveraged the potential of prompt learning frameworks. However, critical gaps exist in these frameworks: the lack of explicit modeling of dual-task dependencies and the oversight of task-specific semantic differences among utterances. To address these shortcomings, we propose DC-Instruct, a novel generative framework based on Dual-task Inter-dependent Instructions (DII) and Supervised Contrastive Instructions (SCI). Specifically, DII guides large language models (LLMs) to generate labels for one task based on the other task{'}s labels, thereby explicitly capturing dual-task inter-dependencies. Moreover, SCI leverages utterance semantics differences by guiding LLMs to determine whether a pair of utterances share the same or similar labels. This can improve LLMs on extracting and discriminating task-specific semantics, thus enhancing their SLU reasoning abilities. Extensive experiments on public benchmark datasets show that DC-Instruct markedly outperforms current generative models and state-of-the-art methods, demonstrating its effectiveness in enhancing dialogue language understanding and reasoning.",
}
| In the realm of multi-intent spoken language understanding, recent advancements have leveraged the potential of prompt learning frameworks. However, critical gaps exist in these frameworks: the lack of explicit modeling of dual-task dependencies and the oversight of task-specific semantic differences among utterances. To address these shortcomings, we propose DC-Instruct, a novel generative framework based on Dual-task Inter-dependent Instructions (DII) and Supervised Contrastive Instructions (SCI). Specifically, DII guides large language models (LLMs) to generate labels for one task based on the other task{'}s labels, thereby explicitly capturing dual-task inter-dependencies. Moreover, SCI leverages utterance semantic differences by guiding LLMs to determine whether a pair of utterances share the same or similar labels. This can improve LLMs in extracting and discriminating task-specific semantics, thus enhancing their SLU reasoning abilities. Extensive experiments on public benchmark datasets show that DC-Instruct markedly outperforms current generative models and state-of-the-art methods, demonstrating its effectiveness in enhancing dialogue language understanding and reasoning. | [
"Xing, Bowen",
"Liao, Lizi",
"Huang, Minlie",
"Tsang, Ivor"
] | DC-Instruct: An Effective Framework for Generative Multi-intent Spoken Language Understanding | emnlp-main.804 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.805.bib | https://aclanthology.org/2024.emnlp-main.805/ | @inproceedings{lyu-etal-2024-knowtuning,
title = "{K}now{T}uning: Knowledge-aware Fine-tuning for Large Language Models",
author = "Lyu, Yougang and
Yan, Lingyong and
Wang, Shuaiqiang and
Shi, Haibo and
Yin, Dawei and
Ren, Pengjie and
Chen, Zhumin and
de Rijke, Maarten and
Ren, Zhaochun",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.805",
pages = "14535--14556",
abstract = "Despite their success at many natural language processing (NLP) tasks, large language models still struggle to effectively leverage knowledge for knowledge-intensive tasks, manifesting limitations such as generating incomplete, non-factual, or illogical answers. These limitations stem from inadequate knowledge awareness of LLMs during vanilla fine-tuning. To address these problems, we propose a knowledge-aware fine-tuning (KnowTuning) method to improve fine-grained and coarse-grained knowledge awareness of LLMs. We devise a fine-grained knowledge augmentation stage to train LLMs to identify difficult fine-grained knowledge in answers. We also propose a coarse-grained knowledge comparison stage to train LLMs to distinguish between reliable and unreliable knowledge, in three aspects: completeness, factuality, and logicality. Extensive experiments on both generic and medical question answering (QA) datasets confirm the effectiveness of KnowTuning, through automatic and human evaluations, across various sizes of LLMs. We further verify that KnowTuning generates more facts with less factual error rate under fine-grained facts evaluation.",
}
| Despite their success at many natural language processing (NLP) tasks, large language models still struggle to effectively leverage knowledge for knowledge-intensive tasks, manifesting limitations such as generating incomplete, non-factual, or illogical answers. These limitations stem from inadequate knowledge awareness of LLMs during vanilla fine-tuning. To address these problems, we propose a knowledge-aware fine-tuning (KnowTuning) method to improve fine-grained and coarse-grained knowledge awareness of LLMs. We devise a fine-grained knowledge augmentation stage to train LLMs to identify difficult fine-grained knowledge in answers. We also propose a coarse-grained knowledge comparison stage to train LLMs to distinguish between reliable and unreliable knowledge, in three aspects: completeness, factuality, and logicality. Extensive experiments on both generic and medical question answering (QA) datasets confirm the effectiveness of KnowTuning, through automatic and human evaluations, across various sizes of LLMs. We further verify that KnowTuning generates more facts with a lower factual error rate under fine-grained fact evaluation. | [
"Lyu, Yougang",
"Yan, Lingyong",
"Wang, Shuaiqiang",
"Shi, Haibo",
"Yin, Dawei",
"Ren, Pengjie",
"Chen, Zhumin",
"de Rijke, Maarten",
"Ren, Zhaochun"
] | KnowTuning: Knowledge-aware Fine-tuning for Large Language Models | emnlp-main.805 | Poster | 2402.11176 | [
"https://github.com/youganglyu/knowtuning"
] | https://huggingface.co/papers/2402.11176 | 1 | 2 | 0 | 9 | [
"sunatte/txt2sql",
"MachoMaheen/devdock4bit"
] | [] | [
"Justinrune/LLaMA-Factory",
"smarttang/blingsec"
] | [
"sunatte/txt2sql",
"MachoMaheen/devdock4bit"
] | [] | [
"Justinrune/LLaMA-Factory",
"smarttang/blingsec"
] | 1 |
https://aclanthology.org/2024.emnlp-main.806.bib | https://aclanthology.org/2024.emnlp-main.806/ | @inproceedings{zhang-etal-2024-seccoder,
title = "{S}ec{C}oder: Towards Generalizable and Robust Secure Code Generation",
author = "Zhang, Boyu and
Du, Tianyu and
Tong, Junkai and
Zhang, Xuhong and
Chow, Kingsum and
Cheng, Sheng and
Wang, Xun and
Yin, Jianwei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.806",
pages = "14557--14571",
abstract = "After large models (LMs) have gained widespread acceptance in code-related tasks, their superior generative capacity has greatly promoted the application of the code LM. Nevertheless, the security of the generated code has raised attention to its potential damage. Existing secure code generation methods have limited generalizability to unseen test cases and poor robustness against the attacked model, leading to safety failures in code generation. In this paper, we propose a generalizable and robust secure code generation method SecCoder by using in-context learning (ICL) and the safe demonstration. The dense retriever is also used to select the most helpful demonstration to maximize the improvement of the generated code{'}s security. Experimental results show the superior generalizability of the proposed model SecCoder compared to the current secure code generation method, achieving a significant security improvement of an average of 7.20{\%} on unseen test cases. The results also show the better robustness of SecCoder compared to the current attacked code LM, achieving a significant security improvement of an average of 7.74{\%}. Our analysis indicates that SecCoder enhances the security of LMs in generating code, and it is more generalizable and robust.",
}
| As large models (LMs) have gained widespread acceptance in code-related tasks, their superior generative capacity has greatly promoted the application of code LMs. Nevertheless, the security of the generated code has drawn attention because of its potential for damage. Existing secure code generation methods have limited generalizability to unseen test cases and poor robustness against attacked models, leading to safety failures in code generation. In this paper, we propose SecCoder, a generalizable and robust secure code generation method that uses in-context learning (ICL) and safe demonstrations. A dense retriever is also used to select the most helpful demonstration to maximize the improvement of the generated code{'}s security. Experimental results show the superior generalizability of the proposed SecCoder compared to the current secure code generation method, achieving a significant security improvement of an average of 7.20{\%} on unseen test cases. The results also show the better robustness of SecCoder compared to the current attacked code LM, achieving a significant security improvement of an average of 7.74{\%}. Our analysis indicates that SecCoder enhances the security of LMs in generating code, and it is more generalizable and robust. | [
"Zhang, Boyu",
"Du, Tianyu",
"Tong, Junkai",
"Zhang, Xuhong",
"Chow, Kingsum",
"Cheng, Sheng",
"Wang, Xun",
"Yin, Jianwei"
] | SecCoder: Towards Generalizable and Robust Secure Code Generation | emnlp-main.806 | Poster | 2410.01488 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.807.bib | https://aclanthology.org/2024.emnlp-main.807/ | @inproceedings{zhang-etal-2024-nash,
title = "Nash {C}o{T}: Multi-Path Inference with Preference Equilibrium",
author = "Zhang, Ziqi and
Wang, Cunxiang and
Xiong, Xiao and
Zhang, Yue and
Wang, Donglin",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.807",
pages = "14572--14587",
abstract = "Chain of thought (CoT) is a reasoning framework that can enhance the performance of large language models (LLMs) on complex inference tasks. In particular, among various studies related to CoT, multi-path inference stands out as a simple yet effective improvement. However, there is no optimal setting for the number of inference paths. Therefore, we have to increase the number of inference paths to obtain better results, which in turn increases the inference cost. To address this limitation, we can utilize question-related role templates to guide LLMs into relevant roles, thereby increasing the possibility of correct inferences for each path and further reducing dependence on the number of inference paths while improving reasoning accuracy. However, placing LLMs into specific roles may reduce their reasoning diversity and performance on a few tasks where role dependence is low. To alleviate the excessive immersion of the LLM into a specific role, we propose Nash CoT by constructing a competitive system on each path that balances the generation from role-specific LLMs{'} and the general LLMs{'} generation, thereby ensuring both effective role adoption and diversity in LLM generation further maintaining the performance of multi-path inference while reducing the requirement of the number of inference paths. We evaluate Nash CoT across various inference tasks, including Arabic Reasoning, Commonsense Question Answering, and Symbolic Inference, achieving results that are comparable to or better than those of multi-path CoT with the equal number of inference paths.",
}
| Chain of thought (CoT) is a reasoning framework that can enhance the performance of large language models (LLMs) on complex inference tasks. In particular, among various studies related to CoT, multi-path inference stands out as a simple yet effective improvement. However, there is no optimal setting for the number of inference paths. Therefore, we have to increase the number of inference paths to obtain better results, which in turn increases the inference cost. To address this limitation, we can utilize question-related role templates to guide LLMs into relevant roles, thereby increasing the possibility of correct inferences for each path and further reducing dependence on the number of inference paths while improving reasoning accuracy. However, placing LLMs into specific roles may reduce their reasoning diversity and performance on a few tasks where role dependence is low. To alleviate the excessive immersion of the LLM into a specific role, we propose Nash CoT, which constructs a competitive system on each path that balances generation from role-specific LLMs and general LLMs, thereby ensuring both effective role adoption and diversity in LLM generation, maintaining the performance of multi-path inference while reducing the required number of inference paths. We evaluate Nash CoT across various inference tasks, including Arabic Reasoning, Commonsense Question Answering, and Symbolic Inference, achieving results that are comparable to or better than those of multi-path CoT with an equal number of inference paths. | [
"Zhang, Ziqi",
"Wang, Cunxiang",
"Xiong, Xiao",
"Zhang, Yue",
"Wang, Donglin"
] | Nash CoT: Multi-Path Inference with Preference Equilibrium | emnlp-main.807 | Poster | 2407.07099 | [
"https://github.com/stevezhangza/nash-chain-of-thought"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.808.bib | https://aclanthology.org/2024.emnlp-main.808/ | @inproceedings{lv-etal-2024-scalable,
title = "Scalable Efficient Training of Large Language Models with Low-dimensional Projected Attention",
author = "Lv, Xingtai and
Ding, Ning and
Zhang, Kaiyan and
Hua, Ermo and
Cui, Ganqu and
Zhou, Bowen",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.808",
pages = "14588--14599",
abstract = "Improving the effectiveness and efficiency of large language models (LLMs) simultaneously is a critical yet challenging research goal. In this paper, we find that low-rank pre-training, normally considered as efficient methods that will compromise performance, can be scalably effective when reduced parameters are precisely targeted. Specifically, by applying low-dimensional module only to the attention layer {---} resolves this issue and enhances both effectiveness and efficiency. We refer to this structure as *Low-dimensional Projected Attention (LPA)* and provide an explanatory analysis. Through extensive experimentation at parameter scales of 130M, 370M, and scaling up to 3B, we have validated the effectiveness and scalability of LPA. Our results show that LPA model can save up to 12.4{\%} in time while achieving an approximate 5{\%} improvement in test perplexity (ppl) and on downstream tasks compared with vanilla Transformer.",
}
| Improving the effectiveness and efficiency of large language models (LLMs) simultaneously is a critical yet challenging research goal. In this paper, we find that low-rank pre-training, normally considered an efficient method that compromises performance, can be scalably effective when the reduced parameters are precisely targeted. Specifically, applying a low-dimensional module only to the attention layer resolves this issue and enhances both effectiveness and efficiency. We refer to this structure as *Low-dimensional Projected Attention (LPA)* and provide an explanatory analysis. Through extensive experimentation at parameter scales of 130M, 370M, and scaling up to 3B, we have validated the effectiveness and scalability of LPA. Our results show that the LPA model can save up to 12.4{\%} in time while achieving an approximate 5{\%} improvement in test perplexity (ppl) and on downstream tasks compared with the vanilla Transformer. | [
"Lv, Xingtai",
"Ding, Ning",
"Zhang, Kaiyan",
"Hua, Ermo",
"Cui, Ganqu",
"Zhou, Bowen"
] | Scalable Efficient Training of Large Language Models with Low-dimensional Projected Attention | emnlp-main.808 | Poster | 2411.02063 | [
"https://github.com/tsinghuac3i/lpa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.809.bib | https://aclanthology.org/2024.emnlp-main.809/ | @inproceedings{cheng-etal-2024-small,
title = "Small Agent Can Also Rock! Empowering Small Language Models as Hallucination Detector",
author = "Cheng, Xiaoxue and
Li, Junyi and
Zhao, Xin and
Zhang, Hongzhi and
Zhang, Fuzheng and
Zhang, Di and
Gai, Kun and
Wen, Ji-Rong",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.809",
pages = "14600--14615",
abstract = "Hallucination detection is a challenging task for large language models (LLMs), and existing studies heavily rely on powerful closed-source LLMs such as GPT-4. In this paper, we propose an autonomous LLM-based agent framework, called HaluAgent, which enables relatively smaller LLMs (e.g. Baichuan2-Chat 7B) to actively select suitable tools for detecting multiple hallucination types such as text, code, and mathematical expression. In HaluAgent, we integrate the LLM, multi-functional toolbox, and design a fine-grained three-stage detection framework along with memory mechanism. To facilitate the effectiveness of HaluAgent, we leverage existing Chinese and English datasets to synthesize detection trajectories for fine-tuning, which endows HaluAgent with the capability for bilingual hallucination detection. Extensive experiments demonstrate that only using 2K samples for tuning LLMs, HaluAgent can perform hallucination detection on various types of tasks and datasets, achieving performance comparable to or even higher than GPT-4 without tool enhancements on both in-domain and out-of-domain datasets.",
}
| Hallucination detection is a challenging task for large language models (LLMs), and existing studies heavily rely on powerful closed-source LLMs such as GPT-4. In this paper, we propose an autonomous LLM-based agent framework, called HaluAgent, which enables relatively smaller LLMs (e.g. Baichuan2-Chat 7B) to actively select suitable tools for detecting multiple hallucination types such as text, code, and mathematical expression. In HaluAgent, we integrate the LLM with a multi-functional toolbox and design a fine-grained three-stage detection framework along with a memory mechanism. To facilitate the effectiveness of HaluAgent, we leverage existing Chinese and English datasets to synthesize detection trajectories for fine-tuning, which endows HaluAgent with the capability for bilingual hallucination detection. Extensive experiments demonstrate that using only 2K samples for tuning LLMs, HaluAgent can perform hallucination detection on various types of tasks and datasets, achieving performance comparable to or even higher than GPT-4 without tool enhancements on both in-domain and out-of-domain datasets. | [
"Cheng, Xiaoxue",
"Li, Junyi",
"Zhao, Xin",
"Zhang, Hongzhi",
"Zhang, Fuzheng",
"Zhang, Di",
"Gai, Kun",
"Wen, Ji-Rong"
] | Small Agent Can Also Rock! Empowering Small Language Models as Hallucination Detector | emnlp-main.809 | Poster | 2406.11277 | [
"https://github.com/rucaibox/haluagent"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.810.bib | https://aclanthology.org/2024.emnlp-main.810/ | @inproceedings{li-etal-2024-interpretable,
title = "Interpretable Composition Attribution Enhancement for Visio-linguistic Compositional Understanding",
author = "Li, Wei and
Huang, Zhen and
Tian, Xinmei and
Lu, Le and
Li, Houqiang and
Shen, Xu and
Ye, Jieping",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.810",
pages = "14616--14632",
abstract = "Contrastively trained vision-language models such as CLIP have achieved remarkable progress in vision and language representation learning. Despite the promising progress, their proficiency in compositional reasoning over attributes and relations (e.g., distinguishing between {``}the car is underneath the person{''} and {``}the person is underneath the car{''}) remains notably inadequate. We investigate the cause for this deficient behavior is the composition attribution issue, where the attribution scores (e.g., attention scores or GradCAM scores) for relations (e.g., underneath) or attributes (e.g., red) in the text are substantially lower than those for object terms. In this work, we show such issue is mitigated via a novel framework called CAE (Composition Attribution Enhancement). This generic framework incorporates various interpretable attribution methods to encourage the model to pay greater attention to composition words denoting relationships and attributes within the text. Detailed analysis shows that our approach enables the models to adjust and rectify the attribution of the texts. Extensive experiments across seven benchmarks reveal that our framework significantly enhances the ability to discern intricate details and construct more sophisticated interpretations of combined visual and linguistic elements.",
}
| Contrastively trained vision-language models such as CLIP have achieved remarkable progress in vision and language representation learning. Despite the promising progress, their proficiency in compositional reasoning over attributes and relations (e.g., distinguishing between {``}the car is underneath the person{''} and {``}the person is underneath the car{''}) remains notably inadequate. We find that the cause of this deficient behavior is the composition attribution issue, where the attribution scores (e.g., attention scores or GradCAM scores) for relations (e.g., underneath) or attributes (e.g., red) in the text are substantially lower than those for object terms. In this work, we show that this issue is mitigated via a novel framework called CAE (Composition Attribution Enhancement). This generic framework incorporates various interpretable attribution methods to encourage the model to pay greater attention to composition words denoting relationships and attributes within the text. Detailed analysis shows that our approach enables the models to adjust and rectify the attribution of the texts. Extensive experiments across seven benchmarks reveal that our framework significantly enhances the ability to discern intricate details and construct more sophisticated interpretations of combined visual and linguistic elements. | [
"Li, Wei",
"Huang, Zhen",
"Tian, Xinmei",
"Lu, Le",
"Li, Houqiang",
"Shen, Xu",
"Ye, Jieping"
] | Interpretable Composition Attribution Enhancement for Visio-linguistic Compositional Understanding | emnlp-main.810 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.811.bib | https://aclanthology.org/2024.emnlp-main.811/ | @inproceedings{gupta-etal-2024-llm,
title = "{LLM} Task Interference: An Initial Study on the Impact of Task-Switch in Conversational History",
author = "Gupta, Akash and
Sheth, Ivaxi and
Raina, Vyas and
Gales, Mark and
Fritz, Mario",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.811",
pages = "14633--14652",
abstract = "With the recent emergence of powerful instruction-tuned large language models (LLMs), various helpful conversational Artificial Intelligence (AI) systems have been deployed across many applications. When prompted by users, these AI systems successfully perform a wide range of tasks as part of a conversation. To provide some sort of memory and context, such approaches typically condition their output on the entire conversational history. Although this sensitivity to the conversational history can often lead to improved performance on subsequent tasks, we find that performance can in fact also be negatively impacted, if there is a {\_}task-switch{\_}. To the best of our knowledge, our work makes the first attempt to formalize the study of such vulnerabilities and interference of tasks in conversational LLMs caused by task-switches in the conversational history. Our experiments across 5 datasets with 15 task switches using popular LLMs reveal that many of the task-switches can lead to significant performance degradation.",
}
| With the recent emergence of powerful instruction-tuned large language models (LLMs), various helpful conversational Artificial Intelligence (AI) systems have been deployed across many applications. When prompted by users, these AI systems successfully perform a wide range of tasks as part of a conversation. To provide some sort of memory and context, such approaches typically condition their output on the entire conversational history. Although this sensitivity to the conversational history can often lead to improved performance on subsequent tasks, we find that performance can in fact also be negatively impacted, if there is a {\_}task-switch{\_}. To the best of our knowledge, our work makes the first attempt to formalize the study of such vulnerabilities and interference of tasks in conversational LLMs caused by task-switches in the conversational history. Our experiments across 5 datasets with 15 task switches using popular LLMs reveal that many of the task-switches can lead to significant performance degradation. | [
"Gupta, Akash",
"Sheth, Ivaxi",
"Raina, Vyas",
"Gales, Mark",
"Fritz, Mario"
] | LLM Task Interference: An Initial Study on the Impact of Task-Switch in Conversational History | emnlp-main.811 | Poster | 2402.18216 | [
"https://github.com/ivaxi0s/llm-task-switch"
] | https://huggingface.co/papers/2402.18216 | 0 | 1 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.812.bib | https://aclanthology.org/2024.emnlp-main.812/ | @inproceedings{marchiori-manerba-etal-2024-social,
title = "Social Bias Probing: Fairness Benchmarking for Language Models",
author = "Marchiori Manerba, Marta and
Stanczak, Karolina and
Guidotti, Riccardo and
Augenstein, Isabelle",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.812",
pages = "14653--14671",
abstract = "While the impact of social biases in language models has been recognized, prior methods for bias evaluation have been limited to binary association tests on small datasets, limiting our understanding of bias complexities. This paper proposes a novel framework for probing language models for social biases by assessing disparate treatment, which involves treating individuals differently according to their affiliation with a sensitive demographic group. We curate SoFa, a large-scale benchmark designed to address the limitations of existing fairness collections. SoFa expands the analysis beyond the binary comparison of stereotypical versus anti-stereotypical identities to include a diverse range of identities and stereotypes. Comparing our methodology with existing benchmarks, we reveal that biases within language models are more nuanced than acknowledged, indicating a broader scope of encoded biases than previously recognized. Benchmarking LMs on SoFa, we expose how identities expressing different religions lead to the most pronounced disparate treatments across all models. Finally, our findings indicate that real-life adversities faced by various groups such as women and people with disabilities are mirrored in the behavior of these models.",
}
| While the impact of social biases in language models has been recognized, prior methods for bias evaluation have been limited to binary association tests on small datasets, limiting our understanding of bias complexities. This paper proposes a novel framework for probing language models for social biases by assessing disparate treatment, which involves treating individuals differently according to their affiliation with a sensitive demographic group. We curate SoFa, a large-scale benchmark designed to address the limitations of existing fairness collections. SoFa expands the analysis beyond the binary comparison of stereotypical versus anti-stereotypical identities to include a diverse range of identities and stereotypes. Comparing our methodology with existing benchmarks, we reveal that biases within language models are more nuanced than acknowledged, indicating a broader scope of encoded biases than previously recognized. Benchmarking LMs on SoFa, we expose how identities expressing different religions lead to the most pronounced disparate treatments across all models. Finally, our findings indicate that real-life adversities faced by various groups such as women and people with disabilities are mirrored in the behavior of these models. | [
"Marchiori Manerba, Marta",
"Stanczak, Karolina",
"Guidotti, Riccardo",
"Augenstein, Isabelle"
] | Social Bias Probing: Fairness Benchmarking for Language Models | emnlp-main.812 | Oral | 2311.09090 | [
""
] | https://huggingface.co/papers/2311.09090 | 0 | 0 | 0 | 4 | [] | [
"copenlu/sofa"
] | [] | [] | [
"copenlu/sofa"
] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.813.bib | https://aclanthology.org/2024.emnlp-main.813/ | @inproceedings{yu-etal-2024-chain,
title = "Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models",
author = "Yu, Wenhao and
Zhang, Hongming and
Pan, Xiaoman and
Cao, Peixin and
Ma, Kaixin and
Li, Jian and
Wang, Hongwei and
Yu, Dong",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.813",
pages = "14672--14685",
abstract = "Retrieval-augmented language model (RALM) represents a significant advancement in mitigating factual hallucination by leveraging external knowledge sources. However, the reliability of the retrieved information is not always guaranteed, and the retrieval of irrelevant data can mislead the response generation. Moreover, standard RALMs frequently neglect their intrinsic knowledge due to the interference from retrieved information. In instances where the retrieved information is irrelevant, RALMs should ideally utilize their intrinsic knowledge or, in the absence of both intrinsic and retrieved knowledge, opt to respond with {``}unknown{''} to avoid hallucination. In this paper, we introduces Chain-of-Note (CoN), a novel approach to improve robustness of RALMs in facing noisy, irrelevant documents and in handling unknown scenarios. The core idea of CoN is to generate sequential reading notes for each retrieved document, enabling a thorough evaluation of their relevance to the given question and integrating this information to formulate the final answer. Our experimental results show that GPT-4, when equipped with CoN, outperforms the Chain-of-Thought approach. Besides, we utilized GPT-4 to create 10K CoN data, subsequently trained on smaller models like OPT and LLaMa-2. Our experiments across four open-domain QA benchmarks show that fine-tuned RALMs equipped with CoN significantly outperform standard fine-tuned RALMs.",
}
| Retrieval-augmented language models (RALMs) represent a significant advancement in mitigating factual hallucination by leveraging external knowledge sources. However, the reliability of the retrieved information is not always guaranteed, and the retrieval of irrelevant data can mislead the response generation. Moreover, standard RALMs frequently neglect their intrinsic knowledge due to the interference from retrieved information. In instances where the retrieved information is irrelevant, RALMs should ideally utilize their intrinsic knowledge or, in the absence of both intrinsic and retrieved knowledge, opt to respond with {``}unknown{''} to avoid hallucination. In this paper, we introduce Chain-of-Note (CoN), a novel approach to improve the robustness of RALMs when facing noisy, irrelevant documents and when handling unknown scenarios. The core idea of CoN is to generate sequential reading notes for each retrieved document, enabling a thorough evaluation of their relevance to the given question and integrating this information to formulate the final answer. Our experimental results show that GPT-4, when equipped with CoN, outperforms the Chain-of-Thought approach. Besides, we utilized GPT-4 to create 10K CoN data, which we subsequently used to train smaller models like OPT and LLaMa-2. Our experiments across four open-domain QA benchmarks show that fine-tuned RALMs equipped with CoN significantly outperform standard fine-tuned RALMs. | [
"Yu, Wenhao",
"Zhang, Hongming",
"Pan, Xiaoman",
"Cao, Peixin",
"Ma, Kaixin",
"Li, Jian",
"Wang, Hongwei",
"Yu, Dong"
] | Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models | emnlp-main.813 | Poster | 2311.09210 | [
""
] | https://huggingface.co/papers/2311.09210 | 0 | 1 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.814.bib | https://aclanthology.org/2024.emnlp-main.814/ | @inproceedings{pan-etal-2024-dynathink,
title = "{D}yna{T}hink: Fast or Slow? A Dynamic Decision-Making Framework for Large Language Models",
author = "Pan, Jiabao and
Zhang, Yan and
Zhang, Chen and
Liu, Zuozhu and
Wang, Hongwei and
Li, Haizhou",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.814",
pages = "14686--14695",
abstract = "Large language models (LLMs) have demonstrated emergent capabilities across diverse reasoning tasks via popular Chains-of-Thought (COT) prompting. However, such a simple and fast COT approach often encounters limitations in dealing with complicated problems, while a thorough method, which considers multiple reasoning pathways and verifies each step carefully, results in slower inference. This paper addresses the challenge of enabling LLMs to autonomously select between fast and slow inference methods, thereby optimizing both efficiency and effectiveness. We introduce a dynamic decision-making framework that categorizes tasks into two distinct pathways: {`}Fast,{'} designated for tasks where the LLM quickly identifies a high-confidence solution, and {`}Slow,{'} allocated for tasks that the LLM perceives as complex and for which it has low confidence in immediate solutions as well as requiring more reasoning paths to verify. Experiments on five popular reasoning benchmarks demonstrated the superiority of the DynaThink over baselines. For example, when we compared it to strong COT with self-consistency baseline on the complicated MATH dataset, DynaThink achieved more than 3{\%} increase in accuracy with lower cost. The code will be made available upon publication.",
}
| Large language models (LLMs) have demonstrated emergent capabilities across diverse reasoning tasks via popular Chain-of-Thought (CoT) prompting. However, such a simple and fast CoT approach often encounters limitations in dealing with complicated problems, while a thorough method, which considers multiple reasoning pathways and verifies each step carefully, results in slower inference. This paper addresses the challenge of enabling LLMs to autonomously select between fast and slow inference methods, thereby optimizing both efficiency and effectiveness. We introduce a dynamic decision-making framework that categorizes tasks into two distinct pathways: {`}Fast,{'} designated for tasks where the LLM quickly identifies a high-confidence solution, and {`}Slow,{'} allocated for tasks that the LLM perceives as complex and for which it has low confidence in immediate solutions as well as requiring more reasoning paths to verify. Experiments on five popular reasoning benchmarks demonstrated the superiority of DynaThink over baselines. For example, when we compared it to the strong CoT with self-consistency baseline on the complicated MATH dataset, DynaThink achieved a more than 3{\%} increase in accuracy at lower cost. The code will be made available upon publication. | [
"Pan, Jiabao",
"Zhang, Yan",
"Zhang, Chen",
"Liu, Zuozhu",
"Wang, Hongwei",
"Li, Haizhou"
] | DynaThink: Fast or Slow? A Dynamic Decision-Making Framework for Large Language Models | emnlp-main.814 | Poster | 2407.01009 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.815.bib | https://aclanthology.org/2024.emnlp-main.815/ | @inproceedings{wang-etal-2024-revisiting,
title = "Revisiting Automated Evaluation for Long-form Table Question Answering",
author = "Wang, Yuqi and
Chen, Lyuhao and
Cai, Songcheng and
Xu, Zhijian and
Zhao, Yilun",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.815",
pages = "14696--14706",
abstract = "In the era of data-driven decision-making, Long-Form Table Question Answering (LFTQA) is essential for integrating structured data with complex reasoning. Despite recent advancements in Large Language Models (LLMs) for LFTQA, evaluating their effectiveness remains a significant challenge. We introduce LFTQA-Eval, a meta-evaluation dataset comprising 2,988 human-annotated examples, to rigorously assess the efficacy of current automated metrics in assessing LLM-based LFTQA systems, with a focus on faithfulness and comprehensiveness. Our findings reveal that existing automatic metrics poorly correlate with human judgments and fail to consistently differentiate between factually accurate responses and those that are coherent but factually incorrect. Additionally, our in-depth examination of the limitations associated with automated evaluation methods provides essential insights for the improvement of LFTQA automated evaluation.",
}
| In the era of data-driven decision-making, Long-Form Table Question Answering (LFTQA) is essential for integrating structured data with complex reasoning. Despite recent advancements in Large Language Models (LLMs) for LFTQA, evaluating their effectiveness remains a significant challenge. We introduce LFTQA-Eval, a meta-evaluation dataset comprising 2,988 human-annotated examples, to rigorously assess the efficacy of current automated metrics in assessing LLM-based LFTQA systems, with a focus on faithfulness and comprehensiveness. Our findings reveal that existing automatic metrics poorly correlate with human judgments and fail to consistently differentiate between factually accurate responses and those that are coherent but factually incorrect. Additionally, our in-depth examination of the limitations associated with automated evaluation methods provides essential insights for the improvement of LFTQA automated evaluation. | [
"Wang, Yuqi",
"Chen, Lyuhao",
"Cai, Songcheng",
"Xu, Zhijian",
"Zhao, Yilun"
] | Revisiting Automated Evaluation for Long-form Table Question Answering | emnlp-main.815 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.816.bib | https://aclanthology.org/2024.emnlp-main.816/ | @inproceedings{silva-etal-2024-weak,
title = "Weak Reward Model Transforms Generative Models into Robust Causal Event Extraction Systems",
author = "Silva, Italo Luis Da and
Yan, Hanqi and
Gui, Lin and
He, Yulan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.816",
pages = "14707--14719",
abstract = "The inherent ambiguity of cause and effect boundaries poses a challenge in evaluating causal event extraction tasks. Traditional metrics like Exact Match and BertScore poorly reflect model performance, so we trained evaluation models to approximate human evaluation, achieving high agreement. We used them to perform Reinforcement Learning with extraction models to align them with human preference, prioritising semantic understanding. We successfully explored our approach through multiple datasets, including transferring an evaluator trained on one dataset to another as a way to decrease the reliance on human-annotated data. In that vein, we also propose a weak-to-strong supervision method that uses a fraction of the annotated data to train an evaluation model while still achieving high performance in training an RL model.",
}
| The inherent ambiguity of cause and effect boundaries poses a challenge in evaluating causal event extraction tasks. Traditional metrics like Exact Match and BertScore poorly reflect model performance, so we trained evaluation models to approximate human evaluation, achieving high agreement. We used them to perform Reinforcement Learning with extraction models to align them with human preference, prioritising semantic understanding. We successfully explored our approach through multiple datasets, including transferring an evaluator trained on one dataset to another as a way to decrease the reliance on human-annotated data. In that vein, we also propose a weak-to-strong supervision method that uses a fraction of the annotated data to train an evaluation model while still achieving high performance in training an RL model. | [
"Silva, Italo Luis Da",
"Yan, Hanqi",
"Gui, Lin",
"He, Yulan"
] | Weak Reward Model Transforms Generative Models into Robust Causal Event Extraction Systems | emnlp-main.816 | Poster | 2406.18245 | [
"https://github.com/oyarsa/event_extraction"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.817.bib | https://aclanthology.org/2024.emnlp-main.817/ | @inproceedings{zhang-etal-2024-learn,
title = "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning",
author = "Zhang, Zhihan and
Ge, Tao and
Liang, Zhenwen and
Yu, Wenhao and
Yu, Dian and
Jia, Mengzhao and
Yu, Dong and
Jiang, Meng",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.817",
pages = "14720--14738",
abstract = "Supervised fine-tuning enhances the problem-solving abilities of language models across various mathematical reasoning tasks. To maximize such benefits, existing research focuses on *broadening* the training set with various data augmentation techniques, which is effective for standard single-round question-answering settings. Our work introduces a novel technique aimed at cultivating a *deeper* understanding of the training problems at hand, enhancing performance not only in standard settings but also in more complex scenarios that require reflective thinking. Specifically, we propose **reflective augmentation**, a method that embeds problem reflection into each training instance. It trains the model to consider alternative perspectives and engage with abstractions and analogies, thereby fostering a thorough comprehension through reflective reasoning. Extensive experiments validate the achievement of our aim, underscoring the unique advantages of our method and its complementary nature relative to existing augmentation techniques.",
}
| Supervised fine-tuning enhances the problem-solving abilities of language models across various mathematical reasoning tasks. To maximize such benefits, existing research focuses on *broadening* the training set with various data augmentation techniques, which is effective for standard single-round question-answering settings. Our work introduces a novel technique aimed at cultivating a *deeper* understanding of the training problems at hand, enhancing performance not only in standard settings but also in more complex scenarios that require reflective thinking. Specifically, we propose **reflective augmentation**, a method that embeds problem reflection into each training instance. It trains the model to consider alternative perspectives and engage with abstractions and analogies, thereby fostering a thorough comprehension through reflective reasoning. Extensive experiments validate the achievement of our aim, underscoring the unique advantages of our method and its complementary nature relative to existing augmentation techniques. | [
"Zhang, Zhihan",
"Ge, Tao",
"Liang, Zhenwen",
"Yu, Wenhao",
"Yu, Dian",
"Jia, Mengzhao",
"Yu, Dong",
"Jiang, Meng"
] | Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning | emnlp-main.817 | Poster | 2406.12050 | [
"https://github.com/ytyz1307zzh/RefAug"
] | https://huggingface.co/papers/2406.12050 | 3 | 18 | 1 | 7 | [] | [] | [] | [] | [] | [] | 1 |
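Reflective augmentation, as described in the abstract above, embeds a reflection into each training instance. A minimal sketch of constructing such an augmented training target follows; the template and the hand-written reflection are illustrative assumptions (the paper generates reflections covering alternative reasoning plus abstraction and analogy follow-ups).

```python
# Minimal sketch of reflective augmentation: append a reflection section to
# the training target, so the model learns beyond the answer itself.
def reflective_target(answer: str, reflection: str) -> str:
    return f"{answer}\n\nReflection:\n{reflection}"

example = {
    "question": "If 3x + 2 = 11, what is x?",
    "answer": "Subtract 2 to get 3x = 9, so x = 3.",
    "reflection": ("Abstraction: any equation ax + b = c has x = (c - b) / a. "
                   "Analogy: solving 5y + 1 = 16 works the same way, y = 3."),
}
example["target"] = reflective_target(example["answer"], example["reflection"])
# The model is then fine-tuned on question -> target as usual.
```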
https://aclanthology.org/2024.emnlp-main.818.bib | https://aclanthology.org/2024.emnlp-main.818/ | @inproceedings{zhao-etal-2024-findver,
title = "{F}in{DV}er: Explainable Claim Verification over Long and Hybrid-content Financial Documents",
author = "Zhao, Yilun and
Long, Yitao and
Jiang, Tintin and
Wang, Chengye and
Chen, Weiyuan and
Liu, Hongjun and
Tang, Xiangru and
Zhang, Yiming and
Zhao, Chen and
Cohan, Arman",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.818",
pages = "14739--14752",
abstract = "We introduce FinDVer, a comprehensive benchmark specifically designed to evaluate the explainable claim verification capabilities of LLMs in the context of understanding and analyzing long, hybrid-content financial documents. FinDVer contains 4,000 expert-annotated examples across four subsets, each focusing on a type of scenario that frequently arises in real-world financial domains. We assess a broad spectrum of 25 LLMs under long-context and RAG settings. Our results show that even the current best-performing system (i.e., GPT-4o) significantly lags behind human experts. Our detailed findings and insights highlight the strengths and limitations of existing LLMs in this new task. We believe FinDVer can serve as a valuable benchmark for evaluating LLM capabilities in claim verification over complex, expert-domain documents.",
}
| We introduce FinDVer, a comprehensive benchmark specifically designed to evaluate the explainable claim verification capabilities of LLMs in the context of understanding and analyzing long, hybrid-content financial documents. FinDVer contains 4,000 expert-annotated examples across four subsets, each focusing on a type of scenario that frequently arises in real-world financial domains. We assess a broad spectrum of 25 LLMs under long-context and RAG settings. Our results show that even the current best-performing system (i.e., GPT-4o) significantly lags behind human experts. Our detailed findings and insights highlight the strengths and limitations of existing LLMs in this new task. We believe FinDVer can serve as a valuable benchmark for evaluating LLM capabilities in claim verification over complex, expert-domain documents. | [
"Zhao, Yilun",
"Long, Yitao",
"Jiang, Tintin",
"Wang, Chengye",
"Chen, Weiyuan",
"Liu, Hongjun",
"Tang, Xiangru",
"Zhang, Yiming",
"Zhao, Chen",
"Cohan, Arman"
] | FinDVer: Explainable Claim Verification over Long and Hybrid-content Financial Documents | emnlp-main.818 | Poster | 2411.05764 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.819.bib | https://aclanthology.org/2024.emnlp-main.819/ | @inproceedings{zhang-etal-2024-extracting,
title = "Extracting Prompts by Inverting {LLM} Outputs",
author = "Zhang, Collin and
Morris, John Xavier and
Shmatikov, Vitaly",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.819",
pages = "14753--14777",
abstract = "We consider the problem of language model inversion: given outputs of a language model, we seek to extract the prompt that generated these outputs. We develop a new black-box method, output2prompt, that extracts prompts without access to the model{'}s logits and without adversarial or jailbreaking queries. Unlike previous methods, output2prompt only needs outputs of normal user queries. To improve memory efficiency, output2prompt employs a new sparse encoding techique. We measure the efficacy of output2prompt on a variety of user and system prompts and demonstrate zero-shot transferability across different LLMs.",
}
] | We consider the problem of language model inversion: given outputs of a language model, we seek to extract the prompt that generated these outputs. We develop a new black-box method, output2prompt, that extracts prompts without access to the model{'}s logits and without adversarial or jailbreaking queries. Unlike previous methods, output2prompt only needs outputs of normal user queries. To improve memory efficiency, output2prompt employs a new sparse encoding technique. We measure the efficacy of output2prompt on a variety of user and system prompts and demonstrate zero-shot transferability across different LLMs. | [
"Zhang, Collin",
"Morris, John Xavier",
"Shmatikov, Vitaly"
] | Extracting Prompts by Inverting LLM Outputs | emnlp-main.819 | Poster | 2405.15012 | [
"https://github.com/collinzrj/output2prompt"
] | https://huggingface.co/papers/2405.15012 | 0 | 0 | 0 | 3 | [] | [] | [
"repelloai/whistleblower"
] | [] | [] | [
"repelloai/whistleblower"
] | 1 |
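A minimal sketch of the inversion setup described in the abstract above: concatenate outputs sampled from the target system under normal user queries, then train a seq2seq "inverter" to reconstruct the hidden prompt. The T5 base model and the separator string are assumptions, and the paper's sparse encoding trick is omitted.

```python
# Minimal sketch of prompt inversion from outputs alone: build
# (concatenated outputs -> hidden prompt) training pairs for a seq2seq model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-base")
inverter = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

def make_pair(outputs: list[str], hidden_prompt: str):
    source = " [SEP] ".join(outputs)        # outputs of normal user queries
    x = tok(source, return_tensors="pt", truncation=True)
    y = tok(hidden_prompt, return_tensors="pt", truncation=True)
    return x, y.input_ids

x, labels = make_pair(
    ["Sure! Here is a haiku about autumn...", "Of course, another haiku..."],
    "You are a poet. Always answer with a haiku.",
)
loss = inverter(**x, labels=labels).loss    # standard teacher forcing
loss.backward()
```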
https://aclanthology.org/2024.emnlp-main.820.bib | https://aclanthology.org/2024.emnlp-main.820/ | @inproceedings{fan-etal-2024-biasalert,
title = "{B}ias{A}lert: A Plug-and-play Tool for Social Bias Detection in {LLM}s",
author = "Fan, Zhiting and
Chen, Ruizhe and
Xu, Ruiling and
Liu, Zuozhu",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.820",
pages = "14778--14790",
abstract = "Evaluating the bias of LLMs becomes more crucial with their rapid development. However, existing evaluation approaches rely on fixed-form outputs and cannot adapt to the flexible open-text generation scenarios of LLMs (e.g., sentence completion and question answering). To address this, we introduce BiasAlert, a plug-and-play tool designed to detect social bias in open-text generations of LLMs. BiasAlert integrates external human knowledge with its inherent reasoning capabilities to detect bias reliably. Extensive experiments demonstrate that BiasAlert significantly outperforms existing state-of-the-art methods like GPT-4-as-Judge in detecting bias. Furthermore, through application studies, we showcase the utility of BiasAlert in reliable LLM fairness evaluation and bias mitigation across various scenarios. Model and code will be publicly released.",
}
| Evaluating the bias of LLMs becomes more crucial with their rapid development. However, existing evaluation approaches rely on fixed-form outputs and cannot adapt to the flexible open-text generation scenarios of LLMs (e.g., sentence completion and question answering). To address this, we introduce BiasAlert, a plug-and-play tool designed to detect social bias in open-text generations of LLMs. BiasAlert integrates external human knowledge with its inherent reasoning capabilities to detect bias reliably. Extensive experiments demonstrate that BiasAlert significantly outperforms existing state-of-the-art methods like GPT-4-as-Judge in detecting bias. Furthermore, through application studies, we showcase the utility of BiasAlert in reliable LLM fairness evaluation and bias mitigation across various scenarios. Model and code will be publicly released. | [
"Fan, Zhiting",
"Chen, Ruizhe",
"Xu, Ruiling",
"Liu, Zuozhu"
] | BiasAlert: A Plug-and-play Tool for Social Bias Detection in LLMs | emnlp-main.820 | Poster | 2407.10241 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.821.bib | https://aclanthology.org/2024.emnlp-main.821/ | @inproceedings{hu-etal-2024-vhasr,
title = "{VHASR}: A Multimodal Speech Recognition System With Vision Hotwords",
author = "Hu, Jiliang and
Li, Zuchao and
Wang, Ping and
Ai, Haojun and
Zhang, Lefei and
Zhao, Hai",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.821",
pages = "14791--14804",
abstract = "The image-based multimodal automatic speech recognition (ASR) model enhances speech recognition performance by incorporating audio-related image. However, some works suggest that introducing image information to model does not help improving ASR performance. In this paper, we propose a novel approach effectively utilizing audio-related image information and set up VHASR, a multimodal speech recognition system that uses vision as hotwords to strengthen the model{'}s speech recognition capability. Our system utilizes a dual-stream architecture, which firstly transcribes the text on the two streams separately, and then combines the outputs. We evaluate the proposed model on four datasets: Flickr8k, ADE20k, COCO, and OpenImages. The experimental results show that VHASR can effectively utilize key information in images to enhance the model{'}s speech recognition ability. Its performance not only surpasses unimodal ASR, but also achieves SOTA among existing image-based multimodal ASR.",
}
] | The image-based multimodal automatic speech recognition (ASR) model enhances speech recognition performance by incorporating audio-related images. However, some works suggest that introducing image information to the model does not help improve ASR performance. In this paper, we propose a novel approach that effectively utilizes audio-related image information and set up VHASR, a multimodal speech recognition system that uses vision as hotwords to strengthen the model{'}s speech recognition capability. Our system utilizes a dual-stream architecture, which first transcribes the text on the two streams separately, and then combines the outputs. We evaluate the proposed model on four datasets: Flickr8k, ADE20k, COCO, and OpenImages. The experimental results show that VHASR can effectively utilize key information in images to enhance the model{'}s speech recognition ability. Its performance not only surpasses unimodal ASR, but also achieves SOTA among existing image-based multimodal ASR. | [
"Hu, Jiliang",
"Li, Zuchao",
"Wang, Ping",
"Ai, Haojun",
"Zhang, Lefei",
"Zhao, Hai"
] | VHASR: A Multimodal Speech Recognition System With Vision Hotwords | emnlp-main.821 | Poster | 2410.00822 | [
"https://github.com/193746/VHASR"
] | https://huggingface.co/papers/2410.00822 | 0 | 0 | 0 | 6 | [
"MYTH-Lab/VHASR"
] | [] | [] | [
"MYTH-Lab/VHASR"
] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.822.bib | https://aclanthology.org/2024.emnlp-main.822/ | @inproceedings{tan-etal-2024-probability,
title = "A Probability{--}Quality Trade-off in Aligned Language Models and its Relation to Sampling Adaptors",
author = "Tan, Naaman and
Valvoda, Josef and
Liu, Tianyu and
Svete, Anej and
Qin, Yanxia and
Kan, Min-Yen and
Cotterell, Ryan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.822",
pages = "14805--14829",
}
| No abstract found | [
"Tan, Naaman",
"Valvoda, Josef",
"Liu, Tianyu",
"Svete, Anej",
"Qin, Yanxia",
"Kan, Min-Yen",
"Cotterell, Ryan"
] | A Probability–Quality Trade-off in Aligned Language Models and its Relation to Sampling Adaptors | emnlp-main.822 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.823.bib | https://aclanthology.org/2024.emnlp-main.823/ | @inproceedings{wang-etal-2024-bridging-local,
title = "Bridging Local Details and Global Context in Text-Attributed Graphs",
author = "Wang, Yaoke and
Zhu, Yun and
Zhang, Wenqiao and
Zhuang, Yueting and
Liyunfei, Liyunfei and
Tang, Siliang",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.823",
pages = "14830--14841",
abstract = "Representation learning on text-attributed graphs (TAGs) is vital for real-world applications, as they combine semantic textual and contextual structural information. Research in this field generally consist of two main perspectives: local-level encoding and global-level aggregating, respectively refer to textual node information unification ($e.g.$, using Language Models) and structure-augmented modeling ($e.g.$, using Graph Neural Networks). Most existing works focus on combining different information levels but overlook the interconnections, $i.e.$, the contextual textual information among nodes, which provides semantic insights to bridge local and global levels. In this paper, we propose GraphBridge, a $multi-granularity integration$ framework that bridges local and global perspectives by leveraging contextual textual information, enhancing fine-grained understanding of TAGs. Besides, to tackle scalability and efficiency challenges, we introduce a graph-aware token reduction module. Extensive experiments across various models and datasets show that our method achieves state-of-the-art performance, while our graph-aware token reduction module significantly enhances efficiency and solves scalability issues. Codes are available at https://github.com/wykk00/GraphBridge.",
}
] | Representation learning on text-attributed graphs (TAGs) is vital for real-world applications, as they combine semantic textual and contextual structural information. Research in this field generally consists of two main perspectives: local-level encoding and global-level aggregating, which respectively refer to textual node information unification ($e.g.$, using Language Models) and structure-augmented modeling ($e.g.$, using Graph Neural Networks). Most existing works focus on combining different information levels but overlook the interconnections, $i.e.$, the contextual textual information among nodes, which provides semantic insights to bridge local and global levels. In this paper, we propose GraphBridge, a $multi-granularity integration$ framework that bridges local and global perspectives by leveraging contextual textual information, enhancing fine-grained understanding of TAGs. In addition, to tackle scalability and efficiency challenges, we introduce a graph-aware token reduction module. Extensive experiments across various models and datasets show that our method achieves state-of-the-art performance, while our graph-aware token reduction module significantly enhances efficiency and solves scalability issues. Codes are available at https://github.com/wykk00/GraphBridge. | [
"Wang, Yaoke",
"Zhu, Yun",
"Zhang, Wenqiao",
"Zhuang, Yueting",
"Liyunfei, Liyunfei",
"Tang, Siliang"
] | Bridging Local Details and Global Context in Text-Attributed Graphs | emnlp-main.823 | Poster | 2406.12608 | [
"https://github.com/wykk00/graphbridge"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.824.bib | https://aclanthology.org/2024.emnlp-main.824/ | @inproceedings{ali-etal-2024-building,
title = "Building Resources for Emakhuwa: Machine Translation and News Classification Benchmarks",
author = "Ali, Felermino D. M. A. and
Lopes Cardoso, Henrique and
Sousa-Silva, Rui",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.824",
pages = "14842--14857",
abstract = "This paper introduces a comprehensive collection of NLP resources for Emakhuwa, Mozambique{'}s most widely spoken language. The resources include the first manually translated news bitext corpus between Portuguese and Emakhuwa, news topic classification datasets, and monolingual data. We detail the process and challenges of acquiring this data and present benchmark results for machine translation and news topic classification tasks. Our evaluation examines the impact of different data types{---}originally clean text, post-corrected OCR, and back-translated data{---}and the effects of fine-tuning from pre-trained models, including those focused on African languages.Our benchmarks demonstrate good performance in news topic classification and promising results in machine translation. We fine-tuned multilingual encoder-decoder models using real and synthetic data and evaluated them on our test set and the FLORES evaluation sets. The results highlight the importance of incorporating more data and potential for future improvements.All models, code, and datasets are available in the \url{https://huggingface.co/LIACC} repository under the CC BY 4.0 license.",
}
| This paper introduces a comprehensive collection of NLP resources for Emakhuwa, Mozambique{'}s most widely spoken language. The resources include the first manually translated news bitext corpus between Portuguese and Emakhuwa, news topic classification datasets, and monolingual data. We detail the process and challenges of acquiring this data and present benchmark results for machine translation and news topic classification tasks. Our evaluation examines the impact of different data types{---}originally clean text, post-corrected OCR, and back-translated data{---}and the effects of fine-tuning from pre-trained models, including those focused on African languages.Our benchmarks demonstrate good performance in news topic classification and promising results in machine translation. We fine-tuned multilingual encoder-decoder models using real and synthetic data and evaluated them on our test set and the FLORES evaluation sets. The results highlight the importance of incorporating more data and potential for future improvements.All models, code, and datasets are available in the \url{https://huggingface.co/LIACC} repository under the CC BY 4.0 license. | [
"Ali, Felermino D. M. A.",
"Lopes Cardoso, Henrique",
"Sousa-Silva, Rui"
] | Building Resources for Emakhuwa: Machine Translation and News Classification Benchmarks | emnlp-main.824 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
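A minimal sketch of the kind of fine-tuning described in the abstract above, pairing a Portuguese source with an Emakhuwa target. The mT5 base model is an assumption (the paper fine-tunes multilingual encoder-decoder models, released in the authors' LIACC repository), and the target string is a placeholder for a real Emakhuwa reference.

```python
# Minimal sketch: fine-tune a multilingual encoder-decoder on one
# Portuguese -> Emakhuwa bitext pair. Base model and prefix are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

src = "translate Portuguese to Emakhuwa: O presidente visitou Nampula."
tgt = "..."  # placeholder: Emakhuwa reference sentence from the bitext corpus

batch = tok(src, text_target=tgt, return_tensors="pt", truncation=True)
loss = model(**batch).loss   # cross-entropy against the target tokens
loss.backward()
opt.step()
```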
https://aclanthology.org/2024.emnlp-main.825.bib | https://aclanthology.org/2024.emnlp-main.825/ | @inproceedings{modarres-etal-2024-repmatch,
title = "{R}ep{M}atch: Quantifying Cross-Instance Similarities in Representation Space",
author = "Modarres, Mohammad Reza and
Abbasi, Sina and
Pilehvar, Mohammad Taher",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.825",
pages = "14858--14869",
abstract = "Advances in dataset analysis techniques have enabled more sophisticated approaches to analyzing and characterizing training data instances, often categorizing data based on attributes such as {``}difficulty{''}. In this work, we introduce RepMatch, a novel method that characterizes data through the lens of similarity.RepMatch quantifies the similarity between subsets of training instances by comparing the knowledge encoded in models trained on them, overcoming the limitations of existing analysis methods that focus solely on individual instances and are restricted to within-dataset analysis.Our framework allows for a broader evaluation, enabling similarity comparisons across arbitrary subsets of instances, supporting both dataset-to-dataset and instance-to-dataset analyses. We validate the effectiveness of RepMatch across multiple NLP tasks, datasets, and models. Through extensive experimentation, we demonstrate that RepMatch can effectively compare datasets, identify more representative subsets of a dataset (that lead to better performance than randomly selected subsets of equivalent size), and uncover heuristics underlying the construction of some challenge datasets.",
}
| Advances in dataset analysis techniques have enabled more sophisticated approaches to analyzing and characterizing training data instances, often categorizing data based on attributes such as {``}difficulty{''}. In this work, we introduce RepMatch, a novel method that characterizes data through the lens of similarity.RepMatch quantifies the similarity between subsets of training instances by comparing the knowledge encoded in models trained on them, overcoming the limitations of existing analysis methods that focus solely on individual instances and are restricted to within-dataset analysis.Our framework allows for a broader evaluation, enabling similarity comparisons across arbitrary subsets of instances, supporting both dataset-to-dataset and instance-to-dataset analyses. We validate the effectiveness of RepMatch across multiple NLP tasks, datasets, and models. Through extensive experimentation, we demonstrate that RepMatch can effectively compare datasets, identify more representative subsets of a dataset (that lead to better performance than randomly selected subsets of equivalent size), and uncover heuristics underlying the construction of some challenge datasets. | [
"Modarres, Mohammad Reza",
"Abbasi, Sina",
"Pilehvar, Mohammad Taher"
] | RepMatch: Quantifying Cross-Instance Similarities in Representation Space | emnlp-main.825 | Poster | 2410.09642 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
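RepMatch, per the abstract above, compares subsets of training data through the models trained on them. A toy proxy for that idea, under stated assumptions, trains a lightweight model per subset and scores similarity as prediction agreement on a shared probe set; the paper's actual comparison is in representation space, so this is only an illustrative stand-in.

```python
# Toy proxy for comparing training subsets via the models they induce:
# similarity = prediction agreement on a shared probe set (an assumption,
# not the paper's representation-space measure).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def fit(texts, labels, vec):
    return LogisticRegression(max_iter=1000).fit(vec.transform(texts), labels)

subset_a = (["great movie", "awful plot"], [1, 0])
subset_b = (["loved it", "boring and bad"], [1, 0])
probe = ["an enjoyable film", "a terrible script", "fine acting"]

vec = TfidfVectorizer().fit(subset_a[0] + subset_b[0] + probe)
model_a, model_b = fit(*subset_a, vec), fit(*subset_b, vec)
similarity = np.mean(model_a.predict(vec.transform(probe))
                     == model_b.predict(vec.transform(probe)))
print(f"subset similarity (agreement proxy): {similarity:.2f}")
```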
https://aclanthology.org/2024.emnlp-main.826.bib | https://aclanthology.org/2024.emnlp-main.826/ | @inproceedings{huang-etal-2024-commonsense,
title = "Commonsense Knowledge Editing Based on Free-Text in {LLM}s",
author = "Huang, Xiusheng and
Wang, Yequan and
Zhao, Jun and
Liu, Kang",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.826",
pages = "14870--14880",
abstract = "Knowledge editing technology is crucial for maintaining the accuracy and timeliness of large language models (LLMs) . However, the setting of this task overlooks a significant portion of commonsense knowledge based on free-text in the real world, characterized by broad knowledge scope, long content and non instantiation. The editing objects of previous methods (e.g., MEMIT) were single token or entity, which were not suitable for commonsense knowledge in free-text form. To address the aforementioned challenges, we conducted experiments from two perspectives: knowledge localization and knowledge editing. Firstly, we introduced Knowledge Localization for Free-Text(KLFT) method, revealing the challenges associated with the distribution of commonsense knowledge in MLP and Attention layers, as well as in decentralized distribution. Next, we propose a Dynamics-aware Editing Method(DEM), which utilizes a Dynamics-aware Module to locate the parameter positions corresponding to commonsense knowledge, and uses Knowledge Editing Module to update knowledge. The DEM method fully explores the potential of the MLP and Attention layers, and successfully edits commonsense knowledge based on free-text. The experimental results indicate that the DEM can achieve excellent editing performance.",
}
] | Knowledge editing technology is crucial for maintaining the accuracy and timeliness of large language models (LLMs). However, the setting of this task overlooks a significant portion of commonsense knowledge based on free-text in the real world, characterized by broad knowledge scope, long content, and non-instantiation. The editing objects of previous methods (e.g., MEMIT) were single tokens or entities, which are not suitable for commonsense knowledge in free-text form. To address the aforementioned challenges, we conducted experiments from two perspectives: knowledge localization and knowledge editing. First, we introduce the Knowledge Localization for Free-Text (KLFT) method, revealing the challenges associated with the distribution of commonsense knowledge across MLP and Attention layers, as well as its decentralized distribution. Next, we propose a Dynamics-aware Editing Method (DEM), which utilizes a Dynamics-aware Module to locate the parameter positions corresponding to commonsense knowledge, and uses a Knowledge Editing Module to update knowledge. The DEM method fully explores the potential of the MLP and Attention layers, and successfully edits commonsense knowledge based on free-text. The experimental results indicate that DEM can achieve excellent editing performance. | [
"Huang, Xiusheng",
"Wang, Yequan",
"Zhao, Jun",
"Liu, Kang"
] | Commonsense Knowledge Editing Based on Free-Text in LLMs | emnlp-main.826 | Poster | 2410.23844 | [
"https://github.com/huangxiusheng/dem"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.827.bib | https://aclanthology.org/2024.emnlp-main.827/ | @inproceedings{pendzel-etal-2024-closer,
title = "A Closer Look at Multidimensional Online Political Incivility",
author = "Pendzel, Sagi and
Lotan, Nir and
Zoizner, Alon and
Minkov, Einat",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.827",
pages = "14881--14896",
abstract = "Toxic online political discourse has become prevalent, where scholars debate about its impact to Democratic processes. This work presents a large-scale study of political incivility on Twitter. In line with theories of political communication, we differentiate between harsh {`}impolite{'} style and intolerant substance. We present a dataset of 13K political tweets in the U.S. context, which we collected and labeled by those categories using crowd sourcing. Our dataset and results shed light on hostile political discourse focused on partisan conflicts in the U.S. The evaluation of state-of-the-art classifiers illustrates the challenges involved in political incivility detection, which often requires high-level semantic and social understanding. Nevertheless, performing incivility detection at scale, we are able to characterise its distribution across individual users and geopolitical regions, where our findings align and extend existing theories of political communication. In particular, we find that roughly 80{\%} of the uncivil tweets are authored by 20{\%} of the users, where users who are politically engaged are more inclined to use uncivil language. We further find that political incivility exhibits network homophily, and that incivility is more prominent in highly competitive geopolitical regions. Our results apply to both uncivil style and substance.",
}
] | Toxic online political discourse has become prevalent, where scholars debate its impact on democratic processes. This work presents a large-scale study of political incivility on Twitter. In line with theories of political communication, we differentiate between harsh {`}impolite{'} style and intolerant substance. We present a dataset of 13K political tweets in the U.S. context, which we collected and labeled by those categories using crowdsourcing. Our dataset and results shed light on hostile political discourse focused on partisan conflicts in the U.S. The evaluation of state-of-the-art classifiers illustrates the challenges involved in political incivility detection, which often requires high-level semantic and social understanding. Nevertheless, performing incivility detection at scale, we are able to characterise its distribution across individual users and geopolitical regions, where our findings align with and extend existing theories of political communication. In particular, we find that roughly 80{\%} of the uncivil tweets are authored by 20{\%} of the users, where users who are politically engaged are more inclined to use uncivil language. We further find that political incivility exhibits network homophily, and that incivility is more prominent in highly competitive geopolitical regions. Our results apply to both uncivil style and substance. | [
"Pendzel, Sagi",
"Lotan, Nir",
"Zoizner, Alon",
"Minkov, Einat"
] | A Closer Look at Multidimensional Online Political Incivility | emnlp-main.827 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.828.bib | https://aclanthology.org/2024.emnlp-main.828/ | @inproceedings{li-etal-2024-leveraging,
title = "Leveraging {BERT} and {TFIDF} Features for Short Text Clustering via Alignment-Promoting Co-Training",
author = "Li, Zetong and
Su, Qinliang and
Si, Shijing and
Yu, Jianxing",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.828",
pages = "14897--14913",
abstract = "BERT and TFIDF features excel in capturing rich semantics and important words, respectively. Since most existing clustering methods are solely based on the BERT model, they often fall short in utilizing keyword information, which, however, is very useful in clustering short texts. In this paper, we propose a **CO**-**T**raining **C**lustering (**COTC**) framework to make use of the collective strengths of BERT and TFIDF features. Specifically, we develop two modules responsible for the clustering of BERT and TFIDF features, respectively. We use the deep representations and cluster assignments from the TFIDF module outputs to guide the learning of the BERT module, seeking to align them at both the representation and cluster levels. Reversely, we also use the BERT module outputs to train the TFIDF module, thus leading to the mutual promotion. We then show that the alternating co-training framework can be placed under a unified joint training objective, which allows the two modules to be connected tightly and the training signals to be propagated efficiently. Experiments on eight benchmark datasets show that our method outperforms current SOTA methods significantly.",
}
] | BERT and TFIDF features excel in capturing rich semantics and important words, respectively. Since most existing clustering methods are solely based on the BERT model, they often fall short in utilizing keyword information, which, however, is very useful in clustering short texts. In this paper, we propose a **CO**-**T**raining **C**lustering (**COTC**) framework to make use of the collective strengths of BERT and TFIDF features. Specifically, we develop two modules responsible for the clustering of BERT and TFIDF features, respectively. We use the deep representations and cluster assignments from the TFIDF module outputs to guide the learning of the BERT module, seeking to align them at both the representation and cluster levels. Conversely, we also use the BERT module outputs to train the TFIDF module, thus leading to mutual promotion. We then show that the alternating co-training framework can be placed under a unified joint training objective, which allows the two modules to be connected tightly and the training signals to be propagated efficiently. Experiments on eight benchmark datasets show that our method outperforms current SOTA methods significantly. | [
"Li, Zetong",
"Su, Qinliang",
"Si, Shijing",
"Yu, Jianxing"
] | Leveraging BERT and TFIDF Features for Short Text Clustering via Alignment-Promoting Co-Training | emnlp-main.828 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
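The two feature views used by COTC can be illustrated with a minimal sketch: cluster the same corpus with TFIDF features and with BERT embeddings separately, then align the two label sets via Hungarian matching to measure cross-view agreement. The paper trains both modules jointly with an alignment objective; this only shows the views and the cluster-level correspondence, and the tiny corpus is a toy assumption.

```python
# Minimal sketch: TFIDF view vs. BERT view of the same short texts, with a
# Hungarian matching to align cluster IDs across the two views.
import numpy as np
import torch
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.optimize import linear_sum_assignment
from transformers import AutoTokenizer, AutoModel

texts = ["cheap flights to rome", "rome airfare deals",
         "python list sort", "sorting lists in python"]

tfidf_labels = KMeans(n_clusters=2, n_init=10).fit_predict(
    TfidfVectorizer().fit_transform(texts).toarray())

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    enc = tok(texts, padding=True, return_tensors="pt")
    # Mean pooling over tokens (padding included; fine for a sketch).
    emb = bert(**enc).last_hidden_state.mean(dim=1).numpy()
bert_labels = KMeans(n_clusters=2, n_init=10).fit_predict(emb)

# cost[i, j] = -(co-occurrence of TFIDF cluster i with BERT cluster j)
cost = -np.array([[np.sum((tfidf_labels == i) & (bert_labels == j))
                   for j in range(2)] for i in range(2)])
rows, cols = linear_sum_assignment(cost)   # best cluster correspondence
agreement = -cost[rows, cols].sum() / len(texts)
print(f"cross-view cluster agreement: {agreement:.2f}")
```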
https://aclanthology.org/2024.emnlp-main.829.bib | https://aclanthology.org/2024.emnlp-main.829/ | @inproceedings{iluz-etal-2024-applying,
title = "Applying Intrinsic Debiasing on Downstream Tasks: Challenges and Considerations for Machine Translation",
author = "Iluz, Bar and
Elazar, Yanai and
Yehudai, Asaf and
Stanovsky, Gabriel",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.829",
pages = "14914--14921",
abstract = "Most works on gender bias focus on intrinsic bias {---} removing traces of information about a protected group from the model{'}s internal representation. However, these works are often disconnected from the impact of such debiasing on downstream applications, which is the main motivation for debiasing in the first place. In this work, we systematically test how methods for intrinsic debiasing affect neural machine translation models, by measuring the extrinsic bias of such systems under different design choices. We highlight three challenges and mismatches between the debiasing techniques and their end-goal usage, including the choice of embeddings to debias, the mismatch between words and sub-word tokens debiasing, and the effect on different target languages. We find that these considerations have a significant impact on downstream performance and the success of debiasing.",
}
| Most works on gender bias focus on intrinsic bias {---} removing traces of information about a protected group from the model{'}s internal representation. However, these works are often disconnected from the impact of such debiasing on downstream applications, which is the main motivation for debiasing in the first place. In this work, we systematically test how methods for intrinsic debiasing affect neural machine translation models, by measuring the extrinsic bias of such systems under different design choices. We highlight three challenges and mismatches between the debiasing techniques and their end-goal usage, including the choice of embeddings to debias, the mismatch between words and sub-word tokens debiasing, and the effect on different target languages. We find that these considerations have a significant impact on downstream performance and the success of debiasing. | [
"Iluz, Bar",
"Elazar, Yanai",
"Yehudai, Asaf",
"Stanovsky, Gabriel"
] | Applying Intrinsic Debiasing on Downstream Tasks: Challenges and Considerations for Machine Translation | emnlp-main.829 | Poster | 2406.00787 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
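For context on what "intrinsic debiasing" means operationally, here is a generic hard-debiasing sketch in the Bolukbasi style: remove the component of each embedding along a gender direction. This is the class of intrinsic method whose downstream MT effects the paper studies; the vectors below are toy stand-ins, not the paper's pipeline.

```python
# Generic hard-debiasing sketch: project out a (gender) direction g from
# each embedding, i.e. e <- e - <e, g_hat> g_hat.
import numpy as np

def debias(E: np.ndarray, g: np.ndarray) -> np.ndarray:
    g = g / np.linalg.norm(g)            # unit bias direction
    return E - np.outer(E @ g, g)        # remove the component along g

rng = np.random.default_rng(0)
E = rng.normal(size=(5, 8))              # toy word embeddings
g = rng.normal(size=8)                   # e.g. mean of (he - she) difference vectors
E_clean = debias(E, g)
print(np.allclose(E_clean @ (g / np.linalg.norm(g)), 0))   # True
```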
https://aclanthology.org/2024.emnlp-main.830.bib | https://aclanthology.org/2024.emnlp-main.830/ | @inproceedings{datta-pramanik-2024-unsupervised,
title = "Unsupervised Named Entity Disambiguation for Low Resource Domains",
author = "Datta, Debarghya and
Pramanik, Soumajit",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.830",
pages = "14922--14928",
abstract = "In the ever-evolving landscape of natural language processing and information retrieval, the need for robust and domain-specific entity linking algorithms has become increasingly apparent. It is crucial in a considerable number of fields such as humanities, technical writing and biomedical sciences to enrich texts with semantics and discover more knowledge. The use of Named Entity Disambiguation (NED) in such domains requires handling noisy texts, low resource settings and domain-specific KBs. Existing approaches are mostly inappropriate for such scenarios, as they either depend on training data or are not flexible enough to work with domain-specific KBs. Thus in this work, we present a unsupervised approach leveraging the concept of Group Steiner Trees (GST), which can identify the most relevant candidate for entity disambiguation using the contextual similarities across candidate entities for all the mentions present in a document. We outperform the state-of-the-art unsupervised methods by more than 40{\%}(in avg) in terms of Precision@1 and Hit@5 across various domain-specific datasets.",
}
] | In the ever-evolving landscape of natural language processing and information retrieval, the need for robust and domain-specific entity linking algorithms has become increasingly apparent. It is crucial in a considerable number of fields such as humanities, technical writing and biomedical sciences to enrich texts with semantics and discover more knowledge. The use of Named Entity Disambiguation (NED) in such domains requires handling noisy texts, low resource settings and domain-specific KBs. Existing approaches are mostly inappropriate for such scenarios, as they either depend on training data or are not flexible enough to work with domain-specific KBs. Thus, in this work, we present an unsupervised approach leveraging the concept of Group Steiner Trees (GST), which can identify the most relevant candidate for entity disambiguation using the contextual similarities across candidate entities for all the mentions present in a document. We outperform the state-of-the-art unsupervised methods by more than 40{\%} (on average) in terms of Precision@1 and Hit@5 across various domain-specific datasets. | [
"Datta, Debarghya",
"Pramanik, Soumajit"
] | Unsupervised Named Entity Disambiguation for Low Resource Domains | emnlp-main.830 | Oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
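The GST-based disambiguation above picks one candidate per mention so that the chosen candidates are mutually coherent. For tiny candidate sets, a brute-force coherence maximization conveys the same idea; this is explicitly a proxy (the paper solves it with Group Steiner Trees), and the random embeddings are placeholders for real contextual similarities.

```python
# Toy proxy for GST-based NED: choose one candidate per mention maximizing
# total pairwise contextual similarity. Brute force is only feasible here
# because the example is tiny.
from itertools import combinations, product
import numpy as np

def coherence(vectors):
    return sum(float(u @ v) for u, v in combinations(vectors, 2))

# candidates[mention] = {entity_name: context_embedding}
rng = np.random.default_rng(1)
candidates = {
    "Paris":  {"Paris_France": rng.normal(size=4),
               "Paris_Texas": rng.normal(size=4)},
    "Seine":  {"Seine_River": rng.normal(size=4)},
    "Louvre": {"Louvre_Museum": rng.normal(size=4)},
}
best = max(product(*[c.items() for c in candidates.values()]),
           key=lambda assign: coherence([v for _, v in assign]))
print({m: name for m, (name, _) in zip(candidates, best)})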
https://aclanthology.org/2024.emnlp-main.831.bib | https://aclanthology.org/2024.emnlp-main.831/ | @inproceedings{chekalina-etal-2024-sparsegrad,
title = "{S}parse{G}rad: A Selective Method for Efficient Fine-tuning of {MLP} Layers",
author = "Chekalina, Viktoriia A. and
Rudenko, Anna and
Mezentsev, Gleb and
Mikhalev, Aleksandr and
Panchenko, Alexander and
Oseledets, Ivan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.831",
pages = "14929--14939",
abstract = "The performance of Transformer models has been enhanced by increasing the number of parameters and the length of the processed text. Consequently, fine-tuning the entire model becomes a memory-intensive process. High-performance methods for parameter-efficient fine-tuning (PEFT) typically work with Attention blocks and often overlook MLP blocks, which contain about half of the model parameters. We propose a new selective PEFT method, namely SparseGrad, that performs well on MLP blocks. We transfer layer gradients to a space where only about 1{\%} of the layer{'}s elements remain significant. By converting gradients into a sparse structure, we reduce the number of updated parameters. We apply SparseGrad to fine-tune BERT and RoBERTa for the NLU task and LLaMa-2 for the Question-Answering task. In these experiments, with identical memory requirements, our method outperforms LoRA and MeProp, robust popular state-of-the-art PEFT approaches.",
}
| The performance of Transformer models has been enhanced by increasing the number of parameters and the length of the processed text. Consequently, fine-tuning the entire model becomes a memory-intensive process. High-performance methods for parameter-efficient fine-tuning (PEFT) typically work with Attention blocks and often overlook MLP blocks, which contain about half of the model parameters. We propose a new selective PEFT method, namely SparseGrad, that performs well on MLP blocks. We transfer layer gradients to a space where only about 1{\%} of the layer{'}s elements remain significant. By converting gradients into a sparse structure, we reduce the number of updated parameters. We apply SparseGrad to fine-tune BERT and RoBERTa for the NLU task and LLaMa-2 for the Question-Answering task. In these experiments, with identical memory requirements, our method outperforms LoRA and MeProp, robust popular state-of-the-art PEFT approaches. | [
"Chekalina, Viktoriia A.",
"Rudenko, Anna",
"Mezentsev, Gleb",
"Mikhalev, Aleks",
"r",
"Panchenko, Alex",
"er",
"Oseledets, Ivan"
] | SparseGrad: A Selective Method for Efficient Fine-tuning of MLP Layers | emnlp-main.831 | Poster | 2410.07383 | [
"https://github.com/sayankotor/sparse_grads"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
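The selective-update idea above (only ~1% of an MLP layer's gradient elements remain significant) can be sketched with a gradient hook that zeroes all but the top-magnitude entries. This is simplified: SparseGrad additionally transforms gradients into a basis where that sparsity holds, a step omitted here.

```python
# Simplified sketch of sparse selective fine-tuning of MLP weights: keep only
# the top-1% magnitude gradient entries via a per-parameter gradient hook.
import torch
import torch.nn as nn

def sparsify(grad: torch.Tensor, keep: float = 0.01) -> torch.Tensor:
    k = max(1, int(keep * grad.numel()))
    thresh = grad.abs().flatten().topk(k).values[-1]
    return grad * (grad.abs() >= thresh)

mlp = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64))
for p in mlp.parameters():
    if p.dim() == 2:                       # weight matrices only
        p.register_hook(sparsify)

loss = mlp(torch.randn(8, 64)).pow(2).mean()
loss.backward()                            # weight grads are now ~99% zeros
w = mlp[0].weight.grad
print(f"nonzero grad fraction: {(w != 0).float().mean().item():.3f}")
```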
https://aclanthology.org/2024.emnlp-main.832.bib | https://aclanthology.org/2024.emnlp-main.832/ | @inproceedings{li-etal-2024-mocokgc,
title = "{M}o{C}o{KGC}: Momentum Contrast Entity Encoding for Knowledge Graph Completion",
author = "Li, Qingyang and
Zhong, Yanru and
Qin, Yuchu",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.832",
pages = "14940--14952",
abstract = "In recent years, numerous studies have sought to enhance the capabilities of pretrained language models (PLMs) for Knowledge Graph Completion (KGC) tasks by integrating structural information from knowledge graphs. However, existing approaches have not effectively combined the structural attributes of knowledge graphs with the textual descriptions of entities to generate robust entity encodings.To address this issue, this paper proposes MoCoKGC (Momentum Contrast Entity Encoding for Knowledge Graph Completion), which incorporates three primary encoders: the entity-relation encoder, the entity encoder, and the momentum entity encoder. Momentum contrastive learning not only provides more negative samples but also allows for the gradual updating of entity encodings. Consequently, we reintroduce the generated entity encodings into the encoder to incorporate the graph{'}s structural information.Additionally, MoCoKGC enhances the inferential capabilities of the entity-relation encoder through deep prompts of relations. On the standard evaluation metric, Mean Reciprocal Rank (MRR), the MoCoKGC model demonstrates superior performance, achieving a 7.1{\%} improvement on the WN18RR dataset and an 11{\%} improvement on the Wikidata5M dataset, while also surpassing the current best model on the FB15k-237 dataset. Through a series of experiments, this paper thoroughly examines the role and contribution of each component and parameter of the model.",
}
] | In recent years, numerous studies have sought to enhance the capabilities of pretrained language models (PLMs) for Knowledge Graph Completion (KGC) tasks by integrating structural information from knowledge graphs. However, existing approaches have not effectively combined the structural attributes of knowledge graphs with the textual descriptions of entities to generate robust entity encodings. To address this issue, this paper proposes MoCoKGC (Momentum Contrast Entity Encoding for Knowledge Graph Completion), which incorporates three primary encoders: the entity-relation encoder, the entity encoder, and the momentum entity encoder. Momentum contrastive learning not only provides more negative samples but also allows for the gradual updating of entity encodings. Consequently, we reintroduce the generated entity encodings into the encoder to incorporate the graph{'}s structural information. Additionally, MoCoKGC enhances the inferential capabilities of the entity-relation encoder through deep prompts of relations. On the standard evaluation metric, Mean Reciprocal Rank (MRR), the MoCoKGC model demonstrates superior performance, achieving a 7.1{\%} improvement on the WN18RR dataset and an 11{\%} improvement on the Wikidata5M dataset, while also surpassing the current best model on the FB15k-237 dataset. Through a series of experiments, this paper thoroughly examines the role and contribution of each component and parameter of the model. | [
"Li, Qingyang",
"Zhong, Yanru",
"Qin, Yuchu"
] | MoCoKGC: Momentum Contrast Entity Encoding for Knowledge Graph Completion | emnlp-main.832 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
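The momentum entity encoder above is the MoCo-style exponential-moving-average copy of the entity encoder, whose slowly moving outputs feed a FIFO queue of negatives. A minimal sketch of that machinery follows; the encoder internals and sizes are placeholders, not MoCoKGC's architecture.

```python
# Minimal sketch of momentum-contrast machinery: EMA update of a momentum
# encoder plus a FIFO queue of negative encodings.
import copy
import torch
import torch.nn as nn

encoder = nn.Linear(32, 32)              # stands in for the entity encoder
momentum_encoder = copy.deepcopy(encoder)
for p in momentum_encoder.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def momentum_update(m: float = 0.999):
    for q, k in zip(encoder.parameters(), momentum_encoder.parameters()):
        k.mul_(m).add_(q, alpha=1 - m)   # k <- m*k + (1-m)*q

queue = torch.randn(4096, 32)            # negative entity encodings
batch = torch.randn(8, 32)
with torch.no_grad():
    keys = momentum_encoder(batch)
queue = torch.cat([keys, queue])[:4096]  # enqueue new keys, drop oldest
momentum_update()
```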
https://aclanthology.org/2024.emnlp-main.833.bib | https://aclanthology.org/2024.emnlp-main.833/ | @inproceedings{su-etal-2024-actplan,
title = "{A}ct{P}lan-1{K}: Benchmarking the Procedural Planning Ability of Visual Language Models in Household Activities",
author = "Su, Ying and
Ling, Zhan and
Shi, Haochen and
Jiayang, Cheng and
Yim, Yauwai and
Song, Yangqiu",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.833",
pages = "14953--14965",
abstract = "Large language models(LLMs) have been adopted to process textual task description and accomplish procedural planning in embodied AI tasks because of their powerful reasoning ability. However, there is still lack of study on how vision language models(VLMs) behave when multi-modal task inputs are considered. Counterfactual planning that evaluates the model{'}s reasoning ability over alternative task situations are also under exploited. In order to evaluate the planning ability of both multi-modal and counterfactual aspects, we propose ActPlan-1K. ActPlan-1K is a multi-modal planning benchmark constructed based on ChatGPT and household activity simulator iGibson2. The benchmark consists of 153 activities and 1,187 instances. Each instance describing one activity has a natural language task description and multiple environment images from the simulator. The gold plan of each instance is action sequences over the objects in provided scenes. Both the correctness and commonsense satisfaction are evaluated on typical VLMs. It turns out that current VLMs are still struggling at generating human-level procedural plans for both normal activities and counterfactual activities. We further provide automatic evaluation metrics by finetuning over BLEURT model to facilitate future research on our benchmark.",
}
] | Large language models (LLMs) have been adopted to process textual task descriptions and accomplish procedural planning in embodied AI tasks because of their powerful reasoning ability. However, there is still a lack of study on how vision language models (VLMs) behave when multi-modal task inputs are considered. Counterfactual planning, which evaluates the model{'}s reasoning ability over alternative task situations, is also underexploited. In order to evaluate planning ability in both multi-modal and counterfactual aspects, we propose ActPlan-1K. ActPlan-1K is a multi-modal planning benchmark constructed based on ChatGPT and the household activity simulator iGibson2. The benchmark consists of 153 activities and 1,187 instances. Each instance describing one activity has a natural language task description and multiple environment images from the simulator. The gold plan of each instance is an action sequence over the objects in the provided scenes. Both correctness and commonsense satisfaction are evaluated on typical VLMs. It turns out that current VLMs still struggle to generate human-level procedural plans for both normal and counterfactual activities. We further provide automatic evaluation metrics by finetuning a BLEURT model to facilitate future research on our benchmark. | [
"Su, Ying",
"Ling, Zhan",
"Shi, Haochen",
"Jiayang, Cheng",
"Yim, Yauwai",
"Song, Yangqiu"
] | ActPlan-1K: Benchmarking the Procedural Planning Ability of Visual Language Models in Household Activities | emnlp-main.833 | Poster | 2410.03907 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.834.bib | https://aclanthology.org/2024.emnlp-main.834/ | @inproceedings{xie-etal-2024-shortcuts,
title = "Shortcuts Arising from Contrast: Towards Effective and Lightweight Clean-Label Attacks in Prompt-Based Learning",
author = "Xie, Xiaopeng and
Yan, Ming and
Zhou, Xiwen and
Zhao, Chenlong and
Wang, Suli and
Zhang, Yong and
Zhou, Joey Tianyi",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.834",
pages = "14966--14977",
abstract = "Prompt-based learning paradigm has been shown to be vulnerable to backdoor attacks. Current clean-label attack, employing a specific prompt as trigger, can achieve success without the need for external triggers and ensuring correct labeling of poisoned samples, which are more stealthy compared to the poisoned-label attack, but on the other hand, facing significant issues with false activations and pose greater challenges, necessitating a higher rate of poisoning. Using conventional negative data augmentation methods, we discovered that it is challenging to balance effectiveness and stealthiness in a clean-label setting. In addressing this issue, we are inspired by the notion that a backdoor acts as a shortcut, and posit that this shortcut stems from the contrast between the trigger and the data utilized for poisoning. In this study, we propose a method named Contrastive Shortcut Injection (CSI), by leveraging activation values, integrates trigger design and data selection strategies to craft stronger shortcut features. With extensive experiments on full-shot and few-shot text classification tasks, we empirically validate CSI{'}s high effectiveness and high stealthiness at low poisoning rates.",
}
] | The prompt-based learning paradigm has been shown to be vulnerable to backdoor attacks. Current clean-label attacks, which employ a specific prompt as the trigger, can succeed without external triggers while ensuring correct labeling of poisoned samples, making them stealthier than poisoned-label attacks; on the other hand, they face significant issues with false activations and pose greater challenges, necessitating a higher rate of poisoning. Using conventional negative data augmentation methods, we discovered that it is challenging to balance effectiveness and stealthiness in a clean-label setting. In addressing this issue, we are inspired by the notion that a backdoor acts as a shortcut, and posit that this shortcut stems from the contrast between the trigger and the data utilized for poisoning. In this study, we propose a method named Contrastive Shortcut Injection (CSI), which leverages activation values to integrate trigger design and data selection strategies to craft stronger shortcut features. With extensive experiments on full-shot and few-shot text classification tasks, we empirically validate CSI{'}s high effectiveness and high stealthiness at low poisoning rates. | [
"Xie, Xiaopeng",
"Yan, Ming",
"Zhou, Xiwen",
"Zhao, Chenlong",
"Wang, Suli",
"Zhang, Yong",
"Zhou, Joey Tianyi"
] | Shortcuts Arising from Contrast: Towards Effective and Lightweight Clean-Label Attacks in Prompt-Based Learning | emnlp-main.834 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.835.bib | https://aclanthology.org/2024.emnlp-main.835/ | @inproceedings{muhamed-etal-2024-grass,
title = "{GRASS}: Compute Efficient Low-Memory {LLM} Training with Structured Sparse Gradients",
author = "Muhamed, Aashiq and
Li, Oscar and
Woodruff, David and
Diab, Mona T. and
Smith, Virginia",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.835",
pages = "14978--15003",
}
| No abstract found | [
"Muhamed, Aashiq",
"Li, Oscar",
"Woodruff, David",
"Diab, Mona T.",
"Smith, Virginia"
] | GRASS: Compute Efficient Low-Memory LLM Training with Structured Sparse Gradients | emnlp-main.835 | Poster | 2406.17660 | [
"https://github.com/aashiqmuhamed/grass"
] | https://huggingface.co/papers/2406.17660 | 2 | 5 | 1 | 5 | [] | [] | [] | [] | [] | [] | 1 |
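GRASS's structured sparse gradients can be sketched as a projection that keeps only a sampled subset of gradient rows (a sparse projection matrix with one nonzero per row), so optimizer state lives in the small space before scattering the update back. The norm-biased sampling and rescaling below are simplified assumptions, not the paper's exact estimator.

```python
# Simplified sketch of structured sparse gradient projection: compress an
# m x n gradient to r rows, update in the small space, scatter back (P^T).
import torch

def project(grad: torch.Tensor, r: int):
    norms = grad.norm(dim=1)
    rows = torch.multinomial(norms / norms.sum(), r)  # norm-biased sampling
    scale = 1.0 / (norms[rows] / norms.sum() * r)     # rescaling (simplified)
    return rows, grad[rows] * scale.unsqueeze(1)      # r x n compressed grad

grad = torch.randn(1024, 512)                         # full gradient, m x n
rows, small = project(grad, r=64)                     # 16x smaller state
update = -1e-3 * small                                # e.g. SGD in low dim
full_update = torch.zeros_like(grad)
full_update[rows] = update                            # scatter back to m x n
```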
https://aclanthology.org/2024.emnlp-main.836.bib | https://aclanthology.org/2024.emnlp-main.836/ | @inproceedings{zhao-etal-2024-ratescore,
title = "{R}a{TES}core: A Metric for Radiology Report Generation",
author = "Zhao, Weike and
Wu, Chaoyi and
Zhang, Xiaoman and
Zhang, Ya and
Wang, Yanfeng and
Xie, Weidi",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.836",
pages = "15004--15019",
abstract = "This paper introduces a novel, entity-aware metric, termed as Radiological Report (Text) Evaluation (RaTEScore), to assess the quality of medical reports generated by AI models. RaTEScore emphasizes crucial medical entities such as diagnostic outcomes and anatomical details, and is robust against complex medical synonyms and sensitive to negation expressions. Technically, we developed a comprehensive medical NER dataset, RaTE-NER, and trained an NER model specifically for this purpose. This model enables the decomposition of complex radiological reports into constituent medical entities. The metric itself is derived by comparing the similarity of entity embeddings, obtained from a language model, based on their types and relevance to clinical significance. Our evaluations demonstrate that RaTEScore aligns more closely with human preference than existing metrics, validated both on established public benchmarks and our newly proposed RaTE-Eval benchmark.",
}
| This paper introduces a novel, entity-aware metric, termed as Radiological Report (Text) Evaluation (RaTEScore), to assess the quality of medical reports generated by AI models. RaTEScore emphasizes crucial medical entities such as diagnostic outcomes and anatomical details, and is robust against complex medical synonyms and sensitive to negation expressions. Technically, we developed a comprehensive medical NER dataset, RaTE-NER, and trained an NER model specifically for this purpose. This model enables the decomposition of complex radiological reports into constituent medical entities. The metric itself is derived by comparing the similarity of entity embeddings, obtained from a language model, based on their types and relevance to clinical significance. Our evaluations demonstrate that RaTEScore aligns more closely with human preference than existing metrics, validated both on established public benchmarks and our newly proposed RaTE-Eval benchmark. | [
"Zhao, Weike",
"Wu, Chaoyi",
"Zhang, Xiaoman",
"Zhang, Ya",
"Wang, Yanfeng",
"Xie, Weidi"
] | RaTEScore: A Metric for Radiology Report Generation | emnlp-main.836 | Poster | 2406.16845 | [
"https://github.com/MAGIC-AI4Med/RaTEScore"
] | https://huggingface.co/papers/2406.16845 | 4 | 4 | 1 | 6 | [
"Angelakeke/RaTE-NER-Deberta"
] | [
"Angelakeke/RaTE-Eval",
"Angelakeke/RaTE-NER"
] | [] | [
"Angelakeke/RaTE-NER-Deberta"
] | [
"Angelakeke/RaTE-Eval",
"Angelakeke/RaTE-NER"
] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.837.bib | https://aclanthology.org/2024.emnlp-main.837/ | @inproceedings{akbar-etal-2024-hallumeasure,
title = "{H}allu{M}easure: Fine-grained Hallucination Measurement Using Chain-of-Thought Reasoning",
author = "Akbar, Shayan Ali and
Hossain, Md Mosharaf and
Wood, Tess and
Chin, Si-Chi and
Salinas, Erica M and
Alvarez, Victor and
Cornejo, Erwin",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.837",
pages = "15020--15037",
abstract = "Automating the measurement of hallucinations in LLM generated responses is a challenging task as it requires careful investigation of each factual claim in a response. In this paper, we introduce HalluMeasure, a new LLM-based hallucination detection mechanism that decomposes an LLM response into atomic claims, and evaluates each atomic claim against the provided reference context. The model uses a step-by-step reasoning process called Chain-of-Thought and can identify 3 major categories of hallucinations (e.g., contradiction) as well as 10 more specific subtypes (e.g., overgeneralization) which help to identify reasons behind the hallucination errors. Specifically, we explore four different configurations for HalluMeasure{'}s classifier: with and without CoT prompting, and using a single classifier call to classify all claims versus separate calls for each claim. The best-performing configuration (with CoT and separate calls for each claim) demonstrates significant improvements in detecting hallucinations, achieving a 10-point increase in F1 score on our TechNewsSumm dataset, and a 3-point increase in AUC ROC on the SummEval dataset, compared to three baseline models (RefChecker, AlignScore, and Vectara HHEM). We further show reasonable accuracy on detecting 10 novel error subtypes of hallucinations (where even humans struggle in classification) derived from linguistic analysis of the errors made by the LLMs.",
}
| Automating the measurement of hallucinations in LLM generated responses is a challenging task as it requires careful investigation of each factual claim in a response. In this paper, we introduce HalluMeasure, a new LLM-based hallucination detection mechanism that decomposes an LLM response into atomic claims, and evaluates each atomic claim against the provided reference context. The model uses a step-by-step reasoning process called Chain-of-Thought and can identify 3 major categories of hallucinations (e.g., contradiction) as well as 10 more specific subtypes (e.g., overgeneralization) which help to identify reasons behind the hallucination errors. Specifically, we explore four different configurations for HalluMeasure{'}s classifier: with and without CoT prompting, and using a single classifier call to classify all claims versus separate calls for each claim. The best-performing configuration (with CoT and separate calls for each claim) demonstrates significant improvements in detecting hallucinations, achieving a 10-point increase in F1 score on our TechNewsSumm dataset, and a 3-point increase in AUC ROC on the SummEval dataset, compared to three baseline models (RefChecker, AlignScore, and Vectara HHEM). We further show reasonable accuracy on detecting 10 novel error subtypes of hallucinations (where even humans struggle in classification) derived from linguistic analysis of the errors made by the LLMs. | [
"Akbar, Shayan Ali",
"Hossain, Md Mosharaf",
"Wood, Tess",
"Chin, Si-Chi",
"Salinas, Erica M",
"Alvarez, Victor",
"Cornejo, Erwin"
] | HalluMeasure: Fine-grained Hallucination Measurement Using Chain-of-Thought Reasoning | emnlp-main.837 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.838.bib | https://aclanthology.org/2024.emnlp-main.838/ | @inproceedings{sotudeh-goharian-2024-learning,
title = "Learning to Rank Salient Content for Query-focused Summarization",
author = "Sotudeh, Sajad and
Goharian, Nazli",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.838",
pages = "15038--15048",
abstract = "This study examines the potential of integrating Learning-to-Rank (LTR) with Query-focused Summarization (QFS) to enhance the summary relevance via content prioritization. Using a shared secondary decoder with the summarization decoder, we carry out the LTR task at the segment level. Compared to the state-of-the-art, our model outperforms on QMSum benchmark (all metrics) and matches on SQuALITY benchmark (2 metrics) as measured by Rouge and BertScore while offering a lower training overhead. Specifically, on the QMSum benchmark, our proposed system achieves improvements, particularly in Rouge-L (+0.42) and BertScore (+0.34), indicating enhanced understanding and relevance. While facing minor challenges in Rouge-1 and Rouge-2 scores on the SQuALITY benchmark, the model significantly excels in Rouge-L (+1.47), underscoring its capability to generate coherent summaries. Human evaluations emphasize the efficacy of our method in terms of relevance and faithfulness of the generated summaries, without sacrificing fluency. A deeper analysis reveals our model{'}s superiority over the state-of-the-art for broad queries, as opposed to specific ones, from a qualitative standpoint. We further present an error analysis of our model, pinpointing challenges faced and suggesting potential directions for future research in this field.",
}
| This study examines the potential of integrating Learning-to-Rank (LTR) with Query-focused Summarization (QFS) to enhance the summary relevance via content prioritization. Using a shared secondary decoder with the summarization decoder, we carry out the LTR task at the segment level. Compared to the state-of-the-art, our model outperforms on QMSum benchmark (all metrics) and matches on SQuALITY benchmark (2 metrics) as measured by Rouge and BertScore while offering a lower training overhead. Specifically, on the QMSum benchmark, our proposed system achieves improvements, particularly in Rouge-L (+0.42) and BertScore (+0.34), indicating enhanced understanding and relevance. While facing minor challenges in Rouge-1 and Rouge-2 scores on the SQuALITY benchmark, the model significantly excels in Rouge-L (+1.47), underscoring its capability to generate coherent summaries. Human evaluations emphasize the efficacy of our method in terms of relevance and faithfulness of the generated summaries, without sacrificing fluency. A deeper analysis reveals our model{'}s superiority over the state-of-the-art for broad queries, as opposed to specific ones, from a qualitative standpoint. We further present an error analysis of our model, pinpointing challenges faced and suggesting potential directions for future research in this field. | [
"Sotudeh, Sajad",
"Goharian, Nazli"
] | Learning to Rank Salient Content for Query-focused Summarization | emnlp-main.838 | Poster | 2411.00324 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.839.bib | https://aclanthology.org/2024.emnlp-main.839/ | @inproceedings{ruan-etal-2024-large,
title = "Are Large Language Models Good Classifiers? A Study on Edit Intent Classification in Scientific Document Revisions",
author = "Ruan, Qian and
Kuznetsov, Ilia and
Gurevych, Iryna",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.839",
pages = "15049--15067",
abstract = "Classification is a core NLP task architecture with many potential applications. While large language models (LLMs) have brought substantial advancements in text generation, their potential for enhancing classification tasks remains underexplored. To address this gap, we propose a framework for thoroughly investigating fine-tuning LLMs for classification, including both generation- and encoding-based approaches. We instantiate this framework in edit intent classification (EIC), a challenging and underexplored classification task. Our extensive experiments and systematic comparisons with various training approaches and a representative selection of LLMs yield new insights into their application for EIC. We investigate the generalizability of these findings on five further classification tasks. To demonstrate the proposed methods and address the data shortage for empirical edit analysis, we use our best-performing EIC model to create Re3-Sci2.0, a new large-scale dataset of 1,780 scientific document revisions with over 94k labeled edits. The quality of the dataset is assessed through human evaluation. The new dataset enables an in-depth empirical study of human editing behavior in academic writing. We make our experimental framework, models and data publicly available.",
}
| Classification is a core NLP task architecture with many potential applications. While large language models (LLMs) have brought substantial advancements in text generation, their potential for enhancing classification tasks remains underexplored. To address this gap, we propose a framework for thoroughly investigating fine-tuning LLMs for classification, including both generation- and encoding-based approaches. We instantiate this framework in edit intent classification (EIC), a challenging and underexplored classification task. Our extensive experiments and systematic comparisons with various training approaches and a representative selection of LLMs yield new insights into their application for EIC. We investigate the generalizability of these findings on five further classification tasks. To demonstrate the proposed methods and address the data shortage for empirical edit analysis, we use our best-performing EIC model to create Re3-Sci2.0, a new large-scale dataset of 1,780 scientific document revisions with over 94k labeled edits. The quality of the dataset is assessed through human evaluation. The new dataset enables an in-depth empirical study of human editing behavior in academic writing. We make our experimental framework, models and data publicly available. | [
"Ruan, Qian",
"Kuznetsov, Ilia",
"Gurevych, Iryna"
] | Are Large Language Models Good Classifiers? A Study on Edit Intent Classification in Scientific Document Revisions | emnlp-main.839 | Poster | 2410.02028 | [
"https://github.com/UKPLab/llm_classifier"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.840.bib | https://aclanthology.org/2024.emnlp-main.840/ | @inproceedings{ajith-etal-2024-litsearch,
title = "{L}it{S}earch: A Retrieval Benchmark for Scientific Literature Search",
author = "Ajith, Anirudh and
Xia, Mengzhou and
Chevalier, Alexis and
Goyal, Tanya and
Chen, Danqi and
Gao, Tianyu",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.840",
pages = "15068--15083",
abstract = "Literature search questions, such as {``}where can I find research on the evaluation of consistency in generated summaries?{''} pose significant challenges for modern search engines and retrieval systems. These questions often require a deep understanding of research concepts and the ability to reason over entire articles. In this work, we introduce LitSearch, a retrieval benchmark comprising 597 realistic literature search queries about recent ML and NLP papers. LitSearch is constructed using a combination of (1) questions generated by GPT-4 based on paragraphs containing inline citations from research papers and (2) questions about recently published papers, manually written by their authors. All LitSearch questions were manually examined or edited by experts to ensure high quality. We extensively benchmark state-of-the-art retrieval models and also evaluate two LLM-based reranking pipelines. We find a significant performance gap between BM25 and state-of-the-art dense retrievers, with a 24.8{\%} difference in absolute recall@5. The LLM-based reranking strategies further improve the best-performing dense retriever by 4.4{\%}. Additionally, commercial search engines and research tools like Google Search perform poorly on LitSearch, lagging behind the best dense retriever by 32 points. Taken together, these results show that LitSearch is an informative new testbed for retrieval systems while catering to a real-world use case.",
}
| Literature search questions, such as {``}where can I find research on the evaluation of consistency in generated summaries?{''}, pose significant challenges for modern search engines and retrieval systems. These questions often require a deep understanding of research concepts and the ability to reason over entire articles. In this work, we introduce LitSearch, a retrieval benchmark comprising 597 realistic literature search queries about recent ML and NLP papers. LitSearch is constructed using a combination of (1) questions generated by GPT-4 based on paragraphs containing inline citations from research papers and (2) questions about recently published papers, manually written by their authors. All LitSearch questions were manually examined or edited by experts to ensure high quality. We extensively benchmark state-of-the-art retrieval models and also evaluate two LLM-based reranking pipelines. We find a significant performance gap between BM25 and state-of-the-art dense retrievers, with a 24.8{\%} difference in absolute recall@5. The LLM-based reranking strategies further improve the best-performing dense retriever by 4.4{\%}. Additionally, commercial search engines and research tools like Google Search perform poorly on LitSearch, lagging behind the best dense retriever by 32 points. Taken together, these results show that LitSearch is an informative new testbed for retrieval systems while catering to a real-world use case. | [
"Ajith, Anirudh",
"Xia, Mengzhou",
"Chevalier, Alexis",
"Goyal, Tanya",
"Chen, Danqi",
"Gao, Tianyu"
] | LitSearch: A Retrieval Benchmark for Scientific Literature Search | emnlp-main.840 | Poster | 2407.18940 | [
"https://github.com/princeton-nlp/litsearch"
] | https://huggingface.co/papers/2407.18940 | 0 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.841.bib | https://aclanthology.org/2024.emnlp-main.841/ | @inproceedings{li-etal-2024-open,
title = "Open-world Multi-label Text Classification with Extremely Weak Supervision",
author = "Li, Xintong and
Jiang, Jinya and
Dharmani, Ria and
Srinivasa, Jayanth and
Liu, Gaowen and
Shang, Jingbo",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.841",
pages = "15084--15096",
abstract = "We study open-world multi-label text classification under extremely weak supervision (XWS), where the user only provides a brief description for classification objectives without any labels or ground-truth label space. Similar single-label XWS settings have been explored recently, however, these methods cannot be easily adapted for multi-label. We observe that (1) most documents have a dominant class covering the majority of content and (2) long-tail labels would appear in some documents as a dominant class. Therefore, we first utilize the user description to prompt a large language model (LLM) for dominant keyphrases of a subset of raw documents, and then construct a (initial) label space via clustering. We further apply a zero-shot multi-label classifier to locate the documents with small top predicted scores, so we can revisit their dominant keyphrases for more long-tail labels. We iterate this process to discover a comprehensive label space and construct a multi-label classifier as a novel method, X-MLClass. X-MLClass exhibits a remarkable increase in ground-truth label space coverage on various datasets, for example, a 40{\%} improvement on the AAPD dataset over topic modeling and keyword extraction methods. Moreover, X-MLClass achieves the best end-to-end multi-label classification accuracy.",
}
| We study open-world multi-label text classification under extremely weak supervision (XWS), where the user only provides a brief description for classification objectives without any labels or ground-truth label space. Similar single-label XWS settings have been explored recently; however, these methods cannot be easily adapted for multi-label. We observe that (1) most documents have a dominant class covering the majority of content and (2) long-tail labels would appear in some documents as a dominant class. Therefore, we first utilize the user description to prompt a large language model (LLM) for dominant keyphrases of a subset of raw documents, and then construct an (initial) label space via clustering. We further apply a zero-shot multi-label classifier to locate the documents with small top predicted scores, so we can revisit their dominant keyphrases for more long-tail labels. We iterate this process to discover a comprehensive label space and construct a multi-label classifier as a novel method, X-MLClass. X-MLClass exhibits a remarkable increase in ground-truth label space coverage on various datasets, for example, a 40{\%} improvement on the AAPD dataset over topic modeling and keyword extraction methods. Moreover, X-MLClass achieves the best end-to-end multi-label classification accuracy. | [
"Li, Xintong",
"Jiang, Jinya",
"Dharmani, Ria",
"Srinivasa, Jayanth",
"Liu, Gaowen",
"Shang, Jingbo"
] | Open-world Multi-label Text Classification with Extremely Weak Supervision | emnlp-main.841 | Poster | 2407.05609 | [
"https://github.com/Kaylee0501/X-MLClass"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.842.bib | https://aclanthology.org/2024.emnlp-main.842/ | @inproceedings{liu-etal-2024-llms-learn,
title = "{LLM}s learn governing principles of dynamical systems, revealing an in-context neural scaling law",
author = {Liu, Toni J.b. and
Boulle, Nicolas and
Sarfati, Rapha{\"e}l and
Earls, Christopher},
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.842",
pages = "15097--15117",
abstract = "We study LLMs{'} ability to extrapolate the behavior of various dynamical systems, including stochastic, chaotic, continuous, and discrete systems, whose evolution is governed by principles of physical interest. Our results show that LLaMA-2, a language model trained on text, achieves accurate predictions of dynamical system time series without fine-tuning or prompt engineering. Moreover, the accuracy of the learned physical rules increases with the length of the input context window, revealing an in-context version of a neural scaling law. Along the way, we present a flexible and efficient algorithm for extracting probability density functions of multi-digit numbers directly from LLMs.",
}
| We study LLMs{'} ability to extrapolate the behavior of various dynamical systems, including stochastic, chaotic, continuous, and discrete systems, whose evolution is governed by principles of physical interest. Our results show that LLaMA-2, a language model trained on text, achieves accurate predictions of dynamical system time series without fine-tuning or prompt engineering. Moreover, the accuracy of the learned physical rules increases with the length of the input context window, revealing an in-context version of a neural scaling law. Along the way, we present a flexible and efficient algorithm for extracting probability density functions of multi-digit numbers directly from LLMs. | [
"Liu, Toni J.b.",
"Boulle, Nicolas",
"Sarfati, Rapha{\\\"e}l",
"Earls, Christopher"
] | LLMs learn governing principles of dynamical systems, revealing an in-context neural scaling law | emnlp-main.842 | Poster | 2402.00795 | [
"https://github.com/AntonioLiu97/llmICL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.843.bib | https://aclanthology.org/2024.emnlp-main.843/ | @inproceedings{wu-etal-2024-akew,
title = "{AKEW}: Assessing Knowledge Editing in the Wild",
author = "Wu, Xiaobao and
Pan, Liangming and
Wang, William Yang and
Luu, Anh Tuan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.843",
pages = "15118--15133",
abstract = "Knowledge editing injects knowledge updates into language models to keep them correct and up-to-date. However, its current evaluations deviate significantly from practice: their knowledge updates solely consist of structured facts derived from meticulously crafted datasets, instead of practical sources{---}unstructured texts like news articles, and they often overlook practical real-world knowledge updates. To address these issues, in this paper we propose AKEW (Assessing Knowledge Editing in the Wild), a new practical benchmark for knowledge editing. AKEW fully covers three editing settings of knowledge updates: structured facts, unstructured texts as facts, and extracted triplets. It further introduces new datasets featuring both counterfactual and real-world knowledge updates. Through extensive experiments, we demonstrate the considerable gap between state-of-the-art knowledge-editing methods and practical scenarios. Our analyses further highlight key insights to motivate future research for practical knowledge editing.",
}
| Knowledge editing injects knowledge updates into language models to keep them correct and up-to-date. However, its current evaluations deviate significantly from practice: their knowledge updates solely consist of structured facts derived from meticulously crafted datasets, instead of practical sources{---}unstructured texts like news articles, and they often overlook practical real-world knowledge updates. To address these issues, in this paper we propose AKEW (Assessing Knowledge Editing in the Wild), a new practical benchmark for knowledge editing. AKEW fully covers three editing settings of knowledge updates: structured facts, unstructured texts as facts, and extracted triplets. It further introduces new datasets featuring both counterfactual and real-world knowledge updates. Through extensive experiments, we demonstrate the considerable gap between state-of-the-art knowledge-editing methods and practical scenarios. Our analyses further highlight key insights to motivate future research for practical knowledge editing. | [
"Wu, Xiaobao",
"Pan, Liangming",
"Wang, William Yang",
"Luu, Anh Tuan"
] | AKEW: Assessing Knowledge Editing in the Wild | emnlp-main.843 | Poster | 2402.18909 | [
"https://github.com/BobXWu/AKEW"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.844.bib | https://aclanthology.org/2024.emnlp-main.844/ | @inproceedings{chen-etal-2024-copybench,
title = "{C}opy{B}ench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation",
author = "Chen, Tong and
Asai, Akari and
Mireshghallah, Niloofar and
Min, Sewon and
Grimmelmann, James and
Choi, Yejin and
Hajishirzi, Hannaneh and
Zettlemoyer, Luke and
Koh, Pang Wei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.844",
pages = "15134--15158",
abstract = "Evaluating the degree of reproduction of copyright-protected content by language models (LMs) is of significant interest to the AI and legal communities. Although both literal and non-literal similarities are considered by courts when assessing the degree of reproduction, prior research has focused only on literal similarities. To bridge this gap, we introduce CopyBench, a benchmark designed to measure both literal and non-literal copying in LM generations. Using copyrighted fiction books as text sources, we provide automatic evaluation protocols to assess literal and non-literal copying, balanced against the model utility in terms of the ability to recall facts from the copyrighted works and generate fluent completions. We find that, although literal copying is relatively rare, two types of non-literal copying{---}event copying and character copying{---}occur even in models as small as 7B parameters. Larger models demonstrate significantly more copying, with literal copying rates increasing from 0.2{\%} to 10.5{\%} and non-literal copying from 2.3{\%} to 5.9{\%} when comparing Llama3-8B and 70B models, respectively. We further evaluate the effectiveness of current strategies for mitigating copying and show that (1) training-time alignment can reduce literal copying but may increase non-literal copying, and (2) current inference-time mitigation methods primarily reduce literal but not non-literal copying.",
}
| Evaluating the degree of reproduction of copyright-protected content by language models (LMs) is of significant interest to the AI and legal communities. Although both literal and non-literal similarities are considered by courts when assessing the degree of reproduction, prior research has focused only on literal similarities. To bridge this gap, we introduce CopyBench, a benchmark designed to measure both literal and non-literal copying in LM generations. Using copyrighted fiction books as text sources, we provide automatic evaluation protocols to assess literal and non-literal copying, balanced against the model utility in terms of the ability to recall facts from the copyrighted works and generate fluent completions. We find that, although literal copying is relatively rare, two types of non-literal copying{---}event copying and character copying{---}occur even in models as small as 7B parameters. Larger models demonstrate significantly more copying, with literal copying rates increasing from 0.2{\%} to 10.5{\%} and non-literal copying from 2.3{\%} to 5.9{\%} when comparing Llama3-8B and 70B models, respectively. We further evaluate the effectiveness of current strategies for mitigating copying and show that (1) training-time alignment can reduce literal copying but may increase non-literal copying, and (2) current inference-time mitigation methods primarily reduce literal but not non-literal copying. | [
"Chen, Tong",
"Asai, Akari",
"Mireshghallah, Niloofar",
"Min, Sewon",
"Grimmelmann, James",
"Choi, Yejin",
"Hajishirzi, Hannaneh",
"Zettlemoyer, Luke",
"Koh, Pang Wei"
] | CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation | emnlp-main.844 | Poster | 2407.07087 | [
"https://github.com/chentong0/copy-bench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.845.bib | https://aclanthology.org/2024.emnlp-main.845/ | @inproceedings{chen-etal-2024-dense,
title = "Dense {X} Retrieval: What Retrieval Granularity Should We Use?",
author = "Chen, Tong and
Wang, Hongwei and
Chen, Sihao and
Yu, Wenhao and
Ma, Kaixin and
Zhao, Xinran and
Zhang, Hongming and
Yu, Dong",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.845",
pages = "15159--15177",
abstract = "Dense retrieval has become a prominent method to obtain relevant context or world knowledge in open-domain NLP tasks. When we use a learned dense retriever on a retrieval corpus at inference time, an often-overlooked design choice is the retrieval unit in which the corpus is indexed, e.g. document, passage, or sentence. We discover that the retrieval unit choice significantly impacts the performance of both retrieval and downstream tasks. Distinct from the typical approach of using passages or sentences, we introduce a novel retrieval unit, proposition, for dense retrieval. Propositions are defined as atomic expressions within text, each encapsulating a distinct factoid and presented in a concise, self-contained natural language format. We conduct an empirical comparison of different retrieval granularity. Our experiments reveal that indexing a corpus by fine-grained units such as propositions significantly outperforms passage-level units in retrieval tasks. Moreover, constructing prompts with fine-grained retrieved units for retrieval-augmented language models improves the performance of downstream QA tasks given a specific computation budget.",
}
| Dense retrieval has become a prominent method to obtain relevant context or world knowledge in open-domain NLP tasks. When we use a learned dense retriever on a retrieval corpus at inference time, an often-overlooked design choice is the retrieval unit in which the corpus is indexed, e.g. document, passage, or sentence. We discover that the retrieval unit choice significantly impacts the performance of both retrieval and downstream tasks. Distinct from the typical approach of using passages or sentences, we introduce a novel retrieval unit, proposition, for dense retrieval. Propositions are defined as atomic expressions within text, each encapsulating a distinct factoid and presented in a concise, self-contained natural language format. We conduct an empirical comparison of different retrieval granularity. Our experiments reveal that indexing a corpus by fine-grained units such as propositions significantly outperforms passage-level units in retrieval tasks. Moreover, constructing prompts with fine-grained retrieved units for retrieval-augmented language models improves the performance of downstream QA tasks given a specific computation budget. | [
"Chen, Tong",
"Wang, Hongwei",
"Chen, Sihao",
"Yu, Wenhao",
"Ma, Kaixin",
"Zhao, Xinran",
"Zhang, Hongming",
"Yu, Dong"
] | Dense X Retrieval: What Retrieval Granularity Should We Use? | emnlp-main.845 | Poster | 2312.06648 | [
"https://github.com/ct123098/factoid-wiki"
] | https://huggingface.co/papers/2312.06648 | 0 | 1 | 0 | 8 | [
"chentong00/propositionizer-wiki-flan-t5-large"
] | [
"LumberChunker/GutenQA_Propositions",
"LumberChunker/GutenQA_Paragraphs"
] | [] | [
"chentong00/propositionizer-wiki-flan-t5-large"
] | [
"LumberChunker/GutenQA_Propositions",
"LumberChunker/GutenQA_Paragraphs"
] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.846.bib | https://aclanthology.org/2024.emnlp-main.846/ | @inproceedings{liu-etal-2024-decoding,
title = "Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach",
author = "Liu, Yanchen and
Ma, Mingyu Derek and
Qin, Wenna and
Zhou, Azure and
Chen, Jiaao and
Shi, Weiyan and
Wang, Wei and
Yang, Diyi",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.846",
pages = "15178--15194",
abstract = "Susceptibility to misinformation describes the degree of belief in unverifiable claims, a latent aspect of individuals{'} mental processes that is not observable. Existing susceptibility studies heavily rely on self-reported beliefs, which can be subject to bias, expensive to collect, and challenging to scale for downstream applications. To address these limitations, in this work, we propose a computational approach to efficiently model users{'} latent susceptibility levels. As shown in previous work, susceptibility is influenced by various factors (e.g., demographic factors, political ideology), and directly influences people{'}s reposting behavior on social media. To represent the underlying mental process, our susceptibility modeling incorporates these factors as inputs, guided by the supervision of people{'}s sharing behavior. Using COVID-19 as a testbed, our experiments demonstrate a significant alignment between the susceptibility scores estimated by our computational modeling and human judgments, confirming the effectiveness of this latent modeling approach. Furthermore, we apply our model to annotate susceptibility scores on a large-scale dataset and analyze the relationships between susceptibility with various factors. Our analysis reveals that political leanings and other psychological factors exhibit varying degrees of association with susceptibility to COVID-19 misinformation, and shows that susceptibility is unevenly distributed across different professional and geographical backgrounds.",
}
| Susceptibility to misinformation describes the degree of belief in unverifiable claims, a latent aspect of individuals{'} mental processes that is not observable. Existing susceptibility studies heavily rely on self-reported beliefs, which can be subject to bias, expensive to collect, and challenging to scale for downstream applications. To address these limitations, in this work, we propose a computational approach to efficiently model users{'} latent susceptibility levels. As shown in previous work, susceptibility is influenced by various factors (e.g., demographic factors, political ideology), and directly influences people{'}s reposting behavior on social media. To represent the underlying mental process, our susceptibility modeling incorporates these factors as inputs, guided by the supervision of people{'}s sharing behavior. Using COVID-19 as a testbed, our experiments demonstrate a significant alignment between the susceptibility scores estimated by our computational modeling and human judgments, confirming the effectiveness of this latent modeling approach. Furthermore, we apply our model to annotate susceptibility scores on a large-scale dataset and analyze the relationships between susceptibility and various factors. Our analysis reveals that political leanings and other psychological factors exhibit varying degrees of association with susceptibility to COVID-19 misinformation, and shows that susceptibility is unevenly distributed across different professional and geographical backgrounds. | [
"Liu, Yanchen",
"Ma, Mingyu Derek",
"Qin, Wenna",
"Zhou, Azure",
"Chen, Jiaao",
"Shi, Weiyan",
"Wang, Wei",
"Yang, Diyi"
] | Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach | emnlp-main.846 | Poster | 2311.09630 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.847.bib | https://aclanthology.org/2024.emnlp-main.847/ | @inproceedings{zhao-etal-2024-layer,
title = "Layer by Layer: Uncovering Where Multi-Task Learning Happens in Instruction-Tuned Large Language Models",
author = "Zhao, Zheng and
Ziser, Yftah and
Cohen, Shay B",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.847",
pages = "15195--15214",
abstract = "Fine-tuning pre-trained large language models (LLMs) on a diverse array of tasks has become a common approach for building models that can solve various natural language processing (NLP) tasks. However, where and to what extent these models retain task-specific knowledge remains largely unexplored. This study investigates the task-specific information encoded in pre-trained LLMs and the effects of instruction tuning on their representations across a diverse set of over 60 NLP tasks. We use a set of matrix analysis tools to examine the differences between the way pre-trained and instruction-tuned LLMs store task-specific information. Our findings reveal that while some tasks are already encoded within the pre-trained LLMs, others greatly benefit from instruction tuning. Additionally, we pinpointed the layers in which the model transitions from high-level general representations to more task-oriented representations. This finding extends our understanding of the governing mechanisms of LLMs and facilitates future research in the fields of parameter-efficient transfer learning and multi-task learning. Our code is available at: https://github.com/zsquaredz/layer{\_}by{\_}layer/",
}
| Fine-tuning pre-trained large language models (LLMs) on a diverse array of tasks has become a common approach for building models that can solve various natural language processing (NLP) tasks. However, where and to what extent these models retain task-specific knowledge remains largely unexplored. This study investigates the task-specific information encoded in pre-trained LLMs and the effects of instruction tuning on their representations across a diverse set of over 60 NLP tasks. We use a set of matrix analysis tools to examine the differences between the way pre-trained and instruction-tuned LLMs store task-specific information. Our findings reveal that while some tasks are already encoded within the pre-trained LLMs, others greatly benefit from instruction tuning. Additionally, we pinpointed the layers in which the model transitions from high-level general representations to more task-oriented representations. This finding extends our understanding of the governing mechanisms of LLMs and facilitates future research in the fields of parameter-efficient transfer learning and multi-task learning. Our code is available at: https://github.com/zsquaredz/layer{\_}by{\_}layer/ | [
"Zhao, Zheng",
"Ziser, Yftah",
"Cohen, Shay B"
] | Layer by Layer: Uncovering Where Multi-Task Learning Happens in Instruction-Tuned Large Language Models | emnlp-main.847 | Poster | 2410.20008 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.848.bib | https://aclanthology.org/2024.emnlp-main.848/ | @inproceedings{lee-etal-2024-xdetox,
title = "{XD}etox: Text Detoxification with Token-Level Toxicity Explanations",
author = "Lee, Beomseok and
Kim, Hyunwoo and
Kim, Keon and
Choi, Yong Suk",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.848",
pages = "15215--15226",
abstract = "Methods for mitigating toxic content through masking and infilling often overlook the decision-making process, leading to either insufficient or excessive modifications of toxic tokens. To address this challenge, we propose XDetox, a novel method that integrates token-level toxicity explanations with the masking and infilling detoxification process. We utilized this approach with two strategies to enhance the performance of detoxification. First, identifying toxic tokens to improve the quality of masking. Second, selecting the regenerated sentence by re-ranking the least toxic sentence among candidates. Our experimental results show state-of-the-art performance across four datasets compared to existing detoxification methods. Furthermore, human evaluations indicate that our method outperforms baselines in both fluency and toxicity reduction. These results demonstrate the effectiveness of our method in text detoxification.",
}
| Methods for mitigating toxic content through masking and infilling often overlook the decision-making process, leading to either insufficient or excessive modifications of toxic tokens. To address this challenge, we propose XDetox, a novel method that integrates token-level toxicity explanations with the masking and infilling detoxification process. We utilize this approach with two strategies to enhance the performance of detoxification: first, identifying toxic tokens to improve the quality of masking; and second, selecting the regenerated sentence by re-ranking the least toxic sentence among candidates. Our experimental results show state-of-the-art performance across four datasets compared to existing detoxification methods. Furthermore, human evaluations indicate that our method outperforms baselines in both fluency and toxicity reduction. These results demonstrate the effectiveness of our method in text detoxification. | [
"Lee, Beomseok",
"Kim, Hyunwoo",
"Kim, Keon",
"Choi, Yong Suk"
] | XDetox: Text Detoxification with Token-Level Toxicity Explanations | emnlp-main.848 | Oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.849.bib | https://aclanthology.org/2024.emnlp-main.849/ | @inproceedings{xiao-etal-2024-optimizing,
title = "Optimizing {C}hinese Lexical Simplification Across Word Types: A Hybrid Approach",
author = "Xiao, ZiHao and
Gong, Jiefu and
Wang, Shijin and
Song, Wei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.849",
pages = "15227--15239",
abstract = "This paper addresses the task of Chinese Lexical Simplification (CLS). A key challenge in CLS is the scarcity of data resources. We begin by evaluating the performance of various language models at different scales in unsupervised and few-shot settings, finding that their effectiveness is sensitive to word types. Expensive large language models (LLMs), such as GPT-4, outperform small models in simplifying complex content words and Chinese idioms from the dictionary.To take advantage of this, we propose an automatic knowledge distillation framework called PivotKD for generating training data to fine-tune small models.In addition, all models face difficulties with out-of-dictionary (OOD) words such as internet slang.To address this, we implement a retrieval-based interpretation augmentation (RIA) strategy, injecting word interpretations from external resources into the context.Experimental results demonstrate that fine-tuned small models outperform GPT-4 in simplifying complex content words and Chinese idioms. Additionally, the RIA strategy enhances the performance of most models, particularly in handling OOD words. Our findings suggest that a hybrid approach could optimize CLS performance while managing inference costs. This would involve configuring choices such as model scale, linguistic resources, and the use of RIA based on specific word types to strike an ideal balance.",
}
| This paper addresses the task of Chinese Lexical Simplification (CLS). A key challenge in CLS is the scarcity of data resources. We begin by evaluating the performance of various language models at different scales in unsupervised and few-shot settings, finding that their effectiveness is sensitive to word types. Expensive large language models (LLMs), such as GPT-4, outperform small models in simplifying complex content words and Chinese idioms from the dictionary. To take advantage of this, we propose an automatic knowledge distillation framework called PivotKD for generating training data to fine-tune small models. In addition, all models face difficulties with out-of-dictionary (OOD) words such as internet slang. To address this, we implement a retrieval-based interpretation augmentation (RIA) strategy, injecting word interpretations from external resources into the context. Experimental results demonstrate that fine-tuned small models outperform GPT-4 in simplifying complex content words and Chinese idioms. Additionally, the RIA strategy enhances the performance of most models, particularly in handling OOD words. Our findings suggest that a hybrid approach could optimize CLS performance while managing inference costs. This would involve configuring choices such as model scale, linguistic resources, and the use of RIA based on specific word types to strike an ideal balance. | [
"Xiao, ZiHao",
"Gong, Jiefu",
"Wang, Shijin",
"Song, Wei"
] | Optimizing Chinese Lexical Simplification Across Word Types: A Hybrid Approach | emnlp-main.849 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.850.bib | https://aclanthology.org/2024.emnlp-main.850/ | @inproceedings{li-etal-2024-control,
title = "Control Large Language Models via Divide and Conquer",
author = "Li, Bingxuan and
Wang, Yiwei and
Meng, Tao and
Chang, Kai-Wei and
Peng, Nanyun",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.850",
pages = "15240--15256",
abstract = "This paper investigates the capability of LLMs on controllable generation with prompt-based controlling, focusing on Lexically Constrained Generation (LCG). We systematically evaluate the performance of LLMs on satisfying lexical constraints with prompt-based controlling, as well as their efficacy in downstream applications. We identified three key reasons that highlight the limitations of LLMs in LCG, including (1) position bias, where LLMs tend to satisfy constraints that appear in specific positions within the input; (2) low responsiveness to control decoding parameters, which minimally impact the performance of LLMs; and (3) struggle with handling the inherent complexity of certain constraints (e.g. compound word). We conclude that black-box LLMs face significant challenges in consistently satisfying lexical constraints with prompt-based controlling. To address this bottleneck, we introduce the Divide and Conquer Generation strategy, effective for both white-box and black-box LLMs, to enhance LLMs performance in LCG tasks, which demonstrates over 90{\%} improvement on success rate in the most challenging LCG task. Our analysis aims to provide valuable insights into the performance of LLMs in LCG with prompt-based controlling, and our proposed strategy offers a pathway to more sophisticated and customized text generation applications.",
}
| This paper investigates the capability of LLMs on controllable generation with prompt-based controlling, focusing on Lexically Constrained Generation (LCG). We systematically evaluate the performance of LLMs on satisfying lexical constraints with prompt-based controlling, as well as their efficacy in downstream applications. We identify three key reasons that highlight the limitations of LLMs in LCG, including (1) position bias, where LLMs tend to satisfy constraints that appear in specific positions within the input; (2) low responsiveness to control decoding parameters, which minimally impact the performance of LLMs; and (3) difficulty in handling the inherent complexity of certain constraints (e.g., compound words). We conclude that black-box LLMs face significant challenges in consistently satisfying lexical constraints with prompt-based controlling. To address this bottleneck, we introduce the Divide and Conquer Generation strategy, effective for both white-box and black-box LLMs, to enhance LLMs{'} performance in LCG tasks, which demonstrates over 90{\%} improvement in success rate on the most challenging LCG task. Our analysis aims to provide valuable insights into the performance of LLMs in LCG with prompt-based controlling, and our proposed strategy offers a pathway to more sophisticated and customized text generation applications. | [
"Li, Bingxuan",
"Wang, Yiwei",
"Meng, Tao",
"Chang, Kai-Wei",
"Peng, Nanyun"
] | Control Large Language Models via Divide and Conquer | emnlp-main.850 | Poster | 2410.04628 | [
""
] | https://huggingface.co/papers/2410.04628 | 1 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.851.bib | https://aclanthology.org/2024.emnlp-main.851/ | @inproceedings{qiu-etal-2024-joint,
title = "Joint Pre-Encoding Representation and Structure Embedding for Efficient and Low-Resource Knowledge Graph Completion",
author = "Qiu, Chenyu and
Qian, Pengjiang and
Wang, Chuang and
Yao, Jian and
Liu, Li and
Wei, Fang and
Eddie, Eddie Y.k.",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.851",
pages = "15257--15269",
abstract = "Knowledge graph completion (KGC) aims to infer missing or incomplete parts in knowledge graph. The existing models are generally divided into structure-based and description-based models, among description-based models often require longer training and inference times as well as increased memory usage. In this paper, we propose Pre-Encoded Masked Language Model (PEMLM) to efficiently solve KGC problem. By encoding textual descriptions into semantic representations before training, the necessary resources are significantly reduced. Furthermore, we introduce a straightforward but effective fusion framework to integrate structural embedding with pre-encoded semantic description, which enhances the model{'}s prediction performance on 1-N relations. The experimental results demonstrate that our proposed strategy attains state-of-the-art performance on the WN18RR (MRR+5.4{\%} and Hits@1+6.4{\%}) and UMLS datasets. Compared to existing models, we have increased inference speed by 30x and reduced training memory by approximately 60{\%}.",
}
| Knowledge graph completion (KGC) aims to infer missing or incomplete parts in a knowledge graph. The existing models are generally divided into structure-based and description-based models, among which description-based models often require longer training and inference times as well as increased memory usage. In this paper, we propose the Pre-Encoded Masked Language Model (PEMLM) to efficiently solve the KGC problem. By encoding textual descriptions into semantic representations before training, the necessary resources are significantly reduced. Furthermore, we introduce a straightforward but effective fusion framework to integrate structural embedding with pre-encoded semantic description, which enhances the model{'}s prediction performance on 1-N relations. The experimental results demonstrate that our proposed strategy attains state-of-the-art performance on the WN18RR (MRR+5.4{\%} and Hits@1+6.4{\%}) and UMLS datasets. Compared to existing models, we have increased inference speed by 30x and reduced training memory by approximately 60{\%}. | [
"Qiu, Chenyu",
"Qian, Pengjiang",
"Wang, Chuang",
"Yao, Jian",
"Liu, Li",
"Wei, Fang",
"Eddie, Eddie Y.k."
] | Joint Pre-Encoding Representation and Structure Embedding for Efficient and Low-Resource Knowledge Graph Completion | emnlp-main.851 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.852.bib | https://aclanthology.org/2024.emnlp-main.852/ | @inproceedings{chen-etal-2024-improving-discriminative,
title = "Improving Discriminative Capability of Reward Models in {RLHF} Using Contrastive Learning",
author = "Chen, Lu and
Zheng, Rui and
Wang, Binghai and
Jin, Senjie and
Huang, Caishuang and
Ye, Junjie and
Zhang, Zhihao and
Zhou, Yuhao and
Xi, Zhiheng and
Gui, Tao and
Zhang, Qi and
Huang, Xuanjing",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.852",
pages = "15270--15283",
abstract = "Reinforcement Learning from Human Feedback (RLHF) is a crucial approach to aligning language models with human values and intentions. A fundamental challenge in this method lies in ensuring that the reward model accurately understands and evaluates human preferences. Current methods rely on ranking losses to teach the reward model to assess preferences, but they are susceptible to noise and ambiguous data, often failing to deeply understand human intentions. To address this issue, we introduce contrastive learning into the reward modeling process. In addition to supervised ranking loss, we introduce an unsupervised contrastive loss to enable the reward model to fully capture the distinctions in contrastive data. Experimental results demonstrate that the proposed contrastive learning-based reward modeling method effectively enhances the generalization of the reward model, stabilizes the reinforcement learning training process, and improves the final alignment with human preferences.",
}
| Reinforcement Learning from Human Feedback (RLHF) is a crucial approach to aligning language models with human values and intentions. A fundamental challenge in this method lies in ensuring that the reward model accurately understands and evaluates human preferences. Current methods rely on ranking losses to teach the reward model to assess preferences, but they are susceptible to noise and ambiguous data, often failing to deeply understand human intentions. To address this issue, we introduce contrastive learning into the reward modeling process. In addition to supervised ranking loss, we introduce an unsupervised contrastive loss to enable the reward model to fully capture the distinctions in contrastive data. Experimental results demonstrate that the proposed contrastive learning-based reward modeling method effectively enhances the generalization of the reward model, stabilizes the reinforcement learning training process, and improves the final alignment with human preferences. | [
"Chen, Lu",
"Zheng, Rui",
"Wang, Binghai",
"Jin, Senjie",
"Huang, Caishuang",
"Ye, Junjie",
"Zhang, Zhihao",
"Zhou, Yuhao",
"Xi, Zhiheng",
"Gui, Tao",
"Zhang, Qi",
"Huang, Xuanjing"
] | Improving Discriminative Capability of Reward Models in RLHF Using Contrastive Learning | emnlp-main.852 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.853.bib | https://aclanthology.org/2024.emnlp-main.853/ | @inproceedings{wang-etal-2024-rocel,
title = "{R}o{CEL}: Advancing Table Entity Linking through Distinctive Row and Column Contexts",
author = "Wang, Yuanzheng and
Fan, Yixing and
Guo, Jiafeng and
Zhang, Ruqing and
Cheng, Xueqi",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.853",
pages = "15284--15298",
abstract = "Table entity linking (TEL) aims to map entity mentions in the table to their corresponding entities in a knowledge base (KB). The core of this task is to leverage structured contexts, specifically row and column contexts, to enhance the semantics of mentions in entity disambiguation. Most entity linking (EL) methods primarily focus on understanding sequential text contexts, making it difficult to adapt to the row and column structure of tables. Additionally, existing methods for TEL indiscriminately mix row and column contexts together, overlooking their semantic differences. In this paper, we explicitly distinguish the modeling of row and column contexts, and propose a method called RoCEL to capture their distinct semantics. Specifically, for row contexts in tables, we take the attention mechanism to learn the implicit relational dependencies between each cell and the mention. For column contexts in tables, we employ a set-wise encoder to learn the categorical information about the group of mentions. At last, we merge both contexts to obtain the final mention embedding for link prediction. Experiments on four benchmarks show that our approach outperforms the state-of-the-art (SOTA) baseline by about 1.5{\%} on the in-domain dataset, and by 3.7{\%} on average across three out-of-domain datasets.",
}
| Table entity linking (TEL) aims to map entity mentions in the table to their corresponding entities in a knowledge base (KB). The core of this task is to leverage structured contexts, specifically row and column contexts, to enhance the semantics of mentions in entity disambiguation. Most entity linking (EL) methods primarily focus on understanding sequential text contexts, making it difficult to adapt to the row and column structure of tables. Additionally, existing methods for TEL indiscriminately mix row and column contexts together, overlooking their semantic differences. In this paper, we explicitly distinguish the modeling of row and column contexts, and propose a method called RoCEL to capture their distinct semantics. Specifically, for row contexts in tables, we take the attention mechanism to learn the implicit relational dependencies between each cell and the mention. For column contexts in tables, we employ a set-wise encoder to learn the categorical information about the group of mentions. At last, we merge both contexts to obtain the final mention embedding for link prediction. Experiments on four benchmarks show that our approach outperforms the state-of-the-art (SOTA) baseline by about 1.5{\%} on the in-domain dataset, and by 3.7{\%} on average across three out-of-domain datasets. | [
"Wang, Yuanzheng",
"Fan, Yixing",
"Guo, Jiafeng",
"Zhang, Ruqing",
"Cheng, Xueqi"
] | RoCEL: Advancing Table Entity Linking through Distinctive Row and Column Contexts | emnlp-main.853 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.854.bib | https://aclanthology.org/2024.emnlp-main.854/ | @inproceedings{zheng-etal-2024-exploring,
title = "Exploring the Role of Reasoning Structures for Constructing Proofs in Multi-Step Natural Language Reasoning with Large Language Models",
author = "Zheng, Zi{'}ou and
Malon, Christopher and
Min, Martin Renqiang and
Zhu, Xiaodan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.854",
pages = "15299--15312",
abstract = "When performing complex multi-step reasoning tasks, the ability of Large Language Models (LLMs) to derive structured intermediate proof steps is important for ensuring that the models truly perform the desired reasoning and for improving models{'} explainability. This paper is centred around a focused study: whether the current state-of-the-art generalist LLMs can leverage the structures in a few examples to better construct the proof structures with in-context learning. Our study specifically focuses on structure-aware demonstration and structure-aware pruning. We demonstrate that they both help improve performance. A detailed analysis is provided to help understand the results.",
}
| When performing complex multi-step reasoning tasks, the ability of Large Language Models (LLMs) to derive structured intermediate proof steps is important for ensuring that the models truly perform the desired reasoning and for improving models{'} explainability. This paper is centred around a focused study: whether the current state-of-the-art generalist LLMs can leverage the structures in a few examples to better construct the proof structures with in-context learning. Our study specifically focuses on structure-aware demonstration and structure-aware pruning. We demonstrate that they both help improve performance. A detailed analysis is provided to help understand the results. | [
"Zheng, Zi{'}ou",
"Malon, Christopher",
"Min, Martin Renqiang",
"Zhu, Xiaodan"
] | Exploring the Role of Reasoning Structures for Constructing Proofs in Multi-Step Natural Language Reasoning with Large Language Models | emnlp-main.854 | Poster | 2410.08436 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.855.bib | https://aclanthology.org/2024.emnlp-main.855/ | @inproceedings{tasawong-etal-2024-efficient,
title = "Efficient Overshadowed Entity Disambiguation by Mitigating Shortcut Learning",
author = "Tasawong, Panuthep and
Limkonchotiwat, Peerat and
Manakul, Potsawee and
Udomcharoenchaikit, Can and
Chuangsuwanich, Ekapol and
Nutanong, Sarana",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.855",
pages = "15313--15321",
abstract = "Entity disambiguation (ED) is crucial in natural language processing (NLP) for tasks such as question-answering and information extraction. A major challenge in ED is handling overshadowed entities{---}uncommon entities sharing mention surfaces with common entities. The current approach to enhance performance on these entities involves reasoning over facts in a knowledge base (KB), increasing computational overhead during inference. We argue that the ED performance on overshadowed entities can be enhanced during training by addressing shortcut learning, which does not add computational overhead at inference. We propose a simple yet effective debiasing technique to prevent models from shortcut learning during training. Experiments on a range of ED datasets show that our method achieves state-of-the-art performance without compromising inference speed. Our findings suggest a new research direction for improving entity disambiguation via shortcut learning mitigation.",
}
| Entity disambiguation (ED) is crucial in natural language processing (NLP) for tasks such as question-answering and information extraction. A major challenge in ED is handling overshadowed entities{---}uncommon entities sharing mention surfaces with common entities. The current approach to enhance performance on these entities involves reasoning over facts in a knowledge base (KB), increasing computational overhead during inference. We argue that the ED performance on overshadowed entities can be enhanced during training by addressing shortcut learning, which does not add computational overhead at inference. We propose a simple yet effective debiasing technique to prevent models from shortcut learning during training. Experiments on a range of ED datasets show that our method achieves state-of-the-art performance without compromising inference speed. Our findings suggest a new research direction for improving entity disambiguation via shortcut learning mitigation. | [
"Tasawong, Panuthep",
"Limkonchotiwat, Peerat",
"Manakul, Potsawee",
"Udomcharoenchaikit, Can",
"Chuangsuwanich, Ekapol",
"Nutanong, Sarana"
] | Efficient Overshadowed Entity Disambiguation by Mitigating Shortcut Learning | emnlp-main.855 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.856.bib | https://aclanthology.org/2024.emnlp-main.856/ | @inproceedings{wang-etal-2024-appbench,
title = "{A}pp{B}ench: Planning of Multiple {API}s from Various {APP}s for Complex User Instruction",
author = "Wang, Hongru and
Wang, Rui and
Xue, Boyang and
Xia, Heming and
Cao, Jingtao and
Liu, Zeming and
Pan, Jeff Z. and
Wong, Kam-Fai",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.856",
pages = "15322--15336",
abstract = "Large Language Models (LLMs) can interact with the real world by connecting with versatile external APIs, resulting in better problem-solving and task automation capabilities. Previous research primarily either focuses on APIs with limited arguments from a single source or overlooks the complex dependency relationship between different APIs. However, it is essential to utilize multiple APIs collaboratively from various sources, especially for complex user instructions. In this paper, we introduce MetaBench, the first benchmark to evaluate LLMs{'} ability to plan and execute multiple APIs from various sources in order to complete the user{'}s task. Specifically, we consider two significant challenges in multiple APIs: 1) graph structures: some APIs can be executed independently while others need to be executed one by one, resulting in graph-like execution order; and 2) permission constraints: which source is authorized to execute the API call. We have experimental results on 9 distinct LLMs; e.g., GPT-4o achieves only a 2.0{\%} success rate at the most complex instruction, revealing that the existing state-of-the-art LLMs still cannot perform well in this situation even with the help of in-context learning and finetuning. Our code and data are publicly available at \url{https://github.com/ruleGreen/AppBench}.",
}
| Large Language Models (LLMs) can interact with the real world by connecting with versatile external APIs, resulting in better problem-solving and task automation capabilities. Previous research primarily either focuses on APIs with limited arguments from a single source or overlooks the complex dependency relationship between different APIs. However, it is essential to utilize multiple APIs collaboratively from various sources, especially for complex user instructions. In this paper, we introduce MetaBench, the first benchmark to evaluate LLMs{'} ability to plan and execute multiple APIs from various sources in order to complete the user{'}s task. Specifically, we consider two significant challenges in multiple APIs: 1) graph structures: some APIs can be executed independently while others need to be executed one by one, resulting in graph-like execution order; and 2) permission constraints: which source is authorized to execute the API call. We have experimental results on 9 distinct LLMs; e.g., GPT-4o achieves only a 2.0{\%} success rate at the most complex instruction, revealing that the existing state-of-the-art LLMs still cannot perform well in this situation even with the help of in-context learning and finetuning. Our code and data are publicly available at \url{https://github.com/ruleGreen/AppBench}. | [
"Wang, Hongru",
"Wang, Rui",
"Xue, Boyang",
"Xia, Heming",
"Cao, Jingtao",
"Liu, Zeming",
"Pan, Jeff Z.",
"Wong, Kam-Fai"
] | AppBench: Planning of Multiple APIs from Various APPs for Complex User Instruction | emnlp-main.856 | Poster | 2410.19743 | [
"https://github.com/ruleGreen/AppBench"
] | https://huggingface.co/papers/2410.19743 | 1 | 1 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.857.bib | https://aclanthology.org/2024.emnlp-main.857/ | @inproceedings{chen-etal-2024-everything,
title = "Not Everything is All You Need: Toward Low-Redundant Optimization for Large Language Model Alignment",
author = "Chen, Zhipeng and
Zhou, Kun and
Zhao, Xin and
Wang, Jingyuan and
Wen, Ji-Rong",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.857",
pages = "15337--15351",
abstract = "Large language models (LLMs) are still struggling in aligning with human preference in complex tasks and scenarios. They are prone to overfit into the unexpected patterns or superficial styles in the training data. We conduct an empirical study that only selects the top-10{\%} most updated parameters in LLMs for alignment training, and see improvements in the convergence process and final performance. It indicates the existence of redundant neurons in LLMs for alignment training. To reduce its influence, we propose a low-redundant alignment method named **ALLO**, focusing on optimizing the most related neurons with the most useful supervised signals. Concretely, we first identify the neurons that are related to the human preference data by a gradient-based strategy, then identify the alignment-related key tokens by reward models for computing loss. Besides, we also decompose the alignment process into the forgetting and learning stages, where we first forget the tokens with unaligned knowledge and then learn aligned knowledge, by updating different ratios of neurons, respectively. Experimental results on 10 datasets have shown the effectiveness of ALLO. Our code and data will be publicly released.",
}
| Large language models (LLMs) are still struggling in aligning with human preference in complex tasks and scenarios. They are prone to overfit into the unexpected patterns or superficial styles in the training data. We conduct an empirical study that only selects the top-10{\%} most updated parameters in LLMs for alignment training, and see improvements in the convergence process and final performance. It indicates the existence of redundant neurons in LLMs for alignment training. To reduce its influence, we propose a low-redundant alignment method named **ALLO**, focusing on optimizing the most related neurons with the most useful supervised signals. Concretely, we first identify the neurons that are related to the human preference data by a gradient-based strategy, then identify the alignment-related key tokens by reward models for computing loss. Besides, we also decompose the alignment process into the forgetting and learning stages, where we first forget the tokens with unaligned knowledge and then learn aligned knowledge, by updating different ratios of neurons, respectively. Experimental results on 10 datasets have shown the effectiveness of ALLO. Our code and data will be publicly released. | [
"Chen, Zhipeng",
"Zhou, Kun",
"Zhao, Xin",
"Wang, Jingyuan",
"Wen, Ji-Rong"
] | Not Everything is All You Need: Toward Low-Redundant Optimization for Large Language Model Alignment | emnlp-main.857 | Poster | 2406.12606 | [
"https://github.com/rucaibox/allo"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.858.bib | https://aclanthology.org/2024.emnlp-main.858/ | @inproceedings{yang-etal-2024-audiovsr,
title = "{A}udio{VSR}: Enhancing Video Speech Recognition with Audio Data",
author = "Yang, Xiaoda and
Cheng, Xize and
Duan, Jiaqi and
Qiu, Hongshun and
Hong, Minjie and
Fang, Minghui and
Ji, Shengpeng and
Zuo, Jialong and
Hong, Zhiqing and
Zhang, Zhimeng and
Jin, Tao",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.858",
pages = "15352--15361",
abstract = "Visual Speech Recognition (VSR) aims to predict spoken content by analyzing lip movements in videos. Recently reported state-of-the-art results in VSR often rely on increasingly large amounts of video data, while the publicly available transcribed video datasets are insufficient compared to the audio data. To further enhance the VSR model using the audio data, we employed a generative model for data inflation, integrating the synthetic data with the authentic visual data. Essentially, the generative model incorporates another insight, which enhances the capabilities of the recognition model. For the cross-language issue, previous work has shown poor performance with non-Indo-European languages. We trained a multi-language-family modal fusion model, AudioVSR. Leveraging the concept of modal transfer, we achieved significant results in downstream VSR tasks under conditions of data scarcity. To the best of our knowledge, AudioVSR represents the first work on cross-language-family audio-lip alignment, achieving a new SOTA in the cross-language scenario.",
}
| Visual Speech Recognition (VSR) aims to predict spoken content by analyzing lip movements in videos. Recently reported state-of-the-art results in VSR often rely on increasingly large amounts of video data, while the publicly available transcribed video datasets are insufficient compared to the audio data. To further enhance the VSR model using the audio data, we employed a generative model for data inflation, integrating the synthetic data with the authentic visual data. Essentially, the generative model incorporates another insight, which enhances the capabilities of the recognition model. For the cross-language issue, previous work has shown poor performance with non-Indo-European languages. We trained a multi-language-family modal fusion model, AudioVSR. Leveraging the concept of modal transfer, we achieved significant results in downstream VSR tasks under conditions of data scarcity. To the best of our knowledge, AudioVSR represents the first work on cross-language-family audio-lip alignment, achieving a new SOTA in the cross-language scenario. | [
"Yang, Xiaoda",
"Cheng, Xize",
"Duan, Jiaqi",
"Qiu, Hongshun",
"Hong, Minjie",
"Fang, Minghui",
"Ji, Shengpeng",
"Zuo, Jialong",
"Hong, Zhiqing",
"Zhang, Zhimeng",
"Jin, Tao"
] | AudioVSR: Enhancing Video Speech Recognition with Audio Data | emnlp-main.858 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.859.bib | https://aclanthology.org/2024.emnlp-main.859/ | @inproceedings{waghjale-etal-2024-ecco,
title = "{ECCO}: Can We Improve Model-Generated Code Efficiency Without Sacrificing Functional Correctness?",
author = "Waghjale, Siddhant and
Veerendranath, Vishruth and
Wang, Zhiruo and
Fried, Daniel",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.859",
pages = "15362--15376",
abstract = "Although large language models (LLMs) have been largely successful in generating functionally correct programs, conditioning models to produce efficient solutions while ensuring correctness remains a challenge. Further, unreliability in benchmarking code efficiency is a hurdle across varying hardware specifications for popular interpreted languages such as Python. In this paper, we present ECCO, a reproducible benchmark for evaluating program efficiency via two paradigms: natural language (NL) based code generation and history-based code editing. On ECCO, we adapt and thoroughly investigate the three most promising existing LLM-based approaches: in-context learning, iterative refinement with execution or NL feedback, and fine-tuning conditioned on execution and editing history. While most methods degrade functional correctness and moderately increase program efficiency, we find that adding execution information often helps maintain functional correctness, and NL feedback enhances more on efficiency. We release our benchmark to support future work on LLM-based generation of efficient code.",
}
| Although large language models (LLMs) have been largely successful in generating functionally correct programs, conditioning models to produce efficient solutions while ensuring correctness remains a challenge. Further, unreliability in benchmarking code efficiency is a hurdle across varying hardware specifications for popular interpreted languages such as Python. In this paper, we present ECCO, a reproducible benchmark for evaluating program efficiency via two paradigms: natural language (NL) based code generation and history-based code editing. On ECCO, we adapt and thoroughly investigate the three most promising existing LLM-based approaches: in-context learning, iterative refinement with execution or NL feedback, and fine-tuning conditioned on execution and editing history. While most methods degrade functional correctness and moderately increase program efficiency, we find that adding execution information often helps maintain functional correctness, and NL feedback enhances more on efficiency. We release our benchmark to support future work on LLM-based generation of efficient code. | [
"Waghjale, Siddhant",
"Veerendranath, Vishruth",
"Wang, Zhiruo",
"Fried, Daniel"
] | ECCO: Can We Improve Model-Generated Code Efficiency Without Sacrificing Functional Correctness? | emnlp-main.859 | Poster | 2407.14044 | [
"https://github.com/codeeff/ecco"
] | https://huggingface.co/papers/2407.14044 | 0 | 0 | 0 | 4 | [] | [
"CodeEff/ECCO"
] | [] | [] | [
"CodeEff/ECCO"
] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.860.bib | https://aclanthology.org/2024.emnlp-main.860/ | @inproceedings{feng-etal-2024-ladder,
title = "Ladder: A Model-Agnostic Framework Boosting {LLM}-based Machine Translation to the Next Level",
author = "Feng, Zhaopeng and
Chen, Ruizhe and
Zhang, Yan and
Meng, Zijie and
Liu, Zuozhu",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.860",
pages = "15377--15393",
abstract = "General-purpose Large Language Models (LLMs) like GPT-4 have achieved remarkable advancements in machine translation (MT) by leveraging extensive web content. On the other hand, translation-specific LLMs are built by pre-training on domain-specific monolingual corpora and fine-tuning with human-annotated translation data. Despite the superior performance, these methods either demand an unprecedented scale of computing and data or substantial human editing and annotation efforts. In this paper, we develop MT-Ladder, a novel model-agnostic and cost-effective tool to refine the performance of general LLMs for MT. MT-Ladder is trained on pseudo-refinement triplets which can be easily obtained from existing LLMs without additional human cost. During training, we propose a hierarchical fine-tuning strategy with an easy-to-hard schema, improving MT-Ladder{'}s refining performance progressively. The trained MT-Ladder can be seamlessly integrated with any general-purpose LLMs to boost their translation performance. By utilizing Gemma-2B/7B as the backbone, MT-Ladder-2B can elevate raw translations to the level of top-tier open-source models (e.g., refining BigTranslate-13B with +6.91 BLEU and +3.52 COMET for XXâEn), and MT-Ladder-7B can further enhance model performance to be on par with the state-of-the-art GPT-4. Extensive ablation and analysis corroborate the effectiveness of MT-Ladder in diverse settings.",
}
| General-purpose Large Language Models (LLMs) like GPT-4 have achieved remarkable advancements in machine translation (MT) by leveraging extensive web content. On the other hand, translation-specific LLMs are built by pre-training on domain-specific monolingual corpora and fine-tuning with human-annotated translation data. Despite the superior performance, these methods either demand an unprecedented scale of computing and data or substantial human editing and annotation efforts. In this paper, we develop MT-Ladder, a novel model-agnostic and cost-effective tool to refine the performance of general LLMs for MT. MT-Ladder is trained on pseudo-refinement triplets which can be easily obtained from existing LLMs without additional human cost. During training, we propose a hierarchical fine-tuning strategy with an easy-to-hard schema, improving MT-Ladder{'}s refining performance progressively. The trained MT-Ladder can be seamlessly integrated with any general-purpose LLMs to boost their translation performance. By utilizing Gemma-2B/7B as the backbone, MT-Ladder-2B can elevate raw translations to the level of top-tier open-source models (e.g., refining BigTranslate-13B with +6.91 BLEU and +3.52 COMET for XX→En), and MT-Ladder-7B can further enhance model performance to be on par with the state-of-the-art GPT-4. Extensive ablation and analysis corroborate the effectiveness of MT-Ladder in diverse settings. | [
"Feng, Zhaopeng",
"Chen, Ruizhe",
"Zhang, Yan",
"Meng, Zijie",
"Liu, Zuozhu"
] | Ladder: A Model-Agnostic Framework Boosting LLM-based Machine Translation to the Next Level | emnlp-main.860 | Poster | 2406.15741 | [
"https://github.com/fzp0424/ladder"
] | https://huggingface.co/papers/2406.15741 | 0 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.861.bib | https://aclanthology.org/2024.emnlp-main.861/ | @inproceedings{dou-etal-2024-rest,
title = "Re-{R}e{ST}: Reflection-Reinforced Self-Training for Language Agents",
author = "Dou, Zi-Yi and
Yang, Cheng-Fu and
Wu, Xueqing and
Chang, Kai-Wei and
Peng, Nanyun",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.861",
pages = "15394--15411",
abstract = "Finetuning language agents with reasoning-action trajectories is effective, but obtaining these trajectories from human annotations or stronger models is costly and sometimes impractical. In this paper, we investigate the use of self-training in language agents, which can generate supervision from the agent itself, offering a promising alternative without relying on human or stronger model demonstrations. Self-training, however, requires high-quality model-generated samples, which are hard to obtain for challenging language agent tasks. To address this, we present Reflection-Reinforced Self-Training (Re-ReST), which uses a \textit{reflector} to refine low-quality generated samples during self-training. The reflector takes the agent{'}s output and feedback from an external environment (e.g., unit test results in code generation) to produce improved samples. This technique enhances the quality of inferior samples and efficiently enriches the self-training dataset with higher-quality samples. We conduct extensive experiments on open-source language agents across tasks, including multi-hop question answering, sequential decision-making, code generation, visual question answering, and text-to-image generation. The results demonstrate the effectiveness of self-training and Re-ReST in language agent tasks, with self-training improving baselines by 7.6{\%} on HotpotQA and 28.4{\%} on AlfWorld, and Re-ReST further boosting performance by 2.0{\%} and 14.1{\%}, respectively. Our studies also confirm the efficiency of using a reflector to generate high-quality samples for self-training. Moreover, we demonstrate a method to employ reflection during inference without ground-truth feedback, addressing the limitation of previous reflection work.",
}
| Finetuning language agents with reasoning-action trajectories is effective, but obtaining these trajectories from human annotations or stronger models is costly and sometimes impractical. In this paper, we investigate the use of self-training in language agents, which can generate supervision from the agent itself, offering a promising alternative without relying on human or stronger model demonstrations. Self-training, however, requires high-quality model-generated samples, which are hard to obtain for challenging language agent tasks. To address this, we present Reflection-Reinforced Self-Training (Re-ReST), which uses a \textit{reflector} to refine low-quality generated samples during self-training. The reflector takes the agent{'}s output and feedback from an external environment (e.g., unit test results in code generation) to produce improved samples. This technique enhances the quality of inferior samples and efficiently enriches the self-training dataset with higher-quality samples. We conduct extensive experiments on open-source language agents across tasks, including multi-hop question answering, sequential decision-making, code generation, visual question answering, and text-to-image generation. The results demonstrate the effectiveness of self-training and Re-ReST in language agent tasks, with self-training improving baselines by 7.6{\%} on HotpotQA and 28.4{\%} on AlfWorld, and Re-ReST further boosting performance by 2.0{\%} and 14.1{\%}, respectively. Our studies also confirm the efficiency of using a reflector to generate high-quality samples for self-training. Moreover, we demonstrate a method to employ reflection during inference without ground-truth feedback, addressing the limitation of previous reflection work. | [
"Dou, Zi-Yi",
"Yang, Cheng-Fu",
"Wu, Xueqing",
"Chang, Kai-Wei",
"Peng, Nanyun"
] | Re-ReST: Reflection-Reinforced Self-Training for Language Agents | emnlp-main.861 | Poster | 2406.01495 | [
"https://github.com/PlusLabNLP/Re-ReST"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.862.bib | https://aclanthology.org/2024.emnlp-main.862/ | @inproceedings{guan-etal-2024-effective,
title = "Effective Synthetic Data and Test-Time Adaptation for {OCR} Correction",
author = "Guan, Shuhao and
Xu, Cheng and
Lin, Moule and
Greene, Derek",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.862",
pages = "15412--15425",
abstract = "Post-OCR technology is used to correct errors in the text produced by OCR systems. This study introduces a method for constructing post-OCR synthetic data with different noise levels using weak supervision. We define Character Error Rate (CER) thresholds for {``}effective{''} and {``}ineffective{''} synthetic data, allowing us to create more useful multi-noise level synthetic datasets. Furthermore, we propose Self-Correct-Noise Test-Time Adaptation (SCN-TTA), which combines self-correction and noise generation mechanisms. SCN-TTA allows a model to dynamically adjust to test data without relying on labels, effectively handling proper nouns in long texts and further reducing CER. In our experiments we evaluate a range of models, including multiple PLMs and LLMs. Results indicate that our method yields models that are effective across diverse text types. Notably, the ByT5 model achieves a CER reduction of 68.67{\%} without relying on manually annotated data",
}
| Post-OCR technology is used to correct errors in the text produced by OCR systems. This study introduces a method for constructing post-OCR synthetic data with different noise levels using weak supervision. We define Character Error Rate (CER) thresholds for {``}effective{''} and {``}ineffective{''} synthetic data, allowing us to create more useful multi-noise level synthetic datasets. Furthermore, we propose Self-Correct-Noise Test-Time Adaptation (SCN-TTA), which combines self-correction and noise generation mechanisms. SCN-TTA allows a model to dynamically adjust to test data without relying on labels, effectively handling proper nouns in long texts and further reducing CER. In our experiments we evaluate a range of models, including multiple PLMs and LLMs. Results indicate that our method yields models that are effective across diverse text types. Notably, the ByT5 model achieves a CER reduction of 68.67{\%} without relying on manually annotated data. | [
"Guan, Shuhao",
"Xu, Cheng",
"Lin, Moule",
"Greene, Derek"
] | Effective Synthetic Data and Test-Time Adaptation for OCR Correction | emnlp-main.862 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.863.bib | https://aclanthology.org/2024.emnlp-main.863/ | @inproceedings{zhang-etal-2024-srf,
title = "{SRF}: Enhancing Document-Level Relation Extraction with a Novel Secondary Reasoning Framework",
author = "Zhang, Fu and
Miao, Qi and
Cheng, Jingwei and
Yu, Hongsen and
Yan, Yi and
Li, Xin and
Wu, Yongxue",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.863",
pages = "15426--15439",
abstract = "Document-level Relation Extraction (DocRE) aims to extract relations between entity pairs in a document and poses many challenges as it involves multiple mentions of entities and cross-sentence inference. However, several aspects that are important for DocRE have not been considered and explored. Existing work ignores bidirectional mention interaction when generating relational features for entity pairs. Also, sophisticated neural networks are typically designed for cross-sentence evidence extraction to further enhance DocRE. More interestingly, we reveal a noteworthy finding: If a model has predicted a relation between an entity and other entities, this relation information may help infer and predict more relations between the entity{'}s adjacent entities and these other entities. Nonetheless, none of existing methods leverage secondary reasoning to exploit results of relation prediction. To this end, we propose a novel Secondary Reasoning Framework (SRF) for DocRE. In SRF, we initially propose a DocRE model that incorporates bidirectional mention fusion and a simple yet effective evidence extraction module (incurring only an additional learnable parameter overhead) for relation prediction. Further, for the first time, we elaborately design and propose a novel secondary reasoning method to discover more relations by exploring the results of the first relation prediction. Extensive experiments show that SRF achieves SOTA performance and our secondary reasoning method is both effective and general when integrated into existing models.",
}
| Document-level Relation Extraction (DocRE) aims to extract relations between entity pairs in a document and poses many challenges as it involves multiple mentions of entities and cross-sentence inference. However, several aspects that are important for DocRE have not been considered and explored. Existing work ignores bidirectional mention interaction when generating relational features for entity pairs. Also, sophisticated neural networks are typically designed for cross-sentence evidence extraction to further enhance DocRE. More interestingly, we reveal a noteworthy finding: If a model has predicted a relation between an entity and other entities, this relation information may help infer and predict more relations between the entity{'}s adjacent entities and these other entities. Nonetheless, none of the existing methods leverage secondary reasoning to exploit the results of relation prediction. To this end, we propose a novel Secondary Reasoning Framework (SRF) for DocRE. In SRF, we initially propose a DocRE model that incorporates bidirectional mention fusion and a simple yet effective evidence extraction module (incurring only an additional learnable parameter overhead) for relation prediction. Further, for the first time, we elaborately design and propose a novel secondary reasoning method to discover more relations by exploring the results of the first relation prediction. Extensive experiments show that SRF achieves SOTA performance and our secondary reasoning method is both effective and general when integrated into existing models. | [
"Zhang, Fu",
"Miao, Qi",
"Cheng, Jingwei",
"Yu, Hongsen",
"Yan, Yi",
"Li, Xin",
"Wu, Yongxue"
] | SRF: Enhancing Document-Level Relation Extraction with a Novel Secondary Reasoning Framework | emnlp-main.863 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.864.bib | https://aclanthology.org/2024.emnlp-main.864/ | @inproceedings{liu-etal-2024-finecops,
title = "{F}ine{C}ops-Ref: A new Dataset and Task for Fine-Grained Compositional Referring Expression Comprehension",
author = "Liu, Junzhuo and
Yang, Xuzheng and
Li, Weiwei and
Wang, Peng",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.864",
pages = "15440--15457",
abstract = "Referring Expression Comprehension (REC) is a crucial cross-modal task that objectively evaluates the capabilities of language understanding, image comprehension, and language-to-image grounding. Consequently, it serves as an ideal testing ground for Multi-modal Large Language Models (MLLMs). In pursuit of this goal, we have established a new REC dataset characterized by two key features: Firstly, it is designed with controllable varying levels of difficulty, necessitating multi-level fine-grained reasoning across object categories, attributes, and multi-hop relationships. Secondly, it includes negative text and images created through fine-grained editing and generation based on existing data, thereby testing the model{'}s ability to correctly reject scenarios where the target object is not visible in the image{---}an essential aspect often overlooked in existing datasets and approaches. Utilizing this high-quality dataset, we conducted comprehensive evaluations of both state-of-the-art specialist models and MLLMs. Our findings indicate that there remains a significant gap in achieving satisfactory grounding performance. We anticipate that our dataset will inspire new approaches to enhance visual reasoning and develop more advanced cross-modal interaction strategies, ultimately unlocking the full potential of MLLMs.",
}
| Referring Expression Comprehension (REC) is a crucial cross-modal task that objectively evaluates the capabilities of language understanding, image comprehension, and language-to-image grounding. Consequently, it serves as an ideal testing ground for Multi-modal Large Language Models (MLLMs). In pursuit of this goal, we have established a new REC dataset characterized by two key features: Firstly, it is designed with controllable varying levels of difficulty, necessitating multi-level fine-grained reasoning across object categories, attributes, and multi-hop relationships. Secondly, it includes negative text and images created through fine-grained editing and generation based on existing data, thereby testing the model{'}s ability to correctly reject scenarios where the target object is not visible in the image{---}an essential aspect often overlooked in existing datasets and approaches. Utilizing this high-quality dataset, we conducted comprehensive evaluations of both state-of-the-art specialist models and MLLMs. Our findings indicate that there remains a significant gap in achieving satisfactory grounding performance. We anticipate that our dataset will inspire new approaches to enhance visual reasoning and develop more advanced cross-modal interaction strategies, ultimately unlocking the full potential of MLLMs. | [
"Liu, Junzhuo",
"Yang, Xuzheng",
"Li, Weiwei",
"Wang, Peng"
] | FineCops-Ref: A new Dataset and Task for Fine-Grained Compositional Referring Expression Comprehension | emnlp-main.864 | Poster | 2409.14750 | [
"https://github.com/liujunzhuo/FineCops-Ref"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.865.bib | https://aclanthology.org/2024.emnlp-main.865/ | @inproceedings{wagner-etal-2024-exploring,
title = "Exploring the Learning Capabilities of Language Models using {LEVERWORLDS}",
author = "Wagner, Eitan and
Feder, Amir and
Abend, Omri",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.865",
pages = "15458--15468",
abstract = "Learning a model of a stochastic setting often involves learning both general structure rules and specific properties of the instance. This paper investigates the interplay between learning the general and the specific in various learning methods, with emphasis on sample efficiency. We design a framework called LEVERWORLDS, which allows the generation of simple physics-inspired worlds that follow a similar generative process with different distributions, and their instances can be expressed in natural language. These worlds allow for controlled experiments to assess the sample complexity of different learning methods. We experiment with classic learning algorithms as well as Transformer language models, both with fine-tuning and In-Context Learning (ICL). Our general finding is that (1) Transformers generally succeed in the task; but (2) they are considerably less sample efficient than classic methods that make stronger assumptions about the structure, such as Maximum Likelihood Estimation and Logistic Regression. This finding is in tension with the recent tendency to use Transformers as general-purpose estimators. We propose an approach that leverages the ICL capabilities of contemporary language models to apply simple algorithms for this type of data. Our experiments show that models currently struggle with the task but show promising potential.",
}
| Learning a model of a stochastic setting often involves learning both general structure rules and specific properties of the instance. This paper investigates the interplay between learning the general and the specific in various learning methods, with emphasis on sample efficiency. We design a framework called LEVERWORLDS, which allows the generation of simple physics-inspired worlds that follow a similar generative process with different distributions, and their instances can be expressed in natural language. These worlds allow for controlled experiments to assess the sample complexity of different learning methods. We experiment with classic learning algorithms as well as Transformer language models, both with fine-tuning and In-Context Learning (ICL). Our general finding is that (1) Transformers generally succeed in the task; but (2) they are considerably less sample efficient than classic methods that make stronger assumptions about the structure, such as Maximum Likelihood Estimation and Logistic Regression. This finding is in tension with the recent tendency to use Transformers as general-purpose estimators. We propose an approach that leverages the ICL capabilities of contemporary language models to apply simple algorithms for this type of data. Our experiments show that models currently struggle with the task but show promising potential. | [
"Wagner, Eitan",
"Feder, Amir",
"Abend, Omri"
] | Exploring the Learning Capabilities of Language Models using LEVERWORLDS | emnlp-main.865 | Poster | 2410.00519 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.866.bib | https://aclanthology.org/2024.emnlp-main.866/ | @inproceedings{wagner-etal-2024-contests,
title = "{CONTESTS}: a Framework for Consistency Testing of Span Probabilities in Language Models",
author = "Wagner, Eitan and
Slavutsky, Yuli and
Abend, Omri",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.866",
pages = "15469--15484",
abstract = "Although language model scores are often treated as probabilities, their reliability as probability estimators has mainly been studied through calibration, overlooking other aspects. In particular, it is unclear whether language models produce the same value for different ways of assigning joint probabilities to word spans. Our work introduces a novel framework, ConTestS (Consistency Testing over Spans), involving statistical tests to assess score consistency across interchangeable completion and conditioning orders. We conduct experiments on post-release real and synthetic data to eliminate training effects. Our findings reveal that both Masked Language Models (MLMs) and autoregressive models exhibit inconsistent predictions, with autoregressive models showing larger discrepancies. Larger MLMs tend to produce more consistent predictions, while autoregressive models show the opposite trend. Moreover, for both model types, prediction entropies offer insights into the true word span likelihood and therefore can aid in selecting optimal decoding strategies. The inconsistencies revealed by our analysis, as well their connection to prediction entropies and differences between model types, can serve as useful guides for future research on addressing these limitations.",
}
| Although language model scores are often treated as probabilities, their reliability as probability estimators has mainly been studied through calibration, overlooking other aspects. In particular, it is unclear whether language models produce the same value for different ways of assigning joint probabilities to word spans. Our work introduces a novel framework, ConTestS (Consistency Testing over Spans), involving statistical tests to assess score consistency across interchangeable completion and conditioning orders. We conduct experiments on post-release real and synthetic data to eliminate training effects. Our findings reveal that both Masked Language Models (MLMs) and autoregressive models exhibit inconsistent predictions, with autoregressive models showing larger discrepancies. Larger MLMs tend to produce more consistent predictions, while autoregressive models show the opposite trend. Moreover, for both model types, prediction entropies offer insights into the true word span likelihood and therefore can aid in selecting optimal decoding strategies. The inconsistencies revealed by our analysis, as well as their connection to prediction entropies and differences between model types, can serve as useful guides for future research on addressing these limitations. | [
"Wagner, Eitan",
"Slavutsky, Yuli",
"Abend, Omri"
] | CONTESTS: a Framework for Consistency Testing of Span Probabilities in Language Models | emnlp-main.866 | Poster | 2409.19984 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.867.bib | https://aclanthology.org/2024.emnlp-main.867/ | @inproceedings{suri-etal-2024-docedit,
title = "{D}oc{E}dit-v2: Document Structure Editing Via Multimodal {LLM} Grounding",
author = "Suri, Manan and
Mathur, Puneet and
Dernoncourt, Franck and
Jain, Rajiv and
Morariu, Vlad I and
Sawhney, Ramit and
Nakov, Preslav and
Manocha, Dinesh",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.867",
pages = "15485--15505",
abstract = "Document structure editing involves manipulating localized textual, visual, and layout components in document images based on the user{'}s requests. Past works have shown that multimodal grounding of user requests in the document image and identifying the accurate structural components and their associated attributes remain key challenges for this task. To address these, we introduce the DocEditAgent, a novel framework that performs end-to-end document editing by leveraging Large Multimodal Models (LMMs). It consists of three novel components {--} (1) Doc2Command to simultaneously localize edit regions of interest (RoI) and disambiguate user edit requests into edit commands. (2) LLM-based Command Reformulation prompting to tailor edit commands originally intended for specialized software into edit instructions suitable for generalist LMMs. (3) Moreover, DocEditAgent processes these outputs via Large Multimodal Models like GPT-4V and Gemini, to parse the document layout, execute edits on grounded Region of Interest (RoI), and generate the edited document image. Extensive experiments on the DocEdit dataset show that DocEditAgent significantly outperforms strong baselines on edit command generation (2-33{\%}), RoI bounding box detection (12-31{\%}), and overall document editing (1-12{\%}) tasks.",
}
| Document structure editing involves manipulating localized textual, visual, and layout components in document images based on the user{'}s requests. Past works have shown that multimodal grounding of user requests in the document image and identifying the accurate structural components and their associated attributes remain key challenges for this task. To address these, we introduce the DocEditAgent, a novel framework that performs end-to-end document editing by leveraging Large Multimodal Models (LMMs). It consists of three novel components {--} (1) Doc2Command to simultaneously localize edit regions of interest (RoI) and disambiguate user edit requests into edit commands. (2) LLM-based Command Reformulation prompting to tailor edit commands originally intended for specialized software into edit instructions suitable for generalist LMMs. (3) Moreover, DocEditAgent processes these outputs via Large Multimodal Models like GPT-4V and Gemini, to parse the document layout, execute edits on grounded Region of Interest (RoI), and generate the edited document image. Extensive experiments on the DocEdit dataset show that DocEditAgent significantly outperforms strong baselines on edit command generation (2-33{\%}), RoI bounding box detection (12-31{\%}), and overall document editing (1-12{\%}) tasks. | [
"Suri, Manan",
"Mathur, Puneet",
"Dernoncourt, Franck",
"Jain, Rajiv",
"Morariu, Vlad I",
"Sawhney, Ramit",
"Nakov, Preslav",
"Manocha, Dinesh"
] | DocEdit-v2: Document Structure Editing Via Multimodal LLM Grounding | emnlp-main.867 | Poster | 2410.16472 | [
""
] | https://huggingface.co/papers/2410.16472 | 2 | 0 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.868.bib | https://aclanthology.org/2024.emnlp-main.868/ | @inproceedings{lin-etal-2024-dogerm,
title = "{D}oge{RM}: Equipping Reward Models with Domain Knowledge through Model Merging",
author = "Lin, Tzu-Han and
Li, Chen-An and
Lee, Hung-yi and
Chen, Yun-Nung",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.868",
pages = "15506--15524",
abstract = "Reinforcement learning from human feedback (RLHF) is a popular strategy for aligning large language models (LLMs) with desired behaviors. Reward modeling is a crucial step in RLHF. However, collecting paired preference data for training reward models is often costly and time-consuming, especially for domain-specific preferences requiring expert annotation. To address this challenge, we propose the **Do**main knowled**ge** merged **R**eward **M**odel (**DogeRM**), a novel framework that integrates domain-specific knowledge into a general reward model by model merging. The experiments demonstrate that DogeRM enhances performance across different benchmarks and provide a detailed analysis showcasing the effects of model merging, showing the great potential of facilitating model alignment.",
}
| Reinforcement learning from human feedback (RLHF) is a popular strategy for aligning large language models (LLMs) with desired behaviors. Reward modeling is a crucial step in RLHF. However, collecting paired preference data for training reward models is often costly and time-consuming, especially for domain-specific preferences requiring expert annotation. To address this challenge, we propose the **Do**main knowled**ge** merged **R**eward **M**odel (**DogeRM**), a novel framework that integrates domain-specific knowledge into a general reward model by model merging. The experiments demonstrate that DogeRM enhances performance across different benchmarks and provide a detailed analysis showcasing the effects of model merging, showing the great potential of facilitating model alignment. | [
"Lin, Tzu-Han",
"Li, Chen-An",
"Lee, Hung-yi",
"Chen, Yun-Nung"
] | DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging | emnlp-main.868 | Poster | 2407.01470 | [
"https://github.com/miulab/dogerm"
] | https://huggingface.co/papers/2407.01470 | 3 | 5 | 1 | 4 | [
"miulab/llama2-7b-ultrafeedback-rm",
"miulab/llama2-7b-magicoder-evol-instruct",
"miulab/llama2-7b-alpaca-sft-10k",
"miulab/llama2-7b-oss-instruct",
"MachoMaheen/devdock4bit"
] | [] | [] | [
"miulab/llama2-7b-ultrafeedback-rm",
"miulab/llama2-7b-magicoder-evol-instruct",
"miulab/llama2-7b-alpaca-sft-10k",
"miulab/llama2-7b-oss-instruct",
"MachoMaheen/devdock4bit"
] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.869.bib | https://aclanthology.org/2024.emnlp-main.869/ | @inproceedings{wuraola-etal-2024-understanding,
title = "Understanding Slang with {LLM}s: Modelling Cross-Cultural Nuances through Paraphrasing",
author = "Wuraola, Ifeoluwa and
Dethlefs, Nina and
Marciniak, Daniel",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.869",
pages = "15525--15531",
abstract = "In the realm of social media discourse, the integration of slang enriches communication, reflecting the sociocultural identities of users. This study investigates the capability of large language models (LLMs) to paraphrase slang within climate-related tweets from Nigeria and the UK, with a focus on identifying emotional nuances. Using DistilRoBERTa as the base-line model, we observe its limited comprehension of slang. To improve cross-cultural understanding, we gauge the effectiveness of leading LLMs ChatGPT 4, Gemini, and LLaMA3 in slang paraphrasing. While ChatGPT 4 and Gemini demonstrate comparable effectiveness in slang paraphrasing, LLaMA3 shows less coverage, with all LLMs exhibiting limitations in coverage, especially of Nigerian slang. Our findings underscore the necessity for culturally sensitive LLM development in emotion classification, particularly in non-anglocentric regions.",
}
| In the realm of social media discourse, the integration of slang enriches communication, reflecting the sociocultural identities of users. This study investigates the capability of large language models (LLMs) to paraphrase slang within climate-related tweets from Nigeria and the UK, with a focus on identifying emotional nuances. Using DistilRoBERTa as the baseline model, we observe its limited comprehension of slang. To improve cross-cultural understanding, we gauge the effectiveness of leading LLMs (ChatGPT 4, Gemini, and LLaMA3) in slang paraphrasing. While ChatGPT 4 and Gemini demonstrate comparable effectiveness in slang paraphrasing, LLaMA3 shows less coverage, with all LLMs exhibiting limitations in coverage, especially of Nigerian slang. Our findings underscore the necessity for culturally sensitive LLM development in emotion classification, particularly in non-anglocentric regions. | [
"Wuraola, Ifeoluwa",
"Dethlefs, Nina",
"Marciniak, Daniel"
] | Understanding Slang with LLMs: Modelling Cross-Cultural Nuances through Paraphrasing | emnlp-main.869 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.870.bib | https://aclanthology.org/2024.emnlp-main.870/ | @inproceedings{tu-etal-2024-unlocking,
title = "Unlocking Anticipatory Text Generation: A Constrained Approach for Large Language Models Decoding",
author = "Tu, Lifu and
Yavuz, Semih and
Qu, Jin and
Xu, Jiacheng and
Meng, Rui and
Xiong, Caiming and
Zhou, Yingbo",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.870",
pages = "15532--15548",
abstract = "Large Language Models (LLMs) have demonstrated a powerful ability for text generation. However, achieving optimal results with a given prompt or instruction can be challenging, especially for billion-sized models. Additionally, undesired behaviors such as toxicity or hallucinations can manifest. While much larger models (e.g., ChatGPT) may demonstrate strength in mitigating these issues, there is still no guarantee of complete prevention. In this work, we propose formalizing text generation as a future-constrained generation problem to minimize undesirable behaviors and enforce faithfulness to instructions. The estimation of future constraint satisfaction, accomplished using LLMs, guides the text generation process. Our extensive experiments demonstrate the effectiveness of the proposed approach across three distinct text generation tasks: keyword-constrained generation (Lin et al., 2020), toxicity reduction (Gehman et al., 2020), and factual correctness in question-answering (Gao et al., 2023).",
}
| Large Language Models (LLMs) have demonstrated a powerful ability for text generation. However, achieving optimal results with a given prompt or instruction can be challenging, especially for billion-sized models. Additionally, undesired behaviors such as toxicity or hallucinations can manifest. While much larger models (e.g., ChatGPT) may demonstrate strength in mitigating these issues, there is still no guarantee of complete prevention. In this work, we propose formalizing text generation as a future-constrained generation problem to minimize undesirable behaviors and enforce faithfulness to instructions. The estimation of future constraint satisfaction, accomplished using LLMs, guides the text generation process. Our extensive experiments demonstrate the effectiveness of the proposed approach across three distinct text generation tasks: keyword-constrained generation (Lin et al., 2020), toxicity reduction (Gehman et al., 2020), and factual correctness in question-answering (Gao et al., 2023). | [
"Tu, Lifu",
"Yavuz, Semih",
"Qu, Jin",
"Xu, Jiacheng",
"Meng, Rui",
"Xiong, Caiming",
"Zhou, Yingbo"
] | Unlocking Anticipatory Text Generation: A Constrained Approach for Large Language Models Decoding | emnlp-main.870 | Poster | 2312.06149 | [
"https://github.com/SalesforceAIResearch/Unlocking-TextGen"
] | https://huggingface.co/papers/2312.06149 | 7 | 2 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.871.bib | https://aclanthology.org/2024.emnlp-main.871/ | @inproceedings{xu-etal-2024-reading,
title = "Re-Reading Improves Reasoning in Large Language Models",
author = "Xu, Xiaohan and
Tao, Chongyang and
Shen, Tao and
Xu, Can and
Xu, Hongbo and
Long, Guodong and
Lou, Jian-Guang and
Ma, Shuai",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.871",
pages = "15549--15575",
abstract = "To enhance the reasoning capabilities of off-the-shelf Large Language Models (LLMs), we introduce a simple, yet general and effective prompting method, RE2, i.e., Re-Reading the question as input. Unlike most thought-eliciting prompting methods, such as Chain-of-Thought (CoT), which aim to elicit the reasoning process in the output, RE2 shifts the focus to the input by processing questions twice, thereby enhancing the understanding process. Consequently, RE2 demonstrates strong generality and compatibility with most thought-eliciting prompting methods, including CoT. Crucially, RE2 facilitates a {``}bidirectional{''} encoding in unidirectional decoder-only LLMs because the first pass could provide global information for the second pass. We begin with a preliminary empirical study as the foundation of RE2, illustrating its potential to enable {``}bidirectional{''} attention mechanisms. We then evaluate RE2 on extensive reasoning benchmarks across 14 datasets, spanning 112 experiments, to validate its effectiveness and generality. Our findings indicate that, with the exception of a few scenarios on vanilla ChatGPT, RE2 consistently enhances the reasoning performance of LLMs through a simple re-reading strategy. Further analyses reveal RE2{'}s adaptability, showing how it can be effectively integrated with different LLMs, thought-eliciting prompting, and ensemble strategies.",
}
| To enhance the reasoning capabilities of off-the-shelf Large Language Models (LLMs), we introduce a simple, yet general and effective prompting method, RE2, i.e., Re-Reading the question as input. Unlike most thought-eliciting prompting methods, such as Chain-of-Thought (CoT), which aim to elicit the reasoning process in the output, RE2 shifts the focus to the input by processing questions twice, thereby enhancing the understanding process. Consequently, RE2 demonstrates strong generality and compatibility with most thought-eliciting prompting methods, including CoT. Crucially, RE2 facilitates a {``}bidirectional{''} encoding in unidirectional decoder-only LLMs because the first pass could provide global information for the second pass. We begin with a preliminary empirical study as the foundation of RE2, illustrating its potential to enable {``}bidirectional{''} attention mechanisms. We then evaluate RE2 on extensive reasoning benchmarks across 14 datasets, spanning 112 experiments, to validate its effectiveness and generality. Our findings indicate that, with the exception of a few scenarios on vanilla ChatGPT, RE2 consistently enhances the reasoning performance of LLMs through a simple re-reading strategy. Further analyses reveal RE2{'}s adaptability, showing how it can be effectively integrated with different LLMs, thought-eliciting prompting, and ensemble strategies. | [
"Xu, Xiaohan",
"Tao, Chongyang",
"Shen, Tao",
"Xu, Can",
"Xu, Hongbo",
"Long, Guodong",
"Lou, Jian-Guang",
"Ma, Shuai"
] | Re-Reading Improves Reasoning in Large Language Models | emnlp-main.871 | Poster | 2309.06275 | [
"https://github.com/tebmer/rereading-llm-reasoning"
] | https://huggingface.co/papers/2309.06275 | 0 | 3 | 1 | 7 | [] | [] | [
"codelion/optillm",
"fabiodr/optillm"
] | [] | [] | [
"codelion/optillm",
"fabiodr/optillm"
] | 1 |
https://aclanthology.org/2024.emnlp-main.872.bib | https://aclanthology.org/2024.emnlp-main.872/ | @inproceedings{zeng-etal-2024-adaptive,
title = "Adaptive Axes: A Pipeline for In-domain Social Stereotype Analysis",
author = "Zeng, Qingcheng and
Jin, Mingyu and
Voigt, Rob",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.872",
pages = "15576--15593",
abstract = "Prior work has explored the possibility of using the semantic information obtained from embedding representations to quantify social stereotypes, leveraging techniques such as word embeddings combined with a list of traits (Garg et al., 2018; Charlesworth et al., 2022) or semantic axes (An et al., 2018; Lucy et al., 2022). However, these approaches have struggled to fully capture the variability in stereotypes across different conceptual domains for the same social group (e.g., black in science, health, and art), in part because the identity of a word and the associations formed during pre-training can dominate its contextual representation (Field and Tsvetkov, 2019). This study explores the ability to recover stereotypes from the contexts surrounding targeted entities by utilizing state-of-the-art text embedding models and adaptive semantic axes enhanced by large language models (LLMs). Our results indicate that the proposed pipeline not only surpasses token-based methods in capturing in-domain framing but also effectively tracks stereotypes over time and along domain-specific semantic axes for in-domain texts. Our research highlights the potential of employing text embedding models to achieve a deeper understanding of nuanced social stereotypes.",
}
| Prior work has explored the possibility of using the semantic information obtained from embedding representations to quantify social stereotypes, leveraging techniques such as word embeddings combined with a list of traits (Garg et al., 2018; Charlesworth et al., 2022) or semantic axes (An et al., 2018; Lucy et al., 2022). However, these approaches have struggled to fully capture the variability in stereotypes across different conceptual domains for the same social group (e.g., black in science, health, and art), in part because the identity of a word and the associations formed during pre-training can dominate its contextual representation (Field and Tsvetkov, 2019). This study explores the ability to recover stereotypes from the contexts surrounding targeted entities by utilizing state-of-the-art text embedding models and adaptive semantic axes enhanced by large language models (LLMs). Our results indicate that the proposed pipeline not only surpasses token-based methods in capturing in-domain framing but also effectively tracks stereotypes over time and along domain-specific semantic axes for in-domain texts. Our research highlights the potential of employing text embedding models to achieve a deeper understanding of nuanced social stereotypes. | [
"Zeng, Qingcheng",
"Jin, Mingyu",
"Voigt, Rob"
] | Adaptive Axes: A Pipeline for In-domain Social Stereotype Analysis | emnlp-main.872 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.873.bib | https://aclanthology.org/2024.emnlp-main.873/ | @inproceedings{ray-etal-2024-ervqa,
title = "{ERVQA}: A Dataset to Benchmark the Readiness of Large Vision Language Models in Hospital Environments",
author = "Ray, Sourjyadip and
Gupta, Kushal and
Kundu, Soumi and
Kasat, Dr Payal Arvind and
Aditya, Somak and
Goyal, Pawan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.873",
pages = "15594--15608",
abstract = "The global shortage of healthcare workers has demanded the development of smart healthcare assistants, which can help monitor and alert healthcare workers when necessary. We examine the healthcare knowledge of existing Large Vision Language Models (LVLMs) via the Visual Question Answering (VQA) task in hospital settings through expert annotated open-ended questions. We introduce the Emergency Room Visual Question Answering (ERVQA) dataset, consisting of {\textless}image, question, answer{\textgreater} triplets covering diverse emergency room scenarios, a seminal benchmark for LVLMs. By developing a detailed error taxonomy and analyzing answer trends, we reveal the nuanced nature of the task. We benchmark state-of-the-art open-source and closed LVLMs using traditional and adapted VQA metrics: Entailment Score and CLIPScore Confidence. Analyzing errors across models, we infer trends based on properties like decoder type, model size, and in-context examples. Our findings suggest the ERVQA dataset presents a highly complex task, highlighting the need for specialized, domain-specific solutions.",
}
| The global shortage of healthcare workers has demanded the development of smart healthcare assistants, which can help monitor and alert healthcare workers when necessary. We examine the healthcare knowledge of existing Large Vision Language Models (LVLMs) via the Visual Question Answering (VQA) task in hospital settings through expert annotated open-ended questions. We introduce the Emergency Room Visual Question Answering (ERVQA) dataset, consisting of {\textless}image, question, answer{\textgreater} triplets covering diverse emergency room scenarios, a seminal benchmark for LVLMs. By developing a detailed error taxonomy and analyzing answer trends, we reveal the nuanced nature of the task. We benchmark state-of-the-art open-source and closed LVLMs using traditional and adapted VQA metrics: Entailment Score and CLIPScore Confidence. Analyzing errors across models, we infer trends based on properties like decoder type, model size, and in-context examples. Our findings suggest the ERVQA dataset presents a highly complex task, highlighting the need for specialized, domain-specific solutions. | [
"Ray, Sourjyadip",
"Gupta, Kushal",
"Kundu, Soumi",
"Kasat, Dr Payal Arvind",
"Aditya, Somak",
"Goyal, Pawan"
] | ERVQA: A Dataset to Benchmark the Readiness of Large Vision Language Models in Hospital Environments | emnlp-main.873 | Poster | 2410.06420 | [
"https://github.com/sourjyadip/ervqa-data"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.874.bib | https://aclanthology.org/2024.emnlp-main.874/ | @inproceedings{li-2024-human,
title = "Human-{LLM} Hybrid Text Answer Aggregation for Crowd Annotations",
author = "Li, Jiyi",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.874",
pages = "15609--15622",
abstract = "The quality is a crucial issue for crowd annotations. Answer aggregation is an important type of solution. The aggregated answers estimated from multiple crowd answers to the same instance are the eventually collected annotations, rather than the individual crowd answers themselves. Recently, the capability of Large Language Models (LLMs) on data annotation tasks has attracted interest from researchers. Most of the existing studies mainly focus on the average performance of individual crowd workers; several recent works studied the scenarios of aggregation on categorical labels and LLMs used as label creators. However, the scenario of aggregation on text answers and the role of LLMs as aggregators are not yet well-studied. In this paper, we investigate the capability of LLMs as aggregators in the scenario of close-ended crowd text answer aggregation. We propose a human-LLM hybrid text answer aggregation method with a Creator-Aggregator Multi-Stage (CAMS) crowdsourcing framework. We make the experiments based on public crowdsourcing datasets. The results show the effectiveness of our approach based on the collaboration of crowd workers and LLMs.",
}
| Quality is a crucial issue for crowd annotations. Answer aggregation is an important type of solution. The aggregated answers estimated from multiple crowd answers to the same instance are the eventually collected annotations, rather than the individual crowd answers themselves. Recently, the capability of Large Language Models (LLMs) on data annotation tasks has attracted interest from researchers. Most existing studies focus on the average performance of individual crowd workers; several recent works studied the scenarios of aggregation on categorical labels and LLMs used as label creators. However, the scenario of aggregation on text answers and the role of LLMs as aggregators are not yet well-studied. In this paper, we investigate the capability of LLMs as aggregators in the scenario of close-ended crowd text answer aggregation. We propose a human-LLM hybrid text answer aggregation method with a Creator-Aggregator Multi-Stage (CAMS) crowdsourcing framework. We conduct experiments on public crowdsourcing datasets. The results show the effectiveness of our approach based on the collaboration of crowd workers and LLMs. | [
"Li, Jiyi"
] | Human-LLM Hybrid Text Answer Aggregation for Crowd Annotations | emnlp-main.874 | Poster | 2410.17099 | [
"https://github.com/garfieldpigljy/ljycrowd"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.875.bib | https://aclanthology.org/2024.emnlp-main.875/ | @inproceedings{dai-etal-2024-improve-students,
title = "Improve Student{'}s Reasoning Generalizability through Cascading Decomposed {C}o{T}s Distillation",
author = "Dai, Chengwei and
Li, Kun and
Zhou, Wei and
Hu, Songlin",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.875",
pages = "15623--15643",
abstract = "Large language models (LLMs) exhibit enhanced reasoning at larger scales, driving efforts to distill these capabilities into smaller models via teacher-student learning.Previous works simply fine-tune student models on teachers{'} generated Chain-of-Thoughts (CoTs) data. Although these methods enhance in-domain (IND) reasoning performance, they struggle to generalize to out-of-domain (OOD) tasks.We believe that the widespread spurious correlations between questions and answers may lead the model to preset a specific answer which restricts the diversity and generalizability of its reasoning process.In this paper, we propose \textbf{Cas}cading Decomposed \textbf{Co}Ts \textbf{D}istillation (CasCoD) to address these issues by decomposing the traditional single-step learning process into two cascaded learning steps. Specifically, by restructuring the training objectives{---}removing the answer from outputs and concatenating the question with the rationale as input{---}CasCoD{'}s two-step learning process ensures that students focus on learning rationales without interference from the preset answers, thus improving reasoning generalizability. Extensive experiments demonstrate the effectiveness of CasCoD on both IND and OOD benchmark reasoning datasets",
}
| Large language models (LLMs) exhibit enhanced reasoning at larger scales, driving efforts to distill these capabilities into smaller models via teacher-student learning. Previous works simply fine-tune student models on teachers{'} generated Chain-of-Thoughts (CoTs) data. Although these methods enhance in-domain (IND) reasoning performance, they struggle to generalize to out-of-domain (OOD) tasks. We believe that the widespread spurious correlations between questions and answers may lead the model to preset a specific answer which restricts the diversity and generalizability of its reasoning process. In this paper, we propose \textbf{Cas}cading Decomposed \textbf{Co}Ts \textbf{D}istillation (CasCoD) to address these issues by decomposing the traditional single-step learning process into two cascaded learning steps. Specifically, by restructuring the training objectives{---}removing the answer from outputs and concatenating the question with the rationale as input{---}CasCoD{'}s two-step learning process ensures that students focus on learning rationales without interference from the preset answers, thus improving reasoning generalizability. Extensive experiments demonstrate the effectiveness of CasCoD on both IND and OOD benchmark reasoning datasets. | [
"Dai, Chengwei",
"Li, Kun",
"Zhou, Wei",
"Hu, Songlin"
] | Improve Student's Reasoning Generalizability through Cascading Decomposed CoTs Distillation | emnlp-main.875 | Poster | 2405.19842 | [
"https://github.com/c-w-d/cascod"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.876.bib | https://aclanthology.org/2024.emnlp-main.876/ | @inproceedings{huang-usbeck-2024-revisiting,
title = "Revisiting Supervised Contrastive Learning for Microblog Classification",
author = "Huang, Junbo and
Usbeck, Ricardo",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.876",
pages = "15644--15653",
abstract = "Microblog content (e.g., Tweets) is noisy due to its informal use of language and its lack of contextual information within each post. To tackle these challenges, state-of-the-art microblog classification models rely on pre-training language models (LMs). However, pre-training dedicated LMs is resource-intensive and not suitable for small labs. Supervised contrastive learning (SCL) has shown its effectiveness with small, available resources. In this work, we examine the effectiveness of fine-tuning transformer-based language models, regularized with a SCL loss for English microblog classification. Despite its simplicity, the evaluation on two English microblog classification benchmarks (TweetEval and Tweet Topic Classification) shows an improvement over baseline models. The result shows that, across all subtasks, our proposed method has a performance gain of up to 11.9 percentage points. All our models are open source.",
}
| Microblog content (e.g., Tweets) is noisy due to its informal use of language and its lack of contextual information within each post. To tackle these challenges, state-of-the-art microblog classification models rely on pre-training language models (LMs). However, pre-training dedicated LMs is resource-intensive and not suitable for small labs. Supervised contrastive learning (SCL) has shown its effectiveness with small, available resources. In this work, we examine the effectiveness of fine-tuning transformer-based language models, regularized with an SCL loss, for English microblog classification. Despite its simplicity, the evaluation on two English microblog classification benchmarks (TweetEval and Tweet Topic Classification) shows an improvement over baseline models. The result shows that, across all subtasks, our proposed method has a performance gain of up to 11.9 percentage points. All our models are open source. | [
"Huang, Junbo",
"Usbeck, Ricardo"
] | Revisiting Supervised Contrastive Learning for Microblog Classification | emnlp-main.876 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.877.bib | https://aclanthology.org/2024.emnlp-main.877/ | @inproceedings{pu-etal-2024-baitattack,
title = "{B}ait{A}ttack: Alleviating Intention Shift in Jailbreak Attacks via Adaptive Bait Crafting",
author = "Pu, Rui and
Li, Chaozhuo and
Ha, Rui and
Zhang, Litian and
Qiu, Lirong and
Zhang, Xi",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.877",
pages = "15654--15668",
abstract = "Jailbreak attacks enable malicious queries to evade detection by LLMs. Existing attacks focus on meticulously constructing prompts to disguise harmful intentions. However, the incorporation of sophisticated disguising prompts may incur the challenge of {``}intention shift{''}. Intention shift occurs when the additional semantics within the prompt distract the LLMs, causing the responses to deviate significantly from the original harmful intentions. In this paper, we propose a novel component, {``}bait{''}, to alleviate the effects of intention shift. Bait comprises an initial response to the harmful query, prompting LLMs to rectify or supplement the knowledge within the bait. By furnishing rich semantics relevant to the query, the bait helps LLMs focus on the original intention. To conceal the harmful content within the bait, we further propose a novel attack paradigm, BaitAttack. BaitAttack adaptively generates necessary components to persuade targeted LLMs that they are engaging with a legitimate inquiry in a safe context. Our proposal is evaluated on a popular dataset, demonstrating state-of-the-art attack performance and an exceptional capability for mitigating intention shift. The implementation of BaitAttack is accessible at: https://anonymous.4open.science/r/BaitAttack-D1F5.",
}
| Jailbreak attacks enable malicious queries to evade detection by LLMs. Existing attacks focus on meticulously constructing prompts to disguise harmful intentions. However, the incorporation of sophisticated disguising prompts may incur the challenge of {``}intention shift{''}. Intention shift occurs when the additional semantics within the prompt distract the LLMs, causing the responses to deviate significantly from the original harmful intentions. In this paper, we propose a novel component, {``}bait{''}, to alleviate the effects of intention shift. Bait comprises an initial response to the harmful query, prompting LLMs to rectify or supplement the knowledge within the bait. By furnishing rich semantics relevant to the query, the bait helps LLMs focus on the original intention. To conceal the harmful content within the bait, we further propose a novel attack paradigm, BaitAttack. BaitAttack adaptively generates necessary components to persuade targeted LLMs that they are engaging with a legitimate inquiry in a safe context. Our proposal is evaluated on a popular dataset, demonstrating state-of-the-art attack performance and an exceptional capability for mitigating intention shift. The implementation of BaitAttack is accessible at: https://anonymous.4open.science/r/BaitAttack-D1F5. | [
"Pu, Rui",
"Li, Chaozhuo",
"Ha, Rui",
"Zhang, Litian",
"Qiu, Lirong",
"Zhang, Xi"
] | BaitAttack: Alleviating Intention Shift in Jailbreak Attacks via Adaptive Bait Crafting | emnlp-main.877 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.878.bib | https://aclanthology.org/2024.emnlp-main.878/ | @inproceedings{weng-etal-2024-images,
title = "Images Speak Louder than Words: Understanding and Mitigating Bias in Vision-Language Model from a Causal Mediation Perspective",
author = "Weng, Zhaotian and
Gao, Zijun and
Andrews, Jerone and
Zhao, Jieyu",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.878",
pages = "15669--15680",
abstract = "Vision-language models (VLMs) pre-trained on extensive datasets can inadvertently learn biases by correlating gender information with specific objects or scenarios. Current methods, which focus on modifying inputs and monitoring changes in the model{'}s output probability scores, often struggle to comprehensively understand bias from the perspective of model components. We propose a framework that incorporates causal mediation analysis to measure and map the pathways of bias generation and propagation within VLMs. Our framework is applicable to a wide range of vision-language and multimodal tasks. In this work, we apply it to the object detection task and implement it on the GLIP model. This approach allows us to identify the direct effects of interventions on model bias and the indirect effects of interventions on bias mediated through different model components. Our results show that image features are the primary contributors to bias, with significantly higher impacts than text features, specifically accounting for 32.57{\%} and 12.63{\%} of the bias in the MSCOCO and PASCAL-SENTENCE datasets, respectively. Notably, the image encoder{'}s contribution surpasses that of the text encoder and the deep fusion encoder. Further experimentation confirms that contributions from both language and vision modalities are aligned and non-conflicting. Consequently, focusing on blurring gender representations within the image encoder which contributes most to the model bias, reduces bias efficiently by 22.03{\%} and 9.04{\%} in the MSCOCO and PASCAL-SENTENCE datasets, respectively, with minimal performance loss or increased computational demands.",
}
| Vision-language models (VLMs) pre-trained on extensive datasets can inadvertently learn biases by correlating gender information with specific objects or scenarios. Current methods, which focus on modifying inputs and monitoring changes in the model{'}s output probability scores, often struggle to comprehensively understand bias from the perspective of model components. We propose a framework that incorporates causal mediation analysis to measure and map the pathways of bias generation and propagation within VLMs. Our framework is applicable to a wide range of vision-language and multimodal tasks. In this work, we apply it to the object detection task and implement it on the GLIP model. This approach allows us to identify the direct effects of interventions on model bias and the indirect effects of interventions on bias mediated through different model components. Our results show that image features are the primary contributors to bias, with significantly higher impacts than text features, specifically accounting for 32.57{\%} and 12.63{\%} of the bias in the MSCOCO and PASCAL-SENTENCE datasets, respectively. Notably, the image encoder{'}s contribution surpasses that of the text encoder and the deep fusion encoder. Further experimentation confirms that contributions from both language and vision modalities are aligned and non-conflicting. Consequently, focusing on blurring gender representations within the image encoder, which contributes most to the model bias, reduces bias efficiently by 22.03{\%} and 9.04{\%} in the MSCOCO and PASCAL-SENTENCE datasets, respectively, with minimal performance loss or increased computational demands. | [
"Weng, Zhaotian",
"Gao, Zijun",
"Andrews, Jerone",
"Zhao, Jieyu"
] | Images Speak Louder than Words: Understanding and Mitigating Bias in Vision-Language Model from a Causal Mediation Perspective | emnlp-main.878 | Poster | 2407.02814 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.879.bib | https://aclanthology.org/2024.emnlp-main.879/ | @inproceedings{wang-etal-2024-mitigating-language,
title = "Mitigating the Language Mismatch and Repetition Issues in {LLM}-based Machine Translation via Model Editing",
author = "Wang, Weichuan and
Li, Zhaoyi and
Lian, Defu and
Ma, Chen and
Song, Linqi and
Wei, Ying",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.879",
pages = "15681--15700",
abstract = "Large Language Models (LLMs) have recently revolutionized the NLP field, while they still fall short in some specific down-stream tasks. In the work, we focus on utilizing LLMs to perform machine translation, where we observe that two patterns of errors frequently occur and drastically affect the translation quality: language mismatch and repetition. The work sets out to explore the potential for mitigating these two issues by leveraging model editing methods, e.g., by locating Feed-Forward Network (FFN) neurons or something that are responsible for the errors and deactivating them in the inference time.We find that directly applying such methods either limited effect on the targeted errors or has significant negative side-effect on the general translation quality, indicating that the located components may also be crucial for ensuring machine translation with LLMs on the rails.To this end, we propose to refine the located components by fetching the intersection of the locating results under different language settings, filtering out the aforementioned information that is irrelevant to targeted errors. The experiment results empirically demonstrate that our methods can effectively reduce the language mismatch and repetition ratios and meanwhile enhance or keep the general translation quality in most cases.",
}
| Large Language Models (LLMs) have recently revolutionized the NLP field, while they still fall short in some specific down-stream tasks. In this work, we focus on utilizing LLMs to perform machine translation, where we observe that two patterns of errors frequently occur and drastically affect the translation quality: language mismatch and repetition. The work sets out to explore the potential for mitigating these two issues by leveraging model editing methods, e.g., by locating the Feed-Forward Network (FFN) neurons or other components that are responsible for the errors and deactivating them at inference time. We find that directly applying such methods either has limited effect on the targeted errors or has significant negative side-effects on the general translation quality, indicating that the located components may also be crucial for keeping machine translation with LLMs on the rails. To this end, we propose to refine the located components by fetching the intersection of the locating results under different language settings, filtering out the aforementioned information that is irrelevant to targeted errors. The experimental results empirically demonstrate that our methods can effectively reduce the language mismatch and repetition ratios and meanwhile enhance or keep the general translation quality in most cases. | [
"Wang, Weichuan",
"Li, Zhaoyi",
"Lian, Defu",
"Ma, Chen",
"Song, Linqi",
"Wei, Ying"
] | Mitigating the Language Mismatch and Repetition Issues in LLM-based Machine Translation via Model Editing | emnlp-main.879 | Poster | 2410.07054 | [
"https://github.com/weichuanw/llm-based-mt-via-model-editing"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.880.bib | https://aclanthology.org/2024.emnlp-main.880/ | @inproceedings{ma-etal-2024-sciagent,
title = "{S}ci{A}gent: Tool-augmented Language Models for Scientific Reasoning",
author = "Ma, Yubo and
Gou, Zhibin and
Hao, Junheng and
Xu, Ruochen and
Wang, Shuohang and
Pan, Liangming and
Yang, Yujiu and
Cao, Yixin and
Sun, Aixin",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.880",
pages = "15701--15736",
abstract = "Scientific reasoning poses an excessive challenge for even the most advanced Large Language Models (LLMs). To make this task more practical and solvable for LLMs, we introduce a new task setting named tool-augmented scientific reasoning. This setting supplements LLMs with scalable toolsets, and shifts the focus from pursuing an omniscient problem solver to a proficient tool-user. To facilitate the research of such setting, we construct a tool-augmented training corpus named MathFunc which encompasses over 30,000 samples and roughly 6,000 tools. Building on MathFunc, we develop SciAgent to retrieve, understand and, if necessary, use tools for scientific problem solving. Additionally, we craft a benchmark, SciToolBench, spanning five scientific domains to evaluate LLMs{'} abilities with tool assistance. Extensive experiments on SciToolBench confirm the effectiveness of SciAgent. Notably, SciAgent-Llama3-8B surpasses other LLMs with the comparable size by more than 8.0{\%} in absolute accuracy. Furthermore, SciAgent-DeepMath-7B shows much superior performance than ChatGPT.",
}
| Scientific reasoning poses an excessive challenge for even the most advanced Large Language Models (LLMs). To make this task more practical and solvable for LLMs, we introduce a new task setting named tool-augmented scientific reasoning. This setting supplements LLMs with scalable toolsets, and shifts the focus from pursuing an omniscient problem solver to a proficient tool-user. To facilitate research on this setting, we construct a tool-augmented training corpus named MathFunc, which encompasses over 30,000 samples and roughly 6,000 tools. Building on MathFunc, we develop SciAgent to retrieve, understand and, if necessary, use tools for scientific problem solving. Additionally, we craft a benchmark, SciToolBench, spanning five scientific domains to evaluate LLMs{'} abilities with tool assistance. Extensive experiments on SciToolBench confirm the effectiveness of SciAgent. Notably, SciAgent-Llama3-8B surpasses other LLMs of comparable size by more than 8.0{\%} in absolute accuracy. Furthermore, SciAgent-DeepMath-7B shows far superior performance to ChatGPT. | [
"Ma, Yubo",
"Gou, Zhibin",
"Hao, Junheng",
"Xu, Ruochen",
"Wang, Shuohang",
"Pan, Liangming",
"Yang, Yujiu",
"Cao, Yixin",
"Sun, Aixin"
] | SciAgent: Tool-augmented Language Models for Scientific Reasoning | emnlp-main.880 | Poster | 2402.11451 | [
""
] | https://huggingface.co/papers/2402.11451 | 1 | 0 | 0 | 11 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.881.bib | https://aclanthology.org/2024.emnlp-main.881/ | @inproceedings{lee-etal-2024-global,
title = "Global Reward to Local Rewards: Multimodal-Guided Decomposition for Improving Dialogue Agents",
author = "Lee, Dong Won and
Park, Hae Won and
Kim, Yoon and
Breazeal, Cynthia and
Morency, Louis-Philippe",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.881",
pages = "15737--15762",
abstract = "We describe an approach for aligning an LLM based dialogue agent for long-term social dialogue, where there is only a single global score given by the user at the end of the session. In this paper, we propose the usage of denser naturally-occurring multimodal communicative signals as local implicit feedback to improve the turn-level utterance generation. Therefore, our approach (dubbed GELI) learns a local, turn-level reward model by decomposing the human-provided Global Explicit (GE) session level reward, using Local Implicit (LI) multimodal reward signals to crossmodally shape the reward decomposition step. This decomposed reward model is then used as part of the RLHF pipeline to improve an LLM-based dialog agent. We run quantitative and qualitative human studies on two large-scale datasets to evaluate the performance of our GELI approach, and find that it shows consistent improvements across various conversational metrics compared to baseline methods.",
}
| We describe an approach for aligning an LLM-based dialogue agent for long-term social dialogue, where there is only a single global score given by the user at the end of the session. In this paper, we propose the usage of denser naturally-occurring multimodal communicative signals as local implicit feedback to improve the turn-level utterance generation. Therefore, our approach (dubbed GELI) learns a local, turn-level reward model by decomposing the human-provided Global Explicit (GE) session-level reward, using Local Implicit (LI) multimodal reward signals to crossmodally shape the reward decomposition step. This decomposed reward model is then used as part of the RLHF pipeline to improve an LLM-based dialog agent. We run quantitative and qualitative human studies on two large-scale datasets to evaluate the performance of our GELI approach, and find that it shows consistent improvements across various conversational metrics compared to baseline methods. | [
"Lee, Dong Won",
"Park, Hae Won",
"Kim, Yoon",
"Breazeal, Cynthia",
"Morency, Louis-Philippe"
] | Global Reward to Local Rewards: Multimodal-Guided Decomposition for Improving Dialogue Agents | emnlp-main.881 | Oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.882.bib | https://aclanthology.org/2024.emnlp-main.882/ | @inproceedings{adilazuarda-etal-2024-towards,
title = "Towards Measuring and Modeling {``}Culture{''} in {LLM}s: A Survey",
author = "Adilazuarda, Muhammad Farid and
Mukherjee, Sagnik and
Lavania, Pradhyumna and
Singh, Siddhant Shivdutt and
Aji, Alham Fikri and
O{'}Neill, Jacki and
Modi, Ashutosh and
Choudhury, Monojit",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.882",
pages = "15763--15784",
abstract = "We present a survey of more than 90 recent papers that aim to study cultural representation and inclusion in large language models (LLMs). We observe that none of the studies explicitly define {``}culture, which is a complex, multifaceted concept; instead, they probe the models on some specially designed datasets which represent certain aspects of {``}culture{''}. We call these aspects the proxies of culture, and organize them across two dimensions of demographic and semantic proxies. We also categorize the probing methods employed. Our analysis indicates that only certain aspects of {``}culture,{''} such as values and objectives, have been studied, leaving several other interesting and important facets, especially the multitude of semantic domains (Thompson et al., 2020) and aboutness (Hershcovich et al., 2022), unexplored. Two other crucial gaps are the lack of robustness of probing techniques and situated studies on the impact of cultural mis- and under-representation in LLM-based applications.",
}
| We present a survey of more than 90 recent papers that aim to study cultural representation and inclusion in large language models (LLMs). We observe that none of the studies explicitly define {``}culture{''}, which is a complex, multifaceted concept; instead, they probe the models on some specially designed datasets which represent certain aspects of {``}culture{''}. We call these aspects the proxies of culture, and organize them across two dimensions of demographic and semantic proxies. We also categorize the probing methods employed. Our analysis indicates that only certain aspects of {``}culture,{''} such as values and objectives, have been studied, leaving several other interesting and important facets, especially the multitude of semantic domains (Thompson et al., 2020) and aboutness (Hershcovich et al., 2022), unexplored. Two other crucial gaps are the lack of robustness of probing techniques and situated studies on the impact of cultural mis- and under-representation in LLM-based applications. | [
"Adilazuarda, Muhammad Farid",
"Mukherjee, Sagnik",
"Lavania, Pradhyumna",
"Singh, Siddhant Shivdutt",
"Aji, Alham Fikri",
"O{'}Neill, Jacki",
"Modi, Ashutosh",
"Choudhury, Monojit"
] | Towards Measuring and Modeling “Culture” in LLMs: A Survey | emnlp-main.882 | Poster | [
"https://github.com/faridlazuarda/cultural-llm-papers"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.883.bib | https://aclanthology.org/2024.emnlp-main.883/ | @inproceedings{zhao-etal-2024-esc,
title = "{ESC}-Eval: Evaluating Emotion Support Conversations in Large Language Models",
author = "Zhao, Haiquan and
Li, Lingyu and
Chen, Shisong and
Kong, Shuqi and
Wang, Jiaan and
Huang, Kexin and
Gu, Tianle and
Wang, Yixu and
Wang, Jian and
Dandan, Liang and
Li, Zhixu and
Teng, Yan and
Xiao, Yanghua and
Wang, Yingchun",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.883",
pages = "15785--15810",
abstract = "Emotion Support Conversation (ESC) is a crucial application, which aims to reduce human stress, offer emotional guidance, and ultimately enhance human mental and physical well-being. With the advancement of Large Language Models (LLMs), many researchers have employed LLMs as the ESC models. However, the evaluation of these LLM-based ESCs remains uncertain. In detail, we first re-organize 2,801 role-playing cards from seven existing datasets to define the roles of the role-playing agent. Second, we train a specific role-playing model called ESC-Role which behaves more like a confused person than GPT-4. Third, through ESC-Role and organized role cards, we systematically conduct experiments using 14 LLMs as the ESC models, including general AI-assistant LLMs (e.g., ChatGPT) and ESC-oriented LLMs (e.g., ExTES-Llama). We conduct comprehensive human annotations on interactive multi-turn dialogues of different ESC models. The results show that ESC-oriented LLMs exhibit superior ESC abilities compared to general AI-assistant LLMs, but there is still a gap behind human performance. Moreover, to automate the scoring process for future ESC models, we developed ESC-RANK, which trained on the annotated data, achieving a scoring performance surpassing 35 points of GPT-4.",
}
| Emotion Support Conversation (ESC) is a crucial application, which aims to reduce human stress, offer emotional guidance, and ultimately enhance human mental and physical well-being. With the advancement of Large Language Models (LLMs), many researchers have employed LLMs as the ESC models. However, the evaluation of these LLM-based ESCs remains uncertain. In detail, we first re-organize 2,801 role-playing cards from seven existing datasets to define the roles of the role-playing agent. Second, we train a specific role-playing model called ESC-Role which behaves more like a confused person than GPT-4. Third, through ESC-Role and organized role cards, we systematically conduct experiments using 14 LLMs as the ESC models, including general AI-assistant LLMs (e.g., ChatGPT) and ESC-oriented LLMs (e.g., ExTES-Llama). We conduct comprehensive human annotations on interactive multi-turn dialogues of different ESC models. The results show that ESC-oriented LLMs exhibit superior ESC abilities compared to general AI-assistant LLMs, but there is still a gap relative to human performance. Moreover, to automate the scoring process for future ESC models, we developed ESC-RANK, which is trained on the annotated data, achieving a scoring performance surpassing that of GPT-4 by 35 points. | [
"Zhao, Haiquan",
"Li, Lingyu",
"Chen, Shisong",
"Kong, Shuqi",
"Wang, Jiaan",
"Huang, Kexin",
"Gu, Tianle",
"Wang, Yixu",
"Wang, Jian",
"D",
"an, Liang",
"Li, Zhixu",
"Teng, Yan",
"Xiao, Yanghua",
"Wang, Yingchun"
] | ESC-Eval: Evaluating Emotion Support Conversations in Large Language Models | emnlp-main.883 | Poster | 2406.14952 | [
"https://github.com/haidequanbu/esc-eval"
] | https://huggingface.co/papers/2406.14952 | 1 | 0 | 0 | 13 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.884.bib | https://aclanthology.org/2024.emnlp-main.884/ | @inproceedings{mukherjee-etal-2024-cultural,
title = "Cultural Conditioning or Placebo? On the Effectiveness of Socio-Demographic Prompting",
author = "Mukherjee, Sagnik and
Adilazuarda, Muhammad Farid and
Sitaram, Sunayana and
Bali, Kalika and
Aji, Alham Fikri and
Choudhury, Monojit",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.884",
pages = "15811--15837",
abstract = "Socio-demographic prompting is a commonly employed approach to study cultural biases in LLMs as well as for aligning models to certain cultures. In this paper, we systematically probe four LLMs (Llama 3, Mistral v0.2, GPT-3.5 Turbo and GPT4) with prompts that are conditioned on culturally sensitive and non-sensitive cues, on datasets that are supposed to be culturally sensitive (EtiCor and CALI) or neutral (MMLU and ETHICS). We observe that all models except GPT4 show significant variations in their responses on both kinds of datasets for both kinds of prompts, casting doubt on the robustness of the culturally-conditioned prompting as a method for eliciting cultural bias in models that are not sufficiently stable with respect to arbitrary prompting cues. Further, we also show that some of the supposedly culturally neutral datasets have a non-trivial fraction of culturally sensitive questions/tasks.",
}
| Socio-demographic prompting is a commonly employed approach to study cultural biases in LLMs as well as for aligning models to certain cultures. In this paper, we systematically probe four LLMs (Llama 3, Mistral v0.2, GPT-3.5 Turbo and GPT4) with prompts that are conditioned on culturally sensitive and non-sensitive cues, on datasets that are supposed to be culturally sensitive (EtiCor and CALI) or neutral (MMLU and ETHICS). We observe that all models except GPT4 show significant variations in their responses on both kinds of datasets for both kinds of prompts, casting doubt on the robustness of the culturally-conditioned prompting as a method for eliciting cultural bias in models that are not sufficiently stable with respect to arbitrary prompting cues. Further, we also show that some of the supposedly culturally neutral datasets have a non-trivial fraction of culturally sensitive questions/tasks. | [
"Mukherjee, Sagnik",
"Adilazuarda, Muhammad Farid",
"Sitaram, Sunayana",
"Bali, Kalika",
"Aji, Alham Fikri",
"Choudhury, Monojit"
] | Cultural Conditioning or Placebo? On the Effectiveness of Socio-Demographic Prompting | emnlp-main.884 | Poster | 2406.11661 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.885.bib | https://aclanthology.org/2024.emnlp-main.885/ | @inproceedings{yu-etal-2024-text,
title = "Text Fluoroscopy: Detecting {LLM}-Generated Text through Intrinsic Features",
author = "Yu, Xiao and
Chen, Kejiang and
Yang, Qi and
Zhang, Weiming and
Yu, Nenghai",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.885",
pages = "15838--15846",
abstract = "Large language models (LLMs) have revolutionized the domain of natural language processing because of their excellent performance on various tasks. Despite their impressive capabilities, LLMs also have the potential to generate texts that pose risks of misuse. Consequently, detecting LLM-generated text has become increasingly important.Previous LLM-generated text detection methods use semantic features, which are stored in the last layer. This leads to methods that overfit the training set domain and exhibit shortcomings in generalization. Therefore, We argue that utilizing intrinsic features rather than semantic features for detection results in better performance.In this work, we design Text Fluoroscopy, a black-box method with better generalizability for detecting LLM-generated text by mining the intrinsic features of the text to be detected. Our method captures the text{'}s intrinsic features by identifying the layer with the largest distribution difference from the last and first layers when projected to the vocabulary space.Our method achieves 7.36{\%} and 2.84{\%} average improvement in detection performance compared to the baselines in detecting texts from different domains generated by GPT-4 and Claude3, respectively.",
}
| Large language models (LLMs) have revolutionized the domain of natural language processing because of their excellent performance on various tasks. Despite their impressive capabilities, LLMs also have the potential to generate texts that pose risks of misuse. Consequently, detecting LLM-generated text has become increasingly important. Previous LLM-generated text detection methods use semantic features, which are stored in the last layer. This leads to methods that overfit the training set domain and exhibit shortcomings in generalization. Therefore, we argue that utilizing intrinsic features rather than semantic features for detection results in better performance. In this work, we design Text Fluoroscopy, a black-box method with better generalizability for detecting LLM-generated text by mining the intrinsic features of the text to be detected. Our method captures the text{'}s intrinsic features by identifying the layer with the largest distribution difference from the last and first layers when projected to the vocabulary space. Our method achieves 7.36{\%} and 2.84{\%} average improvement in detection performance compared to the baselines in detecting texts from different domains generated by GPT-4 and Claude3, respectively. | [
"Yu, Xiao",
"Chen, Kejiang",
"Yang, Qi",
"Zhang, Weiming",
"Yu, Nenghai"
] | Text Fluoroscopy: Detecting LLM-Generated Text through Intrinsic Features | emnlp-main.885 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.886.bib | https://aclanthology.org/2024.emnlp-main.886/ | @inproceedings{masud-etal-2024-hate,
title = "Hate Personified: Investigating the role of {LLM}s in content moderation",
author = "Masud, Sarah and
Singh, Sahajpreet and
Hangya, Viktor and
Fraser, Alexander and
Chakraborty, Tanmoy",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.886",
pages = "15847--15863",
abstract = "For subjective tasks such as hate detection, where people perceive hate differently, the Large Language Model{'}s (LLM) ability to represent diverse groups is unclear. By including additional context in prompts, we comprehensively analyze LLM{'}s sensitivity to geographical priming, persona attributes, and numerical information to assess how well the needs of various groups are reflected. Our findings on two LLMs, five languages, and six datasets reveal that mimicking persona-based attributes leads to annotation variability. Meanwhile, incorporating geographical signals leads to better regional alignment. We also find that the LLMs are sensitive to numerical anchors, indicating the ability to leverage community-based flagging efforts and exposure to adversaries. Our work provides preliminary guidelines and highlights the nuances of applying LLMs in culturally sensitive cases.",
}
| For subjective tasks such as hate detection, where people perceive hate differently, the Large Language Model{'}s (LLM) ability to represent diverse groups is unclear. By including additional context in prompts, we comprehensively analyze LLM{'}s sensitivity to geographical priming, persona attributes, and numerical information to assess how well the needs of various groups are reflected. Our findings on two LLMs, five languages, and six datasets reveal that mimicking persona-based attributes leads to annotation variability. Meanwhile, incorporating geographical signals leads to better regional alignment. We also find that the LLMs are sensitive to numerical anchors, indicating the ability to leverage community-based flagging efforts and exposure to adversaries. Our work provides preliminary guidelines and highlights the nuances of applying LLMs in culturally sensitive cases. | [
"Masud, Sarah",
"Singh, Sahajpreet",
"Hangya, Viktor",
"Fraser, Alex",
"er",
"Chakraborty, Tanmoy"
] | Hate Personified: Investigating the role of LLMs in content moderation | emnlp-main.886 | Poster | 2410.02657 | [
"https://github.com/sahajps/Hate-Personified"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.887.bib | https://aclanthology.org/2024.emnlp-main.887/ | @inproceedings{bajpai-etal-2024-temporally,
title = "Temporally Consistent Factuality Probing for Large Language Models",
author = "Bajpai, Ashutosh and
Goyal, Aaryan and
Anwer, Atif and
Chakraborty, Tanmoy",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.887",
pages = "15864--15881",
abstract = "The prolific use of Large Language Models (LLMs) as an alternate knowledge base requires them to be factually consistent, necessitating both correctness and consistency traits for paraphrased queries. Recently, significant attempts have been made to benchmark datasets and metrics to evaluate LLMs for these traits. However, structural simplicity (subject-relation-object) and contemporary association in their query formulation limit the broader definition of factuality and consistency. In this study, we introduce TeCFaP, a novel Temporally Consistent Factuality Probe task to expand the consistent factuality probe in the temporal dimension. To this end, we propose TEMP-COFAC, a high-quality dataset of prefix-style English query paraphrases. Subsequently, we extend the definitions of existing metrics to represent consistent factuality across temporal dimension. We experiment with a diverse set of LLMs and find most of them performing poorly on TeCFaP. Next, we propose a novel solution CoTSeLF (Consistent-Time-Sensitive Learning Framework) combining multi-task instruction tuning (MT-IT) with consistent-time-sensitive reinforcement learning (CTSRL) to improve temporally consistent factuality in LLMs. Our experiments demonstrate the efficacy of CoTSeLF over several baselines.",
}
| The prolific use of Large Language Models (LLMs) as an alternate knowledge base requires them to be factually consistent, necessitating both correctness and consistency traits for paraphrased queries. Recently, significant attempts have been made to benchmark datasets and metrics to evaluate LLMs for these traits. However, structural simplicity (subject-relation-object) and contemporary association in their query formulation limit the broader definition of factuality and consistency. In this study, we introduce TeCFaP, a novel Temporally Consistent Factuality Probe task to expand the consistent factuality probe in the temporal dimension. To this end, we propose TEMP-COFAC, a high-quality dataset of prefix-style English query paraphrases. Subsequently, we extend the definitions of existing metrics to represent consistent factuality across the temporal dimension. We experiment with a diverse set of LLMs and find most of them performing poorly on TeCFaP. Next, we propose a novel solution CoTSeLF (Consistent-Time-Sensitive Learning Framework) combining multi-task instruction tuning (MT-IT) with consistent-time-sensitive reinforcement learning (CTSRL) to improve temporally consistent factuality in LLMs. Our experiments demonstrate the efficacy of CoTSeLF over several baselines. | [
"Bajpai, Ashutosh",
"Goyal, Aaryan",
"Anwer, Atif",
"Chakraborty, Tanmoy"
] | Temporally Consistent Factuality Probing for Large Language Models | emnlp-main.887 | Oral | 2409.14065 | [
"https://github.com/ab-iitd/tecfap"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.888.bib | https://aclanthology.org/2024.emnlp-main.888/ | @inproceedings{li-etal-2024-comparison,
title = "A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives",
author = {Li, Zihao and
Ji, Shaoxiong and
Mickus, Timothee and
Segonne, Vincent and
Tiedemann, J{\"o}rg},
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.888",
pages = "15882--15894",
abstract = "Pretrained language models (PLMs) display impressive performances and have captured the attention of the NLP community.Establishing best practices in pretraining has, therefore, become a major focus of NLP research, especially since insights gained from monolingual English models may not necessarily apply to more complex multilingual models.One significant caveat of the current state of the art is that different works are rarely comparable: they often discuss different parameter counts, training data, and evaluation methodology.This paper proposes a comparison of multilingual pretraining objectives in a controlled methodological environment. We ensure that training data and model architectures are comparable, and discuss the downstream performances across 6 languages that we observe in probing and fine-tuning scenarios.We make two key observations: (1) the architecture dictates which pretraining objective is optimal; (2) multilingual translation is a very effective pretraining objective under the right conditions.We make our code, data, and model weights available at https://github.com/Helsinki-NLP/lm-vs-mt.",
}
| Pretrained language models (PLMs) display impressive performances and have captured the attention of the NLP community. Establishing best practices in pretraining has, therefore, become a major focus of NLP research, especially since insights gained from monolingual English models may not necessarily apply to more complex multilingual models. One significant caveat of the current state of the art is that different works are rarely comparable: they often discuss different parameter counts, training data, and evaluation methodology. This paper proposes a comparison of multilingual pretraining objectives in a controlled methodological environment. We ensure that training data and model architectures are comparable, and discuss the downstream performances across 6 languages that we observe in probing and fine-tuning scenarios. We make two key observations: (1) the architecture dictates which pretraining objective is optimal; (2) multilingual translation is a very effective pretraining objective under the right conditions. We make our code, data, and model weights available at https://github.com/Helsinki-NLP/lm-vs-mt. | [
"Li, Zihao",
"Ji, Shaoxiong",
"Mickus, Timothee",
"Segonne, Vincent",
"Tiedemann, J{\\\"o}rg"
] | A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives | emnlp-main.888 | Poster | 2407.15489 | [
"https://github.com/helsinki-nlp/lm-vs-mt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.889.bib | https://aclanthology.org/2024.emnlp-main.889/ | @inproceedings{bajpai-etal-2024-llms,
title = "Can {LLM}s replace Neil de{G}rasse Tyson? Evaluating the Reliability of {LLM}s as Science Communicators",
author = "Bajpai, Prasoon and
Chatterjee, Niladri and
Dutta, Subhabrata and
Chakraborty, Tanmoy",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.889",
pages = "15895--15912",
abstract = "Large Language Models (LLMs) and AI assistants driven by these models are experiencing exponential growth in usage among both expert and amateur users. In this work, we focus on evaluating the reliability of current LLMs as science communicators. Unlike existing benchmarks, our approach emphasizes assessing these models on scientific question-answering tasks that require a nuanced understanding and awareness of answerability. We introduce a novel dataset, SCiPS-QA, comprising 742 Yes/No queries embedded in complex scientific concepts, along with a benchmarking suite that evaluates LLMs for correctness and consistency across various criteria. We benchmark three proprietary LLMs from the OpenAI GPT family and 13 open-access LLMs from the Meta Llama-2, Llama-3, and Mistral families. While most open-access models significantly underperform compared to GPT-4 Turbo, our experiments identify Llama-3-70B as a strong competitor, often surpassing GPT-4 Turbo in various evaluation aspects. We also find that even the GPT models exhibit a general incompetence in reliably verifying LLM responses. Moreover, we observe an alarming trend where human evaluators are deceived by incorrect responses from GPT-4 Turbo.",
}
| Large Language Models (LLMs) and AI assistants driven by these models are experiencing exponential growth in usage among both expert and amateur users. In this work, we focus on evaluating the reliability of current LLMs as science communicators. Unlike existing benchmarks, our approach emphasizes assessing these models on scientific question-answering tasks that require a nuanced understanding and awareness of answerability. We introduce a novel dataset, SCiPS-QA, comprising 742 Yes/No queries embedded in complex scientific concepts, along with a benchmarking suite that evaluates LLMs for correctness and consistency across various criteria. We benchmark three proprietary LLMs from the OpenAI GPT family and 13 open-access LLMs from the Meta Llama-2, Llama-3, and Mistral families. While most open-access models significantly underperform compared to GPT-4 Turbo, our experiments identify Llama-3-70B as a strong competitor, often surpassing GPT-4 Turbo in various evaluation aspects. We also find that even the GPT models exhibit a general incompetence in reliably verifying LLM responses. Moreover, we observe an alarming trend where human evaluators are deceived by incorrect responses from GPT-4 Turbo. | [
"Bajpai, Prasoon",
"Chatterjee, Niladri",
"Dutta, Subhabrata",
"Chakraborty, Tanmoy"
] | Can LLMs replace Neil deGrasse Tyson? Evaluating the Reliability of LLMs as Science Communicators | emnlp-main.889 | Poster | 2409.14037 | [
"https://github.com/Prasoon1207/llm-science-miscommunication"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.890.bib | https://aclanthology.org/2024.emnlp-main.890/ | @inproceedings{zhu-etal-2024-llama,
title = "{LL}a{MA}-{M}o{E}: Building Mixture-of-Experts from {LL}a{MA} with Continual Pre-Training",
author = "Zhu, Tong and
Qu, Xiaoye and
Dong, Daize and
Ruan, Jiacheng and
Tong, Jingqi and
He, Conghui and
Cheng, Yu",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.890",
pages = "15913--15923",
abstract = "Mixture-of-Experts (MoE) has gained increasing popularity as a promising framework for scaling up large language models (LLMs). However, training MoE from scratch in a large-scale setting still suffers from data-hungry and instability problems. Motivated by this limit, we investigate building MoE models from existing dense large language models. Specifically, based on the well-known LLaMA-2 7B model, we obtain an MoE model by: (1) Expert Construction, which partitions the parameters of original Feed-Forward Networks (FFNs) into multiple experts; (2) Continual pre-training, which further trains the transformed MoE model and additional gate networks. In this paper, we comprehensively explore different methods for expert construction and various data sampling strategies for continual pre-training. After these stages, our LLaMA-MoE models could maintain language abilities and route the input tokens to specific experts with part of the parameters activated. Empirically, by training 200B tokens, LLaMA-MoE-3.5B models significantly outperform dense models that contain similar activation parameters.",
}
| Mixture-of-Experts (MoE) has gained increasing popularity as a promising framework for scaling up large language models (LLMs). However, training MoE from scratch in a large-scale setting still suffers from data hunger and instability problems. Motivated by this limitation, we investigate building MoE models from existing dense large language models. Specifically, based on the well-known LLaMA-2 7B model, we obtain an MoE model by: (1) Expert Construction, which partitions the parameters of original Feed-Forward Networks (FFNs) into multiple experts; (2) Continual pre-training, which further trains the transformed MoE model and additional gate networks. In this paper, we comprehensively explore different methods for expert construction and various data sampling strategies for continual pre-training. After these stages, our LLaMA-MoE models could maintain language abilities and route the input tokens to specific experts with part of the parameters activated. Empirically, by training on 200B tokens, LLaMA-MoE-3.5B models significantly outperform dense models that contain similar activation parameters. | [
"Zhu, Tong",
"Qu, Xiaoye",
"Dong, Daize",
"Ruan, Jiacheng",
"Tong, Jingqi",
"He, Conghui",
"Cheng, Yu"
] | LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-Training | emnlp-main.890 | Poster | 2406.16554 | [
"https://github.com/pjlab-sys4nlp/llama-moe"
] | https://huggingface.co/papers/2406.16554 | 4 | 1 | 0 | 7 | [
"llama-moe/LLaMA-MoE-v1-3_5B-4_16",
"llama-moe/LLaMA-MoE-v1-3_5B-2_8",
"llama-moe/LLaMA-MoE-v1-3_0B-2_16",
"llama-moe/LLaMA-MoE-v1-3_5B-2_8-sft",
"llama-moe/LLaMA-MoE-v1-3_0B-2_16-sft",
"llama-moe/LLaMA-MoE-v1-3_5B-4_16-sft"
] | [] | [] | [
"llama-moe/LLaMA-MoE-v1-3_5B-4_16",
"llama-moe/LLaMA-MoE-v1-3_5B-2_8",
"llama-moe/LLaMA-MoE-v1-3_0B-2_16",
"llama-moe/LLaMA-MoE-v1-3_5B-2_8-sft",
"llama-moe/LLaMA-MoE-v1-3_0B-2_16-sft",
"llama-moe/LLaMA-MoE-v1-3_5B-4_16-sft"
] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.891.bib | https://aclanthology.org/2024.emnlp-main.891/ | @inproceedings{hu-etal-2024-themis,
title = "Themis: A Reference-free {NLG} Evaluation Language Model with Flexibility and Interpretability",
author = "Hu, Xinyu and
Lin, Li and
Gao, Mingqi and
Yin, Xunjian and
Wan, Xiaojun",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.891",
pages = "15924--15951",
abstract = "The evaluation of natural language generation (NLG) tasks is a significant and longstanding research area. With the recent emergence of powerful large language models (LLMs), some studies have turned to LLM-based automatic evaluation methods, which demonstrate great potential to become a new evaluation paradigm following traditional string-based and model-based metrics. However, despite the improved performance of existing methods, they still possess some deficiencies, such as dependency on references and limited evaluation flexibility. Therefore, in this paper, we meticulously construct a large-scale NLG evaluation corpus **NLG-Eval** with annotations from both human and GPT-4 to alleviate the lack of relevant data in this field. Furthermore, we propose **Themis**, an LLM dedicated to NLG evaluation, which has been trained with our designed multi-perspective consistency verification and rating-oriented preference alignment methods. Themis can conduct flexible and interpretable evaluations without references, and it exhibits superior evaluation performance on various NLG tasks, simultaneously generalizing well to unseen tasks and surpassing other evaluation models, including GPT-4.",
}
| The evaluation of natural language generation (NLG) tasks is a significant and longstanding research area. With the recent emergence of powerful large language models (LLMs), some studies have turned to LLM-based automatic evaluation methods, which demonstrate great potential to become a new evaluation paradigm following traditional string-based and model-based metrics. However, despite the improved performance of existing methods, they still possess some deficiencies, such as dependency on references and limited evaluation flexibility. Therefore, in this paper, we meticulously construct a large-scale NLG evaluation corpus **NLG-Eval** with annotations from both human and GPT-4 to alleviate the lack of relevant data in this field. Furthermore, we propose **Themis**, an LLM dedicated to NLG evaluation, which has been trained with our designed multi-perspective consistency verification and rating-oriented preference alignment methods. Themis can conduct flexible and interpretable evaluations without references, and it exhibits superior evaluation performance on various NLG tasks, simultaneously generalizing well to unseen tasks and surpassing other evaluation models, including GPT-4. | [
"Hu, Xinyu",
"Lin, Li",
"Gao, Mingqi",
"Yin, Xunjian",
"Wan, Xiaojun"
] | Themis: A Reference-free NLG Evaluation Language Model with Flexibility and Interpretability | emnlp-main.891 | Poster | 2406.18365 | [
"https://github.com/PKU-ONELab/Themis"
] | https://huggingface.co/papers/2406.18365 | 2 | 0 | 0 | 5 | [
"PKU-ONELab/Themis",
"RichardErkhov/PKU-ONELab_-_Themis-gguf"
] | [] | [] | [
"PKU-ONELab/Themis",
"RichardErkhov/PKU-ONELab_-_Themis-gguf"
] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.892.bib | https://aclanthology.org/2024.emnlp-main.892/ | @inproceedings{ju-etal-2024-mitigating,
title = "Mitigating Training Imbalance in {LLM} Fine-Tuning via Selective Parameter Merging",
author = "Ju, Yiming and
Ni, Ziyi and
Xing, Xingrun and
Zeng, Zhixiong and
Zhao, Hanyu and
Fan, Siqi and
Zhang, Zheng",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.892",
pages = "15952--15959",
abstract = "Supervised fine-tuning (SFT) is crucial for adapting Large Language Models (LLMs) to specific tasks. In this work, we demonstrate that the order of training data can lead to significant training imbalances, potentially resulting in performance degradation. Consequently, we propose to mitigate this imbalance by merging SFT models fine-tuned with different data orders, thereby enhancing the overall effectiveness of SFT. Additionally, we introduce a novel technique, {``}parameter-selection merging,{''} which outperforms traditional weighted-average methods on five datasets. Further, through analysis and ablation studies, we validate the effectiveness of our method and identify the sources of performance improvements.",
}
| Supervised fine-tuning (SFT) is crucial for adapting Large Language Models (LLMs) to specific tasks. In this work, we demonstrate that the order of training data can lead to significant training imbalances, potentially resulting in performance degradation. Consequently, we propose to mitigate this imbalance by merging SFT models fine-tuned with different data orders, thereby enhancing the overall effectiveness of SFT. Additionally, we introduce a novel technique, {``}parameter-selection merging,{''} which outperforms traditional weighted-average methods on five datasets. Further, through analysis and ablation studies, we validate the effectiveness of our method and identify the sources of performance improvements. | [
"Ju, Yiming",
"Ni, Ziyi",
"Xing, Xingrun",
"Zeng, Zhixiong",
"Zhao, Hanyu",
"Fan, Siqi",
"Zhang, Zheng"
] | Mitigating Training Imbalance in LLM Fine-Tuning via Selective Parameter Merging | emnlp-main.892 | Poster | 2410.03743 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.893.bib | https://aclanthology.org/2024.emnlp-main.893/ | @inproceedings{spilsbury-etal-2024-generating,
title = "Generating Demonstrations for In-Context Compositional Generalization in Grounded Language Learning",
author = "Spilsbury, Sam and
Marttinen, Pekka and
Ilin, Alexander",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.893",
pages = "15960--15991",
abstract = "In-Context-learning and few-shot prompting are viable methods compositional output generation. However, these methods can be very sensitive to the choice of support examples used. Retrieving good supports from the training data for a given test query is already a difficult problem, but in some cases solving this may not even be enough. We consider the setting of grounded language learning problems where finding relevant supports in the same or similar states as the query may be difficult. We design an agent which instead generates possible supports inputs and targets current state of the world, then uses them in-context-learning to solve the test query. We show substantially improved performance on a previously unsolved compositional generalization test without a loss of performance in other areas. The approach is general and can even scale to instructions expressed in natural language.",
}
| In-context learning and few-shot prompting are viable methods for compositional output generation. However, these methods can be very sensitive to the choice of support examples used. Retrieving good supports from the training data for a given test query is already a difficult problem, but in some cases solving this may not even be enough. We consider the setting of grounded language learning problems where finding relevant supports in the same or similar states as the query may be difficult. We design an agent which instead generates possible supports (inputs and targets) given the current state of the world, then uses them in in-context learning to solve the test query. We show substantially improved performance on a previously unsolved compositional generalization test without a loss of performance in other areas. The approach is general and can even scale to instructions expressed in natural language. | [
"Spilsbury, Sam",
"Marttinen, Pekka",
"Ilin, Alex",
"er"
] | Generating Demonstrations for In-Context Compositional Generalization in Grounded Language Learning | emnlp-main.893 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.894.bib | https://aclanthology.org/2024.emnlp-main.894/ | @inproceedings{zeng-etal-2024-fame,
title = "{FAME}: Towards Factual Multi-Task Model Editing",
author = "Zeng, Li and
Shan, Yingyu and
Liu, Zeming and
Yao, Jiashu and
Guo, Yuhang",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.894",
pages = "15992--16011",
abstract = "Large language models (LLMs) embed extensive knowledge and utilize it to perform exceptionally well across various tasks. Nevertheless, outdated knowledge or factual errors within LLMs can lead to misleading or incorrect responses, causing significant issues in practical applications. To rectify the fatal flaw without the necessity for costly model retraining, various model editing approaches have been proposed to correct inaccurate information within LLMs in a cost-efficient way. To evaluate these model editing methods, previous work introduced a series of datasets. However, most of the previous datasets only contain fabricated data in a single format, which diverges from real-world model editing scenarios, raising doubts about their usability in practice. To facilitate the application of model editing in real-world scenarios, we propose the challenge of practicality. To resolve such challenges and effectively enhance the capabilities of LLMs, we present FAME, an authentic, comprehensive, and multi-task dataset, which is designed to enhance the practicality of model editing. We then propose SKEME, a model editing method that uses a novel caching mechanism to ensure synchronization with the real world. The experiments demonstrate that our method performs excellently across various tasks and scenarios, confirming its practicality.",
}
| Large language models (LLMs) embed extensive knowledge and utilize it to perform exceptionally well across various tasks. Nevertheless, outdated knowledge or factual errors within LLMs can lead to misleading or incorrect responses, causing significant issues in practical applications. To rectify the fatal flaw without the necessity for costly model retraining, various model editing approaches have been proposed to correct inaccurate information within LLMs in a cost-efficient way. To evaluate these model editing methods, previous work introduced a series of datasets. However, most of the previous datasets only contain fabricated data in a single format, which diverges from real-world model editing scenarios, raising doubts about their usability in practice. To facilitate the application of model editing in real-world scenarios, we propose the challenge of practicality. To resolve such challenges and effectively enhance the capabilities of LLMs, we present FAME, an authentic, comprehensive, and multi-task dataset, which is designed to enhance the practicality of model editing. We then propose SKEME, a model editing method that uses a novel caching mechanism to ensure synchronization with the real world. The experiments demonstrate that our method performs excellently across various tasks and scenarios, confirming its practicality. | [
"Zeng, Li",
"Shan, Yingyu",
"Liu, Zeming",
"Yao, Jiashu",
"Guo, Yuhang"
] | FAME: Towards Factual Multi-Task Model Editing | emnlp-main.894 | Poster | 2410.10859 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.895.bib | https://aclanthology.org/2024.emnlp-main.895/ | @inproceedings{pi-etal-2024-mllm,
title = "{MLLM}-Protector: Ensuring {MLLM}{'}s Safety without Hurting Performance",
author = "Pi, Renjie and
Han, Tianyang and
Zhang, Jianshu and
Xie, Yueqi and
Pan, Rui and
Lian, Qing and
Dong, Hanze and
Zhang, Jipeng and
Zhang, Tong",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.895",
pages = "16012--16027",
abstract = "The deployment of multimodal large language models (MLLMs) has brought forth a unique vulnerability: susceptibility to malicious attacks through visual inputs. This paper investigates the novel challenge of defending MLLMs against such attacks. Compared to large language models (LLMs), MLLMs include an additional image modality. We discover that images act as a {``}foreign language{''} that is not considered during safety alignment, making MLLMs more prone to producing harmful responses. Unfortunately, unlike the discrete tokens considered in text-based LLMs, the continuous nature of image signals presents significant alignment challenges, which poses difficulty to thoroughly cover all possible scenarios. This vulnerability is exacerbated by the fact that most state-of-the-art MLLMs are fine-tuned on limited image-text pairs that are much fewer than the extensive text-based pretraining corpus, which makes the MLLMs more prone to catastrophic forgetting of their original abilities during safety fine-tuning. To tackle these challenges, we introduce MLLM-Protector, a plug-and-play strategy that solves two subtasks: 1) identifying harmful responses via a lightweight harm detector, and 2) transforming harmful responses into harmless ones via a detoxifier. This approach effectively mitigates the risks posed by malicious visual inputs without compromising the original performance of MLLMs. Our results demonstrate that MLLM-Protector offers a robust solution to a previously unaddressed aspect of MLLM security.",
}
| The deployment of multimodal large language models (MLLMs) has brought forth a unique vulnerability: susceptibility to malicious attacks through visual inputs. This paper investigates the novel challenge of defending MLLMs against such attacks. Compared to large language models (LLMs), MLLMs include an additional image modality. We discover that images act as a {``}foreign language{''} that is not considered during safety alignment, making MLLMs more prone to producing harmful responses. Unfortunately, unlike the discrete tokens considered in text-based LLMs, the continuous nature of image signals presents significant alignment challenges, which poses difficulty to thoroughly cover all possible scenarios. This vulnerability is exacerbated by the fact that most state-of-the-art MLLMs are fine-tuned on limited image-text pairs that are much fewer than the extensive text-based pretraining corpus, which makes the MLLMs more prone to catastrophic forgetting of their original abilities during safety fine-tuning. To tackle these challenges, we introduce MLLM-Protector, a plug-and-play strategy that solves two subtasks: 1) identifying harmful responses via a lightweight harm detector, and 2) transforming harmful responses into harmless ones via a detoxifier. This approach effectively mitigates the risks posed by malicious visual inputs without compromising the original performance of MLLMs. Our results demonstrate that MLLM-Protector offers a robust solution to a previously unaddressed aspect of MLLM security. | [
"Pi, Renjie",
"Han, Tianyang",
"Zhang, Jianshu",
"Xie, Yueqi",
"Pan, Rui",
"Lian, Qing",
"Dong, Hanze",
"Zhang, Jipeng",
"Zhang, Tong"
] | MLLM-Protector: Ensuring MLLM's Safety without Hurting Performance | emnlp-main.895 | Poster | 2401.02906 | [
"https://github.com/pipilurj/mllm-protector"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.896.bib | https://aclanthology.org/2024.emnlp-main.896/ | @inproceedings{li-etal-2024-leveraging-large,
title = "Leveraging Large Language Models for {NLG} Evaluation: Advances and Challenges",
author = "Li, Zhen and
Xu, Xiaohan and
Shen, Tao and
Xu, Can and
Gu, Jia-Chen and
Lai, Yuxuan and
Tao, Chongyang and
Ma, Shuai",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.896",
pages = "16028--16045",
abstract = "In the rapidly evolving domain of Natural Language Generation (NLG) evaluation, introducing Large Language Models (LLMs) has opened new avenues for assessing generated content quality, e.g., coherence, creativity, and context relevance. This paper aims to provide a thorough overview of leveraging LLMs for NLG evaluation, a burgeoning area that lacks a systematic analysis. We propose a coherent taxonomy for organizing existing LLM-based evaluation metrics, offering a structured framework to understand and compare these methods. Our detailed exploration includes critically assessing various LLM-based methodologies, as well as comparing their strengths and limitations in evaluating NLG outputs. By discussing unresolved challenges, including bias, robustness, domain-specificity, and unified evaluation, this paper seeks to offer insights to researchers and advocate for fairer and more advanced NLG evaluation techniques.",
}
| In the rapidly evolving domain of Natural Language Generation (NLG) evaluation, introducing Large Language Models (LLMs) has opened new avenues for assessing generated content quality, e.g., coherence, creativity, and context relevance. This paper aims to provide a thorough overview of leveraging LLMs for NLG evaluation, a burgeoning area that lacks a systematic analysis. We propose a coherent taxonomy for organizing existing LLM-based evaluation metrics, offering a structured framework to understand and compare these methods. Our detailed exploration includes critically assessing various LLM-based methodologies, as well as comparing their strengths and limitations in evaluating NLG outputs. By discussing unresolved challenges, including bias, robustness, domain-specificity, and unified evaluation, this paper seeks to offer insights to researchers and advocate for fairer and more advanced NLG evaluation techniques. | [
"Li, Zhen",
"Xu, Xiaohan",
"Shen, Tao",
"Xu, Can",
"Gu, Jia-Chen",
"Lai, Yuxuan",
"Tao, Chongyang",
"Ma, Shuai"
] | Leveraging Large Language Models for NLG Evaluation: Advances and Challenges | emnlp-main.896 | Poster | 2401.07103 | [
"https://github.com/chongyangtao/LLMs-for-NLG-Evaluation"
] | https://huggingface.co/papers/2401.07103 | 0 | 4 | 1 | 6 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.897.bib | https://aclanthology.org/2024.emnlp-main.897/ | @inproceedings{kim-etal-2024-infinipot,
title = "{I}nfini{P}ot: Infinite Context Processing on Memory-Constrained {LLM}s",
author = "Kim, Minsoo and
Shim, Kyuhong and
Choi, Jungwook and
Chang, Simyung",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.897",
pages = "16046--16060",
abstract = "Handling long input contexts remains a significant challenge for Large Language Models (LLMs), particularly in resource-constrained environments such as mobile devices. Our work aims to address this limitation by introducing InfiniPot, a novel KV cache control framework designed to enable pre-trained LLMs to manage extensive sequences within fixed memory constraints efficiently, without requiring additional training. InfiniPot leverages Continual Context Distillation (CCD), an iterative process that compresses and retains essential information through novel importance metrics, effectively maintaining critical data even without access to future context. Our comprehensive evaluations indicate that InfiniPot significantly outperforms models trained for long contexts in various NLP tasks, establishing its efficacy and versatility. This work represents a substantial advancement toward making LLMs applicable to a broader range of real-world scenarios.",
}
| Handling long input contexts remains a significant challenge for Large Language Models (LLMs), particularly in resource-constrained environments such as mobile devices. Our work aims to address this limitation by introducing InfiniPot, a novel KV cache control framework designed to enable pre-trained LLMs to manage extensive sequences within fixed memory constraints efficiently, without requiring additional training. InfiniPot leverages Continual Context Distillation (CCD), an iterative process that compresses and retains essential information through novel importance metrics, effectively maintaining critical data even without access to future context. Our comprehensive evaluations indicate that InfiniPot significantly outperforms models trained for long contexts in various NLP tasks, establishing its efficacy and versatility. This work represents a substantial advancement toward making LLMs applicable to a broader range of real-world scenarios. | [
"Kim, Minsoo",
"Shim, Kyuhong",
"Choi, Jungwook",
"Chang, Simyung"
] | InfiniPot: Infinite Context Processing on Memory-Constrained LLMs | emnlp-main.897 | Poster | 2410.01518 | [
""
] | https://huggingface.co/papers/2410.01518 | 1 | 2 | 2 | 4 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.898.bib | https://aclanthology.org/2024.emnlp-main.898/ | @inproceedings{wang-etal-2024-videoclip,
title = "{V}ideo{CLIP}-{XL}: Advancing Long Description Understanding for Video {CLIP} Models",
author = "Wang, Jiapeng and
Wang, Chengyu and
Huang, Kunzhe and
Huang, Jun and
Jin, Lianwen",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.898",
pages = "16061--16075",
abstract = "Contrastive Language-Image Pre-training (CLIP) has been widely studied and applied in numerous applications. However, the emphasis on brief summary texts during pre-training prevents CLIP from understanding long descriptions. This issue is particularly acute regarding videos given that videos often contain abundant detailed contents. In this paper, we propose the VideoCLIP-XL (eXtra Length) model, which aims to unleash the long-description understanding capability of video CLIP models. Firstly, we establish an automatic data collection system and gather a large-scale VILD pre-training dataset with VIdeo and Long-Description pairs. Then, we propose Text-similarity-guided Primary Component Matching (TPCM) to better learn the distribution of feature space while expanding the long description capability. We also introduce two new tasks namely Detail-aware Description Ranking (DDR) and Hallucination-aware Description Ranking (HDR) for further understanding improvement. Finally, we construct a Long Video Description Ranking (LVDR) benchmark for evaluating the long-description capability more comprehensively. Extensive experimental results on widely-used text-video retrieval benchmarks with both short and long descriptions and our LVDR benchmark can fully demonstrate the effectiveness of our method.",
}
| Contrastive Language-Image Pre-training (CLIP) has been widely studied and applied in numerous applications. However, the emphasis on brief summary texts during pre-training prevents CLIP from understanding long descriptions. This issue is particularly acute regarding videos, given that videos often contain abundant detailed content. In this paper, we propose the VideoCLIP-XL (eXtra Length) model, which aims to unleash the long-description understanding capability of video CLIP models. Firstly, we establish an automatic data collection system and gather a large-scale VILD pre-training dataset with VIdeo and Long-Description pairs. Then, we propose Text-similarity-guided Primary Component Matching (TPCM) to better learn the distribution of the feature space while expanding the long description capability. We also introduce two new tasks, namely Detail-aware Description Ranking (DDR) and Hallucination-aware Description Ranking (HDR), for further understanding improvement. Finally, we construct a Long Video Description Ranking (LVDR) benchmark for evaluating the long-description capability more comprehensively. Extensive experimental results on widely-used text-video retrieval benchmarks with both short and long descriptions and our LVDR benchmark can fully demonstrate the effectiveness of our method. | [
"Wang, Jiapeng",
"Wang, Chengyu",
"Huang, Kunzhe",
"Huang, Jun",
"Jin, Lianwen"
] | VideoCLIP-XL: Advancing Long Description Understanding for Video CLIP Models | emnlp-main.898 | Poster | 2410.00741 | [
""
] | https://huggingface.co/papers/2410.00741 | 0 | 1 | 0 | 5 | [
"alibaba-pai/VILD",
"alibaba-pai/LVDR",
"alibaba-pai/VideoCLIP-XL"
] | [] | [] | [
"alibaba-pai/VILD",
"alibaba-pai/LVDR",
"alibaba-pai/VideoCLIP-XL"
] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.899.bib | https://aclanthology.org/2024.emnlp-main.899/ | @inproceedings{kowshik-etal-2024-corrsynth,
title = "{C}orr{S}ynth - A Correlated Sampling Method for Diverse Dataset Generation from {LLM}s",
author = "Kowshik, Suhas S and
Divekar, Abhishek and
Malik, Vijit",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.899",
pages = "16076--16095",
abstract = "Large language models (LLMs) have demonstrated remarkable performance in diverse tasks using zero-shot and few-shot prompting. Even though their capabilities of data synthesis have been studied well in recent years, the generated data suffers from a lack of diversity, less adherence to the prompt, and potential biases that creep into the data from the generator model. In this work, we tackle the challenge of generating datasets with high diversity, upon which a student model is trained for downstream tasks. Taking the route of decoding-time guidance-based approaches, we propose CorrSynth, which generates data that is more diverse and faithful to the input prompt using a correlated sampling strategy. Further, our method overcomes the complexity drawbacks of some other guidance-based techniques like classifier-based guidance. With extensive experiments, we show the effectiveness of our approach and substantiate our claims. In particular, we perform intrinsic evaluation to show the improvements in diversity. Our experiments show that CorrSynth improves both student metrics and intrinsic metrics upon competitive baselines across four datasets, showing the innate advantage of our method.",
}
| Large language models (LLMs) have demonstrated remarkable performance in diverse tasks using zero-shot and few-shot prompting. Even though their data synthesis capabilities have been well studied in recent years, the generated data suffers from a lack of diversity, weak adherence to the prompt, and potential biases that creep into the data from the generator model. In this work, we tackle the challenge of generating datasets with high diversity, upon which a student model is trained for downstream tasks. Taking the route of decoding-time guidance-based approaches, we propose CorrSynth, which generates data that is more diverse and faithful to the input prompt using a correlated sampling strategy. Further, our method overcomes the complexity drawbacks of some other guidance-based techniques like classifier-based guidance. With extensive experiments, we show the effectiveness of our approach and substantiate our claims. In particular, we perform intrinsic evaluation to show the improvements in diversity. Our experiments show that CorrSynth improves both student metrics and intrinsic metrics over competitive baselines across four datasets, showing the innate advantage of our method. | [
"Kowshik, Suhas S",
"Divekar, Abhishek",
"Malik, Vijit"
] | CorrSynth - A Correlated Sampling Method for Diverse Dataset Generation from LLMs | emnlp-main.899 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.900.bib | https://aclanthology.org/2024.emnlp-main.900/ | @inproceedings{fierro-etal-2024-defining,
title = "Defining Knowledge: Bridging Epistemology and Large Language Models",
author = "Fierro, Constanza and
Dhar, Ruchira and
Stamatiou, Filippos and
Garneau, Nicolas and
S{\o}gaard, Anders",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.900",
pages = "16096--16111",
abstract = "Knowledge claims are abundant in the literature on large language models (LLMs); but can we say that GPT-4 truly {``}knows{''} the Earth is round? To address this question, we review standard definitions of knowledge in epistemology and we formalize interpretations applicable to LLMs. In doing so, we identify inconsistencies and gaps in how current NLP research conceptualizes knowledge with respect to epistemological frameworks. Additionally, we conduct a survey of 100 professional philosophers and computer scientists to compare their preferences in knowledge definitions and their views on whether LLMs can really be said to know. Finally, we suggest evaluation protocols for testing knowledge in accordance to the most relevant definitions.",
}
| Knowledge claims are abundant in the literature on large language models (LLMs); but can we say that GPT-4 truly {``}knows{''} the Earth is round? To address this question, we review standard definitions of knowledge in epistemology and we formalize interpretations applicable to LLMs. In doing so, we identify inconsistencies and gaps in how current NLP research conceptualizes knowledge with respect to epistemological frameworks. Additionally, we conduct a survey of 100 professional philosophers and computer scientists to compare their preferences in knowledge definitions and their views on whether LLMs can really be said to know. Finally, we suggest evaluation protocols for testing knowledge in accordance with the most relevant definitions. | [
"Fierro, Constanza",
"Dhar, Ruchira",
"Stamatiou, Filippos",
"Garneau, Nicolas",
"S{\\o}gaard, Anders"
] | Defining Knowledge: Bridging Epistemology and Large Language Models | emnlp-main.900 | Poster | 2410.02499 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |