bibtex_url | proceedings | bibtext | abstract | authors | title | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2024.emnlp-main.901.bib | https://aclanthology.org/2024.emnlp-main.901/ | @inproceedings{jiang-etal-2024-tkgt,
title = "{TKGT}: Redefinition and A New Way of Text-to-Table Tasks Based on Real World Demands and Knowledge Graphs Augmented {LLM}s",
author = "Jiang, Peiwen and
Lin, Xinbo and
Zhao, Zibo and
Ma, Ruhui and
Chen, Yvonne Jie and
Cheng, Jinhua",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.901",
pages = "16112--16126",
abstract = "The task of text-to-table receives widespread attention, yet its importance and difficulty are underestimated. Existing works use simple datasets similar to table-to-text tasks and employ methods that ignore domain structures. As a bridge between raw text and statistical analysis, the text-to-table task often deals with complex semi-structured texts that refer to specific domain topics in the real world with entities and events, especially from those of social sciences. In this paper, we analyze the limitations of benchmark datasets and methods used in the text-to-table literature and redefine the text-to-table task to improve its compatibility with long text-processing tasks. Based on this redefinition, we propose a new dataset called CPL (Chinese Private Lending), which consists of judgments from China and is derived from a real-world legal academic project. We further propose TKGT (Text-KG-Table), a two stages domain-aware pipeline, which firstly generates domain knowledge graphs (KGs) classes semi-automatically from raw text with the mixed information extraction (Mixed-IE) method, then adopts the hybrid retrieval augmented generation (Hybird-RAG) method to transform it to tables for downstream needs under the guidance of KGs classes. Experiment results show that TKGT achieves state-of-the-art (SOTA) performance on both traditional datasets and the CPL. Our data and main code are available at https://github.com/jiangpw41/TKGT.",
}
| The task of text-to-table receives widespread attention, yet its importance and difficulty are underestimated. Existing works use simple datasets similar to table-to-text tasks and employ methods that ignore domain structures. As a bridge between raw text and statistical analysis, the text-to-table task often deals with complex semi-structured texts that refer to specific domain topics in the real world with entities and events, especially from those of social sciences. In this paper, we analyze the limitations of benchmark datasets and methods used in the text-to-table literature and redefine the text-to-table task to improve its compatibility with long text-processing tasks. Based on this redefinition, we propose a new dataset called CPL (Chinese Private Lending), which consists of judgments from China and is derived from a real-world legal academic project. We further propose TKGT (Text-KG-Table), a two-stage domain-aware pipeline, which first generates domain knowledge graph (KG) classes semi-automatically from raw text with the mixed information extraction (Mixed-IE) method, then adopts the hybrid retrieval augmented generation (Hybrid-RAG) method to transform it into tables for downstream needs under the guidance of KG classes. Experimental results show that TKGT achieves state-of-the-art (SOTA) performance on both traditional datasets and the CPL. Our data and main code are available at https://github.com/jiangpw41/TKGT. | [
"Jiang, Peiwen",
"Lin, Xinbo",
"Zhao, Zibo",
"Ma, Ruhui",
"Chen, Yvonne Jie",
"Cheng, Jinhua"
] | TKGT: Redefinition and A New Way of Text-to-Table Tasks Based on Real World Demands and Knowledge Graphs Augmented LLMs | emnlp-main.901 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
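The rows in this dump follow the schema in the header above, where `-1` appears to act as a missing-value sentinel for rows without a linked paper page. A minimal sketch of loading and filtering such a dump with the `datasets` library, assuming it is hosted as a Hugging Face dataset; the repository id below is a placeholder, not the dump's actual (unnamed) repository:

```python
# Hypothetical loading sketch; the dataset id is illustrative, since this dump
# does not name its Hugging Face repository.
from datasets import load_dataset

ds = load_dataset("your-org/emnlp-2024-papers", split="train")  # placeholder id

# Columns match the header above; -1 appears to mark rows without a paper page.
popular = ds.filter(lambda row: row["upvotes"] > 0)
for row in popular.select(range(min(3, len(popular)))):
    print(row["title"], row["upvotes"], row["paper_page"])
```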
https://aclanthology.org/2024.emnlp-main.902.bib | https://aclanthology.org/2024.emnlp-main.902/ | @inproceedings{rao-etal-2024-free,
title = "Free your mouse! Command Large Language Models to Generate Code to Format Word Documents",
author = "Rao, Shihao and
Li, Liang and
Liu, Jiapeng and
Weixin, Guan and
Gao, Xiyan and
Lim, Bing and
Ma, Can",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.902",
pages = "16127--16142",
abstract = "Recently, LLMs have significantly improved code generation, making it increasingly accessible to users. As a result, LLM-powered code generation applications have sprung up, vastly boosting user productivity. This paper mainly explores how to improve the efficiency and experience of users in formatting the document. Specifically, we propose an automatic document formatting method, Text-to-Format, which is driven by various prompting strategies. Text-to-Format takes the user{'}s formatting instructions and then generates code that can be run in Microsoft Word to format the content in a document. Further, to evaluate automatic document formatting approaches and advance the document formatting task, we built an evaluation specification including a high-quality dataset DocFormEval data, a code runtime environment, and evaluation metrics. Extensive experimental results on data reveal that the prompting strategy{'}s effect positively correlates with how much knowledge it introduces related to document formatting task. We believe the constructed DocFormEval data and the exploration about Text-to-Format can help developers build more intelligent tools for automatic document formatting, especially in offline scenarios, where the data privacy is the top priority.",
}
| Recently, LLMs have significantly improved code generation, making it increasingly accessible to users. As a result, LLM-powered code generation applications have sprung up, vastly boosting user productivity. This paper mainly explores how to improve the efficiency and experience of users in formatting documents. Specifically, we propose an automatic document formatting method, Text-to-Format, which is driven by various prompting strategies. Text-to-Format takes the user{'}s formatting instructions and then generates code that can be run in Microsoft Word to format the content in a document. Further, to evaluate automatic document formatting approaches and advance the document formatting task, we built an evaluation specification including a high-quality dataset, DocFormEval, a code runtime environment, and evaluation metrics. Extensive experimental results on the data reveal that a prompting strategy{'}s effect positively correlates with how much knowledge it introduces related to the document formatting task. We believe the constructed DocFormEval data and the exploration of Text-to-Format can help developers build more intelligent tools for automatic document formatting, especially in offline scenarios, where data privacy is the top priority. | [
"Rao, Shihao",
"Li, Liang",
"Liu, Jiapeng",
"Weixin, Guan",
"Gao, Xiyan",
"Lim, Bing",
"Ma, Can"
] | Free your mouse! Command Large Language Models to Generate Code to Format Word Documents | emnlp-main.902 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.903.bib | https://aclanthology.org/2024.emnlp-main.903/ | @inproceedings{gu-etal-2024-cmr,
title = "{CMR} Scaling Law: Predicting Critical Mixture Ratios for Continual Pre-training of Language Models",
author = "Gu, Jiawei and
Yang, Zacc and
Ding, Chuanghao and
Zhao, Rui and
Tan, Fei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.903",
pages = "16143--16162",
abstract = "Large Language Models (LLMs) excel in diverse tasks but often underperform in specialized fields due to limited domain-specific or proprietary corpus. Continual pre-training (CPT) enhances LLM capabilities by imbuing new domain-specific or proprietary knowledge while replaying general corpus to prevent catastrophic forgetting. The data mixture ratio of general corpus and domain-specific corpus, however, has been chosen heuristically, leading to sub-optimal training efficiency in practice. In this context, we attempt to re-visit the scaling behavior of LLMs under the hood of CPT, and discover a power-law relationship between loss, mixture ratio, and training tokens scale. We formalize the trade-off between general and domain-specific capabilities, leading to a well-defined Critical Mixture Ratio (CMR) of general and domain data. By striking the balance, CMR maintains the model{'}s general ability and achieves the desired domain transfer, ensuring the highest utilization of available resources. Considering the balance between efficiency and effectiveness, CMR can be regarded as the optimal mixture ratio. Through extensive experiments, we ascertain the predictability of CMR, propose CMR scaling law and have substantiated its generalization. These findings offer practical guidelines for optimizing LLM training in specialized domains, ensuring both general and domain-specific performance while efficiently managing training resources.",
}
| Large Language Models (LLMs) excel in diverse tasks but often underperform in specialized fields due to limited domain-specific or proprietary corpora. Continual pre-training (CPT) enhances LLM capabilities by imbuing new domain-specific or proprietary knowledge while replaying a general corpus to prevent catastrophic forgetting. The data mixture ratio of general and domain-specific corpora, however, has been chosen heuristically, leading to sub-optimal training efficiency in practice. In this context, we revisit the scaling behavior of LLMs during CPT and discover a power-law relationship between loss, mixture ratio, and training token scale. We formalize the trade-off between general and domain-specific capabilities, leading to a well-defined Critical Mixture Ratio (CMR) of general and domain data. By striking the balance, CMR maintains the model{'}s general ability and achieves the desired domain transfer, ensuring the highest utilization of available resources. Considering the balance between efficiency and effectiveness, CMR can be regarded as the optimal mixture ratio. Through extensive experiments, we ascertain the predictability of CMR, propose the CMR scaling law, and substantiate its generalization. These findings offer practical guidelines for optimizing LLM training in specialized domains, ensuring both general and domain-specific performance while efficiently managing training resources. | [
"Gu, Jiawei",
"Yang, Zacc",
"Ding, Chuanghao",
"Zhao, Rui",
"Tan, Fei"
] | CMR Scaling Law: Predicting Critical Mixture Ratios for Continual Pre-training of Language Models | emnlp-main.903 | Poster | 2407.17467 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
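The CMR abstract above reports a power-law relationship between loss, mixture ratio, and training token scale. As an illustration only, and not the paper's actual functional form (which is not given in this dump), a generic additive power law can be fitted to such measurements with `scipy`; all data below is synthetic:

```python
# Illustrative sketch: fit loss ~ A*r^(-alpha) + B*D^(-beta) + E over
# mixture ratios r and token counts D. The functional form is assumed.
import numpy as np
from scipy.optimize import curve_fit

def power_law(X, A, alpha, B, beta, E):
    r, D = X
    return A * r ** (-alpha) + B * D ** (-beta) + E

rng = np.random.default_rng(0)
r = np.array([0.1, 0.2, 0.3, 0.5, 0.7, 0.9])       # domain-data mixture ratios
D = np.array([1e9, 2e9, 5e9, 1e10, 2e10, 5e10])    # training tokens
loss = power_law((r, D), 0.5, 0.3, 2.0, 0.1, 1.8) + rng.normal(0, 0.01, r.size)

params, _ = curve_fit(power_law, (r, D), loss, p0=[1, 0.5, 1, 0.5, 1], maxfev=10000)
print(dict(zip(["A", "alpha", "B", "beta", "E"], np.round(params, 3))))
```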
https://aclanthology.org/2024.emnlp-main.904.bib | https://aclanthology.org/2024.emnlp-main.904/ | @inproceedings{han-etal-2024-instinctive,
title = "The Instinctive Bias: Spurious Images lead to Illusion in {MLLM}s",
author = "Han, Tianyang and
Lian, Qing and
Pan, Rui and
Pi, Renjie and
Zhang, Jipeng and
Diao, Shizhe and
Lin, Yong and
Zhang, Tong",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.904",
pages = "16163--16177",
abstract = "Large language models (LLMs) have recently experienced remarkable progress, where the advent of multi-modal large language models (MLLMs) has endowed LLMs with visual capabilities, leading to impressive performances in various multi-modal tasks. However, those powerful MLLMs such as GPT-4V still fail spectacularly when presented with certain image and text inputs. In this paper, we identify a typical class of inputs that baffles MLLMs, which consist of images that are highly relevant but inconsistent with answers, causing MLLMs to suffer from visual illusion. To quantify the effect, we propose CorrelationQA, the first benchmark that assesses the visual illusion level given spurious images. This benchmark contains 7,308 text-image pairs across 13 categories. Based on the proposed CorrelationQA, we conduct a thorough analysis on 9 mainstream MLLMs, illustrating that they universally suffer from this instinctive bias to varying degrees. We hope that our curated benchmark and evaluation results aid in better assessments of the MLLMs{'} robustness in the presence of misleading images. The code and datasets are available at https://github.com/MasaiahHan/CorrelationQA.",
}
| Large language models (LLMs) have recently experienced remarkable progress, where the advent of multi-modal large language models (MLLMs) has endowed LLMs with visual capabilities, leading to impressive performances in various multi-modal tasks. However, those powerful MLLMs such as GPT-4V still fail spectacularly when presented with certain image and text inputs. In this paper, we identify a typical class of inputs that baffles MLLMs, which consist of images that are highly relevant but inconsistent with answers, causing MLLMs to suffer from visual illusion. To quantify the effect, we propose CorrelationQA, the first benchmark that assesses the visual illusion level given spurious images. This benchmark contains 7,308 text-image pairs across 13 categories. Based on the proposed CorrelationQA, we conduct a thorough analysis on 9 mainstream MLLMs, illustrating that they universally suffer from this instinctive bias to varying degrees. We hope that our curated benchmark and evaluation results aid in better assessments of the MLLMs{'} robustness in the presence of misleading images. The code and datasets are available at https://github.com/MasaiahHan/CorrelationQA. | [
"Han, Tianyang",
"Lian, Qing",
"Pan, Rui",
"Pi, Renjie",
"Zhang, Jipeng",
"Diao, Shizhe",
"Lin, Yong",
"Zhang, Tong"
] | The Instinctive Bias: Spurious Images lead to Illusion in MLLMs | emnlp-main.904 | Poster | 2402.03757 | [
"https://github.com/masaiahhan/correlationqa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.905.bib | https://aclanthology.org/2024.emnlp-main.905/ | @inproceedings{kawabata-sugawara-2024-rationale,
title = "Rationale-Aware Answer Verification by Pairwise Self-Evaluation",
author = "Kawabata, Akira and
Sugawara, Saku",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.905",
pages = "16178--16196",
abstract = "Answer verification identifies correct solutions among candidates generated by large language models (LLMs). Current approaches typically train verifier models by labeling solutions as correct or incorrect based solely on whether the final answer matches the gold answer. However, this approach neglects any flawed rationale in the solution yielding the correct answer, undermining the verifier{'}s ability to distinguish between sound and flawed rationales. We empirically show that in StrategyQA, only 19{\%} of LLM-generated solutions with correct answers have valid rationales, thus leading to an unreliable verifier. Furthermore, we demonstrate that training a verifier on valid rationales significantly improves its ability to distinguish valid and flawed rationale. To make a better verifier without extra human supervision, we introduce REPS (Rationale Enhancement through Pairwise Selection), a method for selecting valid rationales from candidates by iteratively applying pairwise self-evaluation using the same LLM that generates the solutions. Verifiers trained on solutions selected by REPS outperform those trained using conventional training methods on three reasoning benchmarks (ARC-Challenge, DROP, and StrategyQA). Our results suggest that training reliable verifiers requires ensuring the validity of rationales in addition to the correctness of the final answers, which would be critical for models assisting humans in solving complex reasoning tasks.",
}
| Answer verification identifies correct solutions among candidates generated by large language models (LLMs). Current approaches typically train verifier models by labeling solutions as correct or incorrect based solely on whether the final answer matches the gold answer. However, this approach neglects any flawed rationale in the solution yielding the correct answer, undermining the verifier{'}s ability to distinguish between sound and flawed rationales. We empirically show that in StrategyQA, only 19{\%} of LLM-generated solutions with correct answers have valid rationales, thus leading to an unreliable verifier. Furthermore, we demonstrate that training a verifier on valid rationales significantly improves its ability to distinguish valid and flawed rationales. To make a better verifier without extra human supervision, we introduce REPS (Rationale Enhancement through Pairwise Selection), a method for selecting valid rationales from candidates by iteratively applying pairwise self-evaluation using the same LLM that generates the solutions. Verifiers trained on solutions selected by REPS outperform those trained using conventional training methods on three reasoning benchmarks (ARC-Challenge, DROP, and StrategyQA). Our results suggest that training reliable verifiers requires ensuring the validity of rationales in addition to the correctness of the final answers, which would be critical for models assisting humans in solving complex reasoning tasks. | [
"Kawabata, Akira",
"Sugawara, Saku"
] | Rationale-Aware Answer Verification by Pairwise Self-Evaluation | emnlp-main.905 | Poster | 2410.04838 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.906.bib | https://aclanthology.org/2024.emnlp-main.906/ | @inproceedings{ma-etal-2024-robustness,
title = "On the Robustness of Editing Large Language Models",
author = "Ma, Xinbei and
Ju, Tianjie and
Qiu, Jiyang and
Zhang, Zhuosheng and
Zhao, Hai and
Liu, Lifeng and
Wang, Yulong",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.906",
pages = "16197--16216",
abstract = "Large language models (LLMs) have played a pivotal role in building communicative AI, yet they encounter the challenge of efficient updates. Model editing enables the manipulation of specific knowledge memories and the behavior of language generation without retraining. However, the robustness of model editing remains an open question. This work seeks to understand the strengths and limitations of editing methods, facilitating practical applications of communicative AI. We focus on three key research questions. RQ1: Can edited LLMs behave consistently resembling communicative AI in realistic situations? RQ2: To what extent does the rephrasing of prompts lead LLMs to deviate from the edited knowledge memory? RQ3: Which knowledge features are correlated with the performance and robustness of editing? Our empirical studies uncover a substantial disparity between existing editing methods and the practical application of LLMs. On rephrased prompts that are flexible but common in realistic applications, the performance of editing experiences a significant decline. Further analysis shows that more popular knowledge is memorized better, easier to recall, and more challenging to edit effectively.",
}
| Large language models (LLMs) have played a pivotal role in building communicative AI, yet they encounter the challenge of efficient updates. Model editing enables the manipulation of specific knowledge memories and the behavior of language generation without retraining. However, the robustness of model editing remains an open question. This work seeks to understand the strengths and limitations of editing methods, facilitating practical applications of communicative AI. We focus on three key research questions. RQ1: Can edited LLMs behave consistently resembling communicative AI in realistic situations? RQ2: To what extent does the rephrasing of prompts lead LLMs to deviate from the edited knowledge memory? RQ3: Which knowledge features are correlated with the performance and robustness of editing? Our empirical studies uncover a substantial disparity between existing editing methods and the practical application of LLMs. On rephrased prompts that are flexible but common in realistic applications, the performance of editing experiences a significant decline. Further analysis shows that more popular knowledge is memorized better, easier to recall, and more challenging to edit effectively. | [
"Ma, Xinbei",
"Ju, Tianjie",
"Qiu, Jiyang",
"Zhang, Zhuosheng",
"Zhao, Hai",
"Liu, Lifeng",
"Wang, Yulong"
] | On the Robustness of Editing Large Language Models | emnlp-main.906 | Poster | 2402.05827 | [
"https://github.com/xbmxb/edit_analysis"
] | https://huggingface.co/papers/2402.05827 | 1 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.907.bib | https://aclanthology.org/2024.emnlp-main.907/ | @inproceedings{kim-etal-2024-im,
title = "{IM}-{BERT}: Enhancing Robustness of {BERT} through the Implicit Euler Method",
author = "Kim, MiHyeon and
Park, Juhyoung and
Kim, YoungBin",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.907",
pages = "16217--16229",
abstract = "Pre-trained Language Models (PLMs) have achieved remarkable performance on diverse NLP tasks through pre-training and fine-tuning. However, fine-tuning the model with a large number of parameters on limited downstream datasets often leads to vulnerability to adversarial attacks, causing overfitting of the model on standard datasets. To address these issues, we propose IM-BERT from the perspective of a dynamic system by conceptualizing a layer of BERT as a solution of Ordinary Differential Equations (ODEs). Under the situation of initial value perturbation, we analyze the numerical stability of two main numerical ODE solvers: *the explicit and implicit Euler approaches.* Based on these analyses, we introduce a numerically robust IM-connection incorporating BERT{'}s layers. This strategy enhances the robustness of PLMs against adversarial attacks, even in low-resource scenarios, without introducing additional parameters or adversarial training strategies. Experimental results on the adversarial GLUE (AdvGLUE) dataset validate the robustness of IM-BERT under various conditions. Compared to the original BERT, IM-BERT exhibits a performance improvement of approximately 8.3{\%}p on the AdvGLUE dataset. Furthermore, in low-resource scenarios, IM-BERT outperforms BERT by achieving 5.9{\%}p higher accuracy.",
}
| Pre-trained Language Models (PLMs) have achieved remarkable performance on diverse NLP tasks through pre-training and fine-tuning. However, fine-tuning the model with a large number of parameters on limited downstream datasets often leads to vulnerability to adversarial attacks, causing overfitting of the model on standard datasets. To address these issues, we propose IM-BERT from the perspective of a dynamic system by conceptualizing a layer of BERT as a solution of Ordinary Differential Equations (ODEs). Under the situation of initial value perturbation, we analyze the numerical stability of two main numerical ODE solvers: *the explicit and implicit Euler approaches.* Based on these analyses, we introduce a numerically robust IM-connection incorporating BERT{'}s layers. This strategy enhances the robustness of PLMs against adversarial attacks, even in low-resource scenarios, without introducing additional parameters or adversarial training strategies. Experimental results on the adversarial GLUE (AdvGLUE) dataset validate the robustness of IM-BERT under various conditions. Compared to the original BERT, IM-BERT exhibits a performance improvement of approximately 8.3{\%}p on the AdvGLUE dataset. Furthermore, in low-resource scenarios, IM-BERT outperforms BERT by achieving 5.9{\%}p higher accuracy. | [
"Kim, MiHyeon",
"Park, Juhyoung",
"Kim, YoungBin"
] | IM-BERT: Enhancing Robustness of BERT through the Implicit Euler Method | emnlp-main.907 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
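The IM-BERT abstract above frames a BERT layer as an ODE solution and contrasts explicit and implicit Euler steps. A generic sketch of that contrast follows; it is not the paper's actual IM-connection (whose details are not given here), and `f` is a stand-in sublayer, with the implicit step approximated by fixed-point iteration:

```python
# Generic contrast of explicit vs. implicit Euler residual updates, in the
# spirit of viewing a layer as one ODE step; f() is a placeholder sublayer.
import torch

def f(x: torch.Tensor) -> torch.Tensor:
    return torch.tanh(x)  # stand-in for an attention/FFN sublayer

def explicit_euler_step(x: torch.Tensor) -> torch.Tensor:
    # Standard residual connection: x_{n+1} = x_n + f(x_n)
    return x + f(x)

def implicit_euler_step(x: torch.Tensor, iters: int = 5) -> torch.Tensor:
    # Implicit update x_{n+1} = x_n + f(x_{n+1}), solved here by fixed-point
    # iteration; implicit steps are the numerically stabler choice.
    y = x.clone()
    for _ in range(iters):
        y = x + f(y)
    return y

x = torch.randn(4, 8)
print(explicit_euler_step(x).norm(), implicit_euler_step(x).norm())
```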
https://aclanthology.org/2024.emnlp-main.908.bib | https://aclanthology.org/2024.emnlp-main.908/ | @inproceedings{xiao-etal-2024-distract,
title = "Distract Large Language Models for Automatic Jailbreak Attack",
author = "Xiao, Zeguan and
Yang, Yan and
Chen, Guanhua and
Chen, Yun",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.908",
pages = "16230--16244",
abstract = "Extensive efforts have been made before the public release of Large language models (LLMs) to align their behaviors with human values. However, even meticulously aligned LLMs remain vulnerable to malicious manipulations such as jailbreaking, leading to unintended behaviors. In this work, we propose a novel black-box jailbreak framework for automated red teaming of LLMs. We designed malicious content concealing and memory reframing with an iterative optimization algorithm to jailbreak LLMs, motivated by the research about the distractibility and over-confidence phenomenon of LLMs. Extensive experiments of jailbreaking both open-source and proprietary LLMs demonstrate the superiority of our framework in terms of effectiveness, scalability and transferability. We also evaluate the effectiveness of existing jailbreak defense methods against our attack and highlight the crucial need to develop more effective and practical defense strategies.",
}
| Extensive efforts have been made before the public release of Large language models (LLMs) to align their behaviors with human values. However, even meticulously aligned LLMs remain vulnerable to malicious manipulations such as jailbreaking, leading to unintended behaviors. In this work, we propose a novel black-box jailbreak framework for automated red teaming of LLMs. We designed malicious content concealing and memory reframing with an iterative optimization algorithm to jailbreak LLMs, motivated by the research about the distractibility and over-confidence phenomenon of LLMs. Extensive experiments of jailbreaking both open-source and proprietary LLMs demonstrate the superiority of our framework in terms of effectiveness, scalability and transferability. We also evaluate the effectiveness of existing jailbreak defense methods against our attack and highlight the crucial need to develop more effective and practical defense strategies. | [
"Xiao, Zeguan",
"Yang, Yan",
"Chen, Guanhua",
"Chen, Yun"
] | Distract Large Language Models for Automatic Jailbreak Attack | emnlp-main.908 | Poster | 2403.08424 | [
"https://github.com/sufenlp/AttanttionShiftJailbreak"
] | https://huggingface.co/papers/2403.08424 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.909.bib | https://aclanthology.org/2024.emnlp-main.909/ | @inproceedings{lin-etal-2024-exploring-space,
title = "Exploring Space Efficiency in a Tree-based Linear Model for Extreme Multi-label Classification",
author = "Lin, He-Zhe and
Liu, Cheng-Hung and
Lin, Chih-Jen",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.909",
pages = "16245--16260",
abstract = "Extreme multi-label classification (XMC) aims to identify relevant subsets from numerous labels. Among the various approaches for XMC, tree-based linear models are effective due to their superior efficiency and simplicity. However, the space complexity of tree-based methods is not well-studied. Many past works assume that storing the model is not affordable and apply techniques such as pruning to save space, which may lead to performance loss. In this work, we conduct both theoretical and empirical analyses on the space to store a tree model under the assumption of sparse data, a condition frequently met in text data. We found that, some features may be unused when training binary classifiers in a tree method, resulting in zero values in the weight vectors. Hence, storing only non-zero elements can greatly save space. Our experimental results indicate that tree models can require less than 10{\%} of the size of the standard one-vs-rest method for multi-label text classification. Our research provides a simple procedure to estimate the size of a tree model before training any classifier in the tree nodes. Then, if the model size is already acceptable, this approach can help avoid modifying the model through weight pruning or other techniques.",
}
| Extreme multi-label classification (XMC) aims to identify relevant subsets from numerous labels. Among the various approaches for XMC, tree-based linear models are effective due to their superior efficiency and simplicity. However, the space complexity of tree-based methods is not well-studied. Many past works assume that storing the model is not affordable and apply techniques such as pruning to save space, which may lead to performance loss. In this work, we conduct both theoretical and empirical analyses of the space needed to store a tree model under the assumption of sparse data, a condition frequently met in text data. We found that some features may be unused when training binary classifiers in a tree method, resulting in zero values in the weight vectors. Hence, storing only non-zero elements can greatly save space. Our experimental results indicate that tree models can require less than 10{\%} of the size of the standard one-vs-rest method for multi-label text classification. Our research provides a simple procedure to estimate the size of a tree model before training any classifier in the tree nodes. Then, if the model size is already acceptable, this approach can help avoid modifying the model through weight pruning or other techniques. | [
"Lin, He-Zhe",
"Liu, Cheng-Hung",
"Lin, Chih-Jen"
] | Exploring Space Efficiency in a Tree-based Linear Model for Extreme Multi-label Classification | emnlp-main.909 | Poster | 2410.09554 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
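The abstract above notes that unused features yield exactly-zero weights, so storing only the non-zero elements saves space. A minimal illustration of that saving with `scipy.sparse`; the matrix size and sparsity level below are synthetic:

```python
# Compare dense vs. CSR storage for a weight matrix whose entries are mostly
# exactly zero, as happens when many features go unused by a classifier.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
W = rng.standard_normal((1000, 5000))
W[rng.random(W.shape) < 0.95] = 0.0  # pretend 95% of features are unused

dense_bytes = W.nbytes
W_csr = sparse.csr_matrix(W)
sparse_bytes = W_csr.data.nbytes + W_csr.indices.nbytes + W_csr.indptr.nbytes
print(f"dense: {dense_bytes / 1e6:.1f} MB, CSR: {sparse_bytes / 1e6:.1f} MB")
```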
https://aclanthology.org/2024.emnlp-main.910.bib | https://aclanthology.org/2024.emnlp-main.910/ | @inproceedings{mohammad-2024-worrywords,
title = "{W}orry{W}ords: Norms of Anxiety Association for over 44k {E}nglish Words",
author = "Mohammad, Saif M.",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.910",
pages = "16261--16278",
abstract = "Anxiety, the anticipatory unease about a potential negative outcome, is a common and beneficial human emotion. However, there is still much that is not known about anxiety, such as how it relates to our body and how it manifests in language; especially pertinent given the increasing impact of related disorders.In this work,we introduce \textit{WorryWords}, the first large-scale repository of manually derived word{--}anxiety associations for over 44,450 English words. We show that the anxiety associations are highly reliable.We use WorryWords to study the relationship between anxiety and other emotion constructs, as well as the rate at which children acquire anxiety words with age. Finally, we show that using WorryWords alone, one can accurately track the change of anxiety in streams of text.WorryWords enables a wide variety of anxiety-related research in psychology, NLP, public health, and social sciences.WorryWords (and its translations to over 100 languages) is freely available. http://saifmohammad.com/worrywords.html",
}
| Anxiety, the anticipatory unease about a potential negative outcome, is a common and beneficial human emotion. However, there is still much that is not known about anxiety, such as how it relates to our body and how it manifests in language; this is especially pertinent given the increasing impact of related disorders. In this work, we introduce \textit{WorryWords}, the first large-scale repository of manually derived word{--}anxiety associations for over 44,450 English words. We show that the anxiety associations are highly reliable. We use WorryWords to study the relationship between anxiety and other emotion constructs, as well as the rate at which children acquire anxiety words with age. Finally, we show that using WorryWords alone, one can accurately track the change of anxiety in streams of text. WorryWords enables a wide variety of anxiety-related research in psychology, NLP, public health, and social sciences. WorryWords (and its translations to over 100 languages) is freely available at http://saifmohammad.com/worrywords.html. | [
"Mohammad, Saif M."
] | WorryWords: Norms of Anxiety Association for over 44k English Words | emnlp-main.910 | Poster | 2411.03966 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
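The WorryWords abstract describes tracking anxiety in streams of text using word-anxiety associations alone. A minimal lexicon-scoring sketch of that idea; the toy lexicon and its scores below are hypothetical, not actual WorryWords entries:

```python
# Lexicon-based anxiety tracking sketch; the word scores here are made up.
anxiety = {"worry": 0.9, "deadline": 0.7, "calm": -0.6, "relaxed": -0.8}

def mean_anxiety(text: str) -> float:
    """Average anxiety score over the lexicon words found in the text."""
    scores = [anxiety[w] for w in text.lower().split() if w in anxiety]
    return sum(scores) / len(scores) if scores else 0.0

stream = ["I worry about the deadline", "Feeling calm and relaxed today"]
for msg in stream:
    print(f"{mean_anxiety(msg):+.2f}  {msg}")
```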
https://aclanthology.org/2024.emnlp-main.911.bib | https://aclanthology.org/2024.emnlp-main.911/ | @inproceedings{doddapaneni-etal-2024-finding,
title = "Finding Blind Spots in Evaluator {LLM}s with Interpretable Checklists",
author = "Doddapaneni, Sumanth and
Khan, Mohammed Safi Ur Rahman and
Verma, Sshubam and
Khapra, Mitesh M",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.911",
pages = "16279--16309",
abstract = "Large Language Models (LLMs) are increasingly relied upon to evaluate text outputs of other LLMs, thereby influencing leaderboards and development decisions. However, concerns persist over the accuracy of these assessments and the potential for misleading conclusions. In this work, we investigate the effectiveness of LLMs as evaluators for text generation tasks. We propose FBI, a novel framework designed to examine the proficiency of Evaluator LLMs in assessing four critical abilities in other LLMs: factual accuracy, instruction following, coherence in long-form writing, and reasoning proficiency. By introducing targeted perturbations in answers generated by LLMs, that clearly impact one of these key capabilities, we test whether an Evaluator LLM can detect these quality drops. By creating a total of 2400 perturbed answers covering 22 perturbation categories, we conduct a comprehensive study using different evaluation strategies on five prominent LLMs commonly used as evaluators in the literature. Our findings reveal significant shortcomings in current Evaluator LLMs, which failed to identify quality drops in over 50{\%} of cases on average. Single-answer and pairwise evaluations demonstrated notable limitations, whereas reference-based evaluations showed comparatively better performance. \textit{These results underscore the unreliable nature of current Evaluator LLMs and advocate for cautious implementation in practical applications.}",
}
| Large Language Models (LLMs) are increasingly relied upon to evaluate text outputs of other LLMs, thereby influencing leaderboards and development decisions. However, concerns persist over the accuracy of these assessments and the potential for misleading conclusions. In this work, we investigate the effectiveness of LLMs as evaluators for text generation tasks. We propose FBI, a novel framework designed to examine the proficiency of Evaluator LLMs in assessing four critical abilities in other LLMs: factual accuracy, instruction following, coherence in long-form writing, and reasoning proficiency. By introducing targeted perturbations that clearly impact one of these key capabilities into answers generated by LLMs, we test whether an Evaluator LLM can detect these quality drops. By creating a total of 2400 perturbed answers covering 22 perturbation categories, we conduct a comprehensive study using different evaluation strategies on five prominent LLMs commonly used as evaluators in the literature. Our findings reveal significant shortcomings in current Evaluator LLMs, which failed to identify quality drops in over 50{\%} of cases on average. Single-answer and pairwise evaluations demonstrated notable limitations, whereas reference-based evaluations showed comparatively better performance. \textit{These results underscore the unreliable nature of current Evaluator LLMs and advocate for cautious implementation in practical applications.} | [
"Doddapaneni, Sumanth",
"Khan, Mohammed Safi Ur Rahman",
"Verma, Sshubam",
"Khapra, Mitesh M"
] | Finding Blind Spots in Evaluator LLMs with Interpretable Checklists | emnlp-main.911 | Poster | 2406.13439 | [
"https://github.com/ai4bharat/fbi"
] | https://huggingface.co/papers/2406.13439 | 2 | 0 | 0 | 4 | [] | [
"ai4bharat/FBI"
] | [] | [] | [
"ai4bharat/FBI"
] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.912.bib | https://aclanthology.org/2024.emnlp-main.912/ | @inproceedings{zhao-etal-2024-longagent,
title = "{LONGAGENT}: Achieving Question Answering for 128k-Token-Long Documents through Multi-Agent Collaboration",
author = "Zhao, Jun and
Zu, Can and
Hao, Xu and
Lu, Yi and
He, Wei and
Ding, Yiwen and
Gui, Tao and
Zhang, Qi and
Huang, Xuanjing",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.912",
pages = "16310--16324",
abstract = "Large language models (LLMs) have achieved tremendous success in understanding language and processing text. However, question-answering (QA) on lengthy documents faces challenges of resource constraints and a high propensity for errors, even for the most advanced models such as GPT-4 and Claude2.In this paper, we introduce {\_}LongAgent{\_}, a multi-agent collaboration method that enables efficient and effective QA over $128k$-token-long documents. {\_}LongAgent{\_} adopts a {\_}divide-and-conquer{\_} strategy, breaking down lengthy documents into shorter, more manageable text chunks. A leader agent comprehends the user{'}s query and organizes the member agents to read their assigned chunks, reasoning a final answer through multiple rounds of discussion.Due to members{'} hallucinations, it{'}s difficult to guarantee that every response provided by each member is accurate.To address this, we develop an {\_}inter-member communication{\_} mechanism that facilitates information sharing, allowing for the detection and mitigation of hallucinatory responses.Experimental results show that a LLaMA-2 7B driven by {\_}LongAgent{\_} can effectively support QA over $128k$-token documents, achieving 16.42{\%} and 1.63{\%} accuracy gains over GPT-4 on single-hop and multi-hop QA settings, respectively.",
}
| Large language models (LLMs) have achieved tremendous success in understanding language and processing text. However, question-answering (QA) on lengthy documents faces challenges of resource constraints and a high propensity for errors, even for the most advanced models such as GPT-4 and Claude2. In this paper, we introduce {\_}LongAgent{\_}, a multi-agent collaboration method that enables efficient and effective QA over $128k$-token-long documents. {\_}LongAgent{\_} adopts a {\_}divide-and-conquer{\_} strategy, breaking down lengthy documents into shorter, more manageable text chunks. A leader agent comprehends the user{'}s query and organizes the member agents to read their assigned chunks, reasoning a final answer through multiple rounds of discussion. Due to members{'} hallucinations, it{'}s difficult to guarantee that every response provided by each member is accurate. To address this, we develop an {\_}inter-member communication{\_} mechanism that facilitates information sharing, allowing for the detection and mitigation of hallucinatory responses. Experimental results show that a LLaMA-2 7B driven by {\_}LongAgent{\_} can effectively support QA over $128k$-token documents, achieving 16.42{\%} and 1.63{\%} accuracy gains over GPT-4 on single-hop and multi-hop QA settings, respectively. | [
"Zhao, Jun",
"Zu, Can",
"Hao, Xu",
"Lu, Yi",
"He, Wei",
"Ding, Yiwen",
"Gui, Tao",
"Zhang, Qi",
"Huang, Xuanjing"
] | LONGAGENT: Achieving Question Answering for 128k-Token-Long Documents through Multi-Agent Collaboration | emnlp-main.912 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
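The LONGAGENT abstract describes a divide-and-conquer setup in which member agents answer over chunks and a leader aggregates through discussion. A generic sketch of that pattern follows; `llm()` is a stand-in for any chat-model call, and the prompts are illustrative, not the paper's actual agents or inter-member communication mechanism:

```python
# Generic chunk-and-aggregate QA sketch; llm() is a placeholder callable.
def llm(prompt: str) -> str:
    return "stub answer"  # replace with a real chat-model call

def chunk(text: str, size: int = 4000) -> list[str]:
    """Split a long document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def long_doc_qa(document: str, question: str) -> str:
    member_answers = [
        llm(f"Chunk:\n{c}\n\nQuestion: {question}\nAnswer from this chunk only:")
        for c in chunk(document)
    ]
    # Leader step: reconcile possibly conflicting member answers.
    return llm("Member answers:\n" + "\n".join(member_answers) +
               f"\n\nResolve conflicts and answer: {question}")
```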
https://aclanthology.org/2024.emnlp-main.913.bib | https://aclanthology.org/2024.emnlp-main.913/ | @inproceedings{saenger-etal-2024-autopersuade,
title = "{A}uto{P}ersuade: A Framework for Evaluating and Explaining Persuasive Arguments",
author = "Saenger, Till Raphael and
Hinck, Musashi and
Grimmer, Justin and
Stewart, Brandon M.",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.913",
pages = "16325--16342",
abstract = "We introduce a three-part framework for constructing persuasive messages, AutoPersuade. First, we curate a large collection of arguments and gather human evaluations of their persuasiveness. Next, we introduce a novel topic model to identify the features of these arguments that influence persuasion. Finally, we use the model to predict the persuasiveness of new arguments and to assess the causal effects of argument components, offering an explanation of the results. We demonstrate the effectiveness of AutoPersuade in an experimental study on arguments for veganism, validating our findings through human studies and out-of-sample predictions.",
}
| We introduce a three-part framework for constructing persuasive messages, AutoPersuade. First, we curate a large collection of arguments and gather human evaluations of their persuasiveness. Next, we introduce a novel topic model to identify the features of these arguments that influence persuasion. Finally, we use the model to predict the persuasiveness of new arguments and to assess the causal effects of argument components, offering an explanation of the results. We demonstrate the effectiveness of AutoPersuade in an experimental study on arguments for veganism, validating our findings through human studies and out-of-sample predictions. | [
"Saenger, Till Raphael",
"Hinck, Musashi",
"Grimmer, Justin",
"Stewart, Br",
"on M."
] | AutoPersuade: A Framework for Evaluating and Explaining Persuasive Arguments | emnlp-main.913 | Poster | 2410.08917 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.914.bib | https://aclanthology.org/2024.emnlp-main.914/ | @inproceedings{conia-etal-2024-towards,
title = "Towards Cross-Cultural Machine Translation with Retrieval-Augmented Generation from Multilingual Knowledge Graphs",
author = "Conia, Simone and
Lee, Daniel and
Li, Min and
Minhas, Umar Farooq and
Potdar, Saloni and
Li, Yunyao",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.914",
pages = "16343--16360",
abstract = "Translating text that contains entity names is a challenging task, as cultural-related references can vary significantly across languages. These variations may also be caused by transcreation, an adaptation process that entails more than transliteration and word-for-word translation. In this paper, we address the problem of cross-cultural translation on two fronts: (i) we introduce XC-Translate, the first large-scale, manually-created benchmark for machine translation that focuses on text that contains potentially culturally-nuanced entity names, and (ii) we propose KG-MT, a novel end-to-end method to integrate information from a multilingual knowledge graph into a neural machine translation model by leveraging a dense retrieval mechanism. Our experiments and analyses show that current machine translation systems and large language models still struggle to translate texts containing entity names, whereas KG-MT outperforms state-of-the-art approaches by a large margin, obtaining a 129{\%} and 62{\%} relative improvement compared to NLLB-200 and GPT-4, respectively.",
}
| Translating text that contains entity names is a challenging task, as culture-related references can vary significantly across languages. These variations may also be caused by transcreation, an adaptation process that entails more than transliteration and word-for-word translation. In this paper, we address the problem of cross-cultural translation on two fronts: (i) we introduce XC-Translate, the first large-scale, manually-created benchmark for machine translation that focuses on text that contains potentially culturally-nuanced entity names, and (ii) we propose KG-MT, a novel end-to-end method to integrate information from a multilingual knowledge graph into a neural machine translation model by leveraging a dense retrieval mechanism. Our experiments and analyses show that current machine translation systems and large language models still struggle to translate texts containing entity names, whereas KG-MT outperforms state-of-the-art approaches by a large margin, obtaining a 129{\%} and 62{\%} relative improvement compared to NLLB-200 and GPT-4, respectively. | [
"Conia, Simone",
"Lee, Daniel",
"Li, Min",
"Minhas, Umar Farooq",
"Potdar, Saloni",
"Li, Yunyao"
] | Towards Cross-Cultural Machine Translation with Retrieval-Augmented Generation from Multilingual Knowledge Graphs | emnlp-main.914 | Oral | 2410.14057 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.915.bib | https://aclanthology.org/2024.emnlp-main.915/ | @inproceedings{zhao-etal-2024-exploring-compositional,
title = "Exploring the Compositional Deficiency of Large Language Models in Mathematical Reasoning Through Trap Problems",
author = "Zhao, Jun and
Tong, Jingqi and
Mou, Yurong and
Zhang, Ming and
Zhang, Qi and
Huang, Xuanjing",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.915",
pages = "16361--16376",
abstract = "Human cognition exhibits systematic compositionality, the algebraic ability to generate infinite novel combinations from finite learned components, which is the key to understanding and reasoning about complex logic. In this work, we investigate the compositionality of large language models (LLMs) in mathematical reasoning. Specifically, we construct a new dataset MathTrap by introducing carefully designed logical traps into the problem descriptions of MATH and GSM8K. Since problems with logical flaws are quite rare in the real world, these represent {``}unseen{''} cases to LLMs. Solving these requires the models to systematically compose (1) the mathematical knowledge involved in the original problems with (2) knowledge related to the introduced traps. Our experiments show that while LLMs possess both components of requisite knowledge, they do not \textbf{spontaneously} combine them to handle these novel cases. We explore several methods to mitigate this deficiency, such as natural language prompts, few-shot demonstrations, and fine-tuning. We find that LLMs{'} performance can be improved through the above external intervention. Overall, systematic compositionality remains an open challenge for large language models.",
}
| Human cognition exhibits systematic compositionality, the algebraic ability to generate infinite novel combinations from finite learned components, which is the key to understanding and reasoning about complex logic. In this work, we investigate the compositionality of large language models (LLMs) in mathematical reasoning. Specifically, we construct a new dataset MathTrap by introducing carefully designed logical traps into the problem descriptions of MATH and GSM8K. Since problems with logical flaws are quite rare in the real world, these represent {``}unseen{''} cases to LLMs. Solving these requires the models to systematically compose (1) the mathematical knowledge involved in the original problems with (2) knowledge related to the introduced traps. Our experiments show that while LLMs possess both components of requisite knowledge, they do not \textbf{spontaneously} combine them to handle these novel cases. We explore several methods to mitigate this deficiency, such as natural language prompts, few-shot demonstrations, and fine-tuning. We find that LLMs{'} performance can be improved through the above external intervention. Overall, systematic compositionality remains an open challenge for large language models. | [
"Zhao, Jun",
"Tong, Jingqi",
"Mou, Yurong",
"Zhang, Ming",
"Zhang, Qi",
"Huang, Xuanjing"
] | Exploring the Compositional Deficiency of Large Language Models in Mathematical Reasoning Through Trap Problems | emnlp-main.915 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.916.bib | https://aclanthology.org/2024.emnlp-main.916/ | @inproceedings{shen-etal-2024-scaling,
title = "Scaling Laws for Linear Complexity Language Models",
author = "Shen, Xuyang and
Li, Dong and
Leng, Ruitao and
Qin, Zhen and
Sun, Weigao and
Zhong, Yiran",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.916",
pages = "16377--16426",
abstract = "The interest in linear complexity models for large language models is on the rise, although their scaling capacity remains uncertain. In this study, we present the scaling laws for linear complexity language models to establish a foundation for their scalability. Specifically, we examine the scaling behaviors of three efficient linear architectures. These include TNL, a linear attention model with data-independent decay; HGRN2, a linear RNN with data-dependent decay; and cosFormer2, a linear attention model without decay. We also include LLaMA as a baseline architecture for comparison with softmax attention. These models were trained with six variants, ranging from 70M to 7B parameters on a 300B-token corpus, and evaluated with a total of 1,376 intermediate checkpoints on various downstream tasks. These tasks include validation loss, commonsense reasoning, and information retrieval and generation. The study reveals that existing linear complexity language models exhibit similar scaling capabilities as conventional transformer-based models while also demonstrating superior linguistic proficiency and knowledge retention.",
}
| The interest in linear complexity models for large language models is on the rise, although their scaling capacity remains uncertain. In this study, we present the scaling laws for linear complexity language models to establish a foundation for their scalability. Specifically, we examine the scaling behaviors of three efficient linear architectures. These include TNL, a linear attention model with data-independent decay; HGRN2, a linear RNN with data-dependent decay; and cosFormer2, a linear attention model without decay. We also include LLaMA as a baseline architecture for comparison with softmax attention. These models were trained with six variants, ranging from 70M to 7B parameters on a 300B-token corpus, and evaluated with a total of 1,376 intermediate checkpoints on various downstream tasks. These tasks include validation loss, commonsense reasoning, and information retrieval and generation. The study reveals that existing linear complexity language models exhibit similar scaling capabilities as conventional transformer-based models while also demonstrating superior linguistic proficiency and knowledge retention. | [
"Shen, Xuyang",
"Li, Dong",
"Leng, Ruitao",
"Qin, Zhen",
"Sun, Weigao",
"Zhong, Yiran"
] | Scaling Laws for Linear Complexity Language Models | emnlp-main.916 | Poster | 2406.16690 | [
"https://github.com/opennlplab/scalinglaws"
] | https://huggingface.co/papers/2406.16690 | 5 | 22 | 2 | 6 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.917.bib | https://aclanthology.org/2024.emnlp-main.917/ | @inproceedings{do-etal-2024-autoregressive-multi,
title = "Autoregressive Multi-trait Essay Scoring via Reinforcement Learning with Scoring-aware Multiple Rewards",
author = "Do, Heejin and
Ryu, Sangwon and
Lee, Gary",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.917",
pages = "16427--16438",
abstract = "Recent advances in automated essay scoring (AES) have shifted towards evaluating multiple traits to provide enriched feedback. Like typical AES systems, multi-trait AES employs the quadratic weighted kappa (QWK) to measure agreement with human raters, aligning closely with the rating schema; however, its non-differentiable nature prevents its direct use in neural network training. In this paper, we propose Scoring-aware Multi-reward Reinforcement Learning (SaMRL), which integrates actual evaluation schemes into the training process by designing QWK-based rewards with a mean-squared error penalty for multi-trait AES. Existing reinforcement learning (RL) applications in AES are limited to classification models despite associated performance degradation, as RL requires probability distributions; instead, we adopt an autoregressive score generation framework to leverage token generation probabilities for robust multi-trait score predictions. Empirical analyses demonstrate that SaMRL facilitates model training, notably enhancing scoring of previously inferior prompts.",
}
| Recent advances in automated essay scoring (AES) have shifted towards evaluating multiple traits to provide enriched feedback. Like typical AES systems, multi-trait AES employs the quadratic weighted kappa (QWK) to measure agreement with human raters, aligning closely with the rating schema; however, its non-differentiable nature prevents its direct use in neural network training. In this paper, we propose Scoring-aware Multi-reward Reinforcement Learning (SaMRL), which integrates actual evaluation schemes into the training process by designing QWK-based rewards with a mean-squared error penalty for multi-trait AES. Existing reinforcement learning (RL) applications in AES are limited to classification models despite associated performance degradation, as RL requires probability distributions; instead, we adopt an autoregressive score generation framework to leverage token generation probabilities for robust multi-trait score predictions. Empirical analyses demonstrate that SaMRL facilitates model training, notably enhancing scoring of previously inferior prompts. | [
"Do, Heejin",
"Ryu, Sangwon",
"Lee, Gary"
] | Autoregressive Multi-trait Essay Scoring via Reinforcement Learning with Scoring-aware Multiple Rewards | emnlp-main.917 | Poster | 2409.17472 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
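Because the SaMRL entry above designs its reward around the quadratic weighted kappa (QWK), a minimal sketch of that metric follows; the implementation and toy scores are illustrative, and SaMRL additionally pairs the QWK-based reward with a mean-squared-error penalty, which is omitted here.

```python
# Minimal sketch of quadratic weighted kappa (QWK), the agreement metric
# that SaMRL turns into a reinforcement-learning reward. Toy data only.
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, num_classes):
    o = np.zeros((num_classes, num_classes))          # observed rating matrix
    for t, p in zip(y_true, y_pred):
        o[t, p] += 1
    e = np.outer(o.sum(axis=1), o.sum(axis=0)) / len(y_true)  # chance agreement
    w = np.array([[(i - j) ** 2 for j in range(num_classes)]
                  for i in range(num_classes)]) / (num_classes - 1) ** 2
    return 1.0 - (w * o).sum() / (w * e).sum()

# Toy usage: gold vs. predicted scores for one trait on a 0..3 scale.
print(quadratic_weighted_kappa([0, 1, 2, 3, 3], [0, 2, 2, 3, 2], num_classes=4))
```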
https://aclanthology.org/2024.emnlp-main.918.bib | https://aclanthology.org/2024.emnlp-main.918/ | @inproceedings{liu-etal-2024-intrinsic,
title = "Intrinsic Self-correction for Enhanced Morality: An Analysis of Internal Mechanisms and the Superficial Hypothesis",
author = "Liu, Guangliang and
Mao, Haitao and
Tang, Jiliang and
Johnson, Kristen",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.918",
pages = "16439--16455",
abstract = "Large Language Models (LLMs) are capable of producing content that perpetuates stereotypes, discrimination, and toxicity.The recently proposed \textit{moral self-correction} is a computationally efficient method for reducing harmful content in the responses of LLMs. However, the process of how injecting self-correction instructions can modify the behavior of LLMs remains under-explored. In this paper, we explore the effectiveness of moral self-correction by answering three research questions: (1) In what scenarios does moral self-correction work? (2) What are the internal mechanisms of LLMs, e.g., hidden states, that are influenced by moral self-correction instructions? (3) Is intrinsic moral self-correction actually superficial in terms of reduced immorality in hidden states? We argue that self-correction can help LLMs find a shortcut to more morally correct output, rather than truly reducing the immorality stored in hidden states.Through empirical investigation with tasks of language generation and multi-choice question answering, we conclude: (i) LLMs exhibit good performance across both tasks, and self-correction instructions are particularly beneficial when the correct answer is already top-ranked; (ii) The morality levels in intermediate hidden states are strong indicators as to whether one instruction would be more effective than another; (iii) Based on our analysis of intermediate hidden states and task case studies of self-correction behaviors, we are first to propose the hypothesis that intrinsic moral self-correction is in fact superficial.",
}
| Large Language Models (LLMs) are capable of producing content that perpetuates stereotypes, discrimination, and toxicity. The recently proposed \textit{moral self-correction} is a computationally efficient method for reducing harmful content in the responses of LLMs. However, the process of how injecting self-correction instructions can modify the behavior of LLMs remains under-explored. In this paper, we explore the effectiveness of moral self-correction by answering three research questions: (1) In what scenarios does moral self-correction work? (2) What are the internal mechanisms of LLMs, e.g., hidden states, that are influenced by moral self-correction instructions? (3) Is intrinsic moral self-correction actually superficial in terms of reduced immorality in hidden states? We argue that self-correction can help LLMs find a shortcut to more morally correct output, rather than truly reducing the immorality stored in hidden states. Through empirical investigation with tasks of language generation and multiple-choice question answering, we conclude: (i) LLMs exhibit good performance across both tasks, and self-correction instructions are particularly beneficial when the correct answer is already top-ranked; (ii) the morality levels in intermediate hidden states are strong indicators as to whether one instruction would be more effective than another; (iii) based on our analysis of intermediate hidden states and task case studies of self-correction behaviors, we are the first to propose the hypothesis that intrinsic moral self-correction is in fact superficial. | [
"Liu, Guangliang",
"Mao, Haitao",
"Tang, Jiliang",
"Johnson, Kristen"
] | Intrinsic Self-correction for Enhanced Morality: An Analysis of Internal Mechanisms and the Superficial Hypothesis | emnlp-main.918 | Poster | 2407.15286 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.919.bib | https://aclanthology.org/2024.emnlp-main.919/ | @inproceedings{zhang-etal-2024-atap,
title = "{ATAP}: Automatic Template-Augmented Commonsense Knowledge Graph Completion via Pre-Trained Language Models",
author = "Zhang, Fu and
Ding, Yifan and
Cheng, Jingwei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.919",
pages = "16456--16472",
abstract = "The mission of commonsense knowledge graph completion (CKGC) is to infer missing facts from known commonsense knowledge. CKGC methods can be roughly divided into two categories: triple-based methods and text-based methods. Due to the imbalanced distribution of entities and limited structural information, triple-based methods struggle with long-tail entities. Text-based methods alleviate this issue, but require extensive training and fine-tuning of language models, which reduces efficiency. To alleviate these problems, we propose ATAP, the first CKGC framework that utilizes automatically generated continuous prompt templates combined with pre-trained language models (PLMs). Moreover, ATAP uses a carefully designed new prompt template training strategy, guiding PLMs to generate optimal prompt templates for CKGC tasks. Combining the rich knowledge of PLMs with the template automatic augmentation strategy, ATAP effectively mitigates the long-tail problem and enhances CKGC performance. Results on benchmark datasets show that ATAP achieves state-of-the-art performance overall.",
}
| The mission of commonsense knowledge graph completion (CKGC) is to infer missing facts from known commonsense knowledge. CKGC methods can be roughly divided into two categories: triple-based methods and text-based methods. Due to the imbalanced distribution of entities and limited structural information, triple-based methods struggle with long-tail entities. Text-based methods alleviate this issue, but require extensive training and fine-tuning of language models, which reduces efficiency. To alleviate these problems, we propose ATAP, the first CKGC framework that utilizes automatically generated continuous prompt templates combined with pre-trained language models (PLMs). Moreover, ATAP uses a carefully designed new prompt template training strategy, guiding PLMs to generate optimal prompt templates for CKGC tasks. Combining the rich knowledge of PLMs with the template automatic augmentation strategy, ATAP effectively mitigates the long-tail problem and enhances CKGC performance. Results on benchmark datasets show that ATAP achieves state-of-the-art performance overall. | [
"Zhang, Fu",
"Ding, Yifan",
"Cheng, Jingwei"
] | ATAP: Automatic Template-Augmented Commonsense Knowledge Graph Completion via Pre-Trained Language Models | emnlp-main.919 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
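The ATAP entry above centers on continuous prompt templates for a PLM; the sketch below shows the generic soft-prompt mechanism such templates build on: trainable vectors prepended to the frozen model's input embeddings. The class name, lengths, and dimensions are illustrative assumptions, not ATAP's actual code.

```python
# Minimal PyTorch sketch of a continuous ("soft") prompt: learnable template
# vectors prepended to input embeddings. Generic mechanism, not ATAP itself.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, prompt_len: int, hidden_dim: int):
        super().__init__()
        # Trainable template vectors, optimized instead of hand-written text.
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, hidden_dim) from a frozen PLM embedder.
        prompt = self.prompt.unsqueeze(0).expand(token_embeds.size(0), -1, -1)
        return torch.cat([prompt, token_embeds], dim=1)

soft = SoftPrompt(prompt_len=10, hidden_dim=768)
print(soft(torch.randn(4, 32, 768)).shape)  # torch.Size([4, 42, 768])
```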
https://aclanthology.org/2024.emnlp-main.920.bib | https://aclanthology.org/2024.emnlp-main.920/ | @inproceedings{juneja-etal-2024-lm2,
title = "{LM}2: A Simple Society of Language Models Solves Complex Reasoning",
author = "Juneja, Gurusha and
Dutta, Subhabrata and
Chakraborty, Tanmoy",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.920",
pages = "16473--16484",
abstract = "Despite demonstrating emergent reasoning abilities, Large Language Models (LLMS) often lose track of complex, multi-step reasoning. Existing studies show that providing guidance via decomposing the original question into multiple subproblems elicits more robustness in LLM reasoning {--} a decomposer generates the subproblems, and a solver solves each of these subproblems. However, these techniques fail to accommodate coordination between the decomposer and the solver modules (either in a single model or different specialized ones) {--} the decomposer does not keep track of the ability of the solver to follow the decomposed reasoning. In this paper, we propose LM2 to address these challenges. LM2 modularizes the decomposition, solution, and verification into three different language models. The decomposer module identifies the key concepts necessary to solve the problem and generates step-by-step subquestions according to the reasoning requirement. The solver model generates the solution to the subproblems that are then checked by the verifier module; depending upon the feedback from the verifier, the reasoning context is constructed using the subproblems and the solutions. These models are trained to coordinate using policy learning. Exhaustive experimentation suggests the superiority of LM2 over existing methods on in- and out-domain reasoning problems, outperforming the best baselines by 8.1{\%} on MATH, 7.71{\%} on JEEBench, and 9.7{\%} on MedQA problems (code available at https://github.com/ LCS2-IIITD/Language{\_}Model{\_}Multiplex).",
}
| Despite demonstrating emergent reasoning abilities, Large Language Models (LLMs) often lose track of complex, multi-step reasoning. Existing studies show that providing guidance via decomposing the original question into multiple subproblems elicits more robustness in LLM reasoning {--} a decomposer generates the subproblems, and a solver solves each of these subproblems. However, these techniques fail to accommodate coordination between the decomposer and the solver modules (either in a single model or different specialized ones) {--} the decomposer does not keep track of the ability of the solver to follow the decomposed reasoning. In this paper, we propose LM2 to address these challenges. LM2 modularizes the decomposition, solution, and verification into three different language models. The decomposer module identifies the key concepts necessary to solve the problem and generates step-by-step subquestions according to the reasoning requirement. The solver model generates the solution to the subproblems that are then checked by the verifier module; depending upon the feedback from the verifier, the reasoning context is constructed using the subproblems and the solutions. These models are trained to coordinate using policy learning. Exhaustive experimentation suggests the superiority of LM2 over existing methods on in- and out-of-domain reasoning problems, outperforming the best baselines by 8.1{\%} on MATH, 7.71{\%} on JEEBench, and 9.7{\%} on MedQA problems (code available at https://github.com/LCS2-IIITD/Language{\_}Model{\_}Multiplex). | [
"Juneja, Gurusha",
"Dutta, Subhabrata",
"Chakraborty, Tanmoy"
] | LM2: A Simple Society of Language Models Solves Complex Reasoning | emnlp-main.920 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.921.bib | https://aclanthology.org/2024.emnlp-main.921/ | @inproceedings{meister-etal-2024-towards,
title = "Towards a Similarity-adjusted Surprisal Theory",
author = "Meister, Clara and
Giulianelli, Mario and
Pimentel, Tiago",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.921",
pages = "16485--16498",
abstract = "Surprisal theory posits that the cognitive effort required to comprehend a word is determined by its contextual predictability, quantified assurprisal. Traditionally, surprisal theory treats words as distinct entities, overlooking any potential similarity between them. Giulianelli et al. (2023) address this limitation by introducing information value, a measure of predictability designed to account for similarities between communicative units. Our work leverages Ricotta and Szeidl{'}s (2006) diversity index to extend surprisal into a metric that we term similarity-adjusted surprisal, exposing a mathematical relationship between surprisal and information value. Similarity-adjusted surprisal aligns with information value when considering graded similarities and reduces to standard surprisal when words are treated as distinct. Experimental results with reading time data indicate that similarity-adjusted surprisal adds predictive power beyond standard surprisal for certain datasets, suggesting it serves as a complementary measure of comprehension effort.",
}
| Surprisal theory posits that the cognitive effort required to comprehend a word is determined by its contextual predictability, quantified as surprisal. Traditionally, surprisal theory treats words as distinct entities, overlooking any potential similarity between them. Giulianelli et al. (2023) address this limitation by introducing information value, a measure of predictability designed to account for similarities between communicative units. Our work leverages Ricotta and Szeidl{'}s (2006) diversity index to extend surprisal into a metric that we term similarity-adjusted surprisal, exposing a mathematical relationship between surprisal and information value. Similarity-adjusted surprisal aligns with information value when considering graded similarities and reduces to standard surprisal when words are treated as distinct. Experimental results with reading time data indicate that similarity-adjusted surprisal adds predictive power beyond standard surprisal for certain datasets, suggesting it serves as a complementary measure of comprehension effort. | [
"Meister, Clara",
"Giulianelli, Mario",
"Pimentel, Tiago"
] | Towards a Similarity-adjusted Surprisal Theory | emnlp-main.921 | Poster | 2410.17676 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
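A small numeric sketch for the surprisal entry above: standard surprisal is -log p(w | context), and one plausible reading of the Ricotta-Szeidl-style adjustment is -log of the similarity-weighted probability mass. The distribution and similarity matrix below are made up; note that an identity similarity matrix recovers standard surprisal, matching the reduction stated in the abstract.

```python
# Minimal sketch contrasting standard and similarity-adjusted surprisal.
# The adjusted form shown is a hedged reading of the Ricotta-Szeidl idea.
import numpy as np

def surprisal(p_next, w):
    return -np.log(p_next[w])                     # -log p(w | context)

def similarity_adjusted_surprisal(p_next, w, sim):
    return -np.log(sim[w] @ p_next)               # -log sum_j sim(w, w_j) p(w_j)

p = np.array([0.6, 0.3, 0.1])                     # next-word distribution
sim = np.array([[1.0, 0.8, 0.1],                  # words 0 and 1: near-synonyms
                [0.8, 1.0, 0.1],
                [0.1, 0.1, 1.0]])
print(surprisal(p, 1))                            # -log 0.3  ~ 1.20
print(similarity_adjusted_surprisal(p, 1, sim))   # -log 0.79 ~ 0.24: cheaper
```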
https://aclanthology.org/2024.emnlp-main.922.bib | https://aclanthology.org/2024.emnlp-main.922/ | @inproceedings{omar-etal-2024-multi,
title = "Multi-Level Information Retrieval Augmented Generation for Knowledge-based Visual Question Answering",
author = "Omar, Adjali and
Ferret, Olivier and
Ghannay, Sahar and
Le Borgne, Herv{\'e}",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.922",
pages = "16499--16513",
abstract = "The Knowledge-Aware Visual Question Answering about Entity task aims to disambiguate entities using textual and visual information, as well as knowledge. It usually relies on two independent steps, information retrieval then reading comprehension, that do not benefit each other. Retrieval Augmented Generation (RAG) offers a solution by using generated answers as feedback for retrieval training. RAG usually relies solely on pseudo-relevant passages retrieved from external knowledge bases which can lead to ineffective answer generation. In this work, we propose a multi-level information RAG approach that enhances answer generation through entity retrieval and query expansion. We formulate a joint-training RAG loss such that answer generation is conditioned on both entity and passage retrievals. We show through experiments new state-of-the-art performance on the VIQuAE KB-VQA benchmark and demonstrate that our approach can help retrieve more actual relevant knowledge to generate accurate answers.",
}
| The Knowledge-Aware Visual Question Answering about Entity task aims to disambiguate entities using textual and visual information, as well as knowledge. It usually relies on two independent steps, information retrieval then reading comprehension, that do not benefit each other. Retrieval Augmented Generation (RAG) offers a solution by using generated answers as feedback for retrieval training. RAG usually relies solely on pseudo-relevant passages retrieved from external knowledge bases which can lead to ineffective answer generation. In this work, we propose a multi-level information RAG approach that enhances answer generation through entity retrieval and query expansion. We formulate a joint-training RAG loss such that answer generation is conditioned on both entity and passage retrievals. We show through experiments new state-of-the-art performance on the VIQuAE KB-VQA benchmark and demonstrate that our approach can help retrieve more actual relevant knowledge to generate accurate answers. | [
"Omar, Adjali",
"Ferret, Olivier",
"Ghannay, Sahar",
"Le Borgne, Herv{\\'e}"
] | Multi-Level Information Retrieval Augmented Generation for Knowledge-based Visual Question Answering | emnlp-main.922 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.923.bib | https://aclanthology.org/2024.emnlp-main.923/ | @inproceedings{he-etal-2024-trust,
title = "Can We Trust the Performance Evaluation of Uncertainty Estimation Methods in Text Summarization?",
author = "He, Jianfeng and
Yang, Runing and
Yu, Linlin and
Li, Changbin and
Jia, Ruoxi and
Chen, Feng and
Jin, Ming and
Lu, Chang-Tien",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.923",
pages = "16514--16575",
abstract = "Text summarization, a key natural language generation (NLG) task, is vital in various domains. However, the high cost of inaccurate summaries in risk-critical applications, particularly those involving human-in-the-loop decision-making, raises concerns about the reliability of uncertainty estimation on text summarization (UE-TS) evaluation methods. This concern stems from the dependency of uncertainty model metrics on diverse and potentially conflicting NLG metrics. To address this issue, we introduce a comprehensive UE-TS benchmark incorporating 31 NLG metrics across four dimensions. The benchmark evaluates the uncertainty estimation capabilities of two large language models and one pre-trained language model on three datasets, with human-annotation analysis incorporated where applicable. We also assess the performance of 14 common uncertainty estimation methods within this benchmark. Our findings emphasize the importance of considering multiple uncorrelated NLG metrics and diverse uncertainty estimation methods to ensure reliable and efficient evaluation of UE-TS techniques. Our code and data are available: https://github.com/he159ok/Benchmark-of-Uncertainty-Estimation-Methods-in-Text-Summarization.",
}
| Text summarization, a key natural language generation (NLG) task, is vital in various domains. However, the high cost of inaccurate summaries in risk-critical applications, particularly those involving human-in-the-loop decision-making, raises concerns about the reliability of uncertainty estimation on text summarization (UE-TS) evaluation methods. This concern stems from the dependency of uncertainty model metrics on diverse and potentially conflicting NLG metrics. To address this issue, we introduce a comprehensive UE-TS benchmark incorporating 31 NLG metrics across four dimensions. The benchmark evaluates the uncertainty estimation capabilities of two large language models and one pre-trained language model on three datasets, with human-annotation analysis incorporated where applicable. We also assess the performance of 14 common uncertainty estimation methods within this benchmark. Our findings emphasize the importance of considering multiple uncorrelated NLG metrics and diverse uncertainty estimation methods to ensure reliable and efficient evaluation of UE-TS techniques. Our code and data are available: https://github.com/he159ok/Benchmark-of-Uncertainty-Estimation-Methods-in-Text-Summarization. | [
"He, Jianfeng",
"Yang, Runing",
"Yu, Linlin",
"Li, Changbin",
"Jia, Ruoxi",
"Chen, Feng",
"Jin, Ming",
"Lu, Chang-Tien"
] | Can We Trust the Performance Evaluation of Uncertainty Estimation Methods in Text Summarization? | emnlp-main.923 | Poster | 2406.17274 | [
"https://github.com/he159ok/benchmark-of-uncertainty-estimation-methods-in-text-summarization"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
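For the uncertainty-estimation benchmark above, here is a minimal sketch of one method in the family it evaluates: length-normalized predictive entropy estimated from sampled generations. The token log-probabilities below are placeholders; the benchmark itself compares 14 such methods against 31 NLG metrics.

```python
# Minimal sketch of sample-based predictive entropy for generation tasks.
# Token log-probs are placeholders; higher output = more uncertain model.
import numpy as np

def predictive_entropy(sampled_token_logprobs, length_normalize=True):
    scores = []
    for lps in sampled_token_logprobs:            # one list per sampled summary
        lp = float(np.sum(lps))                   # log p(summary | document)
        if length_normalize:
            lp /= max(len(lps), 1)
        scores.append(lp)
    return -float(np.mean(scores))                # Monte-Carlo entropy estimate

samples = [[-0.2, -0.5, -0.1], [-0.9, -0.4], [-0.3, -0.3, -0.6, -0.2]]
print(predictive_entropy(samples))
```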
https://aclanthology.org/2024.emnlp-main.924.bib | https://aclanthology.org/2024.emnlp-main.924/ | @inproceedings{goldman-etal-2024-really,
title = "Is It Really Long Context if All You Need Is Retrieval? Towards Genuinely Difficult Long Context {NLP}",
author = "Goldman, Omer and
Jacovi, Alon and
Slobodkin, Aviv and
Maimon, Aviya and
Dagan, Ido and
Tsarfaty, Reut",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.924",
pages = "16576--16586",
abstract = "Improvements in language models{'} capabilities have pushed their applications towards longer contexts, making long-context evaluation and development an active research area. However, many disparate use-cases are grouped together under the umbrella term of {``}long-context{''}, defined simply by the total length of the model{'}s input, including - for example - Needle-in-a-Haystack tasks, book summarization, and information aggregation. Given their varied difficulty, in this position paper we argue that conflating different tasks by their context length is unproductive. As a community, we require a more precise vocabulary to understand what makes long-context tasks similar or different. We propose to unpack the taxonomy of long-context based on the properties that make them more difficult with longer contexts. We propose two orthogonal axes of difficulty: (I) Diffusion: How hard is it to find the necessary information in the context? (II) Scope: How much necessary information is there to find? We survey the literature on long-context, provide justification for this taxonomy as an informative descriptor, and situate the literature with respect to it. We conclude that the most difficult and interesting settings, whose necessary information is very long and highly diffused within the input, is severely under-explored. By using a descriptive vocabulary and discussing the relevant properties of difficulty in long-context, we can implement more informed research in this area. We call for a careful design of tasks and benchmarks with distinctly long context, taking into account the characteristics that make it qualitatively different from shorter context.",
}
| Improvements in language models{'} capabilities have pushed their applications towards longer contexts, making long-context evaluation and development an active research area. However, many disparate use cases are grouped together under the umbrella term of {``}long-context{''}, defined simply by the total length of the model{'}s input, including, for example, Needle-in-a-Haystack tasks, book summarization, and information aggregation. Given their varied difficulty, in this position paper we argue that conflating different tasks by their context length is unproductive. As a community, we require a more precise vocabulary to understand what makes long-context tasks similar or different. We propose to unpack the taxonomy of long-context based on the properties that make them more difficult with longer contexts. We propose two orthogonal axes of difficulty: (I) Diffusion: How hard is it to find the necessary information in the context? (II) Scope: How much necessary information is there to find? We survey the literature on long-context, provide justification for this taxonomy as an informative descriptor, and situate the literature with respect to it. We conclude that the most difficult and interesting settings, whose necessary information is very long and highly diffused within the input, are severely under-explored. By using a descriptive vocabulary and discussing the relevant properties of difficulty in long-context, we can conduct more informed research in this area. We call for a careful design of tasks and benchmarks with distinctly long context, taking into account the characteristics that make it qualitatively different from shorter context. | [
"Goldman, Omer",
"Jacovi, Alon",
"Slobodkin, Aviv",
"Maimon, Aviya",
"Dagan, Ido",
"Tsarfaty, Reut"
] | Is It Really Long Context if All You Need Is Retrieval? Towards Genuinely Difficult Long Context NLP | emnlp-main.924 | Poster | 2407.00402 | [
""
] | https://huggingface.co/papers/2407.00402 | 3 | 22 | 1 | 6 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.925.bib | https://aclanthology.org/2024.emnlp-main.925/ | @inproceedings{chizhov-etal-2024-bpe,
title = "{BPE} Gets Picky: Efficient Vocabulary Refinement During Tokenizer Training",
author = "Chizhov, Pavel and
Arnett, Catherine and
Korotkova, Elizaveta and
Yamshchikov, Ivan P.",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.925",
pages = "16587--16604",
abstract = "Language models can greatly benefit from efficient tokenization. However, they still mostly utilize the classical Byte-Pair Encoding (BPE) algorithm, a simple and reliable method. BPE has been shown to cause such issues as under-trained tokens and sub-optimal compression that may affect the downstream performance. We introduce PickyBPE, a modified BPE algorithm that carries out vocabulary refinement during tokenizer training by removing merges that leave intermediate {``}junk{''} tokens. Our method improves vocabulary efficiency, eliminates under-trained tokens, and does not compromise text compression. Our experiments show that this method either improves downstream performance or does not harm it.",
}
| Language models can greatly benefit from efficient tokenization. However, they still mostly utilize the classical Byte-Pair Encoding (BPE) algorithm, a simple and reliable method. BPE has been shown to cause such issues as under-trained tokens and sub-optimal compression that may affect the downstream performance. We introduce PickyBPE, a modified BPE algorithm that carries out vocabulary refinement during tokenizer training by removing merges that leave intermediate {``}junk{''} tokens. Our method improves vocabulary efficiency, eliminates under-trained tokens, and does not compromise text compression. Our experiments show that this method either improves downstream performance or does not harm it. | [
"Chizhov, Pavel",
"Arnett, Catherine",
"Korotkova, Elizaveta",
"Yamshchikov, Ivan P."
] | BPE Gets Picky: Efficient Vocabulary Refinement During Tokenizer Training | emnlp-main.925 | Poster | 2409.04599 | [
"https://github.com/pchizhov/picky_bpe"
] | https://huggingface.co/papers/2409.04599 | 2 | 1 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 |
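The PickyBPE entry above prunes intermediate "junk" tokens while training the tokenizer. The sketch below only illustrates the intuition: a plain BPE trainer plus a post-hoc filter that drops merged tokens that never survive in the final segmentation; PickyBPE's actual criterion operates during training and is more principled than this filter.

```python
# Minimal BPE trainer plus a post-hoc "picky" filter. Illustrative only;
# not the paper's in-training pruning criterion.
from collections import Counter

def train_bpe(words, num_merges):
    """words: Counter mapping tuple-of-symbols -> corpus frequency."""
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for w, f in words.items():
            for a, b in zip(w, w[1:]):
                pairs[(a, b)] += f
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]       # most frequent adjacent pair
        merges.append((a, b))
        merged = Counter()
        for w, f in words.items():                # re-segment with the new merge
            out, i = [], 0
            while i < len(w):
                if i + 1 < len(w) and (w[i], w[i + 1]) == (a, b):
                    out.append(a + b)
                    i += 2
                else:
                    out.append(w[i])
                    i += 1
            merged[tuple(out)] += f
        words = merged
    return merges, words

corpus = Counter({
    tuple("lower") + ("</w>",): 5,
    tuple("lowest") + ("</w>",): 4,
    tuple("newer") + ("</w>",): 3,
})
merges, segmented = train_bpe(corpus, num_merges=8)
surviving = {tok for w in segmented for tok in w}
junk = {a + b for a, b in merges} - surviving     # fully consumed intermediates
print("pruned intermediate tokens:", sorted(junk))
```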
https://aclanthology.org/2024.emnlp-main.926.bib | https://aclanthology.org/2024.emnlp-main.926/ | @inproceedings{shi-etal-2024-segment,
title = "{SEGMENT}+: Long Text Processing with Short-Context Language Models",
author = "Shi, Wei and
Li, Shuang and
Yu, Kerun and
Chen, Jinglei and
Liang, Zujie and
Wu, Xinhui and
Qian, Yuxi and
Wei, Feng and
Zheng, Bo and
Liang, Jiaqing and
Chen, Jiangjie and
Xiao, Yanghua",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.926",
pages = "16605--16617",
abstract = "There is a growing interest in expanding the input capacity of language models (LMs) across various domains. However, simply increasing the context window does not guarantee robust performance across diverse long-input processing tasks, such as understanding extensive documents and extracting detailed information from lengthy and noisy data. In response, we introduce Segment+, a general framework that enables LMs to handle extended inputs within limited context windows efficiently. Segment+ utilizes structured notes and a filtering module to manage information flow, resulting in a system that is both controllable and interpretable. Our extensive experiments across various model sizes, focusing on long-document question-answering and Needle-in-a-Haystack tasks, demonstrate the effectiveness of Segment+ in improving performance.",
}
| There is a growing interest in expanding the input capacity of language models (LMs) across various domains. However, simply increasing the context window does not guarantee robust performance across diverse long-input processing tasks, such as understanding extensive documents and extracting detailed information from lengthy and noisy data. In response, we introduce Segment+, a general framework that enables LMs to handle extended inputs within limited context windows efficiently. Segment+ utilizes structured notes and a filtering module to manage information flow, resulting in a system that is both controllable and interpretable. Our extensive experiments across various model sizes, focusing on long-document question-answering and Needle-in-a-Haystack tasks, demonstrate the effectiveness of Segment+ in improving performance. | [
"Shi, Wei",
"Li, Shuang",
"Yu, Kerun",
"Chen, Jinglei",
"Liang, Zujie",
"Wu, Xinhui",
"Qian, Yuxi",
"Wei, Feng",
"Zheng, Bo",
"Liang, Jiaqing",
"Chen, Jiangjie",
"Xiao, Yanghua"
] | SEGMENT+: Long Text Processing with Short-Context Language Models | emnlp-main.926 | Poster | 2410.06519 | [
""
] | https://huggingface.co/papers/2410.06519 | 1 | 0 | 0 | 12 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.927.bib | https://aclanthology.org/2024.emnlp-main.927/ | @inproceedings{yin-etal-2024-explicit,
title = "Explicit Memory Learning with Expectation Maximization",
author = "Yin, Zhangyue and
Sun, Qiushi and
Guo, Qipeng and
Zeng, Zhiyuan and
Cheng, Qinyuan and
Qiu, Xipeng and
Huang, Xuanjing",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.927",
pages = "16618--16635",
}
| No abstract found | [
"Yin, Zhangyue",
"Sun, Qiushi",
"Guo, Qipeng",
"Zeng, Zhiyuan",
"Cheng, Qinyuan",
"Qiu, Xipeng",
"Huang, Xuanjing"
] | Explicit Memory Learning with Expectation Maximization | emnlp-main.927 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.928.bib | https://aclanthology.org/2024.emnlp-main.928/ | @inproceedings{nair-etal-2024-closing,
title = "Closing the Loop: Learning to Generate Writing Feedback via Language Model Simulated Student Revisions",
author = "Nair, Inderjeet Jayakumar and
Tan, Jiaye and
Su, Xiaotian and
Gere, Anne and
Wang, Xu and
Wang, Lu",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.928",
pages = "16636--16657",
abstract = "Providing feedback is widely recognized as crucial for refining students{'} writing skills. Recent advances in language models (LMs) have made it possible to automatically generate feedback that is actionable and well-aligned with human-specified attributes. However, it remains unclear whether the feedback generated by these models is truly effective in enhancing the quality of student revisions. Moreover, prompting LMs with a precise set of instructions to generate feedback is nontrivial due to the lack of consensus regarding the specific attributes that can lead to improved revising performance. To address these challenges, we propose PROF that PROduces Feedback via learning from LM simulated student revisions. PROF aims to iteratively optimize the feedback generator by directly maximizing the effectiveness of students{'} overall revising performance as simulated by LMs. Focusing on an economic essay assignment, we empirically test the efficacy of PROF and observe that our approach not only surpasses a variety of baseline methods in effectiveness of improving students{'} writing but also demonstrates enhanced pedagogical values, even though it was not explicitly trained for this aspect.",
}
| Providing feedback is widely recognized as crucial for refining students{'} writing skills. Recent advances in language models (LMs) have made it possible to automatically generate feedback that is actionable and well-aligned with human-specified attributes. However, it remains unclear whether the feedback generated by these models is truly effective in enhancing the quality of student revisions. Moreover, prompting LMs with a precise set of instructions to generate feedback is nontrivial due to the lack of consensus regarding the specific attributes that can lead to improved revising performance. To address these challenges, we propose PROF that PROduces Feedback via learning from LM simulated student revisions. PROF aims to iteratively optimize the feedback generator by directly maximizing the effectiveness of students{'} overall revising performance as simulated by LMs. Focusing on an economics essay assignment, we empirically test the efficacy of PROF and observe that our approach not only surpasses a variety of baseline methods in the effectiveness of improving students{'} writing but also demonstrates enhanced pedagogical values, even though it was not explicitly trained for this aspect. | [
"Nair, Inderjeet Jayakumar",
"Tan, Jiaye",
"Su, Xiaotian",
"Gere, Anne",
"Wang, Xu",
"Wang, Lu"
] | Closing the Loop: Learning to Generate Writing Feedback via Language Model Simulated Student Revisions | emnlp-main.928 | Poster | 2410.08058 | [
"https://github.com/launchnlp/PROF"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.929.bib | https://aclanthology.org/2024.emnlp-main.929/ | @inproceedings{shen-etal-2024-small,
title = "Small {LLM}s Are Weak Tool Learners: A Multi-{LLM} Agent",
author = "Shen, Weizhou and
Li, Chenliang and
Chen, Hongzhan and
Yan, Ming and
Quan, Xiaojun and
Chen, Hehong and
Zhang, Ji and
Huang, Fei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.929",
pages = "16658--16680",
abstract = "Large Language Model (LLM) agents significantly extend the capabilities of standalone LLMs, empowering them to interact with external tools (e.g., APIs, functions) and complete various tasks in a self-directed fashion. The challenge of tool use demands that LLMs not only understand user queries and generate answers accurately but also excel in task planning, tool invocation, and result summarization. While traditional works focus on training a single LLM with all these capabilities, performance limitations become apparent, particularly with smaller models. To overcome these challenges, we propose a novel approach that decomposes the aforementioned capabilities into a planner, caller, and summarizer. Each component is implemented by a single LLM that focuses on a specific capability and collaborates with others to accomplish the task. This modular framework facilitates individual updates and the potential use of smaller LLMs for building each capability. To effectively train this framework, we introduce a two-stage training paradigm. First, we fine-tune a backbone LLM on the entire dataset without discriminating sub-tasks, providing the model with a comprehensive understanding of the task. Second, the fine-tuned LLM is used to instantiate the planner, caller, and summarizer respectively, which are continually fine-tuned on respective sub-tasks. Evaluation across various tool-use benchmarks illustrates that our proposed multi-LLM framework surpasses the traditional single-LLM approach, highlighting its efficacy and advantages in tool learning.",
}
| Large Language Model (LLM) agents significantly extend the capabilities of standalone LLMs, empowering them to interact with external tools (e.g., APIs, functions) and complete various tasks in a self-directed fashion. The challenge of tool use demands that LLMs not only understand user queries and generate answers accurately but also excel in task planning, tool invocation, and result summarization. While traditional works focus on training a single LLM with all these capabilities, performance limitations become apparent, particularly with smaller models. To overcome these challenges, we propose a novel approach that decomposes the aforementioned capabilities into a planner, caller, and summarizer. Each component is implemented by a single LLM that focuses on a specific capability and collaborates with others to accomplish the task. This modular framework facilitates individual updates and the potential use of smaller LLMs for building each capability. To effectively train this framework, we introduce a two-stage training paradigm. First, we fine-tune a backbone LLM on the entire dataset without discriminating sub-tasks, providing the model with a comprehensive understanding of the task. Second, the fine-tuned LLM is used to instantiate the planner, caller, and summarizer respectively, which are continually fine-tuned on respective sub-tasks. Evaluation across various tool-use benchmarks illustrates that our proposed multi-LLM framework surpasses the traditional single-LLM approach, highlighting its efficacy and advantages in tool learning. | [
"Shen, Weizhou",
"Li, Chenliang",
"Chen, Hongzhan",
"Yan, Ming",
"Quan, Xiaojun",
"Chen, Hehong",
"Zhang, Ji",
"Huang, Fei"
] | Small LLMs Are Weak Tool Learners: A Multi-LLM Agent | emnlp-main.929 | Poster | 2401.07324 | [
"https://github.com/x-plug/multi-llm-agent"
] | https://huggingface.co/papers/2401.07324 | 1 | 3 | 2 | 8 | [] | [] | [] | [] | [] | [] | 1 |
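The multi-LLM agent entry above splits tool use into a planner, a caller, and a summarizer. Below is a minimal control-loop sketch of that decomposition; the llm_* callables, the "name: argument" tool-call format, and the FINISH convention are illustrative assumptions, not the paper's interface.

```python
# Minimal sketch of a planner -> caller -> summarizer tool-use loop.
# The three callables stand in for separately fine-tuned (small) LLMs.
from typing import Callable, Dict

def run_agent(query: str,
              llm_plan: Callable[[str], str],
              llm_call: Callable[[str], str],
              llm_summarize: Callable[[str], str],
              tools: Dict[str, Callable[[str], str]],
              max_steps: int = 5) -> str:
    history = f"User: {query}"
    for _ in range(max_steps):
        plan = llm_plan(history)                      # next sub-task, or FINISH
        if plan.strip() == "FINISH":
            break
        call = llm_call(history + "\nPlan: " + plan)  # e.g. "calc: 2 + 3"
        name, _, arg = call.partition(":")
        result = tools.get(name.strip(), lambda a: "unknown tool")(arg.strip())
        history += f"\nPlan: {plan}\nTool[{name.strip()}]: {result}"
    return llm_summarize(history)                     # user-facing answer

# Toy usage with stub models and a single calculator tool.
answer = run_agent(
    "What is 2 + 3?",
    llm_plan=lambda h: "FINISH" if "Tool[calc]" in h else "compute the sum",
    llm_call=lambda h: "calc: 2 + 3",
    llm_summarize=lambda h: "The answer is 5.",
    tools={"calc": lambda expr: str(eval(expr))},     # eval: toy use only
)
print(answer)
```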
https://aclanthology.org/2024.emnlp-main.930.bib | https://aclanthology.org/2024.emnlp-main.930/ | @inproceedings{neo-etal-2024-interpreting,
title = "Interpreting Context Look-ups in Transformers: Investigating Attention-{MLP} Interactions",
author = "Neo, Clement and
Cohen, Shay B and
Barez, Fazl",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.930",
pages = "16681--16697",
abstract = "Understanding the inner workings of large language models (LLMs) is crucial for advancing their theoretical foundations and real-world applications. While the attention mechanism and multi-layer perceptrons (MLPs) have been studied independently, their interactions remain largely unexplored. This study investigates how attention heads and next-token neurons interact in LLMs to predict new words. We propose a methodology to identify next-token neurons, find prompts that highly activate them, and determine the upstream attention heads responsible. We then generate and evaluate explanations for the activity of these attention heads in an automated manner. Our findings reveal that some attention heads recognize specific contexts relevant to predicting a token and activate a downstream token-predicting neuron accordingly. This mechanism provides a deeper understanding of how attention heads work with MLP neurons to perform next-token prediction. Our approach offers a foundation for further research into the intricate workings of LLMs and their impact on text generation and understanding.",
}
| Understanding the inner workings of large language models (LLMs) is crucial for advancing their theoretical foundations and real-world applications. While the attention mechanism and multi-layer perceptrons (MLPs) have been studied independently, their interactions remain largely unexplored. This study investigates how attention heads and next-token neurons interact in LLMs to predict new words. We propose a methodology to identify next-token neurons, find prompts that highly activate them, and determine the upstream attention heads responsible. We then generate and evaluate explanations for the activity of these attention heads in an automated manner. Our findings reveal that some attention heads recognize specific contexts relevant to predicting a token and activate a downstream token-predicting neuron accordingly. This mechanism provides a deeper understanding of how attention heads work with MLP neurons to perform next-token prediction. Our approach offers a foundation for further research into the intricate workings of LLMs and their impact on text generation and understanding. | [
"Neo, Clement",
"Cohen, Shay B",
"Barez, Fazl"
] | Interpreting Context Look-ups in Transformers: Investigating Attention-MLP Interactions | emnlp-main.930 | Poster | 2402.15055 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.931.bib | https://aclanthology.org/2024.emnlp-main.931/ | @inproceedings{hengle-etal-2024-still,
title = "Still Not Quite There! Evaluating Large Language Models for Comorbid Mental Health Diagnosis",
author = "Hengle, Amey and
Kulkarni, Atharva and
Patankar, Shantanu Deepak and
Chandrasekaran, Madhumitha and
D{'}silva, Sneha and
Jacob, Jemima S. and
Gupta, Rashmi",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.931",
pages = "16698--16721",
abstract = "In this study, we introduce ANGST, a novel, first of its kind benchmark for depression-anxiety comorbidity classification from social media posts. Unlike contemporary datasets that often oversimplify the intricate interplay between different mental health disorders by treating them as isolated conditions, ANGST enables multi-label classification, allowing each post to be simultaneously identified as indicating depression and/or anxiety. Comprising 2876 meticulously annotated posts by expert psychologists and an additional 7667 silver-labeled posts, ANGST posits a more representative sample of online mental health discourse. Moreover, we benchmark ANGST using various state-of-the-art language models, ranging from Mental-BERT to GPT-4. Our results provide significant insights into the capabilities and limitations of these models in complex diagnostic scenarios. While GPT-4 generally outperforms other models, none achieve an F1 score exceeding 72{\%} in multi-class comorbid classification, underscoring the ongoing challenges in applying language models to mental health diagnostics.",
}
| In this study, we introduce ANGST, a novel, first-of-its-kind benchmark for depression-anxiety comorbidity classification from social media posts. Unlike contemporary datasets that often oversimplify the intricate interplay between different mental health disorders by treating them as isolated conditions, ANGST enables multi-label classification, allowing each post to be simultaneously identified as indicating depression and/or anxiety. Comprising 2876 posts meticulously annotated by expert psychologists and an additional 7667 silver-labeled posts, ANGST provides a more representative sample of online mental health discourse. Moreover, we benchmark ANGST using various state-of-the-art language models, ranging from Mental-BERT to GPT-4. Our results provide significant insights into the capabilities and limitations of these models in complex diagnostic scenarios. While GPT-4 generally outperforms other models, none achieve an F1 score exceeding 72{\%} in multi-class comorbid classification, underscoring the ongoing challenges in applying language models to mental health diagnostics. | [
"Hengle, Amey",
"Kulkarni, Atharva",
"Patankar, Shantanu Deepak",
"Ch",
"rasekaran, Madhumitha",
"D{'}silva, Sneha",
"Jacob, Jemima S.",
"Gupta, Rashmi"
] | Still Not Quite There! Evaluating Large Language Models for Comorbid Mental Health Diagnosis | emnlp-main.931 | Poster | 2410.03908 | [
""
] | https://huggingface.co/papers/2410.03908 | 0 | 0 | 0 | 7 | [] | [
"mental-health-comorbidity-classification/ANGST"
] | [] | [] | [
"mental-health-comorbidity-classification/ANGST"
] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.932.bib | https://aclanthology.org/2024.emnlp-main.932/ | @inproceedings{cui-etal-2024-odyssey,
title = "The Odyssey of Commonsense Causality: From Foundational Benchmarks to Cutting-Edge Reasoning",
author = {Cui, Shaobo and
Jin, Zhijing and
Sch{\"o}lkopf, Bernhard and
Faltings, Boi},
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.932",
pages = "16722--16763",
abstract = "Understanding commonsense causality is a unique mark of intelligence for humans. It helps people understand the principles of the real world better and benefits the decision-making process related to causation. For instance, commonsense causality is crucial in judging whether a defendant{'}s action causes the plaintiff{'}s loss in determining legal liability. Despite its significance, a systematic exploration of this topic is notably lacking. Our comprehensive survey bridges this gap by focusing on taxonomies, benchmarks, acquisition methods, qualitative reasoning, and quantitative measurements in commonsense causality, synthesizing insights from over 200 representative articles. Our work aims to provide a systematic overview, update scholars on recent advancements, provide a practical guide for beginners, and highlight promising future research directions in this vital field. A summary of the related literature is available at https://github.com/cui-shaobo/causality-papers .",
}
| Understanding commonsense causality is a unique mark of intelligence for humans. It helps people understand the principles of the real world better and benefits the decision-making process related to causation. For instance, commonsense causality is crucial in judging whether a defendant{'}s action causes the plaintiff{'}s loss in determining legal liability. Despite its significance, a systematic exploration of this topic is notably lacking. Our comprehensive survey bridges this gap by focusing on taxonomies, benchmarks, acquisition methods, qualitative reasoning, and quantitative measurements in commonsense causality, synthesizing insights from over 200 representative articles. Our work aims to provide a systematic overview, update scholars on recent advancements, provide a practical guide for beginners, and highlight promising future research directions in this vital field. A summary of the related literature is available at https://github.com/cui-shaobo/causality-papers. | [
"Cui, Shaobo",
"Jin, Zhijing",
"Sch{\\\"o}lkopf, Bernhard",
"Faltings, Boi"
] | The Odyssey of Commonsense Causality: From Foundational Benchmarks to Cutting-Edge Reasoning | emnlp-main.932 | Poster | 2406.19307 | [
""
] | https://huggingface.co/papers/2406.19307 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.933.bib | https://aclanthology.org/2024.emnlp-main.933/ | @inproceedings{smadu-etal-2024-investigating,
title = "Investigating Large Language Models for Complex Word Identification in Multilingual and Multidomain Setups",
author = "Sm{\u{a}}du, R{\u{a}}zvan-Alexandru and
Ion, David-Gabriel and
Cercel, Dumitru-Clementin and
Pop, Florin and
Cercel, Mihaela-Claudia",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.933",
pages = "16764--16800",
abstract = "Complex Word Identification (CWI) is an essential step in the lexical simplification task and has recently become a task on its own. Some variations of this binary classification task have emerged, such as lexical complexity prediction (LCP) and complexity evaluation of multi-word expressions (MWE). Large language models (LLMs) recently became popular in the Natural Language Processing community because of their versatility and capability to solve unseen tasks in zero/few-shot settings. Our work investigates LLM usage, specifically open-source models such as Llama 2, Llama 3, and Vicuna v1.5, and closed-source, such as ChatGPT-3.5-turbo and GPT-4o, in the CWI, LCP, and MWE settings. We evaluate zero-shot, few-shot, and fine-tuning settings and show that LLMs struggle in certain conditions or achieve comparable results against existing methods. In addition, we provide some views on meta-learning combined with prompt learning. In the end, we conclude that the current state of LLMs cannot or barely outperform existing methods, which are usually much smaller.",
}
| Complex Word Identification (CWI) is an essential step in the lexical simplification task and has recently become a task on its own. Some variations of this binary classification task have emerged, such as lexical complexity prediction (LCP) and complexity evaluation of multi-word expressions (MWE). Large language models (LLMs) recently became popular in the Natural Language Processing community because of their versatility and capability to solve unseen tasks in zero/few-shot settings. Our work investigates LLM usage, specifically open-source models such as Llama 2, Llama 3, and Vicuna v1.5, and closed-source models, such as ChatGPT-3.5-turbo and GPT-4o, in the CWI, LCP, and MWE settings. We evaluate zero-shot, few-shot, and fine-tuning settings and show that LLMs struggle in certain conditions or achieve only comparable results to existing methods. In addition, we provide some views on meta-learning combined with prompt learning. In the end, we conclude that current LLMs cannot, or can only barely, outperform existing methods, which are usually much smaller. | [
"Sm{\\u{a}}du, R{\\u{a}}zvan-Alex",
"ru",
"Ion, David-Gabriel",
"Cercel, Dumitru-Clementin",
"Pop, Florin",
"Cercel, Mihaela-Claudia"
] | Investigating Large Language Models for Complex Word Identification in Multilingual and Multidomain Setups | emnlp-main.933 | Poster | 2411.01706 | [
"https://github.com/razvanalex-phd/cwi_llm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.934.bib | https://aclanthology.org/2024.emnlp-main.934/ | @inproceedings{gu-etal-2024-model,
title = "Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue",
author = "Gu, Jia-Chen and
Xu, Hao-Xiang and
Ma, Jun-Yu and
Lu, Pan and
Ling, Zhen-Hua and
Chang, Kai-Wei and
Peng, Nanyun",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.934",
pages = "16801--16819",
abstract = "Model editing is a technique that edits the large language models (LLMs) with updated knowledge to alleviate hallucinations without resource-intensive retraining. While current model editing methods can effectively modify a model{'}s behavior within a specific area of interest, they often overlook the potential unintended side effects on the general abilities of LLMs such as reasoning, natural language inference, and question answering. In this paper, we raise concerns that model editing{'}s improvements on factuality may come at the cost of a significant degradation of the model{'}s general abilities. We systematically analyze the side effects by evaluating four popular editing methods on three LLMs across eight representative tasks. Our extensive empirical experiments show that it is challenging for current editing methods to simultaneously improve factuality of LLMs and maintain their general abilities. Our analysis reveals that the side effects are caused by model editing altering the original model weights excessively, leading to overfitting to the edited facts. To mitigate this, a method named RECT is proposed to regularize the edit update weights by imposing constraints on their complexity based on the RElative Change in weighT. Evaluation results show that RECT can significantly mitigate the side effects of editing while still maintaining over 94{\%} editing performance.",
}
| Model editing is a technique that edits the large language models (LLMs) with updated knowledge to alleviate hallucinations without resource-intensive retraining. While current model editing methods can effectively modify a model{'}s behavior within a specific area of interest, they often overlook the potential unintended side effects on the general abilities of LLMs such as reasoning, natural language inference, and question answering. In this paper, we raise concerns that model editing{'}s improvements on factuality may come at the cost of a significant degradation of the model{'}s general abilities. We systematically analyze the side effects by evaluating four popular editing methods on three LLMs across eight representative tasks. Our extensive empirical experiments show that it is challenging for current editing methods to simultaneously improve factuality of LLMs and maintain their general abilities. Our analysis reveals that the side effects are caused by model editing altering the original model weights excessively, leading to overfitting to the edited facts. To mitigate this, a method named RECT is proposed to regularize the edit update weights by imposing constraints on their complexity based on the RElative Change in weighT. Evaluation results show that RECT can significantly mitigate the side effects of editing while still maintaining over 94{\%} editing performance. | [
"Gu, Jia-Chen",
"Xu, Hao-Xiang",
"Ma, Jun-Yu",
"Lu, Pan",
"Ling, Zhen-Hua",
"Chang, Kai-Wei",
"Peng, Nanyun"
] | Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue | emnlp-main.934 | Poster | 2401.04700 | [
"https://github.com/jasonforjoy/model-editing-hurt"
] | https://huggingface.co/papers/2401.04700 | 2 | 3 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.935.bib | https://aclanthology.org/2024.emnlp-main.935/ | @inproceedings{patel-etal-2024-large,
title = "Are Large Language Models In-Context Personalized Summarizers? Get an i{COPERNICUS} Test Done!",
author = "Patel, Divya and
Patel, Pathik and
Chander, Ankush and
Dasgupta, Sourish and
Chakraborty, Tanmoy",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.935",
pages = "16820--16842",
}
| No abstract found | [
"Patel, Divya",
"Patel, Pathik",
"Ch",
"er, Ankush",
"Dasgupta, Sourish",
"Chakraborty, Tanmoy"
] | Are Large Language Models In-Context Personalized Summarizers? Get an iCOPERNICUS Test Done! | emnlp-main.935 | Poster | 2410.00149 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.936.bib | https://aclanthology.org/2024.emnlp-main.936/ | @inproceedings{saley-etal-2024-meditod,
title = "{M}edi{TOD}: An {E}nglish Dialogue Dataset for Medical History Taking with Comprehensive Annotations",
author = "Saley, Vishal Vivek and
Saha, Goonjan and
Das, Rocktim Jyoti and
Raghu, Dinesh and
., Mausam",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.936",
pages = "16843--16877",
abstract = "Medical task-oriented dialogue systems can assist doctors by collecting patient medical history, aiding in diagnosis, or guiding treatment selection, thereby reducing doctor burnout and expanding access to medical services. However, doctor-patient dialogue datasets are not readily available, primarily due to privacy regulations. Moreover, existing datasets lack comprehensive annotations involving medical slots and their different attributes, such as symptoms and their onset, progression, and severity. These comprehensive annotations are crucial for accurate diagnosis. Finally, most existing datasets are non-English, limiting their utility for the larger research community.In response, we introduce MediTOD, a new dataset of doctor-patient dialogues in English for the medical history-taking task. Collaborating with doctors, we devise a questionnaire-based labeling scheme tailored to the medical domain. Then, medical professionals create the dataset with high-quality comprehensive annotations, capturing medical slots and their attributes. We establish benchmarks in supervised and few-shot settings on MediTOD for natural language understanding, policy learning, and natural language generation subtasks, evaluating models from both TOD and biomedical domains. We make MediTOD publicly available for future research.",
}
| Medical task-oriented dialogue systems can assist doctors by collecting patient medical history, aiding in diagnosis, or guiding treatment selection, thereby reducing doctor burnout and expanding access to medical services. However, doctor-patient dialogue datasets are not readily available, primarily due to privacy regulations. Moreover, existing datasets lack comprehensive annotations involving medical slots and their different attributes, such as symptoms and their onset, progression, and severity. These comprehensive annotations are crucial for accurate diagnosis. Finally, most existing datasets are non-English, limiting their utility for the larger research community. In response, we introduce MediTOD, a new dataset of doctor-patient dialogues in English for the medical history-taking task. Collaborating with doctors, we devise a questionnaire-based labeling scheme tailored to the medical domain. Then, medical professionals create the dataset with high-quality comprehensive annotations, capturing medical slots and their attributes. We establish benchmarks in supervised and few-shot settings on MediTOD for natural language understanding, policy learning, and natural language generation subtasks, evaluating models from both TOD and biomedical domains. We make MediTOD publicly available for future research. | [
"Saley, Vishal Vivek",
"Saha, Goonjan",
"Das, Rocktim Jyoti",
"Raghu, Dinesh",
"., Mausam"
] | MediTOD: An English Dialogue Dataset for Medical History Taking with Comprehensive Annotations | emnlp-main.936 | Poster | 2410.14204 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.937.bib | https://aclanthology.org/2024.emnlp-main.937/ | @inproceedings{nandy-etal-2024-yesbut,
title = "***{Y}es{B}ut***: A High-Quality Annotated Multimodal Dataset for evaluating Satire Comprehension capability of Vision-Language Models",
author = "Nandy, Abhilash and
Agarwal, Yash and
Patwa, Ashish and
Das, Millon Madhur and
Bansal, Aman and
Raj, Ankit and
Goyal, Pawan and
Ganguly, Niloy",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.937",
pages = "16878--16895",
abstract = "Understanding satire and humor is a challenging task for even current Vision-Language models. In this paper, we propose the challenging tasks of Satirical Image Detection (detecting whether an image is satirical), Understanding (generating the reason behind the image being satirical), and Completion (given one half of the image, selecting the other half from 2 given options, such that the complete image is satirical) and release a high-quality dataset ***YesBut***, consisting of 2547 images, 1084 satirical and 1463 non-satirical, containing different artistic styles, to evaluate those tasks. Each satirical image in the dataset depicts a normal scenario, along with a conflicting scenario which is funny or ironic. Despite the success of current Vision-Language Models on multimodal tasks such as Visual QA and Image Captioning, our benchmarking experiments show that such models perform poorly on the proposed tasks on the ***YesBut*** Dataset in Zero-Shot Settings w.r.t both automated as well as human evaluation. Additionally, we release a dataset of 119 real, satirical photographs for further research.",
}
| Understanding satire and humor is a challenging task for even current Vision-Language models. In this paper, we propose the challenging tasks of Satirical Image Detection (detecting whether an image is satirical), Understanding (generating the reason behind the image being satirical), and Completion (given one half of the image, selecting the other half from 2 given options, such that the complete image is satirical) and release a high-quality dataset ***YesBut***, consisting of 2547 images, 1084 satirical and 1463 non-satirical, containing different artistic styles, to evaluate those tasks. Each satirical image in the dataset depicts a normal scenario, along with a conflicting scenario which is funny or ironic. Despite the success of current Vision-Language Models on multimodal tasks such as Visual QA and Image Captioning, our benchmarking experiments show that such models perform poorly on the proposed tasks on the ***YesBut*** Dataset in Zero-Shot Settings w.r.t. both automated and human evaluation. Additionally, we release a dataset of 119 real, satirical photographs for further research. | [
"N",
"y, Abhilash",
"Agarwal, Yash",
"Patwa, Ashish",
"Das, Millon Madhur",
"Bansal, Aman",
"Raj, Ankit",
"Goyal, Pawan",
"Ganguly, Niloy"
] | ***YesBut***: A High-Quality Annotated Multimodal Dataset for evaluating Satire Comprehension capability of Vision-Language Models | emnlp-main.937 | Poster | [
"https://github.com/abhi1nandy2/yesbut_dataset"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.938.bib | https://aclanthology.org/2024.emnlp-main.938/ | @inproceedings{zhang-etal-2024-working,
title = "Working Memory Identifies Reasoning Limits in Language Models",
author = "Zhang, Chunhui and
Jian, Yiren and
Ouyang, Zhongyu and
Vosoughi, Soroush",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.938",
pages = "16896--16922",
abstract = "This study explores the inherent limitations of large language models (LLMs) from a scaling perspective, focusing on the upper bounds of their cognitive capabilities. We integrate insights from cognitive science to quantitatively examine how LLMs perform on n-back tasks{---}a benchmark used to assess working memory, which involves temporarily holding and manipulating information. Our findings reveal that despite the increased model size, LLMs still face significant challenges in holding and processing information effectively, especially under complex task conditions. We also assess various prompting strategies, revealing their diverse impacts on LLM performance. The results highlight the struggle of current LLMs to autonomously discover optimal problem-solving patterns without heavily relying on manually corrected prompts. To move beyond these constraints, fundamental improvements in the planning and search of LLMs are essential for them to reason autonomously. Improving these capabilities will reduce the reliance on external corrections and enable LLMs to become more autonomous in their problem-solving processes.",
}
| This study explores the inherent limitations of large language models (LLMs) from a scaling perspective, focusing on the upper bounds of their cognitive capabilities. We integrate insights from cognitive science to quantitatively examine how LLMs perform on n-back tasks{---}a benchmark used to assess working memory, which involves temporarily holding and manipulating information. Our findings reveal that despite the increased model size, LLMs still face significant challenges in holding and processing information effectively, especially under complex task conditions. We also assess various prompting strategies, revealing their diverse impacts on LLM performance. The results highlight the struggle of current LLMs to autonomously discover optimal problem-solving patterns without heavily relying on manually corrected prompts. To move beyond these constraints, fundamental improvements in the planning and search of LLMs are essential for them to reason autonomously. Improving these capabilities will reduce the reliance on external corrections and enable LLMs to become more autonomous in their problem-solving processes. | [
"Zhang, Chunhui",
"Jian, Yiren",
"Ouyang, Zhongyu",
"Vosoughi, Soroush"
] | Working Memory Identifies Reasoning Limits in Language Models | emnlp-main.938 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
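Since the evaluation centers on n-back tasks, a small sketch of how such a probe can be generated and scored is given below; the letter stimuli, sequence length, and match rate are assumptions for illustration, not the paper's protocol:

```python
import random
import string

def make_n_back_trial(n: int = 2, length: int = 20, match_rate: float = 0.3):
    """Generate a letter stream with gold labels: 'yes' whenever the current
    letter equals the letter seen n positions earlier."""
    stream = []
    for i in range(length):
        if i >= n and random.random() < match_rate:
            stream.append(stream[i - n])  # deliberately plant a match
        else:
            stream.append(random.choice(string.ascii_uppercase))
    labels = ["yes" if i >= n and stream[i] == stream[i - n] else "no"
              for i in range(length)]
    return stream, labels

def n_back_accuracy(predictions, labels):
    """Fraction of positions where the model's yes/no call matches gold."""
    return sum(p == g for p, g in zip(predictions, labels)) / len(labels)
```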
https://aclanthology.org/2024.emnlp-main.939.bib | https://aclanthology.org/2024.emnlp-main.939/ | @inproceedings{wang-etal-2024-raft,
title = "{RAFT}: Realistic Attacks to Fool Text Detectors",
author = "Wang, James Liyuan and
Li, Ran and
Yang, Junfeng and
Mao, Chengzhi",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.939",
pages = "16923--16936",
abstract = "Large language models (LLMs) have exhibited remarkable fluency across various tasks. However, their unethical applications, such as disseminating disinformation, have become a growing concern. Although recent works have proposed a number of LLM detection methods, their robustness and reliability remain unclear. In this paper, we present RAFT: a grammar error-free black-box attack against existing LLM detectors. In contrast to previous attacks for language models, our method exploits the transferability of LLM embeddings at the word-level while preserving the original text quality. We leverage an auxiliary embedding to greedily select candidate words to perturb against the target detector. Experiments reveal that our attack effectively compromises all detectors in the study across various domains by up to 99{\%}, and are transferable across source models. Manual human evaluation studies show our attacks are realistic and indistinguishable from original human-written text. We also show that examples generated by RAFT can be used to train adversarially robust detectors. Our work shows that current LLM detectors are not adversarially robust, underscoring the urgent need for more resilient detection mechanisms.",
}
| Large language models (LLMs) have exhibited remarkable fluency across various tasks. However, their unethical applications, such as disseminating disinformation, have become a growing concern. Although recent works have proposed a number of LLM detection methods, their robustness and reliability remain unclear. In this paper, we present RAFT: a grammar error-free black-box attack against existing LLM detectors. In contrast to previous attacks on language models, our method exploits the transferability of LLM embeddings at the word level while preserving the original text quality. We leverage an auxiliary embedding to greedily select candidate words to perturb against the target detector. Experiments reveal that our attack effectively compromises all detectors in the study across various domains by up to 99{\%}, and is transferable across source models. Manual human evaluation studies show our attacks are realistic and indistinguishable from original human-written text. We also show that examples generated by RAFT can be used to train adversarially robust detectors. Our work shows that current LLM detectors are not adversarially robust, underscoring the urgent need for more resilient detection mechanisms. | [
"Wang, James Liyuan",
"Li, Ran",
"Yang, Junfeng",
"Mao, Chengzhi"
] | RAFT: Realistic Attacks to Fool Text Detectors | emnlp-main.939 | Poster | 2410.03658 | [
"https://github.com/jameslwang/raft"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
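A hedged sketch of the greedy word-level substitution loop the RAFT abstract describes. Here `detector_score` is any black-box detector returning a machine-text score, and `propose_replacements` stands in for the paper's auxiliary-embedding candidate selection and quality filtering; both names are illustrative:

```python
def greedy_word_attack(text, detector_score, propose_replacements, budget=0.1):
    """Repeatedly swap the single word whose best replacement most reduces
    the detector's score, up to a perturbation budget (sketch, not the
    official RAFT implementation)."""
    words = text.split()
    max_edits = max(1, int(budget * len(words)))
    for _ in range(max_edits):
        best_score, best_i, best_cand = detector_score(" ".join(words)), None, None
        for i, word in enumerate(words):
            for cand in propose_replacements(word):
                trial = words[:i] + [cand] + words[i + 1:]
                score = detector_score(" ".join(trial))
                if score < best_score:
                    best_score, best_i, best_cand = score, i, cand
        if best_i is None:  # no swap improves the objective; stop early
            break
        words[best_i] = best_cand
    return " ".join(words)
```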
https://aclanthology.org/2024.emnlp-main.940.bib | https://aclanthology.org/2024.emnlp-main.940/ | @inproceedings{you-etal-2024-llm,
title = "{LLM}-Evolve: Evaluation for {LLM}{'}s Evolving Capability on Benchmarks",
author = "You, Jiaxuan and
Liu, Mingjie and
Prabhumoye, Shrimai and
Patwary, Mostofa and
Shoeybi, Mohammad and
Catanzaro, Bryan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.940",
pages = "16937--16942",
abstract = "The advancement of large language models (LLMs) has extended their use to dynamic and interactive real-world applications, where models engage continuously with their environment and potentially enhance their performance over time. Most existing LLM benchmarks evaluate LLMs on i.i.d. tasks, overlooking their ability to learn iteratively from past experiences. Our paper bridges this evaluation gap by proposing a novel framework, LLM-Evolve, which extends established benchmarks to sequential problem-solving settings. LLM-Evolve evaluates LLMs over multiple rounds, providing feedback after each round to build a demonstration memory that the models can query in future tasks. We applied LLM-Evolve to the MMLU, GSM8K, and AgentBench benchmarks, testing 8 state-of-the-art open-source and closed-source models. Results show that LLMs can achieve performance improvements of up to 17{\%} by learning from past interactions, with the quality of retrieval algorithms and feedback significantly influencing this capability. These insights advocate for more understanding and benchmarks for LLMs{'} performance in evolving interactive scenarios.",
}
| The advancement of large language models (LLMs) has extended their use to dynamic and interactive real-world applications, where models engage continuously with their environment and potentially enhance their performance over time. Most existing LLM benchmarks evaluate LLMs on i.i.d. tasks, overlooking their ability to learn iteratively from past experiences. Our paper bridges this evaluation gap by proposing a novel framework, LLM-Evolve, which extends established benchmarks to sequential problem-solving settings. LLM-Evolve evaluates LLMs over multiple rounds, providing feedback after each round to build a demonstration memory that the models can query in future tasks. We applied LLM-Evolve to the MMLU, GSM8K, and AgentBench benchmarks, testing 8 state-of-the-art open-source and closed-source models. Results show that LLMs can achieve performance improvements of up to 17{\%} by learning from past interactions, with the quality of retrieval algorithms and feedback significantly influencing this capability. These insights advocate for more understanding and benchmarks for LLMs{'} performance in evolving interactive scenarios. | [
"You, Jiaxuan",
"Liu, Mingjie",
"Prabhumoye, Shrimai",
"Patwary, Mostofa",
"Shoeybi, Mohammad",
"Catanzaro, Bryan"
] | LLM-Evolve: Evaluation for LLM's Evolving Capability on Benchmarks | emnlp-main.940 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
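A toy sketch of the sequential-evaluation loop LLM-Evolve proposes: after each round, the solved task and its feedback enter a demonstration memory that is queried when prompting on later tasks. The token-overlap retrieval below is a deliberately naive placeholder for the retrieval algorithms the paper compares:

```python
class DemonstrationMemory:
    """Store (task, answer, feedback) triples and retrieve the most similar
    ones for future prompts (toy sketch; similarity is crude token overlap)."""

    def __init__(self):
        self.records = []

    def add(self, task: str, answer: str, feedback: str):
        self.records.append((task, answer, feedback))

    def retrieve(self, task: str, k: int = 3):
        query = set(task.split())
        return sorted(self.records,
                      key=lambda r: len(query & set(r[0].split())),
                      reverse=True)[:k]

def build_prompt(task: str, memory: DemonstrationMemory) -> str:
    """Prepend retrieved demonstrations to the new task as few-shot examples."""
    shots = "\n\n".join(f"Task: {t}\nAnswer: {a}\nFeedback: {f}"
                        for t, a, f in memory.retrieve(task))
    return f"{shots}\n\nTask: {task}\nAnswer:"
```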
https://aclanthology.org/2024.emnlp-main.941.bib | https://aclanthology.org/2024.emnlp-main.941/ | @inproceedings{jaiswal-etal-2024-ffn,
title = "{FFN}-{S}kip{LLM}: A Hidden Gem for Autoregressive Decoding with Adaptive Feed Forward Skipping",
author = "Jaiswal, Ajay Kumar and
Hu, Bodun and
Yin, Lu and
Ro, Yeonju and
Chen, Tianlong and
Liu, Shiwei and
Akella, Aditya",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.941",
pages = "16943--16956",
abstract = "Autoregressive Large Language Models (e.g., LLaMa, GPTs) are omnipresent achieving remarkable success in language understanding and generation. However, such impressive capability typically comes with a substantial model size, which presents significant challenges for autoregressive token-by-token generation. To mitigate computation overload incurred during generation, several early-exit and layer-dropping strategies have been proposed. Despite some promising success due to the redundancy across LLMs layers on metrics like Rough-L/BLUE, our careful knowledge-intensive evaluation unveils issues such as generation collapse, hallucination, and noticeable performance drop even at the trivial exit ratio of {\textasciitilde}10-15{\%} of layers. We attribute these errors primarily to ineffective handling of the KV cache through state copying during early exit. In this work, we observe the saturation of computationally expensive feed-forward blocks of LLM layers and propose FFN-SkipLLM, which is a novel fine-grained skip strategy for autoregressive LLMs. FFN-SkipLLM leverages an input-adaptive feed-forward skipping approach that can skip {\textasciitilde}25-30{\%} of FFN blocks of LLMs with marginal change in performance on knowledge-intensive generation tasks without any requirement to handle the KV cache. Our extensive experiments and ablation studies across benchmarks like MT-Bench, Factoid-QA, and variable-length text summarization illustrate how our simple and easy-to-use method can facilitate faster autoregressive decoding.",
}
| Autoregressive Large Language Models (e.g., LLaMa, GPTs) are omnipresent, achieving remarkable success in language understanding and generation. However, such impressive capability typically comes with a substantial model size, which presents significant challenges for autoregressive token-by-token generation. To mitigate computation overload incurred during generation, several early-exit and layer-dropping strategies have been proposed. Despite some promising success due to the redundancy across LLM layers on metrics like ROUGE-L/BLEU, our careful knowledge-intensive evaluation unveils issues such as generation collapse, hallucination, and noticeable performance drop even at the trivial exit ratio of {\textasciitilde}10-15{\%} of layers. We attribute these errors primarily to ineffective handling of the KV cache through state copying during early exit. In this work, we observe the saturation of computationally expensive feed-forward blocks of LLM layers and propose FFN-SkipLLM, which is a novel fine-grained skip strategy for autoregressive LLMs. FFN-SkipLLM leverages an input-adaptive feed-forward skipping approach that can skip {\textasciitilde}25-30{\%} of FFN blocks of LLMs with marginal change in performance on knowledge-intensive generation tasks without any requirement to handle the KV cache. Our extensive experiments and ablation studies across benchmarks like MT-Bench, Factoid-QA, and variable-length text summarization illustrate how our simple and easy-to-use method can facilitate faster autoregressive decoding. | [
"Jaiswal, Ajay Kumar",
"Hu, Bodun",
"Yin, Lu",
"Ro, Yeonju",
"Chen, Tianlong",
"Liu, Shiwei",
"Akella, Aditya"
] | FFN-SkipLLM: A Hidden Gem for Autoregressive Decoding with Adaptive Feed Forward Skipping | emnlp-main.941 | Poster | 2404.03865 | [
""
] | https://huggingface.co/papers/2404.03865 | 0 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 |
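An illustrative sketch of input-adaptive feed-forward skipping in the spirit of FFN-SkipLLM: when a token's hidden state has saturated relative to the previous layer, the FFN block is bypassed. The cosine-similarity criterion, the threshold, and the function names are assumptions for exposition, not the paper's exact rule:

```python
import torch.nn.functional as F

def ffn_with_adaptive_skip(hidden, prev_hidden, ffn, threshold=0.99):
    """Bypass the residual FFN when hidden states barely change between
    layers, so the block's contribution is likely marginal (sketch only)."""
    if prev_hidden is not None:
        sim = F.cosine_similarity(hidden.flatten(1), prev_hidden.flatten(1), dim=-1)
        if sim.mean().item() > threshold:
            return hidden  # input-adaptive skip: save the FFN's compute
    return hidden + ffn(hidden)  # standard residual feed-forward path
```

Because only FFN blocks are skipped, attention layers still run normally, which is consistent with the abstract's point that no special KV-cache handling is required.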
https://aclanthology.org/2024.emnlp-main.942.bib | https://aclanthology.org/2024.emnlp-main.942/ | @inproceedings{potter-yuan-2024-llm,
title = "{LLM}-based Code-Switched Text Generation for Grammatical Error Correction",
author = "Potter, Tom and
Yuan, Zheng",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.942",
pages = "16957--16965",
abstract = "With the rise of globalisation, code-switching (CSW) has become a ubiquitous part of multilingual conversation, posing new challenges for natural language processing (NLP), especially in Grammatical Error Correction (GEC). This work explores the complexities of applying GEC systems to CSW texts. Our objectives include evaluating the performance of state-of-the-art GEC systems on an authentic CSW dataset from English as a Second Language (ESL) learners, exploring synthetic data generation as a solution to data scarcity, and developing a model capable of correcting grammatical errors in monolingual and CSW texts. We generated synthetic CSW GEC data, resulting in one of the first substantial datasets for this task, and showed that a model trained on this data is capable of significant improvements over existing systems. This work targets ESL learners, aiming to provide educational technologies that aid in the development of their English grammatical correctness without constraining their natural multilingualism.",
}
| With the rise of globalisation, code-switching (CSW) has become a ubiquitous part of multilingual conversation, posing new challenges for natural language processing (NLP), especially in Grammatical Error Correction (GEC). This work explores the complexities of applying GEC systems to CSW texts. Our objectives include evaluating the performance of state-of-the-art GEC systems on an authentic CSW dataset from English as a Second Language (ESL) learners, exploring synthetic data generation as a solution to data scarcity, and developing a model capable of correcting grammatical errors in monolingual and CSW texts. We generated synthetic CSW GEC data, resulting in one of the first substantial datasets for this task, and showed that a model trained on this data is capable of significant improvements over existing systems. This work targets ESL learners, aiming to provide educational technologies that aid in the development of their English grammatical correctness without constraining their natural multilingualism. | [
"Potter, Tom",
"Yuan, Zheng"
] | LLM-based Code-Switched Text Generation for Grammatical Error Correction | emnlp-main.942 | Poster | 2410.10349 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.943.bib | https://aclanthology.org/2024.emnlp-main.943/ | @inproceedings{farahani-johansson-2024-deciphering,
title = "Deciphering the Interplay of Parametric and Non-parametric Memory in Retrieval-augmented Language Models",
author = "Farahani, Mehrdad and
Johansson, Richard",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.943",
pages = "16966--16977",
abstract = "Generative language models often struggle with specialized or less-discussed knowledge. A potential solution is found in Retrieval-Augmented Generation (RAG) models which act like retrieving information before generating responses. In this study, we explore how the Atlas approach, a RAG model, decides between what it already knows (parametric) and what it retrieves (non-parametric). We use causal mediation analysis and controlled experiments to examine how internal representations influence information processing. Our findings disentangle the effects of parametric knowledge and the retrieved context. They indicate that in cases where the model can choose between both types of information (parametric and non-parametric), it relies more on the context than the parametric knowledge. Furthermore, the analysis investigates the computations involved in \textit{how} the model uses the information from the context. We find that multiple mechanisms are active within the model and can be detected with mediation analysis: first, the decision of \textit{whether the context is relevant}, and second, how the encoder computes output representations to support copying when relevant.",
}
| Generative language models often struggle with specialized or less-discussed knowledge. A potential solution is found in Retrieval-Augmented Generation (RAG) models, which retrieve information before generating responses. In this study, we explore how the Atlas approach, a RAG model, decides between what it already knows (parametric) and what it retrieves (non-parametric). We use causal mediation analysis and controlled experiments to examine how internal representations influence information processing. Our findings disentangle the effects of parametric knowledge and the retrieved context. They indicate that in cases where the model can choose between both types of information (parametric and non-parametric), it relies more on the context than the parametric knowledge. Furthermore, the analysis investigates the computations involved in \textit{how} the model uses the information from the context. We find that multiple mechanisms are active within the model and can be detected with mediation analysis: first, the decision of \textit{whether the context is relevant}, and second, how the encoder computes output representations to support copying when relevant. | [
"Farahani, Mehrdad",
"Johansson, Richard"
] | Deciphering the Interplay of Parametric and Non-parametric Memory in Retrieval-augmented Language Models | emnlp-main.943 | Poster | 2410.05162 | [
"https://github.com/m3hrdadfi/rag-memory-interplay"
] | https://huggingface.co/papers/2410.05162 | 0 | 0 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.944.bib | https://aclanthology.org/2024.emnlp-main.944/ | @inproceedings{kim-seo-2024-efficient,
title = "On Efficient Language and Vision Assistants for Visually-Situated Natural Language Understanding: What Matters in Reading and Reasoning",
author = "Kim, Geewook and
Seo, Minjoon",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.944",
pages = "16978--17000",
abstract = "Recent advancements in language and vision assistants have showcased impressive capabilities but suffer from a lack of transparency, limiting broader research and reproducibility. While open-source models handle general image tasks effectively, they face challenges with the high computational demands of complex visually-situated text understanding. Such tasks often require increased token inputs and large vision modules to harness high-resolution information. Striking a balance between model size and data importance remains an open question. This study aims to redefine the design of vision-language models by identifying key components and creating efficient models with constrained inference costs. By strategically formulating datasets, optimizing vision modules, and enhancing supervision techniques, we achieve significant improvements in inference throughput while maintaining high performance. Extensive experiments across models ranging from 160M to 13B parameters offer insights into model optimization.We will fully open-source our codebase, models, and datasets at https://github.com/naver-ai/elva.",
}
| Recent advancements in language and vision assistants have showcased impressive capabilities but suffer from a lack of transparency, limiting broader research and reproducibility. While open-source models handle general image tasks effectively, they face challenges with the high computational demands of complex visually-situated text understanding. Such tasks often require increased token inputs and large vision modules to harness high-resolution information. Striking a balance between model size and data importance remains an open question. This study aims to redefine the design of vision-language models by identifying key components and creating efficient models with constrained inference costs. By strategically formulating datasets, optimizing vision modules, and enhancing supervision techniques, we achieve significant improvements in inference throughput while maintaining high performance. Extensive experiments across models ranging from 160M to 13B parameters offer insights into model optimization. We will fully open-source our codebase, models, and datasets at https://github.com/naver-ai/elva. | [
"Kim, Geewook",
"Seo, Minjoon"
] | On Efficient Language and Vision Assistants for Visually-Situated Natural Language Understanding: What Matters in Reading and Reasoning | emnlp-main.944 | Poster | 2406.11823 | [
"https://github.com/naver-ai/elva"
] | https://huggingface.co/papers/2406.11823 | 0 | 0 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.945.bib | https://aclanthology.org/2024.emnlp-main.945/ | @inproceedings{he-etal-2024-community,
title = "Community-Cross-Instruct: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities",
author = "He, Zihao and
Chu, Minh Duc and
Dorn, Rebecca and
Guo, Siyi and
Lerman, Kristina",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.945",
pages = "17001--17019",
abstract = "Social scientists use surveys to probe the opinions and beliefs of populations, but these methods are slow, costly, and prone to biases. Recent advances in large language models (LLMs) enable the creating of computational representations or {``}digital twins{''} of populations that generate human-like responses mimicking the population{'}s language, styles, and attitudes. We introduce Community-Cross-Instruct, an unsupervised framework for aligning LLMs to online communities to elicit their beliefs. Given a corpus of a community{'}s online discussions, Community-Cross-Instruct automatically generates instruction-output pairs by an advanced LLM to (1) finetune a foundational LLM to faithfully represent that community, and (2) evaluate the alignment of the finetuned model to the community. We demonstrate the method{'}s utility in accurately representing political and diet communities on Reddit. Unlike prior methods requiring human-authored instructions, Community-Cross-Instruct generates instructions in a fully unsupervised manner, enhancing scalability and generalization across domains. This work enables cost-effective and automated surveying of diverse online communities.",
}
| Social scientists use surveys to probe the opinions and beliefs of populations, but these methods are slow, costly, and prone to biases. Recent advances in large language models (LLMs) enable the creation of computational representations or {``}digital twins{''} of populations that generate human-like responses mimicking the population{'}s language, styles, and attitudes. We introduce Community-Cross-Instruct, an unsupervised framework for aligning LLMs to online communities to elicit their beliefs. Given a corpus of a community{'}s online discussions, Community-Cross-Instruct automatically generates instruction-output pairs by an advanced LLM to (1) finetune a foundational LLM to faithfully represent that community, and (2) evaluate the alignment of the finetuned model to the community. We demonstrate the method{'}s utility in accurately representing political and diet communities on Reddit. Unlike prior methods requiring human-authored instructions, Community-Cross-Instruct generates instructions in a fully unsupervised manner, enhancing scalability and generalization across domains. This work enables cost-effective and automated surveying of diverse online communities. | [
"He, Zihao",
"Chu, Minh Duc",
"Dorn, Rebecca",
"Guo, Siyi",
"Lerman, Kristina"
] | Community-Cross-Instruct: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities | emnlp-main.945 | Poster | 2406.12074 | [
""
] | https://huggingface.co/papers/2406.12074 | 0 | 0 | 0 | 5 | [
"sunatte/txt2sql",
"MachoMaheen/devdock4bit"
] | [] | [
"smarttang/blingsec"
] | [
"sunatte/txt2sql",
"MachoMaheen/devdock4bit"
] | [] | [
"smarttang/blingsec"
] | 1 |
https://aclanthology.org/2024.emnlp-main.946.bib | https://aclanthology.org/2024.emnlp-main.946/ | @inproceedings{kurtic-etal-2024-mathador,
title = "Mathador-{LM}: A Dynamic Benchmark for Mathematical Reasoning on Large Language Models",
author = "Kurtic, Eldar and
Moeini, Amir and
Alistarh, Dan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.946",
pages = "17020--17027",
abstract = "We introduce Mathador-LM, a new benchmark for evaluating the mathematical reasoning on large language models (LLMs), combining ruleset interpretation, planning, and problem-solving. This benchmark is inspired by the Mathador game, where the objective is to reach a target number using basic arithmetic operations on a given set of base numbers, following a simple set of rules. We show that, across leading LLMs, we obtain stable average performance while generating benchmark instances dynamically, following a target difficulty level. Thus, our benchmark alleviates concerns about test-set leakage into training data, an issue that often undermines popular benchmarks. Additionally, we conduct a comprehensive evaluation of both open and closed-source state-of-the-art LLMs on Mathador-LM. Our findings reveal that contemporary models struggle with Mathador-LM, scoring significantly lower than average 3rd graders. This stands in stark contrast to their strong performance on popular mathematical reasoning benchmarks. The implementation of Mathador-LM benchmark is available at https://github.com/IST-DASLab/Mathador-LM.",
}
| We introduce Mathador-LM, a new benchmark for evaluating the mathematical reasoning of large language models (LLMs), combining ruleset interpretation, planning, and problem-solving. This benchmark is inspired by the Mathador game, where the objective is to reach a target number using basic arithmetic operations on a given set of base numbers, following a simple set of rules. We show that, across leading LLMs, we obtain stable average performance while generating benchmark instances dynamically, following a target difficulty level. Thus, our benchmark alleviates concerns about test-set leakage into training data, an issue that often undermines popular benchmarks. Additionally, we conduct a comprehensive evaluation of both open and closed-source state-of-the-art LLMs on Mathador-LM. Our findings reveal that contemporary models struggle with Mathador-LM, scoring significantly lower than average 3rd graders. This stands in stark contrast to their strong performance on popular mathematical reasoning benchmarks. The implementation of the Mathador-LM benchmark is available at https://github.com/IST-DASLab/Mathador-LM. | [
"Kurtic, Eldar",
"Moeini, Amir",
"Alistarh, Dan"
] | Mathador-LM: A Dynamic Benchmark for Mathematical Reasoning on Large Language Models | emnlp-main.946 | Poster | 2406.12572 | [
"https://github.com/ist-daslab/mathador-lm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
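To make the game concrete, below is a brute-force solver for the Mathador-style objective of reaching a target number with basic arithmetic over the base numbers. It evaluates operator chains left to right and omits the game's scoring bonuses, so it illustrates the task rather than reproducing the benchmark's instance generator:

```python
from itertools import permutations, product

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a // b if b != 0 and a % b == 0 else None,  # exact division only
}

def solve(base_numbers, target):
    """Search for a left-to-right chain of operations over a subset of the
    base numbers that reaches the target; returns the steps or None."""
    for r in range(2, len(base_numbers) + 1):
        for nums in permutations(base_numbers, r):
            for op_seq in product(OPS, repeat=r - 1):
                value, steps = nums[0], [f"start with {nums[0]}"]
                for op, n in zip(op_seq, nums[1:]):
                    value = OPS[op](value, n)
                    if value is None:  # invalid (non-exact) division
                        break
                    steps.append(f"{op} {n} -> {value}")
                if value == target:
                    return steps
    return None

print(solve([3, 5, 7, 11, 2], 41))  # e.g. 11 + 7 = 18, * 2 = 36, + 5 = 41
```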
https://aclanthology.org/2024.emnlp-main.947.bib | https://aclanthology.org/2024.emnlp-main.947/ | @inproceedings{liao-etal-2024-reasoning,
title = "Reasoning Paths with Reference Objects Elicit Quantitative Spatial Reasoning in Large Vision-Language Models",
author = "Liao, Yuan-Hong and
Mahmood, Rafid and
Fidler, Sanja and
Acuna, David",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.947",
pages = "17028--17047",
abstract = "Despite recent advances demonstrating vision- language models{'} (VLMs) abilities to describe complex relationships among objects in images using natural language, their capability to quantitatively reason about object sizes and distances remains underexplored. In this work, we introduce a manually annotated benchmark of 241 questions across five categories specifically designed for quantitative spatial reasoning, and systematically investigate the performance of SoTA VLMs on this task. Our analysis reveals that questions involving reasoning about distances between objects are particularly challenging for SoTA VLMs; however, some VLMs perform significantly better at this task than others, with an almost 40 points gap between the two best performing models. We also make the surprising observation that the success rate of the top-performing VLM increases by 19 points when a reasoning path using a reference object emerges naturally in the response. Inspired by this observation, we develop a zero-shot prompting technique, SpatialPrompt, that encourages VLMs to answer quantitative spatial questions using references objects as visual cues. Specifically, we demonstrate that instruct- ing VLMs to use reference objects in their reasoning paths significantly improves their quantitative spatial reasoning performance, bypassing the need for external data, architectural modifications, or fine-tuning. Remarkably, by solely using SpatialPrompt, Gemini 1.5 Pro, GPT-4V, and GPT-4o improve by 56.2, 28.5, and 6.7 points on average in Q-Spatial Bench without the need for more data, model architectural modifications, or fine-tuning.",
}
| Despite recent advances demonstrating vision-language models{'} (VLMs) abilities to describe complex relationships among objects in images using natural language, their capability to quantitatively reason about object sizes and distances remains underexplored. In this work, we introduce a manually annotated benchmark of 241 questions across five categories specifically designed for quantitative spatial reasoning, and systematically investigate the performance of SoTA VLMs on this task. Our analysis reveals that questions involving reasoning about distances between objects are particularly challenging for SoTA VLMs; however, some VLMs perform significantly better at this task than others, with an almost 40 points gap between the two best performing models. We also make the surprising observation that the success rate of the top-performing VLM increases by 19 points when a reasoning path using a reference object emerges naturally in the response. Inspired by this observation, we develop a zero-shot prompting technique, SpatialPrompt, that encourages VLMs to answer quantitative spatial questions using reference objects as visual cues. Specifically, we demonstrate that instructing VLMs to use reference objects in their reasoning paths significantly improves their quantitative spatial reasoning performance, bypassing the need for external data, architectural modifications, or fine-tuning. Remarkably, by solely using SpatialPrompt, Gemini 1.5 Pro, GPT-4V, and GPT-4o improve by 56.2, 28.5, and 6.7 points on average on Q-Spatial Bench without the need for more data, model architectural modifications, or fine-tuning. | [
"Liao, Yuan-Hong",
"Mahmood, Rafid",
"Fidler, Sanja",
"Acuna, David"
] | Reasoning Paths with Reference Objects Elicit Quantitative Spatial Reasoning in Large Vision-Language Models | emnlp-main.947 | Poster | 2409.09788 | [
""
] | https://huggingface.co/papers/2409.09788 | 0 | 0 | 0 | 4 | [] | [
"andrewliao11/Q-Spatial-Bench"
] | [] | [] | [
"andrewliao11/Q-Spatial-Bench"
] | [] | 1 |
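The abstract's key observation, that reasoning paths through a reference object boost accuracy, is operationalized by SpatialPrompt as a zero-shot instruction. A hedged approximation of such a prompt follows; the wording is illustrative and not the paper's exact prompt text:

```python
def spatial_prompt(question: str) -> str:
    """Wrap a quantitative spatial question with reference-object guidance
    (illustrative phrasing in the spirit of SpatialPrompt)."""
    return (
        f"{question}\n"
        "Think step by step. First, pick a reference object in the image whose "
        "real-world size you know. Estimate its size or distance, then use it "
        "as a ruler to estimate the quantity asked about. "
        "Finish with the final numeric answer and its unit."
    )
```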
https://aclanthology.org/2024.emnlp-main.948.bib | https://aclanthology.org/2024.emnlp-main.948/ | @inproceedings{karpinska-etal-2024-one,
title = "One Thousand and One Pairs: A {``}novel{''} challenge for long-context language models",
author = "Karpinska, Marzena and
Thai, Katherine and
Lo, Kyle and
Goyal, Tanya and
Iyyer, Mohit",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.948",
pages = "17048--17085",
abstract = "Synthetic long-context LLM benchmarks (e.g., {``}needle-in-the-haystack{''}) test only surface-level retrieval capabilities; but how well can long-context LLMs retrieve, synthesize, and reason over information across book-length inputs? We address this question by creating NoCha, a dataset of 1,001 minimally different pairs of true and false claims about 67 recently-published English fictional books, written by human readers of those books. In contrast to existing long-context benchmarks, our annotators confirm that the largest share of pairs in NoCha require global reasoning over the entire book to verify. Our experiments show that while human readers easily perform this task, it is enormously challenging for all ten long-context LLMs that we evaluate: no open-weight model performs above random chance (despite their strong performance on synthetic benchmarks), while GPT-4o achieves the highest pair accuracy at 55.8{\%}. Further analysis reveals that (1) on average, models perform much better on pairs that require only sentence-level retrieval vs. global reasoning; (2) model-generated explanations for their decisions are often inaccurate even for correctly-labeled claims; and (3) models perform substantially worse on speculative fiction books that contain extensive world-building. The methodology proposed in NoCha allows for the evolution of the benchmark dataset and the easy analysis of future models.",
}
| Synthetic long-context LLM benchmarks (e.g., {``}needle-in-the-haystack{''}) test only surface-level retrieval capabilities; but how well can long-context LLMs retrieve, synthesize, and reason over information across book-length inputs? We address this question by creating NoCha, a dataset of 1,001 minimally different pairs of true and false claims about 67 recently-published English fictional books, written by human readers of those books. In contrast to existing long-context benchmarks, our annotators confirm that the largest share of pairs in NoCha require global reasoning over the entire book to verify. Our experiments show that while human readers easily perform this task, it is enormously challenging for all ten long-context LLMs that we evaluate: no open-weight model performs above random chance (despite their strong performance on synthetic benchmarks), while GPT-4o achieves the highest pair accuracy at 55.8{\%}. Further analysis reveals that (1) on average, models perform much better on pairs that require only sentence-level retrieval vs. global reasoning; (2) model-generated explanations for their decisions are often inaccurate even for correctly-labeled claims; and (3) models perform substantially worse on speculative fiction books that contain extensive world-building. The methodology proposed in NoCha allows for the evolution of the benchmark dataset and the easy analysis of future models. | [
"Karpinska, Marzena",
"Thai, Katherine",
"Lo, Kyle",
"Goyal, Tanya",
"Iyyer, Mohit"
] | One Thousand and One Pairs: A “novel” challenge for long-context language models | emnlp-main.948 | Poster | [
"https://github.com/marzenakrp/nocha"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
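A small sketch of the pair accuracy reported above, under the natural reading of the minimal-pair setup: a pair is credited only when the model labels the true claim true and the false claim false, so uniform random guessing on both claims scores around 25%:

```python
def pair_accuracy(pairs):
    """pairs: iterable of (pred_on_true_claim, pred_on_false_claim) booleans,
    where True means the model judged that claim to be true."""
    pairs = list(pairs)
    correct = sum(1 for p_true, p_false in pairs if p_true and not p_false)
    return correct / len(pairs)
```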
https://aclanthology.org/2024.emnlp-main.949.bib | https://aclanthology.org/2024.emnlp-main.949/ | @inproceedings{vu-etal-2024-foundational,
title = "Foundational Autoraters: Taming Large Language Models for Better Automatic Evaluation",
author = "Vu, Tu and
Krishna, Kalpesh and
Alzubi, Salaheddin and
Tar, Chris and
Faruqui, Manaal and
Sung, Yun-Hsuan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.949",
pages = "17086--17105",
abstract = "As large language models (LLMs) evolve, evaluating their output reliably becomes increasingly difficult due to the high cost of human evaluation. To address this, we introduce FLAMe, a family of Foundational Large Autorater Models. FLAMe is trained on a diverse set of over 100 quality assessment tasks, incorporating 5M+ human judgments curated from publicly released human evaluations. FLAMe outperforms models like GPT-4 and Claude-3 on various held-out tasks, and serves as a powerful starting point for fine-tuning, as shown in our reward model evaluation case study (FLAMe-RM). On Reward-Bench, FLAMe-RM-24B achieves 87.8{\%} accuracy, surpassing GPT-4-0125 (85.9{\%}) and GPT-4o (84.7{\%}). Additionally, we introduce FLAMe-Opt-RM, an efficient tail-patch fine-tuning approach that offers competitive RewardBench performance using 25{\mbox{$\times$}}fewer training datapoints. Our FLAMe variants outperform popular proprietary LLM-as-a-Judge models on 8 of 12 autorater benchmarks, covering 53 quality assessment tasks, including RewardBench and LLM-AggreFact. Finally, our analysis shows that FLAMe is significantly less biased than other LLM-as-a-Judge models on the CoBBLEr autorater bias benchmark.",
}
| As large language models (LLMs) evolve, evaluating their output reliably becomes increasingly difficult due to the high cost of human evaluation. To address this, we introduce FLAMe, a family of Foundational Large Autorater Models. FLAMe is trained on a diverse set of over 100 quality assessment tasks, incorporating 5M+ human judgments curated from publicly released human evaluations. FLAMe outperforms models like GPT-4 and Claude-3 on various held-out tasks, and serves as a powerful starting point for fine-tuning, as shown in our reward model evaluation case study (FLAMe-RM). On RewardBench, FLAMe-RM-24B achieves 87.8{\%} accuracy, surpassing GPT-4-0125 (85.9{\%}) and GPT-4o (84.7{\%}). Additionally, we introduce FLAMe-Opt-RM, an efficient tail-patch fine-tuning approach that offers competitive RewardBench performance using 25{\mbox{$\times$}} fewer training datapoints. Our FLAMe variants outperform popular proprietary LLM-as-a-Judge models on 8 of 12 autorater benchmarks, covering 53 quality assessment tasks, including RewardBench and LLM-AggreFact. Finally, our analysis shows that FLAMe is significantly less biased than other LLM-as-a-Judge models on the CoBBLEr autorater bias benchmark. | [
"Vu, Tu",
"Krishna, Kalpesh",
"Alzubi, Salaheddin",
"Tar, Chris",
"Faruqui, Manaal",
"Sung, Yun-Hsuan"
] | Foundational Autoraters: Taming Large Language Models for Better Automatic Evaluation | emnlp-main.949 | Poster | 2407.10817 | [
""
] | https://huggingface.co/papers/2407.10817 | 3 | 13 | 2 | 6 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.950.bib | https://aclanthology.org/2024.emnlp-main.950/ | @inproceedings{hale-stanojevic-2024-llms,
title = "Do {LLM}s learn a true syntactic universal?",
author = "Hale, John T. and
Stanojevi{\'c}, Milo{\v{s}}",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.950",
pages = "17106--17119",
abstract = "Do large multilingual language models learn language universals? We consider a candidate universal much-discussed in the linguistics literature, the Final-over-Final Condition (Sheehan et al., 2017b). This Condition is syntactic in the sense that it can only be stated by reference to abstract sentence properties such as nested phrases and head direction. A study of typologically diverse {``}mixed head direction{''} languages confirms that the Condition holds in corpora. But in a targeted syntactic evaluation, Gemini Pro only seems to respect the Condition in German, Russian, Hungarian and Serbian. These relatively high-resource languages contrast with Basque, where Gemini Pro does not seem to have learned the Condition at all. This result suggests that modern language models may need additional sources of bias in order to become truly human-like, within a developmentally-realistic budget of training data.",
}
| Do large multilingual language models learn language universals? We consider a candidate universal much-discussed in the linguistics literature, the Final-over-Final Condition (Sheehan et al., 2017b). This Condition is syntactic in the sense that it can only be stated by reference to abstract sentence properties such as nested phrases and head direction. A study of typologically diverse {``}mixed head direction{''} languages confirms that the Condition holds in corpora. But in a targeted syntactic evaluation, Gemini Pro only seems to respect the Condition in German, Russian, Hungarian and Serbian. These relatively high-resource languages contrast with Basque, where Gemini Pro does not seem to have learned the Condition at all. This result suggests that modern language models may need additional sources of bias in order to become truly human-like, within a developmentally-realistic budget of training data. | [
"Hale, John T.",
"Stanojevi{\\'c}, Milo{\\v{s}}"
] | Do LLMs learn a true syntactic universal? | emnlp-main.950 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.951.bib | https://aclanthology.org/2024.emnlp-main.951/ | @inproceedings{kwon-etal-2024-gdpo,
title = "{GDPO}: Learning to Directly Align Language Models with Diversity Using {GF}low{N}ets",
author = "Kwon, Oh Joon and
Matsunaga, Daiki E. and
Kim, Kee-Eung",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.951",
pages = "17120--17139",
abstract = "A critical component of the current generation of language models is preference alignment, which aims to precisely control the model{'}s behavior to meet human needs and values. The most notable among such methods is Reinforcement Learning with Human Feedback (RLHF) and its offline variant Direct Preference Optimization (DPO), both of which seek to maximize a reward model based on human preferences. In particular, DPO derives reward signals directly from the offline preference data, but in doing so overfits the reward signals and generates suboptimal responses that may contain human biases in the dataset. In this work, we propose a practical application of a diversity-seeking RL algorithm called GFlowNet-DPO (GDPO) in an offline preference alignment setting to curtail such challenges. Empirical results show GDPO can generate far more diverse responses than the baseline methods that are still relatively aligned with human values in dialog generation and summarization tasks.",
}
| A critical component of the current generation of language models is preference alignment, which aims to precisely control the model{'}s behavior to meet human needs and values. The most notable among such methods is Reinforcement Learning with Human Feedback (RLHF) and its offline variant Direct Preference Optimization (DPO), both of which seek to maximize a reward model based on human preferences. In particular, DPO derives reward signals directly from the offline preference data, but in doing so overfits the reward signals and generates suboptimal responses that may contain human biases in the dataset. In this work, we propose a practical application of a diversity-seeking RL algorithm called GFlowNet-DPO (GDPO) in an offline preference alignment setting to curtail such challenges. Empirical results show GDPO can generate far more diverse responses than the baseline methods while remaining relatively aligned with human values in dialog generation and summarization tasks. | [
"Kwon, Oh Joon",
"Matsunaga, Daiki E.",
"Kim, Kee-Eung"
] | GDPO: Learning to Directly Align Language Models with Diversity Using GFlowNets | emnlp-main.951 | Poster | 2410.15096 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
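For context on the objective GDPO departs from, here is the standard DPO loss in PyTorch. GDPO itself replaces this preference-margin objective with a GFlowNet-based, diversity-seeking one, which is not reproduced here:

```python
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO: increase the policy's log-probability margin for the
    chosen response over the rejected one, relative to a frozen reference
    model (context sketch only; not the GDPO objective)."""
    margins = beta * ((logp_chosen - ref_logp_chosen)
                      - (logp_rejected - ref_logp_rejected))
    return -F.logsigmoid(margins).mean()
```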
https://aclanthology.org/2024.emnlp-main.952.bib | https://aclanthology.org/2024.emnlp-main.952/ | @inproceedings{chen-etal-2024-susceptible,
title = "How Susceptible are Large Language Models to Ideological Manipulation?",
author = "Chen, Kai and
He, Zihao and
Yan, Jun and
Shi, Taiwei and
Lerman, Kristina",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.952",
pages = "17140--17161",
abstract = "Large Language Models (LLMs) possess the potential to exert substantial influence on public perceptions and interactions with information. This raises concerns about the societal impact that could arise if the ideologies within these models can be easily manipulated. In this work, we investigate how effectively LLMs can learn and generalize ideological biases from their instruction-tuning data. Our findings reveal a concerning vulnerability: exposure to only a small amount of ideologically driven samples significantly alters the ideology of LLMs. Notably, LLMs demonstrate a startling ability to absorb ideology from one topic and generalize it to even unrelated ones. The ease with which LLMs{'} ideologies can be skewed underscores the risks associated with intentionally poisoned training data by malicious actors or inadvertently introduced biases by data annotators. It also emphasizes the imperative for robust safeguards to mitigate the influence of ideological manipulations on LLMs.",
}
| Large Language Models (LLMs) possess the potential to exert substantial influence on public perceptions and interactions with information. This raises concerns about the societal impact that could arise if the ideologies within these models can be easily manipulated. In this work, we investigate how effectively LLMs can learn and generalize ideological biases from their instruction-tuning data. Our findings reveal a concerning vulnerability: exposure to only a small amount of ideologically driven samples significantly alters the ideology of LLMs. Notably, LLMs demonstrate a startling ability to absorb ideology from one topic and generalize it to even unrelated ones. The ease with which LLMs{'} ideologies can be skewed underscores the risks associated with intentionally poisoned training data by malicious actors or inadvertently introduced biases by data annotators. It also emphasizes the imperative for robust safeguards to mitigate the influence of ideological manipulations on LLMs. | [
"Chen, Kai",
"He, Zihao",
"Yan, Jun",
"Shi, Taiwei",
"Lerman, Kristina"
] | How Susceptible are Large Language Models to Ideological Manipulation? | emnlp-main.952 | Poster | 2402.11725 | [
"https://github.com/kaichen23/llm_ideo_manipulate"
] | https://huggingface.co/papers/2402.11725 | 1 | 1 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.953.bib | https://aclanthology.org/2024.emnlp-main.953/ | @inproceedings{harel-canada-etal-2024-measuring,
title = "Measuring Psychological Depth in Language Models",
author = "Harel-Canada, Fabrice Y and
Zhou, Hanyu and
Muppalla, Sreya and
Yildiz, Zeynep Senahan and
Kim, Miryung and
Sahai, Amit and
Peng, Nanyun",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.953",
pages = "17162--17196",
abstract = "Evaluations of creative stories generated by large language models (LLMs) often focus on objective properties of the text, such as its style, coherence, and diversity. While these metrics are indispensable, they do not speak to a story{'}s subjective, psychological impact from a reader{'}s perspective. We introduce the Psychological Depth Scale (PDS), a novel framework rooted in literary theory that measures an LLM{'}s ability to produce authentic and narratively complex stories that provoke emotion, empathy, and engagement. We empirically validate our framework by showing that humans can consistently evaluate stories based on PDS (0.72 Krippendorff{'}s alpha). We also explore techniques for automating the PDS to easily scale future analyses. GPT-4o, combined with a novel Mixture-of-Personas (MoP) prompting strategy, achieves an average Spearman correlation of 0.51 with human judgment while Llama-3-70B with constrained decoding scores as high as 0.68 for empathy. Finally, we compared the depth of stories authored by both humans and LLMs. Surprisingly, GPT-4 stories either surpassed or were statistically indistinguishable from highly-rated human-written stories sourced from Reddit. By shifting the focus from text to reader, the Psychological Depth Scale is a validated, automated, and systematic means of measuring the capacity of LLMs to connect with humans through the stories they tell.",
}
| Evaluations of creative stories generated by large language models (LLMs) often focus on objective properties of the text, such as its style, coherence, and diversity. While these metrics are indispensable, they do not speak to a story{'}s subjective, psychological impact from a reader{'}s perspective. We introduce the Psychological Depth Scale (PDS), a novel framework rooted in literary theory that measures an LLM{'}s ability to produce authentic and narratively complex stories that provoke emotion, empathy, and engagement. We empirically validate our framework by showing that humans can consistently evaluate stories based on PDS (0.72 Krippendorff{'}s alpha). We also explore techniques for automating the PDS to easily scale future analyses. GPT-4o, combined with a novel Mixture-of-Personas (MoP) prompting strategy, achieves an average Spearman correlation of 0.51 with human judgment while Llama-3-70B with constrained decoding scores as high as 0.68 for empathy. Finally, we compared the depth of stories authored by both humans and LLMs. Surprisingly, GPT-4 stories either surpassed or were statistically indistinguishable from highly-rated human-written stories sourced from Reddit. By shifting the focus from text to reader, the Psychological Depth Scale is a validated, automated, and systematic means of measuring the capacity of LLMs to connect with humans through the stories they tell. | [
"Harel-Canada, Fabrice Y",
"Zhou, Hanyu",
"Muppalla, Sreya",
"Yildiz, Zeynep Senahan",
"Kim, Miryung",
"Sahai, Amit",
"Peng, Nanyun"
] | Measuring Psychological Depth in Language Models | emnlp-main.953 | Oral | 2406.12680 | [
"https://github.com/PlusLabNLP/psychdepth"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.954.bib | https://aclanthology.org/2024.emnlp-main.954/ | @inproceedings{zhao-etal-2024-media,
title = "Media Attitude Detection via Framing Analysis with Events and their Relations",
author = "Zhao, Jin and
Tu, Jingxuan and
Du, Han and
Xue, Nianwen",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.954",
pages = "17197--17210",
abstract = "Framing is used to present some selective aspects of an issue and making them more salient, which aims to promote certain values, interpretations, or solutions (Entman, 1993). This study investigates the nuances of media framing on public perception and understanding by examining how events are presented within news articles. Unlike previous research that primarily focused on word choice as a framing device, this work explores the comprehensive narrative construction through events and their relations. Our method integrates event extraction, cross-document event coreference, and causal relationship mapping among events to extract framing devices employed by media to assess their role in framing the narrative. We evaluate our approach with a media attitude detection task and show that the use of event mentions, event cluster descriptors, and their causal relations effectively captures the subtle nuances of framing, thereby providing deeper insights into the attitudes conveyed by news articles. The experimental results show the framing device models surpass the baseline models and offers a more detailed and explainable analysis of media framing effects. We make the source code and dataset publicly available.",
}
| Framing is used to present some selective aspects of an issue and make them more salient, which aims to promote certain values, interpretations, or solutions (Entman, 1993). This study investigates the nuances of media framing on public perception and understanding by examining how events are presented within news articles. Unlike previous research that primarily focused on word choice as a framing device, this work explores the comprehensive narrative construction through events and their relations. Our method integrates event extraction, cross-document event coreference, and causal relationship mapping among events to extract framing devices employed by media to assess their role in framing the narrative. We evaluate our approach with a media attitude detection task and show that the use of event mentions, event cluster descriptors, and their causal relations effectively captures the subtle nuances of framing, thereby providing deeper insights into the attitudes conveyed by news articles. The experimental results show the framing device models surpass the baseline models and offer a more detailed and explainable analysis of media framing effects. We make the source code and dataset publicly available. | [
"Zhao, Jin",
"Tu, Jingxuan",
"Du, Han",
"Xue, Nianwen"
] | Media Attitude Detection via Framing Analysis with Events and their Relations | emnlp-main.954 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.955.bib | https://aclanthology.org/2024.emnlp-main.955/ | @inproceedings{ba-etal-2024-fill,
title = "Fill In The Gaps: Model Calibration and Generalization with Synthetic Data",
author = "Ba, Yang and
Mancenido, Michelle V and
Pan, Rong",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.955",
pages = "17211--17225",
abstract = "As machine learning models continue to swiftly advance, calibrating their performance has become a major concern prior to practical and widespread implementation. Most existing calibration methods often negatively impact model accuracy due to the lack of diversity of validation data, resulting in reduced generalizability. To address this, we propose a calibration method that incorporates synthetic data without compromising accuracy. We derive the expected calibration error (ECE) bound using the Probably Approximately Correct (PAC) learning framework. Large language models (LLMs), known for their ability to mimic real data and generate text with mixed class labels, are utilized as a synthetic data generation strategy to lower the ECE bound and improve model accuracy on real test data. Additionally, we propose data generation mechanisms for efficient calibration. Testing our method on four different natural language processing tasks, we observed an average up to 34{\%} increase in accuracy and 33{\%} decrease in ECE.",
}
| As machine learning models continue to swiftly advance, calibrating their performance has become a major concern prior to practical and widespread implementation. Existing calibration methods often negatively impact model accuracy due to the lack of diversity of validation data, resulting in reduced generalizability. To address this, we propose a calibration method that incorporates synthetic data without compromising accuracy. We derive the expected calibration error (ECE) bound using the Probably Approximately Correct (PAC) learning framework. Large language models (LLMs), known for their ability to mimic real data and generate text with mixed class labels, are utilized as a synthetic data generation strategy to lower the ECE bound and improve model accuracy on real test data. Additionally, we propose data generation mechanisms for efficient calibration. Testing our method on four different natural language processing tasks, we observed an average increase of up to 34{\%} in accuracy and a 33{\%} decrease in ECE. | [
"Ba, Yang",
"Mancenido, Michelle V",
"Pan, Rong"
] | Fill In The Gaps: Model Calibration and Generalization with Synthetic Data | emnlp-main.955 | Poster | 2410.10864 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.956.bib | https://aclanthology.org/2024.emnlp-main.956/ | @inproceedings{shaier-etal-2024-adaptive,
title = "Adaptive Question Answering: Enhancing Language Model Proficiency for Addressing Knowledge Conflicts with Source Citations",
author = "Shaier, Sagi and
Kobren, Ari and
Ogren, Philip V.",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.956",
pages = "17226--17239",
abstract = "Resolving knowledge conflicts is a crucial challenge in Question Answering (QA) tasks, as the internet contains numerous conflicting facts and opinions. While some research has made progress in tackling ambiguous settings where multiple valid answers exist, these approaches often neglect to provide source citations, leaving users to evaluate the factuality of each answer. On the other hand, existing work on citation generation has focused on unambiguous settings with single answers, failing to address the complexity of real-world scenarios. Despite the importance of both aspects, no prior research has combined them, leaving a significant gap in the development of QA systems. In this work, we bridge this gap by proposing the novel task of QA with source citation in ambiguous settings, where multiple valid answers exist. To facilitate research in this area, we create a comprehensive framework consisting of: (1) five novel datasets, obtained by augmenting three existing reading comprehension datasets with citation meta-data across various ambiguous settings, such as distractors and paraphrasing; (2) the first ambiguous multi-hop QA dataset featuring real-world, naturally occurring contexts; (3) two new metrics to evaluate models{'} performances; and (4) several strong baselines using rule-based, prompting, and finetuning approaches over five large language models. We hope that this new task, datasets, metrics, and baselines will inspire the community to push the boundaries of QA research and develop more trustworthy and interpretable systems.",
}
| Resolving knowledge conflicts is a crucial challenge in Question Answering (QA) tasks, as the internet contains numerous conflicting facts and opinions. While some research has made progress in tackling ambiguous settings where multiple valid answers exist, these approaches often neglect to provide source citations, leaving users to evaluate the factuality of each answer. On the other hand, existing work on citation generation has focused on unambiguous settings with single answers, failing to address the complexity of real-world scenarios. Despite the importance of both aspects, no prior research has combined them, leaving a significant gap in the development of QA systems. In this work, we bridge this gap by proposing the novel task of QA with source citation in ambiguous settings, where multiple valid answers exist. To facilitate research in this area, we create a comprehensive framework consisting of: (1) five novel datasets, obtained by augmenting three existing reading comprehension datasets with citation meta-data across various ambiguous settings, such as distractors and paraphrasing; (2) the first ambiguous multi-hop QA dataset featuring real-world, naturally occurring contexts; (3) two new metrics to evaluate models{'} performances; and (4) several strong baselines using rule-based, prompting, and finetuning approaches over five large language models. We hope that this new task, datasets, metrics, and baselines will inspire the community to push the boundaries of QA research and develop more trustworthy and interpretable systems. | [
"Shaier, Sagi",
"Kobren, Ari",
"Ogren, Philip V."
] | Adaptive Question Answering: Enhancing Language Model Proficiency for Addressing Knowledge Conflicts with Source Citations | emnlp-main.956 | Poster | 2410.04241 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.957.bib | https://aclanthology.org/2024.emnlp-main.957/ | @inproceedings{mendes-etal-2024-granular,
title = "Granular Privacy Control for Geolocation with Vision Language Models",
author = "Mendes, Ethan and
Chen, Yang and
Hays, James and
Das, Sauvik and
Xu, Wei and
Ritter, Alan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.957",
pages = "17240--17292",
abstract = "Vision Language Models (VLMs) are rapidly advancing in their capability to answer information-seeking questions. As these models are widely deployed in consumer applications, they could lead to new privacy risks due to emergent abilities to identify people in photos, geolocate images, etc. As we demonstrate, somewhat surprisingly, current open-source and proprietary VLMs are very capable image geolocators, making widespread geolocation with VLMs an immediate privacy risk, rather than merely a theoretical future concern. As a first step to address this challenge, we develop a new benchmark, GPTGeoChat, to test the capability of VLMs to moderate geolocation dialogues with users. We collect a set of 1,000 image geolocation conversations between in-house annotators and GPT-4v, which are annotated with the granularity of location information revealed at each turn. Using this new dataset we evaluate the ability of various VLMs to moderate GPT-4v geolocation conversations by determining when too much location information has been revealed. We find that custom fine-tuned models perform on par with prompted API-based models when identifying leaked location information at the country or city level, however fine-tuning on supervised data appears to be needed to accurately moderate finer granularities, such as the name of a restaurant or building.",
}
| Vision Language Models (VLMs) are rapidly advancing in their capability to answer information-seeking questions. As these models are widely deployed in consumer applications, they could lead to new privacy risks due to emergent abilities to identify people in photos, geolocate images, etc. As we demonstrate, somewhat surprisingly, current open-source and proprietary VLMs are very capable image geolocators, making widespread geolocation with VLMs an immediate privacy risk, rather than merely a theoretical future concern. As a first step to address this challenge, we develop a new benchmark, GPTGeoChat, to test the capability of VLMs to moderate geolocation dialogues with users. We collect a set of 1,000 image geolocation conversations between in-house annotators and GPT-4v, which are annotated with the granularity of location information revealed at each turn. Using this new dataset, we evaluate the ability of various VLMs to moderate GPT-4v geolocation conversations by determining when too much location information has been revealed. We find that custom fine-tuned models perform on par with prompted API-based models when identifying leaked location information at the country or city level; however, fine-tuning on supervised data appears to be needed to accurately moderate finer granularities, such as the name of a restaurant or building. | [
"Mendes, Ethan",
"Chen, Yang",
"Hays, James",
"Das, Sauvik",
"Xu, Wei",
"Ritter, Alan"
] | Granular Privacy Control for Geolocation with Vision Language Models | emnlp-main.957 | Oral | 2407.04952 | [
"https://github.com/ethanm88/GPTGeoChat"
] | https://huggingface.co/papers/2407.04952 | 3 | 4 | 1 | 6 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.958.bib | https://aclanthology.org/2024.emnlp-main.958/ | @inproceedings{jiang-xu-2024-medreadme,
title = "{M}ed{R}ead{M}e: A Systematic Study for Fine-grained Sentence Readability in Medical Domain",
author = "Jiang, Chao and
Xu, Wei",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.958",
pages = "17293--17319",
abstract = "Medical texts are notoriously challenging to read. Properly measuring their readability is the first step towards making them more accessible. Here, we present the first systematic study on fine-grained readability measurements in the medical domain, at both sentence-level and span-level. We first introduce a new dataset MedReadMe, which consists of manually annotated readability ratings and fine-grained complex span annotation for 4,520 sentences, featuring two novel {``}Google-Easy{''} and {``}Google-Hard{''} categories. It supports our quantitative analysis, which covers 650 linguistic features and additional complex span features, to answer {``}why medical sentences are so hard.{''} Enabled by our high-quality annotation, we benchmark several state-of-the-art sentence-level readability metrics, including unsupervised, supervised, and prompting-based methods using recently developed large language models (LLMs). Informed by our fine-grained complex span annotation, we find that adding a single feature, capturing the number of jargon spans, into existing readability formulas can significantly improve their correlation with human judgments, and also make them more stable. We will publicly release data and code.",
}
| Medical texts are notoriously challenging to read. Properly measuring their readability is the first step towards making them more accessible. Here, we present the first systematic study on fine-grained readability measurements in the medical domain, at both sentence-level and span-level. We first introduce a new dataset MedReadMe, which consists of manually annotated readability ratings and fine-grained complex span annotation for 4,520 sentences, featuring two novel {``}Google-Easy{''} and {``}Google-Hard{''} categories. It supports our quantitative analysis, which covers 650 linguistic features and additional complex span features, to answer {``}why medical sentences are so hard.{''} Enabled by our high-quality annotation, we benchmark several state-of-the-art sentence-level readability metrics, including unsupervised, supervised, and prompting-based methods using recently developed large language models (LLMs). Informed by our fine-grained complex span annotation, we find that adding a single feature, capturing the number of jargon spans, into existing readability formulas can significantly improve their correlation with human judgments, and also make them more stable. We will publicly release data and code. | [
"Jiang, Chao",
"Xu, Wei"
] | MedReadMe: A Systematic Study for Fine-grained Sentence Readability in Medical Domain | emnlp-main.958 | Oral | 2405.02144 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.959.bib | https://aclanthology.org/2024.emnlp-main.959/ | @inproceedings{shah-etal-2024-memeclip,
title = "{M}eme{CLIP}: Leveraging {CLIP} Representations for Multimodal Meme Classification",
author = "Shah, Siddhant Bikram and
Shiwakoti, Shuvam and
Chaudhary, Maheep and
Wang, Haohan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.959",
pages = "17320--17332",
abstract = "The complexity of text-embedded images presents a formidable challenge in machine learning given the need for multimodal understanding of multiple aspects of expression conveyed by them. While previous research in multimodal analysis has primarily focused on singular aspects such as hate speech and its subclasses, this study expands this focus to encompass multiple aspects of linguistics: hate, targets of hate, stance, and humor. We introduce a novel dataset PrideMM comprising 5,063 text-embedded images associated with the LGBTQ+ Pride movement, thereby addressing a serious gap in existing resources. We conduct extensive experimentation on PrideMM by using unimodal and multimodal baseline methods to establish benchmarks for each task. Additionally, we propose a novel framework MemeCLIP for efficient downstream learning while preserving the knowledge of the pre-trained CLIP model. The results of our experiments show that MemeCLIP achieves superior performance compared to previously proposed frameworks on two real-world datasets. We further compare the performance of MemeCLIP and zero-shot GPT-4 on the hate classification task. Finally, we discuss the shortcomings of our model by qualitatively analyzing misclassified samples. Our code and dataset are publicly available at: https://github.com/SiddhantBikram/MemeCLIP.",
}
| The complexity of text-embedded images presents a formidable challenge in machine learning given the need for multimodal understanding of multiple aspects of expression conveyed by them. While previous research in multimodal analysis has primarily focused on singular aspects such as hate speech and its subclasses, this study expands this focus to encompass multiple aspects of linguistics: hate, targets of hate, stance, and humor. We introduce a novel dataset PrideMM comprising 5,063 text-embedded images associated with the LGBTQ+ Pride movement, thereby addressing a serious gap in existing resources. We conduct extensive experimentation on PrideMM by using unimodal and multimodal baseline methods to establish benchmarks for each task. Additionally, we propose a novel framework MemeCLIP for efficient downstream learning while preserving the knowledge of the pre-trained CLIP model. The results of our experiments show that MemeCLIP achieves superior performance compared to previously proposed frameworks on two real-world datasets. We further compare the performance of MemeCLIP and zero-shot GPT-4 on the hate classification task. Finally, we discuss the shortcomings of our model by qualitatively analyzing misclassified samples. Our code and dataset are publicly available at: https://github.com/SiddhantBikram/MemeCLIP. | [
"Shah, Siddhant Bikram",
"Shiwakoti, Shuvam",
"Chaudhary, Maheep",
"Wang, Haohan"
] | MemeCLIP: Leveraging CLIP Representations for Multimodal Meme Classification | emnlp-main.959 | Poster | 2409.14703 | [
"https://github.com/siddhantbikram/memeclip"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.960.bib | https://aclanthology.org/2024.emnlp-main.960/ | @inproceedings{zhu-etal-2024-flipguard,
title = "{F}lip{G}uard: Defending Preference Alignment against Update Regression with Constrained Optimization",
author = "Zhu, Mingye and
Liu, Yi and
Wang, Quan and
Guo, Junbo and
Mao, Zhendong",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.960",
pages = "17333--17350",
}
| No abstract found | [
"Zhu, Mingye",
"Liu, Yi",
"Wang, Quan",
"Guo, Junbo",
"Mao, Zhendong"
] | FlipGuard: Defending Preference Alignment against Update Regression with Constrained Optimization | emnlp-main.960 | Poster | 2410.00508 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.961.bib | https://aclanthology.org/2024.emnlp-main.961/ | @inproceedings{chen-etal-2024-storysparkqa,
title = "{S}tory{S}park{QA}: Expert-Annotated {QA} Pairs with Real-World Knowledge for Children{'}s Story-Based Learning",
author = "Chen, Jiaju and
Lu, Yuxuan and
Zhang, Shao and
Yao, Bingsheng and
Dong, Yuanzhe and
Xu, Ying and
Li, Yunyao and
Wang, Qianwen and
Wang, Dakuo and
Sun, Yuling",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.961",
pages = "17351--17370",
abstract = "Interactive story reading is common in early childhood education, where teachers expect to teach both language skills and real-world knowledge beyond the story. While many story reading systems have been developed for this activity, they often fail to infuse real-world knowledge into the conversation. This limitation can be attributed to the existing question-answering (QA) datasets used for children{'}s education, upon which the systems are built, failing to capture the nuances of how education experts think when conducting interactive story reading activities. To bridge this gap, we design an annotation framework, empowered by existing knowledge graph to capture experts{'} annotations and thinking process, and leverage this framework to construct StorySparkQA dataset, which comprises 5, 868 expert-annotated QA pairs with real-world knowledge. We conduct automated and human expert evaluations across various QA pair generation settings to demonstrate that our StorySparkQA can effectively support models in generating QA pairs that target real-world knowledge beyond story content. StorySparkQA is available at https://huggingface.co/datasets/NEU-HAI/StorySparkQA.",
}
| Interactive story reading is common in early childhood education, where teachers expect to teach both language skills and real-world knowledge beyond the story. While many story reading systems have been developed for this activity, they often fail to infuse real-world knowledge into the conversation. This limitation can be attributed to the existing question-answering (QA) datasets used for children{'}s education, upon which the systems are built, failing to capture the nuances of how education experts think when conducting interactive story reading activities. To bridge this gap, we design an annotation framework, empowered by an existing knowledge graph, to capture experts{'} annotations and thinking processes, and leverage this framework to construct the StorySparkQA dataset, which comprises 5,868 expert-annotated QA pairs with real-world knowledge. We conduct automated and human expert evaluations across various QA pair generation settings to demonstrate that our StorySparkQA can effectively support models in generating QA pairs that target real-world knowledge beyond story content. StorySparkQA is available at https://huggingface.co/datasets/NEU-HAI/StorySparkQA. | [
"Chen, Jiaju",
"Lu, Yuxuan",
"Zhang, Shao",
"Yao, Bingsheng",
"Dong, Yuanzhe",
"Xu, Ying",
"Li, Yunyao",
"Wang, Qianwen",
"Wang, Dakuo",
"Sun, Yuling"
] | StorySparkQA: Expert-Annotated QA Pairs with Real-World Knowledge for Children's Story-Based Learning | emnlp-main.961 | Poster | 2311.09756 | [
"https://github.com/neuhai/storysparkqa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.962.bib | https://aclanthology.org/2024.emnlp-main.962/ | @inproceedings{liu-etal-2024-medcot,
title = "{M}ed{C}o{T}: Medical Chain of Thought via Hierarchical Expert",
author = "Liu, Jiaxiang and
Wang, Yuan and
Du, Jiawei and
Zhou, Joey Tianyi and
Liu, Zuozhu",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.962",
pages = "17371--17389",
abstract = "Artificial intelligence has advanced in Medical Visual Question Answering (Med-VQA), but prevalent research tends to focus on the accuracy of the answers, often overlooking the reasoning paths and interpretability, which are crucial in clinical settings. Besides, current Med-VQA algorithms, typically reliant on singular models, lack the robustness needed for real-world medical diagnostics which usually require collaborative expert evaluation. To address these shortcomings, this paper presents MedCoT, a novel hierarchical expert verification reasoning chain method designed to enhance interpretability and accuracy in biomedical imaging inquiries. MedCoT is predicated on two principles: The necessity for explicit reasoning paths in Med-VQA and the requirement for multi-expert review to formulate accurate conclusions. The methodology involves an Initial Specialist proposing diagnostic rationales, followed by a Follow-up Specialist who validates these rationales, and finally, a consensus is reached through a vote among a sparse Mixture of Experts within the locally deployed Diagnostic Specialist, which then provides the definitive diagnosis. Experimental evaluations on four standard Med-VQA datasets demonstrate that MedCoT surpasses existing state-of-the-art approaches, providing significant improvements in performance and interpretability.",
}
| Artificial intelligence has advanced in Medical Visual Question Answering (Med-VQA), but prevalent research tends to focus on the accuracy of the answers, often overlooking the reasoning paths and interpretability, which are crucial in clinical settings. Besides, current Med-VQA algorithms, typically reliant on singular models, lack the robustness needed for real-world medical diagnostics which usually require collaborative expert evaluation. To address these shortcomings, this paper presents MedCoT, a novel hierarchical expert verification reasoning chain method designed to enhance interpretability and accuracy in biomedical imaging inquiries. MedCoT is predicated on two principles: The necessity for explicit reasoning paths in Med-VQA and the requirement for multi-expert review to formulate accurate conclusions. The methodology involves an Initial Specialist proposing diagnostic rationales, followed by a Follow-up Specialist who validates these rationales, and finally, a consensus is reached through a vote among a sparse Mixture of Experts within the locally deployed Diagnostic Specialist, which then provides the definitive diagnosis. Experimental evaluations on four standard Med-VQA datasets demonstrate that MedCoT surpasses existing state-of-the-art approaches, providing significant improvements in performance and interpretability. | [
"Liu, Jiaxiang",
"Wang, Yuan",
"Du, Jiawei",
"Zhou, Joey Tianyi",
"Liu, Zuozhu"
] | MedCoT: Medical Chain of Thought via Hierarchical Expert | emnlp-main.962 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.963.bib | https://aclanthology.org/2024.emnlp-main.963/ | @inproceedings{lin-etal-2024-varying,
title = "Varying Sentence Representations via Condition-Specified Routers",
author = "Lin, Ziyong and
Wang, Quansen and
Jia, Zixia and
Zheng, Zilong",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.963",
pages = "17390--17401",
abstract = "Semantic similarity between two sentences is inherently subjective and can vary significantly based on the specific aspects emphasized. Consequently, traditional sentence encoders must be capable of generating conditioned sentence representations that account for diverse conditions or aspects. In this paper, we propose a novel yet efficient framework based on transformer-style language models that facilitates advanced conditioned sentence representation while maintaining model parameters and computational efficiency. Empirical evaluations on the Conditional Semantic Textual Similarity and Knowledge Graph Completion tasks demonstrate the superiority of our proposed framework.",
}
| Semantic similarity between two sentences is inherently subjective and can vary significantly based on the specific aspects emphasized. Consequently, traditional sentence encoders must be capable of generating conditioned sentence representations that account for diverse conditions or aspects. In this paper, we propose a novel yet efficient framework based on transformer-style language models that facilitates advanced conditioned sentence representation while maintaining model parameters and computational efficiency. Empirical evaluations on the Conditional Semantic Textual Similarity and Knowledge Graph Completion tasks demonstrate the superiority of our proposed framework. | [
"Lin, Ziyong",
"Wang, Quansen",
"Jia, Zixia",
"Zheng, Zilong"
] | Varying Sentence Representations via Condition-Specified Routers | emnlp-main.963 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.964.bib | https://aclanthology.org/2024.emnlp-main.964/ | @inproceedings{ou-etal-2024-inductive,
title = "Inductive-Deductive Strategy Reuse for Multi-Turn Instructional Dialogues",
author = "Ou, Jiao and
Wu, Jiayu and
Liu, Che and
Zhang, Fuzheng and
Zhang, Di and
Gai, Kun",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.964",
pages = "17402--17431",
abstract = "Aligning large language models (LLMs) with human expectations requires high-quality instructional dialogues, which can be achieved by raising diverse, in-depth, and insightful instructions that deepen interactions. Existing methods target instructions from real instruction dialogues as a learning goal and fine-tune a user simulator for posing instructions. However, the user simulator struggles to implicitly model complex dialogue flows and pose high-quality instructions. In this paper, we take inspiration from the cognitive abilities inherent in human learning and propose the explicit modeling of complex dialogue flows through instructional strategy reuse. Specifically, we first induce high-level strategies from various real instruction dialogues. These strategies are applied to new dialogue scenarios deductively, where the instructional strategies facilitate high-quality instructions. Experimental results show that our method can generate diverse, in-depth, and insightful instructions for a given dialogue history. The constructed multi-turn instructional dialogues can outperform competitive baselines on the downstream chat model.",
}
| Aligning large language models (LLMs) with human expectations requires high-quality instructional dialogues, which can be achieved by raising diverse, in-depth, and insightful instructions that deepen interactions. Existing methods target instructions from real instruction dialogues as a learning goal and fine-tune a user simulator for posing instructions. However, the user simulator struggles to implicitly model complex dialogue flows and pose high-quality instructions. In this paper, we take inspiration from the cognitive abilities inherent in human learning and propose the explicit modeling of complex dialogue flows through instructional strategy reuse. Specifically, we first induce high-level strategies from various real instruction dialogues. These strategies are applied to new dialogue scenarios deductively, where the instructional strategies facilitate high-quality instructions. Experimental results show that our method can generate diverse, in-depth, and insightful instructions for a given dialogue history. The constructed multi-turn instructional dialogues can outperform competitive baselines on the downstream chat model. | [
"Ou, Jiao",
"Wu, Jiayu",
"Liu, Che",
"Zhang, Fuzheng",
"Zhang, Di",
"Gai, Kun"
] | Inductive-Deductive Strategy Reuse for Multi-Turn Instructional Dialogues | emnlp-main.964 | Poster | 2404.11095 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.965.bib | https://aclanthology.org/2024.emnlp-main.965/ | @inproceedings{ferrando-voita-2024-information,
title = "Information Flow Routes: Automatically Interpreting Language Models at Scale",
author = "Ferrando, Javier and
Voita, Elena",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.965",
pages = "17432--17445",
abstract = "Information flows by routes inside the network via mechanisms implemented in the model. These routes can be represented as graphs where nodes correspond to token representations and edges to computations. We automatically build these graphs in a top-down manner, for each prediction leaving only the most important nodes and edges. In contrast to the existing workflows relying on activation patching, we do this through attribution: this allows us to efficiently uncover existing circuits with just a single forward pass. Unlike with patching, we do not need a human to carefully design prediction templates, and we can extract information flow routes for any prediction (not just the ones among the allowed templates). As a result, we can analyze model behavior in general, for specific types of predictions, or different domains. We experiment with Llama 2 and show that some attention head roles are overall important, e.g. previous token heads and subword merging heads. Next, we find similarities in Llama 2 behavior when handling tokens of the same part of speech. Finally, we show that some model components can be specialized on domains such as coding or multilingual texts.",
}
| Information flows by routes inside the network via mechanisms implemented in the model. These routes can be represented as graphs where nodes correspond to token representations and edges to computations. We automatically build these graphs in a top-down manner, for each prediction leaving only the most important nodes and edges. In contrast to the existing workflows relying on activation patching, we do this through attribution: this allows us to efficiently uncover existing circuits with just a single forward pass. Unlike with patching, we do not need a human to carefully design prediction templates, and we can extract information flow routes for any prediction (not just the ones among the allowed templates). As a result, we can analyze model behavior in general, for specific types of predictions, or different domains. We experiment with Llama 2 and show that some attention head roles are overall important, e.g. previous token heads and subword merging heads. Next, we find similarities in Llama 2 behavior when handling tokens of the same part of speech. Finally, we show that some model components can be specialized on domains such as coding or multilingual texts. | [
"Ferr",
"o, Javier",
"Voita, Elena"
] | Information Flow Routes: Automatically Interpreting Language Models at Scale | emnlp-main.965 | Poster | 2403.00824 | [
"https://github.com/facebookresearch/llm-transparency-tool"
] | https://huggingface.co/papers/2403.00824 | 0 | 3 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.966.bib | https://aclanthology.org/2024.emnlp-main.966/ | @inproceedings{zhou-etal-2024-simple,
title = "A Simple yet Effective Training-free Prompt-free Approach to {C}hinese Spelling Correction Based on Large Language Models",
author = "Zhou, Houquan and
Li, Zhenghua and
Zhang, Bo and
Li, Chen and
Lai, Shaopeng and
Zhang, Ji and
Huang, Fei and
Zhang, Min",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.966",
pages = "17446--17467",
abstract = "This work proposes a simple training-free prompt-free approach to leverage large language models (LLMs) for the Chinese spelling correction (CSC) task, which is totally different from all previous CSC approaches. The key idea is to use an LLM as a pure language model in a conventional manner. The LLM goes through the input sentence from the beginning, and at each inference step, produces a distribution over its vocabulary for deciding the next token, given a partial sentence. To ensure that the output sentence remains faithful to the input sentence, we design a minimal distortion model that utilizes pronunciation or shape similarities between the original and replaced characters. Furthermore, we propose two useful reward strategies to address practical challenges specific to the CSC task. Experiments on five public datasets demonstrate that our approach significantly improves LLM performance, enabling them to compete with state-of-the-art domain-general CSC models.",
}
| This work proposes a simple training-free prompt-free approach to leverage large language models (LLMs) for the Chinese spelling correction (CSC) task, which is totally different from all previous CSC approaches. The key idea is to use an LLM as a pure language model in a conventional manner. The LLM goes through the input sentence from the beginning, and at each inference step, produces a distribution over its vocabulary for deciding the next token, given a partial sentence. To ensure that the output sentence remains faithful to the input sentence, we design a minimal distortion model that utilizes pronunciation or shape similarities between the original and replaced characters. Furthermore, we propose two useful reward strategies to address practical challenges specific to the CSC task. Experiments on five public datasets demonstrate that our approach significantly improves LLM performance, enabling them to compete with state-of-the-art domain-general CSC models. | [
"Zhou, Houquan",
"Li, Zhenghua",
"Zhang, Bo",
"Li, Chen",
"Lai, Shaopeng",
"Zhang, Ji",
"Huang, Fei",
"Zhang, Min"
] | A Simple yet Effective Training-free Prompt-free Approach to Chinese Spelling Correction Based on Large Language Models | emnlp-main.966 | Poster | 2410.04027 | [
"https://github.com/Jacob-Zhou/simple-csc"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.967.bib | https://aclanthology.org/2024.emnlp-main.967/ | @inproceedings{dai-etal-2024-representational,
title = "Representational Analysis of Binding in Language Models",
author = "Dai, Qin and
Heinzerling, Benjamin and
Inui, Kentaro",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.967",
pages = "17468--17493",
abstract = "Entity tracking is essential for complex reasoning. To perform in-context entity tracking, language models (LMs) must bind an entity to its attribute (e.g., bind a container to its content) to recall attribute for a given entity. For example, given a context mentioning {``}The coffee is in Box Z, the stone is in Box M, the map is in Box H{''}, to infer {``}Box Z contains the coffee{''} later, LMs must bind {``}Box Z{''} to {``}coffee{''}. To explain the binding behaviour of LMs, existing research introduces a Binding ID mechanism and states that LMs use a abstract concept called Binding ID (BI) to internally mark entity-attribute pairs. However, they have not directly captured the BI information from entity activations. In this work, we provide a novel view of the Binding ID mechanism by localizing the BI information. Specifically, we discover that there exists a low-rank subspace in the hidden state (or activation) of LMs, that primarily encodes BIs. To identify this subspace, we take principle component analysis as our first attempt and it is empirically proven to be effective. Moreover, we also discover that when editing representations along directions in the subspace, LMs tend to bind a given entity to other attributes accordingly. For example, by patching activations along the BI encoding direction we can make the LM to infer {``}Box Z contains the stone{''} and {``}Box Z contains the map{''}.",
}
| Entity tracking is essential for complex reasoning. To perform in-context entity tracking, language models (LMs) must bind an entity to its attribute (e.g., bind a container to its content) to recall the attribute for a given entity. For example, given a context mentioning {``}The coffee is in Box Z, the stone is in Box M, the map is in Box H{''}, to infer {``}Box Z contains the coffee{''} later, LMs must bind {``}Box Z{''} to {``}coffee{''}. To explain the binding behaviour of LMs, existing research introduces a Binding ID mechanism and states that LMs use an abstract concept called Binding ID (BI) to internally mark entity-attribute pairs. However, they have not directly captured the BI information from entity activations. In this work, we provide a novel view of the Binding ID mechanism by localizing the BI information. Specifically, we discover that there exists a low-rank subspace in the hidden state (or activation) of LMs that primarily encodes BIs. To identify this subspace, we take principal component analysis as our first attempt, and it is empirically proven to be effective. Moreover, we also discover that when editing representations along directions in the subspace, LMs tend to bind a given entity to other attributes accordingly. For example, by patching activations along the BI encoding direction, we can make the LM infer {``}Box Z contains the stone{''} and {``}Box Z contains the map{''}. | [
"Dai, Qin",
"Heinzerling, Benjamin",
"Inui, Kentaro"
] | Representational Analysis of Binding in Language Models | emnlp-main.967 | Poster | 2409.05448 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.968.bib | https://aclanthology.org/2024.emnlp-main.968/ | @inproceedings{yu-etal-2024-cosafe,
title = "{C}o{S}afe: Evaluating Large Language Model Safety in Multi-Turn Dialogue Coreference",
author = "Yu, Erxin and
Li, Jing and
Liao, Ming and
Wang, Siqi and
Zuchen, Gao and
Mi, Fei and
Hong, Lanqing",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.968",
pages = "17494--17508",
abstract = "As large language models (LLMs) constantly evolve, ensuring their safety remains a critical research issue. Previous red teaming approaches for LLM safety have primarily focused on single prompt attacks or goal hijacking. To the best of our knowledge, we are the first to study LLM safety in multi-turn dialogue coreference. We created a dataset of 1,400 questions across 14 categories, each featuring multi-turn coreference safety attacks. We then conducted detailed evaluations on five widely used open-source LLMs. The results indicated that under multi-turn coreference safety attacks, the highest attack success rate was 56{\%} with the LLaMA2-Chat-7b model, while the lowest was 13.9{\%} with the Mistral-7B-Instruct model. These findings highlight the safety vulnerabilities in LLMs during dialogue coreference interactions.",
}
| As large language models (LLMs) constantly evolve, ensuring their safety remains a critical research issue. Previous red teaming approaches for LLM safety have primarily focused on single prompt attacks or goal hijacking. To the best of our knowledge, we are the first to study LLM safety in multi-turn dialogue coreference. We created a dataset of 1,400 questions across 14 categories, each featuring multi-turn coreference safety attacks. We then conducted detailed evaluations on five widely used open-source LLMs. The results indicated that under multi-turn coreference safety attacks, the highest attack success rate was 56{\%} with the LLaMA2-Chat-7b model, while the lowest was 13.9{\%} with the Mistral-7B-Instruct model. These findings highlight the safety vulnerabilities in LLMs during dialogue coreference interactions. | [
"Yu, Erxin",
"Li, Jing",
"Liao, Ming",
"Wang, Siqi",
"Zuchen, Gao",
"Mi, Fei",
"Hong, Lanqing"
] | CoSafe: Evaluating Large Language Model Safety in Multi-Turn Dialogue Coreference | emnlp-main.968 | Poster | 2406.17626 | [
"https://github.com/ErxinYu/CoSafe-Dataset"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.969.bib | https://aclanthology.org/2024.emnlp-main.969/ | @inproceedings{schimanski-etal-2024-climretrieve,
title = "{C}lim{R}etrieve: A Benchmarking Dataset for Information Retrieval from Corporate Climate Disclosures",
author = "Schimanski, Tobias and
Ni, Jingwei and
Mart{\'\i}n, Roberto Spacey and
Ranger, Nicola and
Leippold, Markus",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.969",
pages = "17509--17524",
abstract = "To handle the vast amounts of qualitative data produced in corporate climate communication, stakeholders increasingly rely on Retrieval Augmented Generation (RAG) systems. However, a significant gap remains in evaluating domain-specific information retrieval {--} the basis for answer generation. To address this challenge, this work simulates the typical tasks of a sustainability analyst by examining 30 sustainability reports with 16 detailed climate-related questions. As a result, we obtain a dataset with over 8.5K unique question-source-answer pairs labeled by different levels of relevance. Furthermore, we develop a use case with the dataset to investigate the integration of expert knowledge into information retrieval with embeddings. Although we show that incorporating expert knowledge works, we also outline the critical limitations of embeddings in knowledge-intensive downstream domains like climate change communication.",
}
| To handle the vast amounts of qualitative data produced in corporate climate communication, stakeholders increasingly rely on Retrieval Augmented Generation (RAG) systems. However, a significant gap remains in evaluating domain-specific information retrieval {--} the basis for answer generation. To address this challenge, this work simulates the typical tasks of a sustainability analyst by examining 30 sustainability reports with 16 detailed climate-related questions. As a result, we obtain a dataset with over 8.5K unique question-source-answer pairs labeled by different levels of relevance. Furthermore, we develop a use case with the dataset to investigate the integration of expert knowledge into information retrieval with embeddings. Although we show that incorporating expert knowledge works, we also outline the critical limitations of embeddings in knowledge-intensive downstream domains like climate change communication. | [
"Schimanski, Tobias",
"Ni, Jingwei",
"Mart{\\'\\i}n, Roberto Spacey",
"Ranger, Nicola",
"Leippold, Markus"
] | ClimRetrieve: A Benchmarking Dataset for Information Retrieval from Corporate Climate Disclosures | emnlp-main.969 | Poster | 2406.09818 | [
"https://github.com/tobischimanski/climretrieve"
] | https://huggingface.co/papers/2406.09818 | 0 | 2 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.970.bib | https://aclanthology.org/2024.emnlp-main.970/ | @inproceedings{ran-etal-2024-context,
title = "Context-Aware Adapter Tuning for Few-Shot Relation Learning in Knowledge Graphs",
author = "Ran, Liu and
Liu, Zhongzhou and
Li, Xiaoli and
Fang, Yuan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.970",
pages = "17525--17537",
abstract = "Knowledge graphs (KGs) are instrumental in various real-world applications, yet they often suffer from incompleteness due to missing relations. To predict instances for novel relations with limited training examples, few-shot relation learning approaches have emerged, utilizing techniques such as meta-learning. However, the assumption is that novel relations in meta-testing and base relations in meta-training are independently and identically distributed, which may not hold in practice. To address the limitation, we propose RelAdapter, a context-aware adapter for few-shot relation learning in KGs designed to enhance the adaptation process in meta-learning. First, RelAdapter is equipped with a lightweight adapter module that facilitates relation-specific, tunable adaptation of meta-knowledge in a parameter-efficient manner. Second, RelAdapter is enriched with contextual information about the target relation, enabling enhanced adaptation to each distinct relation. Extensive experiments on three benchmark KGs validate the superiority of RelAdapter over state-of-the-art methods.",
}
| Knowledge graphs (KGs) are instrumental in various real-world applications, yet they often suffer from incompleteness due to missing relations. To predict instances for novel relations with limited training examples, few-shot relation learning approaches have emerged, utilizing techniques such as meta-learning. However, the assumption is that novel relations in meta-testing and base relations in meta-training are independently and identically distributed, which may not hold in practice. To address the limitation, we propose RelAdapter, a context-aware adapter for few-shot relation learning in KGs designed to enhance the adaptation process in meta-learning. First, RelAdapter is equipped with a lightweight adapter module that facilitates relation-specific, tunable adaptation of meta-knowledge in a parameter-efficient manner. Second, RelAdapter is enriched with contextual information about the target relation, enabling enhanced adaptation to each distinct relation. Extensive experiments on three benchmark KGs validate the superiority of RelAdapter over state-of-the-art methods. | [
"Ran, Liu",
"Liu, Zhongzhou",
"Li, Xiaoli",
"Fang, Yuan"
] | Context-Aware Adapter Tuning for Few-Shot Relation Learning in Knowledge Graphs | emnlp-main.970 | Poster | 2410.09123 | [
"https://github.com/liuran998/RelAdapter"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.971.bib | https://aclanthology.org/2024.emnlp-main.971/ | @inproceedings{ma-wang-2024-zero,
title = "Zero-Shot Detection of {LLM}-Generated Text using Token Cohesiveness",
author = "Ma, Shixuan and
Wang, Quan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.971",
pages = "17538--17553",
abstract = "The increasing capability and widespread usage of large language models (LLMs) highlight the desirability of automatic detection of LLM-generated text. Zero-shot detectors, due to their training-free nature, have received considerable attention and notable success. In this paper, we identify a new feature, token cohesiveness, that is useful for zero-shot detection, and we demonstrate that LLM-generated text tends to exhibit higher token cohesiveness than human-written text. Based on this observation, we devise TOCSIN, a generic dual-channel detection paradigm that uses token cohesiveness as a plug-and-play module to improve existing zero-shot detectors. To calculate token cohesiveness, TOCSIN only requires a few rounds of random token deletion and semantic difference measurement, making it particularly suitable for a practical black-box setting where the source model used for generation is not accessible. Extensive experiments with four state-of-the-art base detectors on various datasets, source models, and evaluation settings demonstrate the effectiveness and generality of the proposed approach. Code available at: https://github.com/Shixuan-Ma/TOCSIN.",
}
| The increasing capability and widespread usage of large language models (LLMs) highlight the desirability of automatic detection of LLM-generated text. Zero-shot detectors, due to their training-free nature, have received considerable attention and notable success. In this paper, we identify a new feature, token cohesiveness, that is useful for zero-shot detection, and we demonstrate that LLM-generated text tends to exhibit higher token cohesiveness than human-written text. Based on this observation, we devise TOCSIN, a generic dual-channel detection paradigm that uses token cohesiveness as a plug-and-play module to improve existing zero-shot detectors. To calculate token cohesiveness, TOCSIN only requires a few rounds of random token deletion and semantic difference measurement, making it particularly suitable for a practical black-box setting where the source model used for generation is not accessible. Extensive experiments with four state-of-the-art base detectors on various datasets, source models, and evaluation settings demonstrate the effectiveness and generality of the proposed approach. Code available at: https://github.com/Shixuan-Ma/TOCSIN. | [
"Ma, Shixuan",
"Wang, Quan"
] | Zero-Shot Detection of LLM-Generated Text using Token Cohesiveness | emnlp-main.971 | Poster | 2409.16914 | [
"https://github.com/shixuan-ma/tocsin"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.972.bib | https://aclanthology.org/2024.emnlp-main.972/ | @inproceedings{chen-etal-2024-dual-oriented,
title = "Dual-oriented Disentangled Network with Counterfactual Intervention for Multimodal Intent Detection",
author = "Chen, Zhanpeng and
Zhu, Zhihong and
Zhuang, Xianwei and
Huang, Zhiqi and
Zou, Yuexian",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.972",
pages = "17554--17567",
abstract = "Multimodal intent detection is designed to leverage diverse modalities for a comprehensive understanding of user intentions in real-world scenarios, thus playing a critical role in modern task-oriented dialogue systems. Existing methods have made great progress in modal alignment and fusion, however, two vital limitations are neglected: (I) close entanglement of multimodal semantics with modal structures; (II) insufficient learning of the causal effects of semantic and modality-specific information on the final predictions under the end-to-end training fashion. To alleviate the above limitations, we introduce the Dual-oriented Disentangled Network with Counterfactual Intervention (DuoDN). DuoDN addresses key limitations in current systems by effectively disentangling and utilizing modality-specific and multimodal semantic information. The model consists of a Dual-oriented Disentangled Encoder that decouples semantics-oriented and modality-oriented representations, alongside a Counterfactual Intervention Module that applies causal inference to understand causal effects by injecting confounders. Experiments on three benchmark datasets demonstrate DuoDN{'}s superiority over existing methods, with extensive analysis validating its advantages.",
}
| Multimodal intent detection is designed to leverage diverse modalities for a comprehensive understanding of user intentions in real-world scenarios, thus playing a critical role in modern task-oriented dialogue systems. Existing methods have made great progress in modal alignment and fusion; however, two vital limitations are neglected: (I) close entanglement of multimodal semantics with modal structures; (II) insufficient learning of the causal effects of semantic and modality-specific information on the final predictions under the end-to-end training fashion. To alleviate the above limitations, we introduce the Dual-oriented Disentangled Network with Counterfactual Intervention (DuoDN). DuoDN addresses key limitations in current systems by effectively disentangling and utilizing modality-specific and multimodal semantic information. The model consists of a Dual-oriented Disentangled Encoder that decouples semantics-oriented and modality-oriented representations, alongside a Counterfactual Intervention Module that applies causal inference to understand causal effects by injecting confounders. Experiments on three benchmark datasets demonstrate DuoDN{'}s superiority over existing methods, with extensive analysis validating its advantages. | [
"Chen, Zhanpeng",
"Zhu, Zhihong",
"Zhuang, Xianwei",
"Huang, Zhiqi",
"Zou, Yuexian"
] | Dual-oriented Disentangled Network with Counterfactual Intervention for Multimodal Intent Detection | emnlp-main.972 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.973.bib | https://aclanthology.org/2024.emnlp-main.973/ | @inproceedings{wang-etal-2024-llms-mllms,
title = "From {LLM}s to {MLLM}s: Exploring the Landscape of Multimodal Jailbreaking",
author = "Wang, Siyuan and
Long, Zhuohan and
Fan, Zhihao and
Wei, Zhongyu",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.973",
pages = "17568--17582",
abstract = "The rapid development of Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) has exposed vulnerabilities to various adversarial attacks. This paper provides a comprehensive overview of jailbreaking research targeting both LLMs and MLLMs, highlighting recent advancements in evaluation benchmarks, attack techniques and defense strategies. Compared to the more advanced state of unimodal jailbreaking, multimodal domain remains underexplored. We summarize the limitations and potential research directions of multimodal jailbreaking, aiming to inspire future research and further enhance the robustness and security of MLLMs.",
}
| The rapid development of Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) has exposed vulnerabilities to various adversarial attacks. This paper provides a comprehensive overview of jailbreaking research targeting both LLMs and MLLMs, highlighting recent advancements in evaluation benchmarks, attack techniques and defense strategies. Compared to the more advanced state of unimodal jailbreaking, the multimodal domain remains underexplored. We summarize the limitations and potential research directions of multimodal jailbreaking, aiming to inspire future research and further enhance the robustness and security of MLLMs. | [
"Wang, Siyuan",
"Long, Zhuohan",
"Fan, Zhihao",
"Wei, Zhongyu"
] | From LLMs to MLLMs: Exploring the Landscape of Multimodal Jailbreaking | emnlp-main.973 | Poster | 2406.14859 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.974.bib | https://aclanthology.org/2024.emnlp-main.974/ | @inproceedings{wang-etal-2024-symbolic,
title = "Symbolic Working Memory Enhances Language Models for Complex Rule Application",
author = "Wang, Siyuan and
Wei, Zhongyu and
Choi, Yejin and
Ren, Xiang",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.974",
pages = "17583--17604",
abstract = "Large Language Models (LLMs) have shown remarkable reasoning performance but struggle with multi-step deductive reasoning involving a series of rule application steps, especially when rules are presented non-sequentially. Our preliminary analysis shows that while LLMs excel in single-step rule application, their performance drops significantly in multi-step scenarios due to the challenge in rule grounding. It requires anchoring the applicable rule and supporting facts at each step, amidst multiple input rules, facts, and inferred facts. To address this, we propose augmenting LLMs with external working memory and introduce a neurosymbolic framework for rule application. The memory stores facts and rules in both natural language and symbolic forms, enabling precise tracking. Utilizing this memory, our framework iteratively performs symbolic rule grounding and LLM-based rule implementation. The former matches predicates and variables of symbolic rules and facts to ground applicable rules at each step. Experiments indicate our framework{'}s effectiveness in rule application and its robustness across various steps and settings.",
}
| Large Language Models (LLMs) have shown remarkable reasoning performance but struggle with multi-step deductive reasoning involving a series of rule application steps, especially when rules are presented non-sequentially. Our preliminary analysis shows that while LLMs excel in single-step rule application, their performance drops significantly in multi-step scenarios due to the challenge in rule grounding. It requires anchoring the applicable rule and supporting facts at each step, amidst multiple input rules, facts, and inferred facts. To address this, we propose augmenting LLMs with external working memory and introduce a neurosymbolic framework for rule application. The memory stores facts and rules in both natural language and symbolic forms, enabling precise tracking. Utilizing this memory, our framework iteratively performs symbolic rule grounding and LLM-based rule implementation. The former matches predicates and variables of symbolic rules and facts to ground applicable rules at each step. Experiments indicate our framework{'}s effectiveness in rule application and its robustness across various steps and settings. | [
"Wang, Siyuan",
"Wei, Zhongyu",
"Choi, Yejin",
"Ren, Xiang"
] | Symbolic Working Memory Enhances Language Models for Complex Rule Application | emnlp-main.974 | Poster | 2408.13654 | [
"https://github.com/siyuanwangw/ruleapplication"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.975.bib | https://aclanthology.org/2024.emnlp-main.975/ | @inproceedings{tan-etal-2024-lloco,
title = "{LL}o{CO}: Learning Long Contexts Offline",
author = "Tan, Sijun and
Li, Xiuyu and
Patil, Shishir G and
Wu, Ziyang and
Zhang, Tianjun and
Keutzer, Kurt and
Gonzalez, Joseph E. and
Popa, Raluca Ada",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.975",
pages = "17605--17621",
abstract = "Processing long contexts remains a challenge for large language models (LLMs) due to the quadratic computational and memory overhead of the self-attention mechanism and the substantial KV cache sizes during generation. We propose LLoCO, a novel approach to address this problem by learning contexts offline through context compression and in-domain parameter-efficient finetuning with LoRA. Our method enables an LLM to create a concise representation of the original context and efficiently retrieve relevant information to answer questions accurately. Our approach extends the effective context window of a 4k token LLaMA2-7B model to handle up to 128k tokens. We evaluate our approach on several long-context question-answering datasets, demonstrating that LLoCO significantly outperforms in-context learning while using $30 \times$ fewer tokens during inference. LLoCO achieves up to $7.62 \times$ speed-up during inference and $11.52 \times$ higher throughput during finetuning, substantially reduces the cost of long document question answering. This makes it a promising solution for efficient long context processing.",
}
| Processing long contexts remains a challenge for large language models (LLMs) due to the quadratic computational and memory overhead of the self-attention mechanism and the substantial KV cache sizes during generation. We propose LLoCO, a novel approach to address this problem by learning contexts offline through context compression and in-domain parameter-efficient finetuning with LoRA. Our method enables an LLM to create a concise representation of the original context and efficiently retrieve relevant information to answer questions accurately. Our approach extends the effective context window of a 4k token LLaMA2-7B model to handle up to 128k tokens. We evaluate our approach on several long-context question-answering datasets, demonstrating that LLoCO significantly outperforms in-context learning while using $30 \times$ fewer tokens during inference. LLoCO achieves up to $7.62 \times$ speed-up during inference and $11.52 \times$ higher throughput during finetuning, substantially reducing the cost of long document question answering. This makes it a promising solution for efficient long context processing. | [
"Tan, Sijun",
"Li, Xiuyu",
"Patil, Shishir G",
"Wu, Ziyang",
"Zhang, Tianjun",
"Keutzer, Kurt",
"Gonzalez, Joseph E.",
"Popa, Raluca Ada"
] | LLoCO: Learning Long Contexts Offline | emnlp-main.975 | Poster | 2404.07979 | [
"https://github.com/jeffreysijuntan/lloco"
] | https://huggingface.co/papers/2404.07979 | 7 | 20 | 2 | 8 | [
"xiuyul/Lloco-7b-qasper",
"xiuyul/Lloco-7b-nqa",
"xiuyul/Lloco-7b-qmsum",
"xiuyul/Lloco-7b-hqa",
"xiuyul/Lloco-7b-quality"
] | [] | [] | [
"xiuyul/Lloco-7b-qasper",
"xiuyul/Lloco-7b-nqa",
"xiuyul/Lloco-7b-qmsum",
"xiuyul/Lloco-7b-hqa",
"xiuyul/Lloco-7b-quality"
] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.976.bib | https://aclanthology.org/2024.emnlp-main.976/ | @inproceedings{mao-etal-2024-dont,
title = "Don{'}t Forget Your Reward Values: Language Model Alignment via Value-based Calibration",
author = "Mao, Xin and
Li, Feng-Lin and
Xu, Huimin and
Zhang, Wei and
Chen, Wang and
Luu, Anh Tuan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.976",
pages = "17622--17642",
abstract = "While Reinforcement Learning from Human Feedback (RLHF) significantly enhances the generation quality of Large Language Models (LLMs), recent studies have raised concerns regarding the complexity and instability associated with the Proximal Policy Optimization (PPO) algorithm, proposing a series of order-based alignment methods as viable alternatives. This paper delves into existing order-based methods, unifying them into one framework and examining their inefficiencies in utilizing reward values. Building upon these findings, we propose a new Value-based Calibration (VCB) method to better align LLMs with human preferences. Experimental results demonstrate that VCB surpasses existing alignment methods on AI assistant and summarization datasets, providing impressive generalizability, robustness, and diversity in different settings.",
}
| While Reinforcement Learning from Human Feedback (RLHF) significantly enhances the generation quality of Large Language Models (LLMs), recent studies have raised concerns regarding the complexity and instability associated with the Proximal Policy Optimization (PPO) algorithm, proposing a series of order-based alignment methods as viable alternatives. This paper delves into existing order-based methods, unifying them into one framework and examining their inefficiencies in utilizing reward values. Building upon these findings, we propose a new Value-based Calibration (VCB) method to better align LLMs with human preferences. Experimental results demonstrate that VCB surpasses existing alignment methods on AI assistant and summarization datasets, providing impressive generalizability, robustness, and diversity in different settings. | [
"Mao, Xin",
"Li, Feng-Lin",
"Xu, Huimin",
"Zhang, Wei",
"Chen, Wang",
"Luu, Anh Tuan"
] | Don't Forget Your Reward Values: Language Model Alignment via Value-based Calibration | emnlp-main.976 | Poster | 2402.16030 | [
"https://github.com/maoxinn/vcb"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.977.bib | https://aclanthology.org/2024.emnlp-main.977/ | @inproceedings{lee-etal-2024-mentor,
title = "Mentor-{KD}: Making Small Language Models Better Multi-step Reasoners",
author = "Lee, Hojae and
Kim, Junho and
Lee, SangKeun",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.977",
pages = "17643--17658",
abstract = "Large Language Models (LLMs) have displayed remarkable performances across various complex tasks by leveraging Chain-of-Thought (CoT) prompting. Recently, studies have proposed a Knowledge Distillation (KD) approach, reasoning distillation, which transfers such reasoning ability of LLMs through fine-tuning language models of multi-step rationales generated by LLM teachers. However, they have inadequately considered two challenges regarding insufficient distillation sets from the LLM teacher model, in terms of 1) data quality and 2) soft label provision. In this paper, we propose Mentor-KD, which effectively distills the multi-step reasoning capability of LLMs to smaller LMs while addressing the aforementioned challenges. Specifically, we exploit a mentor, intermediate-sized task-specific fine-tuned model, to augment additional CoT annotations and provide soft labels for the student model during reasoning distillation. We conduct extensive experiments and confirm Mentor-KD{'}s effectiveness across various models and complex reasoning tasks.",
}
| Large Language Models (LLMs) have displayed remarkable performances across various complex tasks by leveraging Chain-of-Thought (CoT) prompting. Recently, studies have proposed a Knowledge Distillation (KD) approach, reasoning distillation, which transfers such reasoning ability of LLMs through fine-tuning language models on multi-step rationales generated by LLM teachers. However, they have inadequately considered two challenges regarding insufficient distillation sets from the LLM teacher model, in terms of 1) data quality and 2) soft label provision. In this paper, we propose Mentor-KD, which effectively distills the multi-step reasoning capability of LLMs to smaller LMs while addressing the aforementioned challenges. Specifically, we exploit a mentor, an intermediate-sized task-specific fine-tuned model, to augment additional CoT annotations and provide soft labels for the student model during reasoning distillation. We conduct extensive experiments and confirm Mentor-KD{'}s effectiveness across various models and complex reasoning tasks. | [
"Lee, Hojae",
"Kim, Junho",
"Lee, SangKeun"
] | Mentor-KD: Making Small Language Models Better Multi-step Reasoners | emnlp-main.977 | Poster | 2410.09037 | [
"https://github.com/2hojae/mentor-kd"
] | https://huggingface.co/papers/2410.09037 | 3 | 4 | 2 | 3 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.978.bib | https://aclanthology.org/2024.emnlp-main.978/ | @inproceedings{tian-etal-2024-large-language,
title = "Are Large Language Models Capable of Generating Human-Level Narratives?",
author = "Tian, Yufei and
Huang, Tenghao and
Liu, Miri and
Jiang, Derek and
Spangher, Alexander and
Chen, Muhao and
May, Jonathan and
Peng, Nanyun",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.978",
pages = "17659--17681",
abstract = "As daily reliance on large language models (LLMs) grows, assessing their generation quality is crucial to understanding how they might impact on our communications. This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression. We introduce a novel computational framework to analyze narratives through three discourse-level aspects: i) story arcs, ii) turning points, and iii) affective dimensions, including arousal and valence. By leveraging expert and automatic annotations, we uncover significant discrepancies between the LLM- and human- written stories. While human-written stories are suspenseful, arousing, and diverse in narrative structures, LLM stories are homogeneously positive and lack tension. Next, we measure narrative reasoning skills as a precursor to generative capacities, concluding that most LLMs fall short of human abilities in discourse understanding. Finally, we show that explicit integration of aforementioned discourse features can enhance storytelling, as is demonstrated by over 40{\%} improvement in neural storytelling in terms of diversity, suspense, and arousal. Such advances promise to facilitate greater and more natural roles LLMs in human communication.",
}
| As daily reliance on large language models (LLMs) grows, assessing their generation quality is crucial to understanding how they might impact our communications. This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression. We introduce a novel computational framework to analyze narratives through three discourse-level aspects: i) story arcs, ii) turning points, and iii) affective dimensions, including arousal and valence. By leveraging expert and automatic annotations, we uncover significant discrepancies between the LLM- and human-written stories. While human-written stories are suspenseful, arousing, and diverse in narrative structures, LLM stories are homogeneously positive and lack tension. Next, we measure narrative reasoning skills as a precursor to generative capacities, concluding that most LLMs fall short of human abilities in discourse understanding. Finally, we show that explicit integration of the aforementioned discourse features can enhance storytelling, as is demonstrated by over 40{\%} improvement in neural storytelling in terms of diversity, suspense, and arousal. Such advances promise to facilitate greater and more natural roles for LLMs in human communication. | [
"Tian, Yufei",
"Huang, Tenghao",
"Liu, Miri",
"Jiang, Derek",
"Spangher, Alex",
"er",
"Chen, Muhao",
"May, Jonathan",
"Peng, Nanyun"
] | Are Large Language Models Capable of Generating Human-Level Narratives? | emnlp-main.978 | Oral | 2407.13248 | [
"https://github.com/pluslabnlp/narrative-discourse"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.979.bib | https://aclanthology.org/2024.emnlp-main.979/ | @inproceedings{hwang-etal-2024-mp2d,
title = "{MP}2{D}: An Automated Topic Shift Dialogue Generation Framework Leveraging Knowledge Graphs",
author = "Hwang, Yerin and
Kim, Yongil and
Jang, Yunah and
Bang, Jeesoo and
Bae, Hyunkyung and
Jung, Kyomin",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.979",
pages = "17682--17702",
abstract = "Despite advancements in on-topic dialogue systems, effectively managing topic shifts within dialogues remains a persistent challenge, largely attributed to the limited availability of training datasets. To address this issue, we propose Multi-Passage to Dialogue (MP2D), a data generation framework that automatically creates conversational question-answering datasets with natural topic transitions. By leveraging the relationships between entities in a knowledge graph, MP2D maps the flow of topics within a dialogue, effectively mirroring the dynamics of human conversation. It retrieves relevant passages corresponding to the topics and transforms them into dialogues through the passage-to-dialogue method. Through quantitative and qualitative experiments, we demonstrate MP2D{'}s efficacy in generating dialogue with natural topic shifts. Furthermore, this study introduces a novel benchmark for topic shift dialogues, TS-WikiDialog. Utilizing the dataset, we demonstrate that even Large Language Models (LLMs) struggle to handle topic shifts in dialogue effectively, and we showcase the performance improvements of models trained on datasets generated by MP2D across diverse topic shift dialogue tasks.",
}
| Despite advancements in on-topic dialogue systems, effectively managing topic shifts within dialogues remains a persistent challenge, largely attributed to the limited availability of training datasets. To address this issue, we propose Multi-Passage to Dialogue (MP2D), a data generation framework that automatically creates conversational question-answering datasets with natural topic transitions. By leveraging the relationships between entities in a knowledge graph, MP2D maps the flow of topics within a dialogue, effectively mirroring the dynamics of human conversation. It retrieves relevant passages corresponding to the topics and transforms them into dialogues through the passage-to-dialogue method. Through quantitative and qualitative experiments, we demonstrate MP2D{'}s efficacy in generating dialogue with natural topic shifts. Furthermore, this study introduces a novel benchmark for topic shift dialogues, TS-WikiDialog. Utilizing the dataset, we demonstrate that even Large Language Models (LLMs) struggle to handle topic shifts in dialogue effectively, and we showcase the performance improvements of models trained on datasets generated by MP2D across diverse topic shift dialogue tasks. | [
"Hwang, Yerin",
"Kim, Yongil",
"Jang, Yunah",
"Bang, Jeesoo",
"Bae, Hyunkyung",
"Jung, Kyomin"
] | MP2D: An Automated Topic Shift Dialogue Generation Framework Leveraging Knowledge Graphs | emnlp-main.979 | Poster | 2403.05814 | [
""
] | https://huggingface.co/papers/2403.05814 | 0 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.980.bib | https://aclanthology.org/2024.emnlp-main.980/ | @inproceedings{lu-naseem-2024-large,
title = "Can Large Language Models Enhance Predictions of Disease Progression? Investigating Through Disease Network Link Prediction",
author = "Lu, Haohui and
Naseem, Usman",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.980",
pages = "17703--17715",
abstract = "Large Language Models (LLMs) have made significant strides in various tasks, yet their effectiveness in predicting disease progression remains relatively unexplored. To fill this gap, we use LLMs and employ advanced graph prompting and Retrieval-Augmented Generation (RAG) to predict disease comorbidity within disease networks. Specifically, we introduce a disease Comorbidity prediction model using LLM, named ComLLM, which leverages domain knowledge to enhance the prediction performance. Based on the comprehensive experimental results, ComLLM consistently outperforms conventional models, such as Graph Neural Networks, achieving average area under the curve (AUC) improvements of 10.70{\%} and 6.07{\%} over the best baseline models in two distinct disease networks. ComLLM is evaluated across multiple settings for disease progression prediction, employing various prompting strategies, including zero-shot, few-shot, Chain-of-Thought, graph prompting and RAG. Our results show that graph prompting and RAG enhance LLM performance in disease progression prediction tasks. ComLLM exhibits superior predictive capabilities and serves as a proof-of-concept for LLM-based systems in disease progression prediction, highlighting its potential for broad applications in healthcare.",
}
| Large Language Models (LLMs) have made significant strides in various tasks, yet their effectiveness in predicting disease progression remains relatively unexplored. To fill this gap, we use LLMs and employ advanced graph prompting and Retrieval-Augmented Generation (RAG) to predict disease comorbidity within disease networks. Specifically, we introduce a disease Comorbidity prediction model using LLM, named ComLLM, which leverages domain knowledge to enhance the prediction performance. Based on the comprehensive experimental results, ComLLM consistently outperforms conventional models, such as Graph Neural Networks, achieving average area under the curve (AUC) improvements of 10.70{\%} and 6.07{\%} over the best baseline models in two distinct disease networks. ComLLM is evaluated across multiple settings for disease progression prediction, employing various prompting strategies, including zero-shot, few-shot, Chain-of-Thought, graph prompting and RAG. Our results show that graph prompting and RAG enhance LLM performance in disease progression prediction tasks. ComLLM exhibits superior predictive capabilities and serves as a proof-of-concept for LLM-based systems in disease progression prediction, highlighting its potential for broad applications in healthcare. | [
"Lu, Haohui",
"Naseem, Usman"
] | Can Large Language Models Enhance Predictions of Disease Progression? Investigating Through Disease Network Link Prediction | emnlp-main.980 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.981.bib | https://aclanthology.org/2024.emnlp-main.981/ | @inproceedings{wang-etal-2024-searching,
title = "Searching for Best Practices in Retrieval-Augmented Generation",
author = "Wang, Xiaohua and
Wang, Zhenghua and
Gao, Xuan and
Zhang, Feiran and
Wu, Yixin and
Xu, Zhibo and
Shi, Tianyuan and
Wang, Zhengyuan and
Li, Shizheng and
Qian, Qi and
Yin, Ruicheng and
Lv, Changze and
Zheng, Xiaoqing and
Huang, Xuanjing",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.981",
pages = "17716--17736",
abstract = "Retrieval-augmented generation (RAG) techniques have proven to be effective in integrating up-to-date information, mitigating hallucinations, and enhancing response quality, particularly in specialized domains. While many RAG approaches have been proposed to enhance large language models through query-dependent retrievals, these approaches still suffer from their complex implementation and prolonged response times. Typically, a RAG workflow involves multiple processing steps, each of which can be executed in various ways. Here, we investigate existing RAG approaches and their potential combinations to identify optimal RAG practices. Through extensive experiments, we suggest several strategies for deploying RAG that balance both performance and efficiency. Moreover, we demonstrate that multimodal retrieval techniques can significantly enhance question-answering capabilities about visual inputs and accelerate the generation of multimodal content using a {``}retrieval as generation{''} strategy.",
}
| Retrieval-augmented generation (RAG) techniques have proven to be effective in integrating up-to-date information, mitigating hallucinations, and enhancing response quality, particularly in specialized domains. While many RAG approaches have been proposed to enhance large language models through query-dependent retrievals, these approaches still suffer from their complex implementation and prolonged response times. Typically, a RAG workflow involves multiple processing steps, each of which can be executed in various ways. Here, we investigate existing RAG approaches and their potential combinations to identify optimal RAG practices. Through extensive experiments, we suggest several strategies for deploying RAG that balance both performance and efficiency. Moreover, we demonstrate that multimodal retrieval techniques can significantly enhance question-answering capabilities about visual inputs and accelerate the generation of multimodal content using a {``}retrieval as generation{''} strategy. | [
"Wang, Xiaohua",
"Wang, Zhenghua",
"Gao, Xuan",
"Zhang, Feiran",
"Wu, Yixin",
"Xu, Zhibo",
"Shi, Tianyuan",
"Wang, Zhengyuan",
"Li, Shizheng",
"Qian, Qi",
"Yin, Ruicheng",
"Lv, Changze",
"Zheng, Xiaoqing",
"Huang, Xuanjing"
] | Searching for Best Practices in Retrieval-Augmented Generation | emnlp-main.981 | Poster | 2407.01219 | [
"https://github.com/FudanDNN-NLP/RAG"
] | https://huggingface.co/papers/2407.01219 | 0 | 11 | 0 | 14 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.982.bib | https://aclanthology.org/2024.emnlp-main.982/ | @inproceedings{abdulhai-etal-2024-moral,
title = "Moral Foundations of Large Language Models",
author = "Abdulhai, Marwa and
Serapio-Garc{\'\i}a, Gregory and
Crepy, Clement and
Valter, Daria and
Canny, John and
Jaques, Natasha",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.982",
pages = "17737--17752",
abstract = "Moral foundations theory (MFT) is a psychological assessment tool that decomposes human moral reasoning into five factors, including care/harm, liberty/oppression, and sanctity/degradation (Graham et al., 2009). People vary in the weight they place on these dimensions when making moral decisions, in part due to their cultural upbringing and political ideology. As large language models (LLMs) are trained on datasets collected from the internet, they may reflect the biases that are present in such corpora. This paper uses MFT as a lens to analyze whether popular LLMs have acquired a bias towards a particular set of moral values. We analyze known LLMs and find they exhibit particular moral foundations, and show how these relate to human moral foundations and political affiliations. We also measure the consistency of these biases, or whether they vary strongly depending on the context of how the model is prompted. Finally, we show that we can adversarially select prompts that encourage the moral to exhibit a particular set of moral foundations, and that this can affect the model{'}s behavior on downstream tasks. These findings help illustrate the potential risks and unintended consequences of LLMs assuming a particular moral stance.",
}
| Moral foundations theory (MFT) is a psychological assessment tool that decomposes human moral reasoning into five factors, including care/harm, liberty/oppression, and sanctity/degradation (Graham et al., 2009). People vary in the weight they place on these dimensions when making moral decisions, in part due to their cultural upbringing and political ideology. As large language models (LLMs) are trained on datasets collected from the internet, they may reflect the biases that are present in such corpora. This paper uses MFT as a lens to analyze whether popular LLMs have acquired a bias towards a particular set of moral values. We analyze known LLMs and find they exhibit particular moral foundations, and show how these relate to human moral foundations and political affiliations. We also measure the consistency of these biases, or whether they vary strongly depending on the context of how the model is prompted. Finally, we show that we can adversarially select prompts that encourage the model to exhibit a particular set of moral foundations, and that this can affect the model{'}s behavior on downstream tasks. These findings help illustrate the potential risks and unintended consequences of LLMs assuming a particular moral stance. | [
"Abdulhai, Marwa",
"Serapio-Garc{\\'\\i}a, Gregory",
"Crepy, Clement",
"Valter, Daria",
"Canny, John",
"Jaques, Natasha"
] | Moral Foundations of Large Language Models | emnlp-main.982 | Poster | 2310.15337 | [
"https://github.com/abdulhaim/moral_foundations_llm"
] | https://huggingface.co/papers/2310.15337 | 1 | 1 | 1 | 6 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.emnlp-main.983.bib | https://aclanthology.org/2024.emnlp-main.983/ | @inproceedings{nigatu-etal-2024-zenos,
title = "The Zeno{'}s Paradox of {`}Low-Resource{'} Languages",
author = "Nigatu, Hellina Hailu and
Tonja, Atnafu Lambebo and
Rosman, Benjamin and
Solorio, Thamar and
Choudhury, Monojit",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.983",
pages = "17753--17774",
abstract = "The disparity in the languages commonly studied in Natural Language Processing (NLP) is typically reflected by referring to languages as low vs high-resourced. However, there is limited consensus on what exactly qualifies as a {`}low-resource language.{'} To understand how NLP papers define and study {`}low resource{'} languages, we qualitatively analyzed 150 papers from the ACL Anthology and popular speech-processing conferences that mention the keyword {`}low-resource.{'} Based on our analysis, we show how several interacting axes contribute to {`}low-resourcedness{'} of a language and why that makes it difficult to track progress for each individual language. We hope our work (1) elicits explicit definitions of the terminology when it is used in papers and (2) provides grounding for the different axes to consider when connoting a language as low-resource.",
}
| The disparity in the languages commonly studied in Natural Language Processing (NLP) is typically reflected by referring to languages as low vs high-resourced. However, there is limited consensus on what exactly qualifies as a {`}low-resource language.{'} To understand how NLP papers define and study {`}low resource{'} languages, we qualitatively analyzed 150 papers from the ACL Anthology and popular speech-processing conferences that mention the keyword {`}low-resource.{'} Based on our analysis, we show how several interacting axes contribute to {`}low-resourcedness{'} of a language and why that makes it difficult to track progress for each individual language. We hope our work (1) elicits explicit definitions of the terminology when it is used in papers and (2) provides grounding for the different axes to consider when connoting a language as low-resource. | [
"Nigatu, Hellina Hailu",
"Tonja, Atnafu Lambebo",
"Rosman, Benjamin",
"Solorio, Thamar",
"Choudhury, Monojit"
] | The Zeno's Paradox of `Low-Resource' Languages | emnlp-main.983 | Poster | 2410.20817 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.984.bib | https://aclanthology.org/2024.emnlp-main.984/ | @inproceedings{srivastava-etal-2024-knowledge,
title = "Knowledge Planning in Large Language Models for Domain-Aligned Counseling Summarization",
author = "Srivastava, Aseem and
Joshi, Smriti and
Chakraborty, Tanmoy and
Akhtar, Md Shad",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.984",
pages = "17775--17789",
abstract = "In mental health counseling, condensing dialogues into concise and relevant summaries (aka counseling notes) holds pivotal significance. Large Language Models (LLMs) exhibit remarkable capabilities in various generative tasks; however, their adaptation to domain-specific intricacies remains challenging, especially within mental health contexts. Unlike standard LLMs, mental health experts first plan to apply domain knowledge in writing summaries. Our work enhances LLMs{'} ability by introducing a novel planning engine to orchestrate structuring knowledge alignment. To achieve high-order planning, we divide knowledge encapsulation into two major phases: (i) holding dialogue structure and (ii) incorporating domain-specific knowledge. We employ a planning engine on Llama-2, resulting in a novel framework, PIECE. Our proposed system employs knowledge filtering-cum-scaffolding to encapsulate domain knowledge. Additionally, PIECE leverages sheaf convolution learning to enhance its understanding of the dialogue{'}s structural nuances. We compare PIECE with 14 baseline methods and observe a significant improvement across ROUGE and Bleurt scores. Further, expert evaluation and analyses validate the generation quality to be effective, sometimes even surpassing the gold standard. We further benchmark PIECE with other LLMs and report improvement, including Llama-2 (+2.72{\%}), Mistral (+2.04{\%}), and Zephyr (+1.59{\%}), to justify the generalizability of the planning engine.",
}
| In mental health counseling, condensing dialogues into concise and relevant summaries (aka counseling notes) holds pivotal significance. Large Language Models (LLMs) exhibit remarkable capabilities in various generative tasks; however, their adaptation to domain-specific intricacies remains challenging, especially within mental health contexts. Unlike standard LLMs, mental health experts first plan to apply domain knowledge in writing summaries. Our work enhances LLMs{'} ability by introducing a novel planning engine to orchestrate structuring knowledge alignment. To achieve high-order planning, we divide knowledge encapsulation into two major phases: (i) holding dialogue structure and (ii) incorporating domain-specific knowledge. We employ a planning engine on Llama-2, resulting in a novel framework, PIECE. Our proposed system employs knowledge filtering-cum-scaffolding to encapsulate domain knowledge. Additionally, PIECE leverages sheaf convolution learning to enhance its understanding of the dialogue{'}s structural nuances. We compare PIECE with 14 baseline methods and observe a significant improvement across ROUGE and Bleurt scores. Further, expert evaluation and analyses validate the generation quality to be effective, sometimes even surpassing the gold standard. We further benchmark PIECE with other LLMs and report improvement, including Llama-2 (+2.72{\%}), Mistral (+2.04{\%}), and Zephyr (+1.59{\%}), to justify the generalizability of the planning engine. | [
"Srivastava, Aseem",
"Joshi, Smriti",
"Chakraborty, Tanmoy",
"Akhtar, Md Shad"
] | Knowledge Planning in Large Language Models for Domain-Aligned Counseling Summarization | emnlp-main.984 | Poster | 2409.14907 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.985.bib | https://aclanthology.org/2024.emnlp-main.985/ | @inproceedings{ramu-etal-2024-enhancing,
title = "Enhancing Post-Hoc Attributions in Long Document Comprehension via Coarse Grained Answer Decomposition",
author = "Ramu, Pritika and
Goswami, Koustava and
Saxena, Apoorv and
Srinivasan, Balaji Vasan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.985",
pages = "17790--17806",
abstract = "Accurately attributing answer text to its source document is crucial for developing a reliable question-answering system. However, attribution for long documents remains largely unexplored. Post-hoc attribution systems are designed to map answer text back to the source document, yet the granularity of this mapping has not been addressed. Furthermore, a critical question arises: What exactly should be attributed? This involves identifying the specific information units within an answer that require grounding. In this paper, we propose and investigate a novel approach to the factual decomposition of generated answers for attribution, employing template-based in-context learning. To accomplish this, we utilize the question and integrate negative sampling during few-shot in-context learning for decomposition. This approach enhances the semantic understanding of both abstractive and extractive answers. We examine the impact of answer decomposition by providing a thorough examination of various attribution approaches, ranging from retrieval-based techniques to LLM-based attributors.",
}
| Accurately attributing answer text to its source document is crucial for developing a reliable question-answering system. However, attribution for long documents remains largely unexplored. Post-hoc attribution systems are designed to map answer text back to the source document, yet the granularity of this mapping has not been addressed. Furthermore, a critical question arises: What exactly should be attributed? This involves identifying the specific information units within an answer that require grounding. In this paper, we propose and investigate a novel approach to the factual decomposition of generated answers for attribution, employing template-based in-context learning. To accomplish this, we utilize the question and integrate negative sampling during few-shot in-context learning for decomposition. This approach enhances the semantic understanding of both abstractive and extractive answers. We examine the impact of answer decomposition by providing a thorough examination of various attribution approaches, ranging from retrieval-based techniques to LLM-based attributors. | [
"Ramu, Pritika",
"Goswami, Koustava",
"Saxena, Apoorv",
"Srinivasan, Balaji Vasan"
] | Enhancing Post-Hoc Attributions in Long Document Comprehension via Coarse Grained Answer Decomposition | emnlp-main.985 | Poster | 2409.17073 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.986.bib | https://aclanthology.org/2024.emnlp-main.986/ | @inproceedings{hirota-etal-2024-descriptive,
title = "From Descriptive Richness to Bias: Unveiling the Dark Side of Generative Image Caption Enrichment",
author = "Hirota, Yusuke and
Hachiuma, Ryo and
Yang, Chao-Han Huck and
Nakashima, Yuta",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.986",
pages = "17807--17816",
abstract = "Large language models (LLMs) have enhanced the capacity of vision-language models to caption visual text. This generative approach to image caption enrichment further makes textual captions more descriptive, improving alignment with the visual context. However, while many studies focus on the benefits of generative caption enrichment (GCE), are there any negative side effects? We compare standard-format captions and recent GCE processes from the perspectives of gender bias and hallucination, showing that enriched captions suffer from increased gender bias and hallucination. Furthermore, models trained on these enriched captions amplify gender bias by an average of 30.9{\%} and increase hallucination by 59.5{\%}. This study serves as a caution against the trend of making captions more descriptive.",
}
| Large language models (LLMs) have enhanced the capacity of vision-language models to caption visual text. This generative approach to image caption enrichment further makes textual captions more descriptive, improving alignment with the visual context. However, while many studies focus on the benefits of generative caption enrichment (GCE), are there any negative side effects? We compare standard-format captions and recent GCE processes from the perspectives of gender bias and hallucination, showing that enriched captions suffer from increased gender bias and hallucination. Furthermore, models trained on these enriched captions amplify gender bias by an average of 30.9{\%} and increase hallucination by 59.5{\%}. This study serves as a caution against the trend of making captions more descriptive. | [
"Hirota, Yusuke",
"Hachiuma, Ryo",
"Yang, Chao-Han Huck",
"Nakashima, Yuta"
] | From Descriptive Richness to Bias: Unveiling the Dark Side of Generative Image Caption Enrichment | emnlp-main.986 | Poster | 2406.13912 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.987.bib | https://aclanthology.org/2024.emnlp-main.987/ | @inproceedings{liu-etal-2024-pruning,
title = "Pruning via Merging: Compressing {LLM}s via Manifold Alignment Based Layer Merging",
author = "Liu, Deyuan and
Qin, Zhanyue and
Wang, Hairu and
Yang, Zhao and
Wang, Zecheng and
Rong, Fangying and
Liu, Qingbin and
Hao, Yanchao and
Li, Bo and
Chen, Xi and
Fan, Cunhang and
Lv, Zhao and
Chu, Dianhui and
Tu, Zhiying and
Sui, Dianbo",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.987",
pages = "17817--17829",
abstract = "While large language models (LLMs) excel in many domains, their complexity and scale challenge deployment in resource-limited environments. Current compression techniques, such as parameter pruning, often fail to effectively utilize the knowledge from pruned parameters. To address these challenges, we propose Manifold-Based Knowledge Alignment and Layer Merging Compression (MKA), a novel approach that uses manifold learning and the Information Bottleneck (IB) measure to merge similar layers, reducing model size while preserving essential performance. We evaluate MKA on multiple benchmark datasets and various LLMs. Our findings show that MKA not only preserves model performance but also achieves substantial compression ratios, outperforming traditional pruning methods. Moreover, when coupled with quantization, MKA delivers even greater compression. Specifically, on the MMLU dataset using the Llama3-8B model, MKA achieves a compression ratio of 43.75{\%} with a minimal performance decrease of only 2.82{\%}. The proposed MKA method offers a resource-efficient and performance-preserving model compression technique for LLMs. We make our code available at https://github.com/SempraETY/Pruning-via-Merging",
}
| While large language models (LLMs) excel in many domains, their complexity and scale challenge deployment in resource-limited environments. Current compression techniques, such as parameter pruning, often fail to effectively utilize the knowledge from pruned parameters. To address these challenges, we propose Manifold-Based Knowledge Alignment and Layer Merging Compression (MKA), a novel approach that uses manifold learning and the Information Bottleneck (IB) measure to merge similar layers, reducing model size while preserving essential performance. We evaluate MKA on multiple benchmark datasets and various LLMs. Our findings show that MKA not only preserves model performance but also achieves substantial compression ratios, outperforming traditional pruning methods. Moreover, when coupled with quantization, MKA delivers even greater compression. Specifically, on the MMLU dataset using the Llama3-8B model, MKA achieves a compression ratio of 43.75{\%} with a minimal performance decrease of only 2.82{\%}. The proposed MKA method offers a resource-efficient and performance-preserving model compression technique for LLMs. We make our code available at https://github.com/SempraETY/Pruning-via-Merging | [
"Liu, Deyuan",
"Qin, Zhanyue",
"Wang, Hairu",
"Yang, Zhao",
"Wang, Zecheng",
"Rong, Fangying",
"Liu, Qingbin",
"Hao, Yanchao",
"Li, Bo",
"Chen, Xi",
"Fan, Cunhang",
"Lv, Zhao",
"Chu, Dianhui",
"Tu, Zhiying",
"Sui, Dianbo"
] | Pruning via Merging: Compressing LLMs via Manifold Alignment Based Layer Merging | emnlp-main.987 | Poster | 2406.16330 | [
""
] | https://huggingface.co/papers/2406.16330 | 0 | 0 | 0 | 15 | [] | [] | [] | [] | [] | [] | 1 |
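The MKA entry above only names its ingredients, so here is a minimal Python sketch of the general idea of similarity-based layer merging. The cosine criterion and plain parameter averaging are our simplifications for illustration; MKA itself aligns layers with manifold learning and scores similarity with the Information Bottleneck measure.

```python
import numpy as np

def layer_similarity(h_a: np.ndarray, h_b: np.ndarray) -> float:
    """Cosine similarity between two layers' activations on a calibration set."""
    a, b = h_a.ravel(), h_b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def merge_most_similar(weights: list, acts: list):
    """Merge the adjacent layer pair whose activations are most similar.

    `weights[i]` are layer i's parameters, `acts[i]` its activations on a
    small calibration set. Merging here is a plain parameter average; MKA
    itself aligns layers on a learned manifold before merging.
    """
    sims = [layer_similarity(acts[i], acts[i + 1]) for i in range(len(acts) - 1)]
    i = int(np.argmax(sims))                      # most redundant adjacent pair
    merged = 0.5 * (weights[i] + weights[i + 1])
    return weights[:i] + [merged] + weights[i + 2:], i

# toy example: 4 "layers" with random parameters and activations
rng = np.random.default_rng(0)
W = [rng.normal(size=(8, 8)) for _ in range(4)]
A = [rng.normal(size=(16, 8)) for _ in range(4)]
new_W, merged_at = merge_most_similar(W, A)
print(f"merged layers {merged_at} and {merged_at + 1}; {len(new_W)} layers remain")
```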
https://aclanthology.org/2024.emnlp-main.988.bib | https://aclanthology.org/2024.emnlp-main.988/ | @inproceedings{popovic-farber-2024-embedded,
title = "Embedded Named Entity Recognition using Probing Classifiers",
author = {Popovic, Nicholas and
F{\"a}rber, Michael},
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.988",
pages = "17830--17850",
abstract = "Streaming text generation, has become a common way of increasing the responsiveness of language model powered applications such as chat assistants. At the same time, extracting semantic information from generated text is a useful tool for applications such as automated fact checking or retrieval augmented generation. Currently, this requires either separate models during inference, which increases computational cost, or destructive fine-tuning of the language model. Instead, we propose an approach called EMBER which enables streaming named entity recognition in decoder-only language models without fine-tuning them and while incurring minimal additional computational cost at inference time. Specifically, our experiments show that EMBER maintains high token generation rates, with only a negligible decrease in speed of around 1{\%} compared to a 43.64{\%} slowdown measured for a baseline. We make our code and data available online, including a toolkit for training, testing, and deploying efficient token classification models optimized for streaming text generation.",
}
| Streaming text generation has become a common way of increasing the responsiveness of language model powered applications such as chat assistants. At the same time, extracting semantic information from generated text is a useful tool for applications such as automated fact checking or retrieval augmented generation. Currently, this requires either separate models during inference, which increases computational cost, or destructive fine-tuning of the language model. Instead, we propose an approach called EMBER which enables streaming named entity recognition in decoder-only language models without fine-tuning them and while incurring minimal additional computational cost at inference time. Specifically, our experiments show that EMBER maintains high token generation rates, with only a negligible decrease in speed of around 1{\%} compared to a 43.64{\%} slowdown measured for a baseline. We make our code and data available online, including a toolkit for training, testing, and deploying efficient token classification models optimized for streaming text generation. | [
"Popovic, Nicholas",
"F{\\\"a}rber, Michael"
] | Embedded Named Entity Recognition using Probing Classifiers | emnlp-main.988 | Poster | 2403.11747 | [
"https://github.com/nicpopovic/stoke"
] | https://huggingface.co/papers/2403.11747 | 0 | 0 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 |
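As a rough illustration of the probing-classifier idea behind EMBER — a small token-level classifier reading the frozen decoder's hidden states during streaming generation — consider the sketch below. The `LinearProbe` class, its dimensions, and the tag set are all hypothetical; the real toolkit lives in the linked repository.

```python
import numpy as np

# Hypothetical setup: a frozen decoder-only LM exposes per-token hidden
# states during generation; a small linear probe maps them to BIO tags.
TAGS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG"]

class LinearProbe:
    """Token-level probe; its weights would be trained on cached hidden states."""
    def __init__(self, hidden_dim: int, n_tags: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.02, size=(hidden_dim, n_tags))
        self.b = np.zeros(n_tags)

    def predict(self, h: np.ndarray) -> str:
        """Classify a single token's hidden state; the LM itself is untouched."""
        logits = h @ self.W + self.b
        return TAGS[int(np.argmax(logits))]

probe = LinearProbe(hidden_dim=64, n_tags=len(TAGS))
# During streaming generation, each new token's hidden state is probed as
# soon as it is produced -- no second model pass, no LM fine-tuning.
for step, h_t in enumerate(np.random.default_rng(1).normal(size=(5, 64))):
    print(step, probe.predict(h_t))
```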
https://aclanthology.org/2024.emnlp-main.989.bib | https://aclanthology.org/2024.emnlp-main.989/ | @inproceedings{zhang-etal-2024-unleashing-power,
title = "Unleashing the Power of Emojis in Texts via Self-supervised Graph Pre-Training",
author = "Zhang, Zhou and
Tan, Dongzeng and
Wang, Jiaan and
Chen, Yilong and
Xu, Jiarong",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.989",
pages = "17851--17863",
abstract = "Emojis have gained immense popularity on social platforms, serving as a common means to supplement or replace text. However, existing data mining approaches generally either completely ignore or simply treat emojis as ordinary Unicode characters, which may limit the model{'}s ability to grasp the rich semantic information in emojis and the interaction between emojis and texts. Thus, it is necessary to release the emoji{'}s power in social media data mining. To this end, we first construct a heterogeneous graph consisting of three types of nodes, i.e. post, word and emoji nodes to improve the representation of different elements in posts. The edges are also well-defined to model how these three elements interact with each other. To facilitate the sharing of information among post, word and emoji nodes, we propose a graph pre-train framework for text and emoji co-modeling, which contains two graph pre-training tasks: node-level graph contrastive learning and edge-level link reconstruction learning. Extensive experiments on the Xiaohongshu and Twitter datasets with two types of downstream tasks demonstrate that our approach proves significant improvement over previous strong baseline methods.",
}
| Emojis have gained immense popularity on social platforms, serving as a common means to supplement or replace text. However, existing data mining approaches generally either completely ignore emojis or simply treat them as ordinary Unicode characters, which may limit the model{'}s ability to grasp the rich semantic information in emojis and the interaction between emojis and texts. Thus, it is necessary to unleash the emoji{'}s power in social media data mining. To this end, we first construct a heterogeneous graph consisting of three types of nodes, i.e., post, word, and emoji nodes, to improve the representation of different elements in posts. The edges are also well-defined to model how these three elements interact with each other. To facilitate the sharing of information among post, word, and emoji nodes, we propose a graph pre-training framework for text and emoji co-modeling, which contains two graph pre-training tasks: node-level graph contrastive learning and edge-level link reconstruction learning. Extensive experiments on the Xiaohongshu and Twitter datasets with two types of downstream tasks demonstrate that our approach achieves significant improvements over previous strong baseline methods. | [
"Zhang, Zhou",
"Tan, Dongzeng",
"Wang, Jiaan",
"Chen, Yilong",
"Xu, Jiarong"
] | Unleashing the Power of Emojis in Texts via Self-supervised Graph Pre-Training | emnlp-main.989 | Poster | 2409.14552 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
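A toy version of the graph-construction step described above (post, word, and emoji nodes tied by co-occurrence edges) might look as follows; the regex, the edge definitions, and the sample posts are our assumptions, and the two pre-training tasks themselves are not shown.

```python
import re
from collections import defaultdict

EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def build_hetero_graph(posts):
    """Build post/word/emoji nodes and post-word, post-emoji, word-emoji edges.

    A rough sketch of the graph-construction step only; the paper's actual
    edge definitions and the contrastive / link-reconstruction pre-training
    are not reproduced here.
    """
    edges = defaultdict(set)
    for pid, text in enumerate(posts):
        emojis = EMOJI.findall(text)
        words = re.findall(r"[\w']+", text)
        for w in words:
            edges[("post", pid)].add(("word", w))
        for e in emojis:
            edges[("post", pid)].add(("emoji", e))
            for w in words:  # co-occurrence ties emoji semantics to words
                edges[("word", w)].add(("emoji", e))
    return edges

g = build_hetero_graph(["great trip 😍", "so tired 😴 but happy 😍"])
print(sorted(g[("word", "happy")]))  # emojis co-occurring with "happy"
```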
https://aclanthology.org/2024.emnlp-main.990.bib | https://aclanthology.org/2024.emnlp-main.990/ | @inproceedings{yao-etal-2024-data,
title = "Data Contamination Can Cross Language Barriers",
author = "Yao, Feng and
Zhuang, Yufan and
Sun, Zihao and
Xu, Sunan and
Kumar, Animesh and
Shang, Jingbo",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.990",
pages = "17864--17875",
abstract = "The opacity in developing large language models (LLMs) is raising growing concerns about the potential contamination of public benchmarks in the pre-training data. Existing contamination detection methods are typically based on the text overlap between training and evaluation data, which can be too superficial to reflect deeper forms of contamination. In this paper, we first present a cross-lingual form of contamination that inflates LLMs{'} performance while evading current detection methods, deliberately injected by overfitting LLMs on the translated versions of benchmark test sets. Then, we propose generalization-based approaches to unmask such deeply concealed contamination. Specifically, we examine the LLM{'}s performance change after modifying the original benchmark by replacing the false answer choices with correct ones from other questions. Contaminated models can hardly generalize to such easier situations, where the false choices can be \textit{not even wrong}, as all choices are correct in their memorization. Experimental results demonstrate that cross-lingual contamination can easily fool existing detection methods, but not ours. In addition, we discuss the potential utilization of cross-lingual contamination in interpreting LLMs{'} working mechanisms and in post-training LLMs for enhanced multilingual capabilities.",
}
| The opacity in developing large language models (LLMs) is raising growing concerns about the potential contamination of public benchmarks in the pre-training data. Existing contamination detection methods are typically based on the text overlap between training and evaluation data, which can be too superficial to reflect deeper forms of contamination. In this paper, we first present a cross-lingual form of contamination that inflates LLMs{'} performance while evading current detection methods, deliberately injected by overfitting LLMs on the translated versions of benchmark test sets. Then, we propose generalization-based approaches to unmask such deeply concealed contamination. Specifically, we examine the LLM{'}s performance change after modifying the original benchmark by replacing the false answer choices with correct ones from other questions. Contaminated models can hardly generalize to such easier situations, where the false choices can be \textit{not even wrong}, as all choices are correct in their memorization. Experimental results demonstrate that cross-lingual contamination can easily fool existing detection methods, but not ours. In addition, we discuss the potential utilization of cross-lingual contamination in interpreting LLMs{'} working mechanisms and in post-training LLMs for enhanced multilingual capabilities. | [
"Yao, Feng",
"Zhuang, Yufan",
"Sun, Zihao",
"Xu, Sunan",
"Kumar, Animesh",
"Shang, Jingbo"
] | Data Contamination Can Cross Language Barriers | emnlp-main.990 | Poster | 2406.13236 | [
"https://github.com/shangdatalab/deep-contam"
] | https://huggingface.co/papers/2406.13236 | 3 | 8 | 2 | 6 | [] | [] | [] | [] | [] | [] | 1 |
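The perturbation test described in the abstract — replace each question's false choices with correct answers drawn from other questions, then check whether accuracy improves on the easier set — can be sketched directly; the data schema and the three-distractor format below are our assumptions.

```python
import random

def perturb_benchmark(questions):
    """Replace each question's false choices with *correct* answers from
    other questions, per the generalization test described above.

    `questions` is a list of dicts with keys "choices" (list of str) and
    "answer" (index). A clean model should find the perturbed task easier;
    a model that memorized the benchmark (in any language) tends not to.
    """
    pool = [q["choices"][q["answer"]] for q in questions]
    perturbed = []
    for i, q in enumerate(questions):
        gold = q["choices"][q["answer"]]
        distractors = random.sample(pool[:i] + pool[i + 1:], 3)  # needs >= 4 questions
        choices = distractors + [gold]
        random.shuffle(choices)
        perturbed.append({"choices": choices, "answer": choices.index(gold)})
    return perturbed

def contamination_gap(acc_fn, questions) -> float:
    """Accuracy gain on the easier perturbed set; small gains are suspicious."""
    return acc_fn(perturb_benchmark(questions)) - acc_fn(questions)

qs = [{"choices": [f"right{i}", "x", "y", "z"], "answer": 0} for i in range(8)]
print(len(perturb_benchmark(qs)))  # 8 perturbed questions
```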
https://aclanthology.org/2024.emnlp-main.991.bib | https://aclanthology.org/2024.emnlp-main.991/ | @inproceedings{li-ng-2024-automated,
title = "Automated Essay Scoring: A Reflection on the State of the Art",
author = "Li, Shengjie and
Ng, Vincent",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.991",
pages = "17876--17888",
abstract = "While steady progress has been made on the task of automated essay scoring (AES) in the past decade, much of the recent work in this area has focused on developing models that beat existing models on a standard evaluation dataset. While improving performance numbers remains an important goal in the short term, such a focus is not necessarily beneficial for the long-term development of the field. We reflect on the state of the art in AES research, discussing issues that we believe can encourage researchers to think bigger than improving performance numbers with the ultimate goal of triggering discussion among AES researchers on how we should move forward.",
}
| While steady progress has been made on the task of automated essay scoring (AES) in the past decade, much of the recent work in this area has focused on developing models that beat existing models on a standard evaluation dataset. While improving performance numbers remains an important goal in the short term, such a focus is not necessarily beneficial for the long-term development of the field. We reflect on the state of the art in AES research, discussing issues that we believe can encourage researchers to think bigger than improving performance numbers, with the ultimate goal of triggering discussion among AES researchers on how we should move forward. | [
"Li, Shengjie",
"Ng, Vincent"
] | Automated Essay Scoring: A Reflection on the State of the Art | emnlp-main.991 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.emnlp-main.992.bib | https://aclanthology.org/2024.emnlp-main.992/ | @inproceedings{liang-etal-2024-encouraging,
title = "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate",
author = "Liang, Tian and
He, Zhiwei and
Jiao, Wenxiang and
Wang, Xing and
Wang, Yan and
Wang, Rui and
Yang, Yujiu and
Shi, Shuming and
Tu, Zhaopeng",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.992",
pages = "17889--17904",
abstract = "Modern large language models (LLMs) like ChatGPT have shown remarkable performance on general language tasks but still struggle on complex reasoning tasks, which drives the research on cognitive behaviors of LLMs to explore human-like problem-solving strategies. Along this direction, one representative strategy is self-reflection, which asks an LLM to refine the solution with the feedback generated by itself iteratively. However, our study shows that such reflection-style methods suffer from the Degeneration-of-Thought (DoT) problem: once the LLM has established confidence in its solutions, it is unable to generate novel thoughts later through reflection even if its initial stance is incorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD) framework, in which multiple agents express their arguments in the state of {``}tit for tat{''} and a judge manages the debate process to obtain a final solution. Clearly, our MAD framework encourages divergent thinking in LLMs which would be helpful for tasks that require deep levels of contemplation. Experiment results on two challenging datasets, commonsense machine translation and counter-intuitive arithmetic reasoning, demonstrate the effectiveness of our MAD framework. Extensive analyses suggest that the adaptive break of debate and the modest level of {``}tit for tat{''} state are required for MAD to obtain good performance. Moreover, we find that LLMs might not be a fair judge if different LLMs are used for agents.",
}
| Modern large language models (LLMs) like ChatGPT have shown remarkable performance on general language tasks but still struggle on complex reasoning tasks, which drives the research on cognitive behaviors of LLMs to explore human-like problem-solving strategies. Along this direction, one representative strategy is self-reflection, which asks an LLM to refine the solution with the feedback generated by itself iteratively. However, our study shows that such reflection-style methods suffer from the Degeneration-of-Thought (DoT) problem: once the LLM has established confidence in its solutions, it is unable to generate novel thoughts later through reflection even if its initial stance is incorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD) framework, in which multiple agents express their arguments in the state of {``}tit for tat{''} and a judge manages the debate process to obtain a final solution. Clearly, our MAD framework encourages divergent thinking in LLMs which would be helpful for tasks that require deep levels of contemplation. Experimental results on two challenging datasets, commonsense machine translation and counter-intuitive arithmetic reasoning, demonstrate the effectiveness of our MAD framework. Extensive analyses suggest that the adaptive break of debate and the modest level of {``}tit for tat{''} state are required for MAD to obtain good performance. Moreover, we find that LLMs might not be fair judges if different LLMs are used for agents. | [
"Liang, Tian",
"He, Zhiwei",
"Jiao, Wenxiang",
"Wang, Xing",
"Wang, Yan",
"Wang, Rui",
"Yang, Yujiu",
"Shi, Shuming",
"Tu, Zhaopeng"
] | Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate | emnlp-main.992 | Poster | 2305.19118 | [
"https://github.com/skytliang/multi-agents-debate"
] | https://huggingface.co/papers/2305.19118 | 0 | 0 | 0 | 9 | [] | [] | [] | [] | [] | [] | 1 |
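A skeletal version of the debate loop, assuming a hypothetical `chat(system, history)` wrapper around whatever LLM API is in use; the prompts are paraphrases, not the paper's actual templates, and the `DONE`/`CONTINUE` judge protocol stands in for its adaptive break.

```python
def debate(question: str, chat, rounds: int = 3) -> str:
    """MAD-style loop: two agents argue 'tit for tat'; a judge adaptively
    stops the debate and extracts the final answer."""
    affirmative, negative = [], []
    answer = chat("You answer concisely.", [question])
    for _ in range(rounds):
        attack = chat("You disagree ('tit for tat') and argue the other side.",
                      [question, answer] + negative)
        negative.append(attack)
        answer = chat("Defend or revise your answer given the attack.",
                      [question, answer, attack])
        affirmative.append(answer)
        verdict = chat("You are the judge. Reply DONE if the debate has converged, "
                       "else CONTINUE.", [question] + affirmative + negative)
        if "DONE" in verdict:          # the adaptive break the abstract mentions
            break
    return chat("As the judge, state the final answer.",
                [question] + affirmative + negative)

def fake_chat(system, history):        # stand-in LLM so the sketch runs dry
    return "DONE: 42"

print(debate("What is 6 x 7?", fake_chat, rounds=2))
```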
https://aclanthology.org/2024.emnlp-main.993.bib | https://aclanthology.org/2024.emnlp-main.993/ | @inproceedings{zhou-etal-2024-unveiling,
title = "Unveiling and Consulting Core Experts in Retrieval-Augmented {M}o{E}-based {LLM}s",
author = "Zhou, Xin and
Nie, Ping and
Guo, Yiwen and
Wei, Haojie and
Zhang, Zhanqiu and
Minervini, Pasquale and
Ma, Ruotian and
Gui, Tao and
Zhang, Qi and
Huang, Xuanjing",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.993",
pages = "17905--17923",
abstract = "Retrieval-Augmented Generation (RAG) significantly improved the ability of Large Language Models (LLMs) to solve knowledge-intensive tasks. While existing research seeks to enhance RAG performance by retrieving higher-quality documents or designing RAG-specific LLMs, the internal mechanisms within LLMs that contribute to RAG{'}s effectiveness remain underexplored. In this paper, we aim to investigate these internal mechanisms within the popular Mixture-of-Expert (MoE)-based LLMs and demonstrate how to improve RAG by examining expert activations in these LLMs. Our controlled experiments reveal that several core groups of experts are primarily responsible for RAG-related behaviors. The activation of these core experts can signify the model{'}s inclination towards external/internal knowledge and adjust its behavior. For instance, we identify core experts that can (1) indicate the sufficiency of the model{'}s internal knowledge, (2) assess the quality of retrieved documents, and (3) enhance the model{'}s ability to utilize context. Based on these findings, we propose several strategies to enhance RAG{'}s efficiency and effectiveness through expert activation. Experimental results across various datasets and MoE LLMs show the effectiveness of our method.",
}
| Retrieval-Augmented Generation (RAG) has significantly improved the ability of Large Language Models (LLMs) to solve knowledge-intensive tasks. While existing research seeks to enhance RAG performance by retrieving higher-quality documents or designing RAG-specific LLMs, the internal mechanisms within LLMs that contribute to RAG{'}s effectiveness remain underexplored. In this paper, we aim to investigate these internal mechanisms within the popular Mixture-of-Experts (MoE)-based LLMs and demonstrate how to improve RAG by examining expert activations in these LLMs. Our controlled experiments reveal that several core groups of experts are primarily responsible for RAG-related behaviors. The activation of these core experts can signify the model{'}s inclination towards external/internal knowledge and adjust its behavior. For instance, we identify core experts that can (1) indicate the sufficiency of the model{'}s internal knowledge, (2) assess the quality of retrieved documents, and (3) enhance the model{'}s ability to utilize context. Based on these findings, we propose several strategies to enhance RAG{'}s efficiency and effectiveness through expert activation. Experimental results across various datasets and MoE LLMs show the effectiveness of our method. | [
"Zhou, Xin",
"Nie, Ping",
"Guo, Yiwen",
"Wei, Haojie",
"Zhang, Zhanqiu",
"Minervini, Pasquale",
"Ma, Ruotian",
"Gui, Tao",
"Zhang, Qi",
"Huang, Xuanjing"
] | Unveiling and Consulting Core Experts in Retrieval-Augmented MoE-based LLMs | emnlp-main.993 | Poster | 2410.15438 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
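One way to make the "core expert" probe concrete: compare how often each expert is routed to with and without retrieved context. The profile below over synthetic router logits is only a sketch of that comparison, not the paper's identification procedure.

```python
import numpy as np

def expert_activation_profile(router_logits: np.ndarray, top_k: int = 2) -> np.ndarray:
    """How often each expert lands in the router's top-k across tokens.

    `router_logits` has shape (n_tokens, n_experts) -- a stand-in for the
    gate outputs of one MoE layer. Comparing profiles between runs with and
    without retrieved documents is the spirit of the core-expert probe.
    """
    topk = np.argsort(-router_logits, axis=-1)[:, :top_k]
    counts = np.bincount(topk.ravel(), minlength=router_logits.shape[1])
    return counts / counts.sum()

rng = np.random.default_rng(0)
with_rag = expert_activation_profile(rng.normal(size=(128, 8)))     # with retrieved docs
without_rag = expert_activation_profile(rng.normal(size=(128, 8)))  # without
core = np.argsort(-np.abs(with_rag - without_rag))[:2]  # largest routing shift
print("candidate core experts:", core)
```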
https://aclanthology.org/2024.emnlp-main.994.bib | https://aclanthology.org/2024.emnlp-main.994/ | @inproceedings{kang-etal-2024-cure,
title = "{CURE}: Context- and Uncertainty-Aware Mental Disorder Detection",
author = "Kang, Migyeong and
Choi, Goun and
Jeon, Hyolim and
An, Ji Hyun and
Choi, Daejin and
Han, Jinyoung",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.994",
pages = "17924--17940",
abstract = "As the explainability of mental disorder detection models has become important, symptom-based methods that predict disorders from identified symptoms have been widely utilized. However, since these approaches focused on the presence of symptoms, the context of symptoms can be often ignored, leading to missing important contextual information related to detecting mental disorders. Furthermore, the result of disorder detection can be vulnerable to errors that may occur in identifying symptoms. To address these issues, we propose a novel framework that detects mental disorders by leveraging symptoms and their context while mitigating potential errors in symptom identification. In this way, we propose to use large language models to effectively extract contextual information and introduce an uncertainty-aware decision fusion network that combines predictions of multiple models based on quantified uncertainty values. To evaluate the proposed method, we constructed a new Korean mental health dataset annotated by experts, named KoMOS. Experimental results demonstrate that the proposed model accurately detects mental disorders even in situations where symptom information is incomplete.",
}
| As the explainability of mental disorder detection models has become important, symptom-based methods that predict disorders from identified symptoms have been widely utilized. However, since these approaches focus on the presence of symptoms, the context of symptoms is often ignored, leading to the loss of important contextual information for detecting mental disorders. Furthermore, the result of disorder detection can be vulnerable to errors that may occur in identifying symptoms. To address these issues, we propose a novel framework that detects mental disorders by leveraging symptoms and their context while mitigating potential errors in symptom identification. To this end, we propose to use large language models to effectively extract contextual information and introduce an uncertainty-aware decision fusion network that combines predictions of multiple models based on quantified uncertainty values. To evaluate the proposed method, we constructed a new Korean mental health dataset annotated by experts, named KoMOS. Experimental results demonstrate that the proposed model accurately detects mental disorders even in situations where symptom information is incomplete. | [
"Kang, Migyeong",
"Choi, Goun",
"Jeon, Hyolim",
"An, Ji Hyun",
"Choi, Daejin",
"Han, Jinyoung"
] | CURE: Context- and Uncertainty-Aware Mental Disorder Detection | emnlp-main.994 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
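The uncertainty-aware fusion idea can be illustrated with simple inverse-uncertainty weighting; the paper trains a fusion network instead, so treat the weighting rule, the two-model setup, and the numbers here as stand-ins.

```python
import numpy as np

def fuse(probs, uncertainties) -> np.ndarray:
    """Combine several models' class probabilities, down-weighting uncertain ones.

    Inverse-uncertainty weighting is our stand-in for the paper's learned
    fusion network; `uncertainties` could be, e.g., predictive entropies.
    """
    w = 1.0 / (np.asarray(uncertainties) + 1e-8)
    w = w / w.sum()                       # normalize weights to sum to 1
    return sum(wi * p for wi, p in zip(w, probs))

symptom_model = np.array([0.7, 0.3])      # P(disorder), P(none) from symptoms
context_model = np.array([0.4, 0.6])      # same, from LLM-extracted context
# the confident symptom model (low uncertainty) dominates the fused decision
print(fuse([symptom_model, context_model], uncertainties=[0.2, 0.9]))
```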
https://aclanthology.org/2024.emnlp-main.995.bib | https://aclanthology.org/2024.emnlp-main.995/ | @inproceedings{yu-etal-2024-peprec,
title = "{P}ep{R}ec: Progressive Enhancement of Prompting for Recommendation",
author = "Yu, Yakun and
Qi, Shi-ang and
Li, Baochun and
Niu, Di",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.995",
pages = "17941--17953",
abstract = "With large language models (LLMs) achieving remarkable breakthroughs in natural language processing (NLP) domains, recent researchers have actively explored the potential of LLMs for recommendation systems by converting the input data into textual sentences through prompt templates. Although semantic knowledge from LLMs can help enrich the content information of items, to date it is still hard for them to achieve comparable performance to traditional deep learning recommendation models, partly due to a lack of ability to leverage collaborative filtering. In this paper, we propose a novel training-free prompting framework, PepRec, which aims to capture knowledge from both content-based filtering and collaborative filtering to boost recommendation performance with LLMs, while providing interpretation for the recommendation. Experiments based on two real-world datasets from different domains show that PepRec significantly outperforms various traditional deep learning recommendation models and prompt-based recommendation systems.",
}
| With large language models (LLMs) achieving remarkable breakthroughs in natural language processing (NLP) domains, recent researchers have actively explored the potential of LLMs for recommendation systems by converting the input data into textual sentences through prompt templates. Although semantic knowledge from LLMs can help enrich the content information of items, to date it is still hard for them to achieve comparable performance to traditional deep learning recommendation models, partly due to a lack of ability to leverage collaborative filtering. In this paper, we propose a novel training-free prompting framework, PepRec, which aims to capture knowledge from both content-based filtering and collaborative filtering to boost recommendation performance with LLMs, while providing interpretation for the recommendation. Experiments based on two real-world datasets from different domains show that PepRec significantly outperforms various traditional deep learning recommendation models and prompt-based recommendation systems. | [
"Yu, Yakun",
"Qi, Shi-ang",
"Li, Baochun",
"Niu, Di"
] | PepRec: Progressive Enhancement of Prompting for Recommendation | emnlp-main.995 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
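Since PepRec is training-free and prompt-based, its core artifact is the prompt itself. The assembly below, mixing a content signal (the user's own history) with a collaborative signal (similar users' items), is a guess at the flavor of such prompts, not the paper's actual template.

```python
def build_prompt(user_history, similar_users_items, candidates) -> str:
    """Hypothetical prompt combining content-based and collaborative signals."""
    return (
        "The user previously enjoyed: " + "; ".join(user_history) + ".\n"              # content-based signal
        "Users with similar taste enjoyed: " + "; ".join(similar_users_items) + ".\n"  # collaborative signal
        "Rank these candidates and explain briefly: " + "; ".join(candidates)          # interpretable output
    )

print(build_prompt(["Dune"], ["Foundation", "Hyperion"], ["Blindsight", "Contact"]))
```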
https://aclanthology.org/2024.emnlp-main.996.bib | https://aclanthology.org/2024.emnlp-main.996/ | @inproceedings{li-etal-2024-context,
title = "In-Context Compositional Generalization for Large Vision-Language Models",
author = "Li, Chuanhao and
Jing, Chenchen and
Li, Zhen and
Zhai, Mingliang and
Wu, Yuwei and
Jia, Yunde",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.996",
pages = "17954--17966",
abstract = "Recent work has revealed that in-context learning for large language models exhibits compositional generalization capacity, which can be enhanced by selecting in-context demonstrations similar to test cases to provide contextual information. However, how to exhibit in-context compositional generalization (ICCG) of large vision-language models (LVLMs) is non-trival. Due to the inherent asymmetry between visual and linguistic modalities, ICCG in LVLMs faces an inevitable challenge{---}redundant information on the visual modality. The redundant information affects in-context learning from two aspects: (1) Similarity calculation may be dominated by redundant information, resulting in sub-optimal demonstration selection. (2) Redundant information in in-context demonstrations brings misleading contextual information to in-context learning. To alleviate these problems, we propose a demonstration selection method to achieve ICCG for LVLMs, by considering two key factors of demonstrations: content and structure, from a multimodal perspective. Specifically, we design a diversity-coverage-based matching score to select demonstrations with maximum coverage, and avoid selecting demonstrations with redundant information via their content redundancy and structural complexity. We build a GQA-ICCG dataset to simulate the ICCG setting, and conduct experiments on GQA-ICCG and the VQA v2 dataset. Experimental results demonstrate the effectiveness of our method.",
}
| Recent work has revealed that in-context learning for large language models exhibits compositional generalization capacity, which can be enhanced by selecting in-context demonstrations similar to test cases to provide contextual information. However, how to achieve in-context compositional generalization (ICCG) in large vision-language models (LVLMs) is non-trivial. Due to the inherent asymmetry between visual and linguistic modalities, ICCG in LVLMs faces an inevitable challenge{---}redundant information on the visual modality. The redundant information affects in-context learning from two aspects: (1) Similarity calculation may be dominated by redundant information, resulting in sub-optimal demonstration selection. (2) Redundant information in in-context demonstrations brings misleading contextual information to in-context learning. To alleviate these problems, we propose a demonstration selection method to achieve ICCG for LVLMs, by considering two key factors of demonstrations: content and structure, from a multimodal perspective. Specifically, we design a diversity-coverage-based matching score to select demonstrations with maximum coverage, and avoid selecting demonstrations with redundant information via their content redundancy and structural complexity. We build a GQA-ICCG dataset to simulate the ICCG setting, and conduct experiments on GQA-ICCG and the VQA v2 dataset. Experimental results demonstrate the effectiveness of our method. | [
"Li, Chuanhao",
"Jing, Chenchen",
"Li, Zhen",
"Zhai, Mingliang",
"Wu, Yuwei",
"Jia, Yunde"
] | In-Context Compositional Generalization for Large Vision-Language Models | emnlp-main.996 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
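The diversity-coverage selection can be approximated by greedy max-coverage over the content/structure elements each demonstration contains; the paper's score additionally penalizes content redundancy and structural complexity, which this sketch omits.

```python
def select_demonstrations(candidates, k: int = 4):
    """Greedy max-coverage demonstration selection.

    Each candidate is modelled as a set of content/structure elements
    (objects, attributes, relations). We repeatedly pick the candidate
    covering the most not-yet-covered elements -- a simplification of the
    diversity-coverage matching score.
    """
    covered, chosen = set(), []
    for _ in range(k):
        gains = [len(c - covered) if i not in chosen else -1
                 for i, c in enumerate(candidates)]
        best = max(range(len(candidates)), key=gains.__getitem__)
        if gains[best] <= 0:       # nothing new left to cover
            break
        chosen.append(best)
        covered |= candidates[best]
    return chosen

demos = [{"dog", "red", "left-of"}, {"dog", "red"}, {"cat", "blue", "on"}]
print(select_demonstrations(demos, k=2))  # -> [0, 2]; demo 1 adds nothing new
```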
https://aclanthology.org/2024.emnlp-main.997.bib | https://aclanthology.org/2024.emnlp-main.997/ | @inproceedings{yuan-etal-2024-improving,
title = "Improving Zero-shot {LLM} Re-Ranker with Risk Minimization",
author = "Yuan, Xiaowei and
Yang, Zhao and
Wang, Yequan and
Zhao, Jun and
Liu, Kang",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.997",
pages = "17967--17983",
}
| No abstract found | [
"Yuan, Xiaowei",
"Yang, Zhao",
"Wang, Yequan",
"Zhao, Jun",
"Liu, Kang"
] | Improving Zero-shot LLM Re-Ranker with Risk Minimization | emnlp-main.997 | Poster | 2406.13331 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.emnlp-main.998.bib | https://aclanthology.org/2024.emnlp-main.998/ | @inproceedings{zhuang-etal-2024-game,
title = "Game on Tree: Visual Hallucination Mitigation via Coarse-to-Fine View Tree and Game Theory",
author = "Zhuang, Xianwei and
Zhu, Zhihong and
Chen, Zhanpeng and
Xie, Yuxin and
Liang, Liming and
Zou, Yuexian",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.998",
pages = "17984--18003",
abstract = "Large Vision-Language Models (LVLMs) may produce outputs that are unfaithful to reality, also known as visual hallucinations (VH), which hinders their application in multimodal understanding and decision-making. In this work, we introduce a novel plug-and-play train-free decoding algorithm named Game and Tree based Hallucination Mitigation (GTHM), designed for mitigating VH. GTHM is inspired by empirical observations that the fuzziness of multi-granularity view perception exacerbates VH. Based on this, GTHM leverages visual information to construct a coarse-to-fine visual view tree (CFTree) that organizes visual objects, attributes, and relationships in a hierarchical manner. Additionally, we innovatively model the optimal visual-token matching process on the CFTree as the cooperative game. Specifically, we define the Tree-based Shapley Value (TSV) for each visual view on the CFTree to assess its significant contribution to the overall visual understanding, thereby determining the optimal visual granularity. Subsequently, we utilize the TSV as guidance to implement adaptive weight contrastive decoding to achieve vision-aware decoding. Extensive experiments on four popular benchmarks confirm the effectiveness of our GTHM in alleviating VH across different LVLM families without additional training or post-processing. Our code is published at https://github.com/mengchuang123/GTHM.",
}
| Large Vision-Language Models (LVLMs) may produce outputs that are unfaithful to reality, also known as visual hallucinations (VH), which hinders their application in multimodal understanding and decision-making. In this work, we introduce a novel plug-and-play training-free decoding algorithm named Game and Tree based Hallucination Mitigation (GTHM), designed for mitigating VH. GTHM is inspired by empirical observations that the fuzziness of multi-granularity view perception exacerbates VH. Based on this, GTHM leverages visual information to construct a coarse-to-fine visual view tree (CFTree) that organizes visual objects, attributes, and relationships in a hierarchical manner. Additionally, we innovatively model the optimal visual-token matching process on the CFTree as a cooperative game. Specifically, we define the Tree-based Shapley Value (TSV) for each visual view on the CFTree to assess its significant contribution to the overall visual understanding, thereby determining the optimal visual granularity. Subsequently, we utilize the TSV as guidance to implement adaptive weight contrastive decoding to achieve vision-aware decoding. Extensive experiments on four popular benchmarks confirm the effectiveness of our GTHM in alleviating VH across different LVLM families without additional training or post-processing. Our code is published at https://github.com/mengchuang123/GTHM. | [
"Zhuang, Xianwei",
"Zhu, Zhihong",
"Chen, Zhanpeng",
"Xie, Yuxin",
"Liang, Liming",
"Zou, Yuexian"
] | Game on Tree: Visual Hallucination Mitigation via Coarse-to-Fine View Tree and Game Theory | emnlp-main.998 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
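The final step, adaptive weight contrastive decoding, follows the generic contrastive-decoding recipe: boost the vision-grounded logits against an ungrounded pass, with a per-step weight (derived from the TSV in the paper; a fixed scalar in this sketch).

```python
import numpy as np

def contrastive_decode(logits_visual: np.ndarray, logits_plain: np.ndarray,
                       weight: float) -> int:
    """Pick the next token by amplifying what the vision-grounded pass knows.

    Generic contrastive-decoding rule, not GTHM's exact formulation; `weight`
    is the adaptive coefficient, treated here as a given scalar.
    """
    adjusted = (1 + weight) * logits_visual - weight * logits_plain
    return int(np.argmax(adjusted))

v = np.array([2.0, 1.5, 0.1])   # logits with the selected visual view
p = np.array([1.9, 1.8, 0.1])   # logits without visual grounding
print(contrastive_decode(v, p, weight=1.0))  # -> 0, the token the image supports
```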
https://aclanthology.org/2024.emnlp-main.999.bib | https://aclanthology.org/2024.emnlp-main.999/ | @inproceedings{qiu-zhang-2024-label,
title = "Label Confidence Weighted Learning for Target-level Sentence Simplification",
author = "Qiu, Xin Ying and
Zhang, Jingshen",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.999",
pages = "18004--18019",
abstract = "Multi-level sentence simplification generates simplified sentences with varying language proficiency levels. We propose Label Confidence Weighted Learning (LCWL), a novel approach that incorporates a label confidence weighting scheme in the training loss of the encoder-decoder model, setting it apart from existing confidence-weighting methods primarily designed for classification. Experimentation on English grade-level simplification dataset shows that LCWL outperforms state-of-the-art unsupervised baselines. Fine-tuning the LCWL model on in-domain data and combining with Symmetric Cross Entropy (SCE) consistently delivers better simplifications compared to strong supervised methods. Our results highlight the effectiveness of label confidence weighting techniques for text simplification tasks with encoder-decoder architectures.",
}
| Multi-level sentence simplification generates simplified sentences with varying language proficiency levels. We propose Label Confidence Weighted Learning (LCWL), a novel approach that incorporates a label confidence weighting scheme in the training loss of the encoder-decoder model, setting it apart from existing confidence-weighting methods primarily designed for classification. Experimentation on an English grade-level simplification dataset shows that LCWL outperforms state-of-the-art unsupervised baselines. Fine-tuning the LCWL model on in-domain data and combining it with Symmetric Cross Entropy (SCE) consistently delivers better simplifications compared to strong supervised methods. Our results highlight the effectiveness of label confidence weighting techniques for text simplification tasks with encoder-decoder architectures. | [
"Qiu, Xin Ying",
"Zhang, Jingshen"
] | Label Confidence Weighted Learning for Target-level Sentence Simplification | emnlp-main.999 | Poster | 2410.05748 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
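The label-confidence-weighted training loss is the easiest piece to write down: a per-example cross-entropy scaled by a confidence score. How LCWL obtains those confidences is the paper's contribution; the sketch below assumes they are given.

```python
import numpy as np

def lcwl_loss(logits: np.ndarray, targets: np.ndarray,
              label_confidence: np.ndarray) -> float:
    """Cross-entropy over targets, weighted by each example's label confidence.

    `logits`: (batch, vocab); `targets`: (batch,) token ids;
    `label_confidence`: (batch,) in [0, 1], e.g. an estimate of how reliable
    each training pair's grade-level label is.
    """
    z = logits - logits.max(axis=-1, keepdims=True)            # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    nll = -log_probs[np.arange(len(targets)), targets]         # per-example NLL
    return float((label_confidence * nll).mean())              # confidence-weighted

rng = np.random.default_rng(0)
loss = lcwl_loss(rng.normal(size=(4, 10)), np.array([1, 3, 5, 7]),
                 np.array([1.0, 0.9, 0.3, 0.6]))
print(round(loss, 4))
```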
https://aclanthology.org/2024.emnlp-main.1000.bib | https://aclanthology.org/2024.emnlp-main.1000/ | @inproceedings{xu-etal-2024-quantum,
title = "Quantum Recurrent Architectures for Text Classification",
author = "Xu, Wenduan and
Clark, Stephen and
Brown, Douglas and
Matos, Gabriel and
Meichanetzidis, Konstantinos",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.1000",
pages = "18020--18027",
abstract = "We develop quantum RNNs with cells based on Parametrised Quantum Circuits (PQCs). PQCs can provide a form of hybrid quantum-classical computation where the input and the output is in the form of classical data. The previous {``}hidden{''} state is the quantum state from the previous time-step, and an angle encoding is used to define a (non-linear) mapping from a classical word embedding into the quantum Hilbert space. Measurements of the quantum state provide classical statistics which are used for classification. We report results which are competitive with various RNN baselines on the Rotten Tomatoes dataset, as well as emulator results which demonstrate the feasibility of running such models on quantum hardware.",
}
| We develop quantum RNNs with cells based on Parametrised Quantum Circuits (PQCs). PQCs can provide a form of hybrid quantum-classical computation where the input and the output are in the form of classical data. The previous {``}hidden{''} state is the quantum state from the previous time-step, and an angle encoding is used to define a (non-linear) mapping from a classical word embedding into the quantum Hilbert space. Measurements of the quantum state provide classical statistics which are used for classification. We report results which are competitive with various RNN baselines on the Rotten Tomatoes dataset, as well as emulator results which demonstrate the feasibility of running such models on quantum hardware. | [
"Xu, Wenduan",
"Clark, Stephen",
"Brown, Douglas",
"Matos, Gabriel",
"Meichanetzidis, Konstantinos"
] | Quantum Recurrent Architectures for Text Classification | emnlp-main.1000 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
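The abstract fully specifies the dataflow — the previous hidden state is the previous quantum state, the input is angle-encoded, and measurement statistics are the classical output — so a one-qubit statevector toy can mirror it; the real models use multi-qubit PQCs with trained parameters.

```python
import numpy as np

def ry(theta):  # single-qubit Y-rotation gate
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def rz(theta):  # single-qubit Z-rotation gate
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def pqc_rnn(inputs, params):
    """One-qubit toy version of a PQC recurrent cell.

    Each scalar input (think: a projected word embedding) is angle-encoded
    as an RY rotation on the carried-over quantum state; trainable RZ/RY
    rotations play the role of the cell's parameters; the final <Z>
    expectation is the classical readout fed to a classifier.
    """
    state = np.array([1.0 + 0j, 0.0])                  # |0>, the initial hidden state
    for x in inputs:
        state = ry(x) @ state                          # angle encoding of the input
        state = ry(params[1]) @ rz(params[0]) @ state  # trainable PQC block
    return abs(state[0]) ** 2 - abs(state[1]) ** 2     # <Z> measurement statistic

print(pqc_rnn(inputs=[0.3, 1.1, -0.4], params=[0.5, 0.2]))
```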