Dataset schema:

Column | Dtype | Range / values
---|---|---
bibtex_url | string | lengths 41–53
proceedings | string | lengths 38–50
bibtext | string | lengths 528–3.02k
abstract | string | lengths 17–2.35k
authors | sequence | lengths 1–44
title | string | lengths 18–190
id | string | lengths 7–19
arxiv_id | string | lengths 0–10
GitHub | sequence | length 1
paper_page | string | 528 classes
n_linked_authors | int64 | -1 to 15
upvotes | int64 | -1 to 77
num_comments | int64 | -1 to 10
n_authors | int64 | -1 to 52
Models | sequence | lengths 0–100
Datasets | sequence | lengths 0–15
Spaces | sequence | lengths 0–46
paper_page_exists_pre_conf | int64 | 0 to 1
type | string | 2 classes
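For programmatic access, a dump with this schema can be loaded and filtered with the Hugging Face `datasets` library. The sketch below is illustrative only: the dataset ID `ACL-papers/emnlp-2023` is a hypothetical placeholder for wherever this table is hosted, while the column names, the `-1` sentinel for missing integer fields, and the `Oral`/`Poster` values of `type` follow the schema and rows shown here.

```python
from datasets import load_dataset  # pip install datasets

# Hypothetical Hub ID -- substitute the actual location of this dump.
ds = load_dataset("ACL-papers/emnlp-2023", split="train")

# `type` takes two values ("Oral", "Poster") and
# `paper_page_exists_pre_conf` is a 0/1 flag, per the schema above.
orals_with_page = ds.filter(
    lambda row: row["type"] == "Oral" and row["paper_page_exists_pre_conf"] == 1
)

# -1 marks rows without a Hugging Face paper page, so drop those
# sentinel values before aggregating.
upvotes = [row["upvotes"] for row in orals_with_page if row["upvotes"] >= 0]
print(f"{len(orals_with_page)} orals with a pre-conference paper page, "
      f"{sum(upvotes)} total upvotes")
```

Note that, as the rows below show, papers without an arXiv match leave `arxiv_id` empty and `GitHub` as `[""]`, so downstream filters should treat both as missing values.
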
bibtex_url | proceedings | bibtext | abstract | authors | title | id | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | paper_page_exists_pre_conf | type
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
https://aclanthology.org/2023.emnlp-main.501.bib | https://aclanthology.org/2023.emnlp-main.501/ | @inproceedings{han-etal-2023-dialcot,
title = "{D}ial{C}o{T} Meets {PPO}: Decomposing and Exploring Reasoning Paths in Smaller Language Models",
author = "Han, Chengcheng and
Du, Xiaowei and
Zhang, Che and
Lian, Yixin and
Li, Xiang and
Gao, Ming and
Wang, Baoyuan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.501",
doi = "10.18653/v1/2023.emnlp-main.501",
pages = "8055--8068",
abstract = "Chain-of-Thought (CoT) prompting has successfully enhanced the reasoning capabilities of Large Language Models (LLMs) with at least 100 billion parameters. However, it is ineffective, or even detrimental, to the performance on reasoning tasks in Smaller Language Models (SLMs) with less than 10 billion parameters. In this paper, we propose Dialogue-guided Chain-of-Thought (DialCoT) to improve the reasoning capabilities of SLMs, with the aim of generating intermediate reasoning steps in a dialogue format to guide the model to the final answer. Furthermore, we optimize the model to choose the optimal reasoning path through the Proximal Policy Optimization (PPO) algorithm, further enhancing its reasoning capabilities. Compared to previous methods, our advantages lie in: 1) We transform the process of solving complex reasoning problems into decomposing problems and solving a series of simpler sub-questions, significantly reducing task difficulty and making it more suitable for SLMs. 2) We optimize the model to choose the optimal reasoning path through the PPO algorithm. Comprehensive experiments on four arithmetic reasoning datasets show that our method can achieve significant performance gains over state-of-the-art competitors.",
}
| Chain-of-Thought (CoT) prompting has successfully enhanced the reasoning capabilities of Large Language Models (LLMs) with at least 100 billion parameters. However, it is ineffective, or even detrimental, to the performance on reasoning tasks in Smaller Language Models (SLMs) with less than 10 billion parameters. In this paper, we propose Dialogue-guided Chain-of-Thought (DialCoT) to improve the reasoning capabilities of SLMs, with the aim of generating intermediate reasoning steps in a dialogue format to guide the model to the final answer. Furthermore, we optimize the model to choose the optimal reasoning path through the Proximal Policy Optimization (PPO) algorithm, further enhancing its reasoning capabilities. Compared to previous methods, our advantages lie in: 1) We transform the process of solving complex reasoning problems into decomposing problems and solving a series of simpler sub-questions, significantly reducing task difficulty and making it more suitable for SLMs. 2) We optimize the model to choose the optimal reasoning path through the PPO algorithm. Comprehensive experiments on four arithmetic reasoning datasets show that our method can achieve significant performance gains over state-of-the-art competitors. | [
"Han, Chengcheng",
"Du, Xiaowei",
"Zhang, Che",
"Lian, Yixin",
"Li, Xiang",
"Gao, Ming",
"Wang, Baoyuan"
] | DialCoT Meets PPO: Decomposing and Exploring Reasoning Paths in Smaller Language Models | emnlp-main.501 | 2310.05074 | [
"https://github.com/hccngu/dialcot"
] | https://huggingface.co/papers/2310.05074 | 0 | 1 | 0 | 7 | [] | [] | [] | 1 | Oral |
https://aclanthology.org/2023.emnlp-main.502.bib | https://aclanthology.org/2023.emnlp-main.502/ | @inproceedings{svete-cotterell-2023-recurrent,
title = "Recurrent Neural Language Models as Probabilistic Finite-state Automata",
author = "Svete, Anej and
Cotterell, Ryan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.502",
doi = "10.18653/v1/2023.emnlp-main.502",
pages = "8069--8086",
abstract = "Studying language models (LMs) in terms of well-understood formalisms allows us to precisely characterize their abilities and limitations. Previous work has investigated the expressive power of recurrent neural network (RNN) LMs in terms of their capacity to recognize unweighted formal languages. However, LMs do not describe unweighted formal languages{---}rather, they define probability distributions over strings. In this work, we study what classes of such probability distributions RNN LMs can represent, which allows us to make more direct statements about their capabilities. We show that simple RNNs are equivalent to a subclass of probabilistic finite-state automata, and can thus model a strict subset of probability distributions expressible by finite-state models. Furthermore, we study the space complexity of representing finite-state LMs with RNNs. We show that, to represent an arbitrary deterministic finite-state LM with $N$ states over an alphabet $\Sigma$, an RNN requires $\Omega\left(N |\Sigma|\right)$ neurons. These results present a first step towards characterizing the classes of distributions RNN LMs can represent and thus help us understand their capabilities and limitations.",
}
| Studying language models (LMs) in terms of well-understood formalisms allows us to precisely characterize their abilities and limitations. Previous work has investigated the expressive power of recurrent neural network (RNN) LMs in terms of their capacity to recognize unweighted formal languages. However, LMs do not describe unweighted formal languages; rather, they define probability distributions over strings. In this work, we study what classes of such probability distributions RNN LMs can represent, which allows us to make more direct statements about their capabilities. We show that simple RNNs are equivalent to a subclass of probabilistic finite-state automata, and can thus model a strict subset of probability distributions expressible by finite-state models. Furthermore, we study the space complexity of representing finite-state LMs with RNNs. We show that, to represent an arbitrary deterministic finite-state LM with $N$ states over an alphabet $\Sigma$, an RNN requires $\Omega\left(N |\Sigma|\right)$ neurons. These results present a first step towards characterizing the classes of distributions RNN LMs can represent and thus help us understand their capabilities and limitations. | [
"Svete, Anej",
"Cotterell, Ryan"
] | Recurrent Neural Language Models as Probabilistic Finite-state Automata | emnlp-main.502 | 2310.05161 | [
"https://github.com/rycolab/weighted-minsky"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.503.bib | https://aclanthology.org/2023.emnlp-main.503/ | @inproceedings{li-etal-2023-revisiting,
title = "Revisiting Source Context in Nearest Neighbor Machine Translation",
author = "Li, Xuanhong and
Li, Peng and
Hu, Po",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.503",
doi = "10.18653/v1/2023.emnlp-main.503",
pages = "8087--8098",
abstract = "Nearest neighbor machine translation ($k$NN-MT), which interpolates target token probabilities with estimates derived from additional examples, has achieved significant improvements and attracted extensive interest in recent years. However, existing research does not explicitly consider the source context when retrieving similar examples, potentially leading to suboptimal performance. To address this, we comprehensively revisit the role of source context and propose a simple and effective method for improving neural machine translation via source context enhancement, demonstrating its crucial role in both retrieving superior examples and determining more suitable interpolation coefficients. Furthermore, we reveal that the probability estimation can be further optimized by incorporating a source-aware distance calibration module. Comprehensive experiments show that our proposed approach can be seamlessly integrated with representative $k$NN-MT baselines, resulting in substantial improvements over these strong baselines across a number of settings and domains. Remarkably, these improvements can reach up to 1.6 BLEU points.",
}
| Nearest neighbor machine translation ($k$NN-MT), which interpolates target token probabilities with estimates derived from additional examples, has achieved significant improvements and attracted extensive interest in recent years. However, existing research does not explicitly consider the source context when retrieving similar examples, potentially leading to suboptimal performance. To address this, we comprehensively revisit the role of source context and propose a simple and effective method for improving neural machine translation via source context enhancement, demonstrating its crucial role in both retrieving superior examples and determining more suitable interpolation coefficients. Furthermore, we reveal that the probability estimation can be further optimized by incorporating a source-aware distance calibration module. Comprehensive experiments show that our proposed approach can be seamlessly integrated with representative $k$NN-MT baselines, resulting in substantial improvements over these strong baselines across a number of settings and domains. Remarkably, these improvements can reach up to 1.6 BLEU points. | [
"Li, Xuanhong",
"Li, Peng",
"Hu, Po"
] | Revisiting Source Context in Nearest Neighbor Machine Translation | emnlp-main.503 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.504.bib | https://aclanthology.org/2023.emnlp-main.504/ | @inproceedings{oguz-etal-2023-find,
title = "Find-2-Find: Multitask Learning for Anaphora Resolution and Object Localization",
author = "Oguz, Cennet and
Denis, Pascal and
Vincent, Emmanuel and
Ostermann, Simon and
van Genabith, Josef",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.504",
doi = "10.18653/v1/2023.emnlp-main.504",
pages = "8099--8110",
abstract = "In multimodal understanding tasks, visual and linguistic ambiguities can arise. Visual ambiguity can occur when visual objects require a model to ground a referring expression in a video without strong supervision, while linguistic ambiguity can occur from changes in entities in action flows. As an example from the cooking domain, {``}oil{''} mixed with {``}salt{''} and {``}pepper{''} could later be referred to as a {``}mixture{''}. Without a clear visual-linguistic alignment, we cannot know which among several objects shown is referred to by the language expression {``}mixture{''}, and without resolved antecedents, we cannot pinpoint what the mixture is. We define this chicken-and-egg problem as Visual-linguistic Ambiguity. In this paper, we present Find2Find, a joint anaphora resolution and object localization dataset targeting the problem of \textit{visual-linguistic ambiguity}, consisting of 500 anaphora-annotated recipes with corresponding videos. We present experimental results of a novel end-to-end joint multitask learning framework for Find2Find that fuses visual and textual information and shows improvements both for anaphora resolution and object localization with one joint model in multitask learning, as compared to a strong single-task baseline.",
}
| In multimodal understanding tasks, visual and linguistic ambiguities can arise. Visual ambiguity can occur when visual objects require a model to ground a referring expression in a video without strong supervision, while linguistic ambiguity can occur from changes in entities in action flows. As an example from the cooking domain, "oil" mixed with "salt" and "pepper" could later be referred to as a "mixture". Without a clear visual-linguistic alignment, we cannot know which among several objects shown is referred to by the language expression "mixture", and without resolved antecedents, we cannot pinpoint what the mixture is. We define this chicken-and-egg problem as Visual-linguistic Ambiguity. In this paper, we present Find2Find, a joint anaphora resolution and object localization dataset targeting the problem of *visual-linguistic ambiguity*, consisting of 500 anaphora-annotated recipes with corresponding videos. We present experimental results of a novel end-to-end joint multitask learning framework for Find2Find that fuses visual and textual information and shows improvements both for anaphora resolution and object localization with one joint model in multitask learning, as compared to a strong single-task baseline. | [
"Oguz, Cennet",
"Denis, Pascal",
"Vincent, Emmanuel",
"Ostermann, Simon",
"van Genabith, Josef"
] | Find-2-Find: Multitask Learning for Anaphora Resolution and Object Localization | emnlp-main.504 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.505.bib | https://aclanthology.org/2023.emnlp-main.505/ | @inproceedings{pratapa-etal-2023-background,
title = "Background Summarization of Event Timelines",
author = "Pratapa, Adithya and
Small, Kevin and
Dreyer, Markus",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.505",
doi = "10.18653/v1/2023.emnlp-main.505",
pages = "8111--8136",
abstract = "Generating concise summaries of news events is a challenging natural language processing task. While journalists often curate timelines to highlight key sub-events, newcomers to a news event face challenges in catching up on its historical context. In this paper, we address this need by introducing the task of background news summarization, which complements each timeline update with a background summary of relevant preceding events. We construct a dataset by merging existing timeline datasets and asking human annotators to write a background summary for each timestep of each news event. We establish strong baseline performance using state-of-the-art summarization systems and propose a query-focused variant to generate background summaries. To evaluate background summary quality, we present a question-answering-based evaluation metric, Background Utility Score (BUS), which measures the percentage of questions about a current event timestep that a background summary answers. Our experiments show the effectiveness of instruction fine-tuned systems such as Flan-T5, in addition to strong zero-shot performance using GPT-3.5.",
}
| Generating concise summaries of news events is a challenging natural language processing task. While journalists often curate timelines to highlight key sub-events, newcomers to a news event face challenges in catching up on its historical context. In this paper, we address this need by introducing the task of background news summarization, which complements each timeline update with a background summary of relevant preceding events. We construct a dataset by merging existing timeline datasets and asking human annotators to write a background summary for each timestep of each news event. We establish strong baseline performance using state-of-the-art summarization systems and propose a query-focused variant to generate background summaries. To evaluate background summary quality, we present a question-answering-based evaluation metric, Background Utility Score (BUS), which measures the percentage of questions about a current event timestep that a background summary answers. Our experiments show the effectiveness of instruction fine-tuned systems such as Flan-T5, in addition to strong zero-shot performance using GPT-3.5. | [
"Pratapa, Adithya",
"Small, Kevin",
"Dreyer, Markus"
] | Background Summarization of Event Timelines | emnlp-main.505 | 2310.16197 | [
"https://github.com/amazon-science/background-summaries"
] | https://huggingface.co/papers/2310.16197 | 0 | 0 | 0 | 3 | [] | [
"adithya7/background-summaries"
] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.506.bib | https://aclanthology.org/2023.emnlp-main.506/ | @inproceedings{berdicevskis-etal-2023-superlim,
title = "Superlim: A {S}wedish Language Understanding Evaluation Benchmark",
author = {Berdicevskis, Aleksandrs and
Bouma, Gerlof and
Kurtz, Robin and
Morger, Felix and
{\"O}hman, Joey and
Adesam, Yvonne and
Borin, Lars and
Dann{\'e}lls, Dana and
Forsberg, Markus and
Isbister, Tim and
Lindahl, Anna and
Malmsten, Martin and
Rekathati, Faton and
Sahlgren, Magnus and
Volodina, Elena and
B{\"o}rjeson, Love and
Hengchen, Simon and
Tahmasebi, Nina},
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.506",
doi = "10.18653/v1/2023.emnlp-main.506",
pages = "8137--8153",
abstract = "We present Superlim, a multi-task NLP benchmark and analysis platform for evaluating Swedish language models, a counterpart to the English-language (Super)GLUE suite. We describe the dataset, the tasks, the leaderboard and report the baseline results yielded by a reference implementation. The tested models do not approach ceiling performance on any of the tasks, which suggests that Superlim is truly difficult, a desirable quality for a benchmark. We address methodological challenges, such as mitigating the Anglocentric bias when creating datasets for a less-resourced language; choosing the most appropriate measures; documenting the datasets and making the leaderboard convenient and transparent. We also highlight other potential usages of the dataset, such as, for instance, the evaluation of cross-lingual transfer learning.",
}
| We present Superlim, a multi-task NLP benchmark and analysis platform for evaluating Swedish language models, a counterpart to the English-language (Super)GLUE suite. We describe the dataset, the tasks, the leaderboard and report the baseline results yielded by a reference implementation. The tested models do not approach ceiling performance on any of the tasks, which suggests that Superlim is truly difficult, a desirable quality for a benchmark. We address methodological challenges, such as mitigating the Anglocentric bias when creating datasets for a less-resourced language; choosing the most appropriate measures; documenting the datasets and making the leaderboard convenient and transparent. We also highlight other potential usages of the dataset, such as, for instance, the evaluation of cross-lingual transfer learning. | [
"Berdicevskis, Aleks",
"rs",
"Bouma, Gerlof",
"Kurtz, Robin",
"Morger, Felix",
"{\\\"O}hman, Joey",
"Adesam, Yvonne",
"Borin, Lars",
"Dann{\\'e}lls, Dana",
"Forsberg, Markus",
"Isbister, Tim",
"Lindahl, Anna",
"Malmsten, Martin",
"Rekathati, Faton",
"Sahlgren, Magnus",
"Volodina, Elena",
"B{\\\"o}rjeson, Love",
"Hengchen, Simon",
"Tahmasebi, Nina"
] | Superlim: A Swedish Language Understanding Evaluation Benchmark | emnlp-main.506 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.507.bib | https://aclanthology.org/2023.emnlp-main.507/ | @inproceedings{hao-etal-2023-reasoning,
title = "Reasoning with Language Model is Planning with World Model",
author = "Hao, Shibo and
Gu, Yi and
Ma, Haodi and
Hong, Joshua and
Wang, Zhen and
Wang, Daisy and
Hu, Zhiting",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.507",
doi = "10.18653/v1/2023.emnlp-main.507",
pages = "8154--8173",
abstract = "Large language models (LLMs) have shown remarkable reasoning capabilities, particularly with Chain-of-Thought-style prompts. However, LLMs can still struggle with problems that are easy for humans, such as generating action plans for executing tasks or performing complex math or logical reasoning. This is due to LLMs{'} absence of an internal world model for predicting world states (e.g., environment status, variable values) and simulating long-term action outcomes of actions. This prevents LLMs from performing deliberate planning akin to human brains, which involves exploring alternative reasoning paths, anticipating future states and rewards, and iteratively refining existing reasoning steps. To overcome the limitations, we propose a new LLM reasoning framework, Reasoning via Planning (RAP). RAP repurposes the LLM as both a world model and a reasoning agent, and incorporates a principled planning algorithm (based on Monte Carlo Tree Search) for strategic exploration in the vast reasoning space. During reasoning, the LLM (as agent) incrementally builds a reasoning tree under the guidance of the LLM (as world model) and task-specific rewards, properly balancing exploration v.s. exploitation to achieve a high-reward reasoning path efficiently. We apply RAP to a variety of challenging reasoning problems, such as plan generation, math reasoning, and logical inference. Empirical results demonstrate the superiority of RAP over various strong baselines, including CoT and least-to-most prompting with self-consistency, e.g., RAP on LLaMA-33B surpasses CoT on GPT-4 with 33{\%} relative improvement in plan generation.",
}
| Large language models (LLMs) have shown remarkable reasoning capabilities, particularly with Chain-of-Thought-style prompts. However, LLMs can still struggle with problems that are easy for humans, such as generating action plans for executing tasks or performing complex math or logical reasoning. This is due to LLMs' absence of an internal world model for predicting world states (e.g., environment status, variable values) and simulating the long-term outcomes of actions. This prevents LLMs from performing deliberate planning akin to human brains, which involves exploring alternative reasoning paths, anticipating future states and rewards, and iteratively refining existing reasoning steps. To overcome the limitations, we propose a new LLM reasoning framework, Reasoning via Planning (RAP). RAP repurposes the LLM as both a world model and a reasoning agent, and incorporates a principled planning algorithm (based on Monte Carlo Tree Search) for strategic exploration in the vast reasoning space. During reasoning, the LLM (as agent) incrementally builds a reasoning tree under the guidance of the LLM (as world model) and task-specific rewards, properly balancing exploration vs. exploitation to achieve a high-reward reasoning path efficiently. We apply RAP to a variety of challenging reasoning problems, such as plan generation, math reasoning, and logical inference. Empirical results demonstrate the superiority of RAP over various strong baselines, including CoT and least-to-most prompting with self-consistency, e.g., RAP on LLaMA-33B surpasses CoT on GPT-4 with 33% relative improvement in plan generation. | [
"Hao, Shibo",
"Gu, Yi",
"Ma, Haodi",
"Hong, Joshua",
"Wang, Zhen",
"Wang, Daisy",
"Hu, Zhiting"
] | Reasoning with Language Model is Planning with World Model | emnlp-main.507 | 2305.14992 | [
"https://github.com/ber666/llm-reasoners"
] | https://huggingface.co/papers/2312.05230 | 0 | 0 | 0 | 2 | [] | [] | [] | 1 | Oral |
https://aclanthology.org/2023.emnlp-main.508.bib | https://aclanthology.org/2023.emnlp-main.508/ | @inproceedings{li-etal-2023-llm,
title = "{LLM}-enhanced Self-training for Cross-domain Constituency Parsing",
author = "Li, Jianling and
Zhang, Meishan and
Guo, Peiming and
Zhang, Min and
Zhang, Yue",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.508",
doi = "10.18653/v1/2023.emnlp-main.508",
pages = "8174--8185",
abstract = "Self-training has proven to be an effective approach for cross-domain tasks, and in this study, we explore its application to cross-domain constituency parsing. Traditional self-training methods rely on limited and potentially low-quality raw corpora. To overcome this limitation, we propose enhancing self-training with the large language model (LLM) to generate domain-specific raw corpora iteratively. For the constituency parsing, we introduce grammar rules that guide the LLM in generating raw corpora and establish criteria for selecting pseudo instances. Our experimental results demonstrate that self-training for constituency parsing, equipped with an LLM, outperforms traditional methods regardless of the LLM{'}s performance. Moreover, the combination of grammar rules and confidence criteria for pseudo-data selection yields the highest performance in the cross-domain constituency parsing.",
}
| Self-training has proven to be an effective approach for cross-domain tasks, and in this study, we explore its application to cross-domain constituency parsing. Traditional self-training methods rely on limited and potentially low-quality raw corpora. To overcome this limitation, we propose enhancing self-training with the large language model (LLM) to generate domain-specific raw corpora iteratively. For constituency parsing, we introduce grammar rules that guide the LLM in generating raw corpora and establish criteria for selecting pseudo instances. Our experimental results demonstrate that self-training for constituency parsing, equipped with an LLM, outperforms traditional methods regardless of the LLM's performance. Moreover, the combination of grammar rules and confidence criteria for pseudo-data selection yields the highest performance in cross-domain constituency parsing. | [
"Li, Jianling",
"Zhang, Meishan",
"Guo, Peiming",
"Zhang, Min",
"Zhang, Yue"
] | LLM-enhanced Self-training for Cross-domain Constituency Parsing | emnlp-main.508 | 2311.02660 | [
"https://github.com/jianlingl/llm_st_constparsing"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
|
https://aclanthology.org/2023.emnlp-main.509.bib | https://aclanthology.org/2023.emnlp-main.509/ | @inproceedings{zhang-etal-2023-continual-named,
title = "Continual Named Entity Recognition without Catastrophic Forgetting",
author = "Zhang, Duzhen and
Cong, Wei and
Dong, Jiahua and
Yu, Yahan and
Chen, Xiuyi and
Zhang, Yonggang and
Fang, Zhen",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.509",
doi = "10.18653/v1/2023.emnlp-main.509",
pages = "8186--8197",
abstract = "Continual Named Entity Recognition (CNER) is a burgeoning area, which involves updating an existing model by incorporating new entity types sequentially. Nevertheless, continual learning approaches are often severely afflicted by catastrophic forgetting. This issue is intensified in CNER due to the consolidation of old entity types from previous steps into the non-entity type at each step, leading to what is known as the semantic shift problem of the non-entity type. In this paper, we introduce a pooled feature distillation loss that skillfully navigates the trade-off between retaining knowledge of old entity types and acquiring new ones, thereby more effectively mitigating the problem of catastrophic forgetting. Additionally, we develop a confidence-based pseudo-labeling for the non-entity type, i.e., predicting entity types using the old model to handle the semantic shift of the non-entity type. Following the pseudo-labeling process, we suggest an adaptive re-weighting type-balanced learning strategy to handle the issue of biased type distribution. We carried out comprehensive experiments on ten CNER settings using three different datasets. The results illustrate that our method significantly outperforms prior state-of-the-art approaches, registering an average improvement of 6.3{\%} and 8.0{\%} in Micro and Macro F1 scores, respectively.",
}
| Continual Named Entity Recognition (CNER) is a burgeoning area, which involves updating an existing model by incorporating new entity types sequentially. Nevertheless, continual learning approaches are often severely afflicted by catastrophic forgetting. This issue is intensified in CNER due to the consolidation of old entity types from previous steps into the non-entity type at each step, leading to what is known as the semantic shift problem of the non-entity type. In this paper, we introduce a pooled feature distillation loss that skillfully navigates the trade-off between retaining knowledge of old entity types and acquiring new ones, thereby more effectively mitigating the problem of catastrophic forgetting. Additionally, we develop a confidence-based pseudo-labeling for the non-entity type, i.e., predicting entity types using the old model to handle the semantic shift of the non-entity type. Following the pseudo-labeling process, we suggest an adaptive re-weighting type-balanced learning strategy to handle the issue of biased type distribution. We carried out comprehensive experiments on ten CNER settings using three different datasets. The results illustrate that our method significantly outperforms prior state-of-the-art approaches, registering an average improvement of 6.3% and 8.0% in Micro and Macro F1 scores, respectively. | [
"Zhang, Duzhen",
"Cong, Wei",
"Dong, Jiahua",
"Yu, Yahan",
"Chen, Xiuyi",
"Zhang, Yonggang",
"Fang, Zhen"
] | Continual Named Entity Recognition without Catastrophic Forgetting | emnlp-main.509 | 2310.14541 | [
"https://github.com/bladedancer957/cpfd"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.510.bib | https://aclanthology.org/2023.emnlp-main.510/ | @inproceedings{mehta-etal-2023-dsi,
title = "{DSI}++: Updating Transformer Memory with New Documents",
author = "Mehta, Sanket Vaibhav and
Gupta, Jai and
Tay, Yi and
Dehghani, Mostafa and
Tran, Vinh Q. and
Rao, Jinfeng and
Najork, Marc and
Strubell, Emma and
Metzler, Donald",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.510",
doi = "10.18653/v1/2023.emnlp-main.510",
pages = "8198--8213",
abstract = "Differentiable Search Indices (DSIs) encode a corpus of documents in the parameters of a model and use the same model to map queries directly to relevant document identifiers. Despite the solid performance of DSI models, successfully deploying them in scenarios where document corpora change with time is an open problem. In this work, we introduce DSI++, a continual learning challenge for DSI with the goal of continuously indexing new documents while being able to answer queries related to both previously and newly indexed documents. Across different model scales and document identifier representations, we show that continual indexing of new documents leads to considerable forgetting of previously indexed documents. We also hypothesize and verify that the model experiences forgetting events during training, leading to unstable learning. To mitigate these issues, we investigate two approaches. The first focuses on modifying the training dynamics. Flatter minima implicitly alleviates forgetting, so we explicitly optimize for flatter loss basins and show that the model stably memorizes more documents (+12{\%}). Next, we introduce a parametric memory to generate pseudo-queries for documents and supplement them during incremental indexing to prevent forgetting for the retrieval task. Extensive experiments on a novel continual indexing benchmark based on Natural Questions demonstrate that our proposed solution mitigates the forgetting in DSI++ by a significant margin and improves the average Hits@10 by +21.1{\%} over competitive baselines.",
}
| Differentiable Search Indices (DSIs) encode a corpus of documents in the parameters of a model and use the same model to map queries directly to relevant document identifiers. Despite the solid performance of DSI models, successfully deploying them in scenarios where document corpora change with time is an open problem. In this work, we introduce DSI++, a continual learning challenge for DSI with the goal of continuously indexing new documents while being able to answer queries related to both previously and newly indexed documents. Across different model scales and document identifier representations, we show that continual indexing of new documents leads to considerable forgetting of previously indexed documents. We also hypothesize and verify that the model experiences forgetting events during training, leading to unstable learning. To mitigate these issues, we investigate two approaches. The first focuses on modifying the training dynamics. Flatter minima implicitly alleviate forgetting, so we explicitly optimize for flatter loss basins and show that the model stably memorizes more documents (+12%). Next, we introduce a parametric memory to generate pseudo-queries for documents and supplement them during incremental indexing to prevent forgetting for the retrieval task. Extensive experiments on a novel continual indexing benchmark based on Natural Questions demonstrate that our proposed solution mitigates the forgetting in DSI++ by a significant margin and improves the average Hits@10 by +21.1% over competitive baselines. | [
"Mehta, Sanket Vaibhav",
"Gupta, Jai",
"Tay, Yi",
"Dehghani, Mostafa",
"Tran, Vinh Q.",
"Rao, Jinfeng",
"Najork, Marc",
"Strubell, Emma",
"Metzler, Donald"
] | DSI++: Updating Transformer Memory with New Documents | emnlp-main.510 | 2212.09744 | [
""
] | https://huggingface.co/papers/2212.09744 | 0 | 1 | 0 | 9 | [] | [] | [] | 1 | Oral |
https://aclanthology.org/2023.emnlp-main.511.bib | https://aclanthology.org/2023.emnlp-main.511/ | @inproceedings{gupta-etal-2023-editing,
title = "Editing Common Sense in Transformers",
author = "Gupta, Anshita and
Mondal, Debanjan and
Sheshadri, Akshay and
Zhao, Wenlong and
Li, Xiang and
Wiegreffe, Sarah and
Tandon, Niket",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.511",
doi = "10.18653/v1/2023.emnlp-main.511",
pages = "8214--8232",
abstract = "Editing model parameters directly in Transformers makes updating open-source transformer-based models possible without re-training. However, these editing methods have only been evaluated on statements about encyclopedic knowledge with a single correct answer. Commonsense knowledge with multiple correct answers, e.g., an apple can be green or red but not transparent, has not been studied but is as essential for enhancing transformers{'} reliability and usefulness. In this paper, we investigate whether commonsense judgments are causally associated with localized, editable parameters in Transformers, and we provide an affirmative answer. We find that directly applying the MEMIT editing algorithm results in sub-par performance and improve it for the commonsense domain by varying edit tokens and improving the layer selection strategy, i.e., $MEMIT_{CSK}$. GPT-2 Large and XL models edited using $MEMIT_{CSK}$ outperform best-fine-tuned baselines by 10.97{\%} and 10.73{\%} F1 scores on PEP3k and 20Q datasets. In addition, we propose a novel evaluation dataset, $PROBE\ SET$, that contains unaffected and affected neighborhoods, affected paraphrases, and affected reasoning challenges. $MEMIT_{CSK}$ performs well across the metrics while fine-tuning baselines show significant trade-offs between unaffected and affected metrics. These results suggest a compelling future direction for incorporating feedback about common sense into Transformers through direct model editing.",
}
| Editing model parameters directly in Transformers makes updating open-source transformer-based models possible without re-training. However, these editing methods have only been evaluated on statements about encyclopedic knowledge with a single correct answer. Commonsense knowledge with multiple correct answers, e.g., an apple can be green or red but not transparent, has not been studied but is as essential for enhancing transformers' reliability and usefulness. In this paper, we investigate whether commonsense judgments are causally associated with localized, editable parameters in Transformers, and we provide an affirmative answer. We find that directly applying the MEMIT editing algorithm results in sub-par performance and improve it for the commonsense domain by varying edit tokens and improving the layer selection strategy, i.e., $MEMIT_{CSK}$. GPT-2 Large and XL models edited using $MEMIT_{CSK}$ outperform best-fine-tuned baselines by 10.97% and 10.73% F1 scores on PEP3k and 20Q datasets. In addition, we propose a novel evaluation dataset, $PROBE\ SET$, that contains unaffected and affected neighborhoods, affected paraphrases, and affected reasoning challenges. $MEMIT_{CSK}$ performs well across the metrics while fine-tuning baselines show significant trade-offs between unaffected and affected metrics. These results suggest a compelling future direction for incorporating feedback about common sense into Transformers through direct model editing. | [
"Gupta, Anshita",
"Mondal, Debanjan",
"Sheshadri, Akshay",
"Zhao, Wenlong",
"Li, Xiang",
"Wiegreffe, Sarah",
"T",
"on, Niket"
] | Editing Common Sense in Transformers | emnlp-main.511 | 2305.14956 | [
"https://github.com/anshitag/memit_csk"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.512.bib | https://aclanthology.org/2023.emnlp-main.512/ | @inproceedings{zhong-etal-2023-air,
title = "Air-Decoding: Attribute Distribution Reconstruction for Decoding-Time Controllable Text Generation",
author = "Zhong, Tianqi and
Wang, Quan and
Han, Jingxuan and
Zhang, Yongdong and
Mao, Zhendong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.512",
doi = "10.18653/v1/2023.emnlp-main.512",
pages = "8233--8248",
abstract = "Controllable text generation (CTG) aims to generate text with desired attributes, and decoding-time-based methods have shown promising performance on this task. However, in this paper, we identify the phenomenon of Attribute Collapse for the first time. It causes the fluency of generated text to rapidly decrease when the control strength exceeds a critical value, rendering the text completely unusable. This limitation hinders the effectiveness of decoding methods in achieving high levels of controllability. To address this problem, we propose a novel lightweight decoding framework named Air-Decoding. Its main idea is reconstructing the attribute distributions to balance the weights between attribute words and non-attribute words to generate more fluent text. Specifically, we train prefixes by prefix-tuning to obtain attribute distributions. Then we design a novel attribute distribution reconstruction method to balance the obtained distributions and use the reconstructed distributions to guide language models for generation, effectively avoiding the issue of Attribute Collapse. Experiments on multiple CTG tasks prove that our method achieves a new state-of-the-art control performance.",
}
| Controllable text generation (CTG) aims to generate text with desired attributes, and decoding-time-based methods have shown promising performance on this task. However, in this paper, we identify the phenomenon of Attribute Collapse for the first time. It causes the fluency of generated text to rapidly decrease when the control strength exceeds a critical value, rendering the text completely unusable. This limitation hinders the effectiveness of decoding methods in achieving high levels of controllability. To address this problem, we propose a novel lightweight decoding framework named Air-Decoding. Its main idea is reconstructing the attribute distributions to balance the weights between attribute words and non-attribute words to generate more fluent text. Specifically, we train prefixes by prefix-tuning to obtain attribute distributions. Then we design a novel attribute distribution reconstruction method to balance the obtained distributions and use the reconstructed distributions to guide language models for generation, effectively avoiding the issue of Attribute Collapse. Experiments on multiple CTG tasks prove that our method achieves a new state-of-the-art control performance. | [
"Zhong, Tianqi",
"Wang, Quan",
"Han, Jingxuan",
"Zhang, Yongdong",
"Mao, Zhendong"
] | Air-Decoding: Attribute Distribution Reconstruction for Decoding-Time Controllable Text Generation | emnlp-main.512 | 2310.14892 | [
"https://github.com/r1047/air-decoding"
] | https://huggingface.co/papers/2310.14892 | 0 | 1 | 0 | 5 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.513.bib | https://aclanthology.org/2023.emnlp-main.513/ | @inproceedings{mohebbi-etal-2023-homophone,
title = "Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers",
author = "Mohebbi, Hosein and
Chrupa{\l}a, Grzegorz and
Zuidema, Willem and
Alishahi, Afra",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.513",
doi = "10.18653/v1/2023.emnlp-main.513",
pages = "8249--8260",
abstract = "Transformers have become a key architecture in speech processing, but our understanding of how they build up representations of acoustic and linguistic structure is limited. In this study, we address this gap by investigating how measures of {`}context-mixing{'} developed for text models can be adapted and applied to models of spoken language. We identify a linguistic phenomenon that is ideal for such a case study: homophony in French (e.g. livre vs livres), where a speech recognition model has to attend to syntactic cues such as determiners and pronouns in order to disambiguate spoken words with identical pronunciations and transcribe them while respecting grammatical agreement. We perform a series of controlled experiments and probing analyses on Transformer-based speech models. Our findings reveal that representations in encoder-only models effectively incorporate these cues to identify the correct transcription, whereas encoders in encoder-decoder models mainly relegate the task of capturing contextual dependencies to decoder modules.",
}
| Transformers have become a key architecture in speech processing, but our understanding of how they build up representations of acoustic and linguistic structure is limited. In this study, we address this gap by investigating how measures of 'context-mixing' developed for text models can be adapted and applied to models of spoken language. We identify a linguistic phenomenon that is ideal for such a case study: homophony in French (e.g. livre vs livres), where a speech recognition model has to attend to syntactic cues such as determiners and pronouns in order to disambiguate spoken words with identical pronunciations and transcribe them while respecting grammatical agreement. We perform a series of controlled experiments and probing analyses on Transformer-based speech models. Our findings reveal that representations in encoder-only models effectively incorporate these cues to identify the correct transcription, whereas encoders in encoder-decoder models mainly relegate the task of capturing contextual dependencies to decoder modules. | [
"Mohebbi, Hosein",
"Chrupa{\\l}a, Grzegorz",
"Zuidema, Willem",
"Alishahi, Afra"
] | Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers | emnlp-main.513 | 2310.09925 | [
"https://github.com/hmohebbi/ContextMixingASR"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
|
https://aclanthology.org/2023.emnlp-main.514.bib | https://aclanthology.org/2023.emnlp-main.514/ | @inproceedings{shen-etal-2023-retrieval,
title = "Retrieval-Generation Alignment for End-to-End Task-Oriented Dialogue System",
author = "Shen, Weizhou and
Gao, Yingqi and
Huang, Canbin and
Wan, Fanqi and
Quan, Xiaojun and
Bi, Wei",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.514",
doi = "10.18653/v1/2023.emnlp-main.514",
pages = "8261--8275",
abstract = "Developing an efficient retriever to retrieve knowledge from a large-scale knowledge base (KB) is critical for task-oriented dialogue systems to effectively handle localized and specialized tasks. However, widely used generative models such as T5 and ChatGPT often struggle to differentiate subtle differences among the retrieved KB records when generating responses, resulting in suboptimal quality of generated responses. In this paper, we propose the application of maximal marginal likelihood to train a perceptive retriever by utilizing signals from response generation for supervision. In addition, our approach goes beyond considering solely retrieved entities and incorporates various meta knowledge to guide the generator, thus improving the utilization of knowledge. We evaluate our approach on three task-oriented dialogue datasets using T5 and ChatGPT as the backbone models. The results demonstrate that when combined with meta knowledge, the response generator can effectively leverage high-quality knowledge records from the retriever and enhance the quality of generated responses. The code of this work is available at https://github.com/shenwzh3/MK-TOD.",
}
| Developing an efficient retriever to retrieve knowledge from a large-scale knowledge base (KB) is critical for task-oriented dialogue systems to effectively handle localized and specialized tasks. However, widely used generative models such as T5 and ChatGPT often struggle to differentiate subtle differences among the retrieved KB records when generating responses, resulting in suboptimal quality of generated responses. In this paper, we propose the application of maximal marginal likelihood to train a perceptive retriever by utilizing signals from response generation for supervision. In addition, our approach goes beyond considering solely retrieved entities and incorporates various meta knowledge to guide the generator, thus improving the utilization of knowledge. We evaluate our approach on three task-oriented dialogue datasets using T5 and ChatGPT as the backbone models. The results demonstrate that when combined with meta knowledge, the response generator can effectively leverage high-quality knowledge records from the retriever and enhance the quality of generated responses. The code of this work is available at https://github.com/shenwzh3/MK-TOD. | [
"Shen, Weizhou",
"Gao, Yingqi",
"Huang, Canbin",
"Wan, Fanqi",
"Quan, Xiaojun",
"Bi, Wei"
] | Retrieval-Generation Alignment for End-to-End Task-Oriented Dialogue System | emnlp-main.514 | 2310.08877 | [
"https://github.com/shenwzh3/mk-tod"
] | https://huggingface.co/papers/2310.08877 | 1 | 0 | 0 | 6 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.515.bib | https://aclanthology.org/2023.emnlp-main.515/ | @inproceedings{yu-etal-2023-ifqa,
title = "{I}f{QA}: A Dataset for Open-domain Question Answering under Counterfactual Presuppositions",
author = "Yu, Wenhao and
Jiang, Meng and
Clark, Peter and
Sabharwal, Ashish",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.515",
doi = "10.18653/v1/2023.emnlp-main.515",
pages = "8276--8288",
abstract = "Although counterfactual reasoning is a fundamental aspect of intelligence, the lack of large-scale counterfactual open-domain question-answering (QA) benchmarks makes it difficult to evaluate and improve models on this ability. To address this void, we introduce the first such dataset, named IfQA, where each question is based on a counterfactual presupposition via an {``}if{''} clause. Such questions require models to go beyond retrieving direct factual knowledge from the Web: they must identify the right information to retrieve and reason about an imagined situation that may even go against the facts built into their parameters. The IfQA dataset contains 3,800 questions that were annotated by crowdworkers on relevant Wikipedia passages. Empirical analysis reveals that the IfQA dataset is highly challenging for existing open-domain QA methods, including supervised retrieve-then-read pipeline methods (F1 score 44.5), as well as recent few-shot approaches such as chain-of-thought prompting with ChatGPT (F1 score 57.2). We hope the unique challenges posed by IfQA will push open-domain QA research on both retrieval and reasoning fronts, while also helping endow counterfactual reasoning abilities to today{'}s language understanding models.",
}
| Although counterfactual reasoning is a fundamental aspect of intelligence, the lack of large-scale counterfactual open-domain question-answering (QA) benchmarks makes it difficult to evaluate and improve models on this ability. To address this void, we introduce the first such dataset, named IfQA, where each question is based on a counterfactual presupposition via an "if" clause. Such questions require models to go beyond retrieving direct factual knowledge from the Web: they must identify the right information to retrieve and reason about an imagined situation that may even go against the facts built into their parameters. The IfQA dataset contains 3,800 questions that were annotated by crowdworkers on relevant Wikipedia passages. Empirical analysis reveals that the IfQA dataset is highly challenging for existing open-domain QA methods, including supervised retrieve-then-read pipeline methods (F1 score 44.5), as well as recent few-shot approaches such as chain-of-thought prompting with ChatGPT (F1 score 57.2). We hope the unique challenges posed by IfQA will push open-domain QA research on both retrieval and reasoning fronts, while also helping endow counterfactual reasoning abilities to today's language understanding models. | [
"Yu, Wenhao",
"Jiang, Meng",
"Clark, Peter",
"Sabharwal, Ashish"
] | IfQA: A Dataset for Open-domain Question Answering under Counterfactual Presuppositions | emnlp-main.515 | 2305.14010 | [
""
] | https://huggingface.co/papers/2305.14010 | 1 | 0 | 0 | 4 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.516.bib | https://aclanthology.org/2023.emnlp-main.516/ | @inproceedings{zhang-etal-2023-large,
title = "How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances",
author = "Zhang, Zihan and
Fang, Meng and
Chen, Ling and
Namazi-Rad, Mohammad-Reza and
Wang, Jun",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.516",
doi = "10.18653/v1/2023.emnlp-main.516",
pages = "8289--8311",
abstract = "Although large language models (LLMs) are impressive in solving various tasks, they can quickly be outdated after deployment. Maintaining their up-to-date status is a pressing concern in the current era. This paper provides a comprehensive review of recent advances in aligning deployed LLMs with the ever-changing world knowledge. We categorize research works systemically and provide in-depth comparisons and discussions. We also discuss existing challenges and highlight future directions to facilitate research in this field.",
}
| Although large language models (LLMs) are impressive in solving various tasks, they can quickly be outdated after deployment. Maintaining their up-to-date status is a pressing concern in the current era. This paper provides a comprehensive review of recent advances in aligning deployed LLMs with the ever-changing world knowledge. We categorize research works systematically and provide in-depth comparisons and discussions. We also discuss existing challenges and highlight future directions to facilitate research in this field. | [
"Zhang, Zihan",
"Fang, Meng",
"Chen, Ling",
"Namazi-Rad, Mohammad-Reza",
"Wang, Jun"
] | How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances | emnlp-main.516 | 2310.07343 | [
"https://github.com/hyintell/awesome-refreshing-llms"
] | https://huggingface.co/papers/2310.07343 | 0 | 0 | 0 | 5 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.517.bib | https://aclanthology.org/2023.emnlp-main.517/ | @inproceedings{han-etal-2023-prewome,
title = "{P}re{W}o{M}e: Exploiting Presuppositions as Working Memory for Long Form Question Answering",
author = "Han, Wookje and
Park, Jinsol and
Lee, Kyungjae",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.517",
doi = "10.18653/v1/2023.emnlp-main.517",
pages = "8312--8322",
abstract = "Information-seeking questions in long-form question answering (LFQA) often prove misleading due to ambiguity or false presupposition in the question. While many existing approaches handle misleading questions, they are tailored to limited questions, which are insufficient in a real-world setting with unpredictable input characteristics. In this work, we propose PreWoMe, a unified approach capable of handling any type of information-seeking question. The key idea of PreWoMe involves extracting presuppositions in the question and exploiting them as working memory to generate feedback and action about the question. Our experiment shows that PreWoMe is effective not only in tackling misleading questions but also in handling normal ones, thereby demonstrating the effectiveness of leveraging presuppositions, feedback, and action for real-world QA settings.",
}
| Information-seeking questions in long-form question answering (LFQA) often prove misleading due to ambiguity or false presupposition in the question. While many existing approaches handle misleading questions, they are tailored to limited questions, which are insufficient in a real-world setting with unpredictable input characteristics. In this work, we propose PreWoMe, a unified approach capable of handling any type of information-seeking question. The key idea of PreWoMe involves extracting presuppositions in the question and exploiting them as working memory to generate feedback and action about the question. Our experiment shows that PreWoMe is effective not only in tackling misleading questions but also in handling normal ones, thereby demonstrating the effectiveness of leveraging presuppositions, feedback, and action for real-world QA settings. | [
"Han, Wookje",
"Park, Jinsol",
"Lee, Kyungjae"
] | PreWoMe: Exploiting Presuppositions as Working Memory for Long Form Question Answering | emnlp-main.517 | 2310.16147 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.518.bib | https://aclanthology.org/2023.emnlp-main.518/ | @inproceedings{dankers-etal-2023-memorisation,
title = "Memorisation Cartography: Mapping out the Memorisation-Generalisation Continuum in Neural Machine Translation",
author = "Dankers, Verna and
Titov, Ivan and
Hupkes, Dieuwke",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.518",
doi = "10.18653/v1/2023.emnlp-main.518",
pages = "8323--8343",
abstract = "When training a neural network, it will quickly memorise some source-target mappings from your dataset but never learn some others. Yet, memorisation is not easily expressed as a binary feature that is good or bad: individual datapoints lie on a memorisation-generalisation continuum. What determines a datapoint{'}s position on that spectrum, and how does that spectrum influence neural models{'} performance? We address these two questions for neural machine translation (NMT) models. We use the counterfactual memorisation metric to (1) build a resource that places 5M NMT datapoints on a memorisation-generalisation map, (2) illustrate how the datapoints{'} surface-level characteristics and a models{'} per-datum training signals are predictive of memorisation in NMT, (3) and describe the influence that subsets of that map have on NMT systems{'} performance.",
}
| When training a neural network, it will quickly memorise some source-target mappings from your dataset but never learn some others. Yet, memorisation is not easily expressed as a binary feature that is good or bad: individual datapoints lie on a memorisation-generalisation continuum. What determines a datapoint{'}s position on that spectrum, and how does that spectrum influence neural models{'} performance? We address these two questions for neural machine translation (NMT) models. We use the counterfactual memorisation metric to (1) build a resource that places 5M NMT datapoints on a memorisation-generalisation map, (2) illustrate how the datapoints{'} surface-level characteristics and a model{'}s per-datum training signals are predictive of memorisation in NMT, and (3) describe the influence that subsets of that map have on NMT systems{'} performance. | [
"Dankers, Verna",
"Titov, Ivan",
"Hupkes, Dieuwke"
] | Memorisation Cartography: Mapping out the Memorisation-Generalisation Continuum in Neural Machine Translation | emnlp-main.518 | 2311.05379 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.519.bib | https://aclanthology.org/2023.emnlp-main.519/ | @inproceedings{hu-etal-2023-decipherpref,
title = "{D}ecipher{P}ref: Analyzing Influential Factors in Human Preference Judgments via {GPT}-4",
author = "Hu, Yebowen and
Song, Kaiqiang and
Cho, Sangwoo and
Wang, Xiaoyang and
Foroosh, Hassan and
Liu, Fei",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.519",
doi = "10.18653/v1/2023.emnlp-main.519",
pages = "8344--8357",
abstract = "Human preference judgments are pivotal in guiding large language models (LLMs) to produce outputs that align with human values. Human evaluations are also used in summarization tasks to compare outputs from various systems, complementing existing automatic metrics. Despite their significance, however, there has been limited research probing these pairwise or $k$-wise comparisons. The collective impact and relative importance of factors such as output length, informativeness, fluency, and factual consistency are still not well understood. It is also unclear if there are other hidden factors influencing human judgments. In this paper, we conduct an in-depth examination of a collection of pairwise human judgments released by OpenAI. Utilizing the Bradley-Terry-Luce (BTL) model, we reveal the inherent preferences embedded in these human judgments. We find that the most favored factors vary across tasks and genres, whereas the least favored factors tend to be consistent, e.g., outputs are too brief, contain excessive off-focus content or hallucinated facts. Our findings have implications on the construction of balanced datasets in human preference evaluations, which is a crucial step in shaping the behaviors of future LLMs.",
}
| Human preference judgments are pivotal in guiding large language models (LLMs) to produce outputs that align with human values. Human evaluations are also used in summarization tasks to compare outputs from various systems, complementing existing automatic metrics. Despite their significance, however, there has been limited research probing these pairwise or $k$-wise comparisons. The collective impact and relative importance of factors such as output length, informativeness, fluency, and factual consistency are still not well understood. It is also unclear if there are other hidden factors influencing human judgments. In this paper, we conduct an in-depth examination of a collection of pairwise human judgments released by OpenAI. Utilizing the Bradley-Terry-Luce (BTL) model, we reveal the inherent preferences embedded in these human judgments. We find that the most favored factors vary across tasks and genres, whereas the least favored factors tend to be consistent, e.g., outputs are too brief, contain excessive off-focus content or hallucinated facts. Our findings have implications for the construction of balanced datasets in human preference evaluations, which is a crucial step in shaping the behaviors of future LLMs. | [
"Hu, Yebowen",
"Song, Kaiqiang",
"Cho, Sangwoo",
"Wang, Xiaoyang",
"Foroosh, Hassan",
"Liu, Fei"
] | DecipherPref: Analyzing Influential Factors in Human Preference Judgments via GPT-4 | emnlp-main.519 | 2305.14702 | [
""
] | https://huggingface.co/papers/2305.14702 | 3 | 1 | 0 | 6 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.520.bib | https://aclanthology.org/2023.emnlp-main.520/ | @inproceedings{qiu-etal-2023-gender,
title = "Gender Biases in Automatic Evaluation Metrics for Image Captioning",
author = "Qiu, Haoyi and
Dou, Zi-Yi and
Wang, Tianlu and
Celikyilmaz, Asli and
Peng, Nanyun",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.520",
doi = "10.18653/v1/2023.emnlp-main.520",
pages = "8358--8375",
abstract = "Model-based evaluation metrics (e.g., CLIPScore and GPTScore) have demonstrated decent correlations with human judgments in various language generation tasks. However, their impact on fairness remains largely unexplored. It is widely recognized that pretrained models can inadvertently encode societal biases, thus employing these models for evaluation purposes may inadvertently perpetuate and amplify biases. For example, an evaluation metric may favor the caption {``}a woman is calculating an account book{''} over {``}a man is calculating an account book,{''} even if the image only shows male accountants. In this paper, we conduct a systematic study of gender biases in model-based automatic evaluation metrics for image captioning tasks. We start by curating a dataset comprising profession, activity, and object concepts associated with stereotypical gender associations. Then, we demonstrate the negative consequences of using these biased metrics, including the inability to differentiate between biased and unbiased generations, as well as the propagation of biases to generation models through reinforcement learning. Finally, we present a simple and effective way to mitigate the metric bias without hurting the correlations with human judgments. Our dataset and framework lay the foundation for understanding the potential harm of model-based evaluation metrics, and facilitate future works to develop more inclusive evaluation metrics.",
}
| Model-based evaluation metrics (e.g., CLIPScore and GPTScore) have demonstrated decent correlations with human judgments in various language generation tasks. However, their impact on fairness remains largely unexplored. It is widely recognized that pretrained models can inadvertently encode societal biases; thus, employing these models for evaluation purposes may inadvertently perpetuate and amplify biases. For example, an evaluation metric may favor the caption {``}a woman is calculating an account book{''} over {``}a man is calculating an account book,{''} even if the image only shows male accountants. In this paper, we conduct a systematic study of gender biases in model-based automatic evaluation metrics for image captioning tasks. We start by curating a dataset comprising profession, activity, and object concepts associated with stereotypical gender associations. Then, we demonstrate the negative consequences of using these biased metrics, including the inability to differentiate between biased and unbiased generations, as well as the propagation of biases to generation models through reinforcement learning. Finally, we present a simple and effective way to mitigate the metric bias without hurting the correlations with human judgments. Our dataset and framework lay the foundation for understanding the potential harm of model-based evaluation metrics, and facilitate future work to develop more inclusive evaluation metrics. | [
"Qiu, Haoyi",
"Dou, Zi-Yi",
"Wang, Tianlu",
"Celikyilmaz, Asli",
"Peng, Nanyun"
] | Gender Biases in Automatic Evaluation Metrics for Image Captioning | emnlp-main.520 | 2305.14711 | [
"https://github.com/pluslabnlp/clipscore-bias"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.521.bib | https://aclanthology.org/2023.emnlp-main.521/ | @inproceedings{aly-etal-2023-qa,
title = "{QA}-{N}at{V}er: Question Answering for Natural Logic-based Fact Verification",
author = "Aly, Rami and
Strong, Marek and
Vlachos, Andreas",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.521",
doi = "10.18653/v1/2023.emnlp-main.521",
pages = "8376--8391",
abstract = "Fact verification systems assess a claim{'}s veracity based on evidence. An important consideration in designing them is faithfulness, i.e. generating explanations that accurately reflect the reasoning of the model. Recent works have focused on natural logic, which operates directly on natural language by capturing the semantic relation of spans between an aligned claim with its evidence via set-theoretic operators. However, these approaches rely on substantial resources for training, which are only available for high-resource languages. To this end, we propose to use question answering to predict natural logic operators, taking advantage of the generalization capabilities of instruction-tuned language models. Thus, we obviate the need for annotated training data while still relying on a deterministic inference system. In a few-shot setting on FEVER, our approach outperforms the best baseline by 4.3 accuracy points, including a state-of-the-art pre-trained seq2seq natural logic system, as well as a state-of-the-art prompt-based classifier. Our system demonstrates its robustness and portability, achieving competitive performance on a counterfactual dataset and surpassing all approaches without further annotation on a Danish verification dataset. A human evaluation indicates that our approach produces more plausible proofs with fewer erroneous natural logic operators than previous natural logic-based systems.",
}
| Fact verification systems assess a claim{'}s veracity based on evidence. An important consideration in designing them is faithfulness, i.e. generating explanations that accurately reflect the reasoning of the model. Recent works have focused on natural logic, which operates directly on natural language by capturing the semantic relation of spans between an aligned claim and its evidence via set-theoretic operators. However, these approaches rely on substantial resources for training, which are only available for high-resource languages. To this end, we propose to use question answering to predict natural logic operators, taking advantage of the generalization capabilities of instruction-tuned language models. Thus, we obviate the need for annotated training data while still relying on a deterministic inference system. In a few-shot setting on FEVER, our approach outperforms the best baseline by 4.3 accuracy points, including a state-of-the-art pre-trained seq2seq natural logic system, as well as a state-of-the-art prompt-based classifier. Our system demonstrates its robustness and portability, achieving competitive performance on a counterfactual dataset and surpassing all approaches without further annotation on a Danish verification dataset. A human evaluation indicates that our approach produces more plausible proofs with fewer erroneous natural logic operators than previous natural logic-based systems. | [
"Aly, Rami",
"Strong, Marek",
"Vlachos, Andreas"
] | QA-NatVer: Question Answering for Natural Logic-based Fact Verification | emnlp-main.521 | 2310.14198 | [
"https://github.com/raldir/qa-natver"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.522.bib | https://aclanthology.org/2023.emnlp-main.522/ | @inproceedings{wiegreffe-etal-2023-increasing,
title = "Increasing Probability Mass on Answer Choices Does Not Always Improve Accuracy",
author = "Wiegreffe, Sarah and
Finlayson, Matthew and
Tafjord, Oyvind and
Clark, Peter and
Sabharwal, Ashish",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.522",
doi = "10.18653/v1/2023.emnlp-main.522",
pages = "8392--8417",
abstract = "When pretrained language models (LMs) are applied to discriminative tasks such as multiple-choice questions, they place probability mass on vocabulary tokens that aren{'}t among the given answer choices. Spreading probability mass across multiple surface forms with identical meaning (such as {``}bath{''} and {``}bathtub{''}) is thought to cause an underestimation of a model{'}s true performance, referred to as the {``}surface form competition{''} (SFC) hypothesis. This has motivated the introduction of various probability normalization methods. However, many core questions remain unanswered. How do we measure SFC? Are there direct ways of reducing it, and does doing so improve task performance? We propose a mathematical formalism for SFC which allows us to quantify and bound its impact for the first time. We identify a simple method for reducing it{---}namely, increasing probability mass on the given answer choices by a) including them in the prompt and b) using in-context learning with even just one example. We show this method eliminates the impact of SFC in the majority of instances. Our experiments on three diverse datasets and six LMs reveal several additional surprising findings. For example, both normalization and prompting methods for reducing SFC can be ineffective or even detrimental to task performance for some LMs. We conclude with practical insights for effectively prompting LMs for multiple-choice tasks.",
}
| When pretrained language models (LMs) are applied to discriminative tasks such as multiple-choice questions, they place probability mass on vocabulary tokens that aren{'}t among the given answer choices. Spreading probability mass across multiple surface forms with identical meaning (such as {``}bath{''} and {``}bathtub{''}) is thought to cause an underestimation of a model{'}s true performance, referred to as the {``}surface form competition{''} (SFC) hypothesis. This has motivated the introduction of various probability normalization methods. However, many core questions remain unanswered. How do we measure SFC? Are there direct ways of reducing it, and does doing so improve task performance? We propose a mathematical formalism for SFC which allows us to quantify and bound its impact for the first time. We identify a simple method for reducing it{---}namely, increasing probability mass on the given answer choices by a) including them in the prompt and b) using in-context learning with even just one example. We show this method eliminates the impact of SFC in the majority of instances. Our experiments on three diverse datasets and six LMs reveal several additional surprising findings. For example, both normalization and prompting methods for reducing SFC can be ineffective or even detrimental to task performance for some LMs. We conclude with practical insights for effectively prompting LMs for multiple-choice tasks. | [
"Wiegreffe, Sarah",
"Finlayson, Matthew",
"Tafjord, Oyvind",
"Clark, Peter",
"Sabharwal, Ashish"
] | Increasing Probability Mass on Answer Choices Does Not Always Improve Accuracy | emnlp-main.522 | 2305.14596 | [
"https://github.com/allenai/revisiting_surface_form_competition"
] | https://huggingface.co/papers/2305.14596 | 1 | 1 | 0 | 5 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.523.bib | https://aclanthology.org/2023.emnlp-main.523/ | @inproceedings{ye-etal-2023-generating,
title = "Generating Data for Symbolic Language with Large Language Models",
author = "Ye, Jiacheng and
Li, Chengzu and
Kong, Lingpeng and
Yu, Tao",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.523",
doi = "10.18653/v1/2023.emnlp-main.523",
pages = "8418--8443",
abstract = "While large language models (LLMs) bring not only performance but also complexity, recent work has started to turn LLMs into data generators rather than task inferencers, where another affordable task model is trained for efficient deployment and inference. However, such an approach has primarily been applied to natural language tasks, and has not yet been explored for symbolic language tasks with complex structured outputs (e.g., semantic parsing and code generation). In this paper, we propose SymGen which utilizes LLMs for generating various annotation-expensive symbolic language data. SymGen consists of an informative prompt to steer generation and an agreement-based verifier to improve data correctness. We conduct extensive experiments on six symbolic language tasks across various settings. Compared with the LLMs, we demonstrate the 1{\%}-sized task model can achieve comparable or better performance, largely cutting inference and deployment costs. We also show that generated data with only a few human demonstrations can be as effective as over 10 times the amount of human-annotated data when training the task model, saving a considerable amount of annotation effort. SymGen takes a step toward data generation for annotation-expensive complex tasks, and we release the code at URL.",
}
| While large language models (LLMs) bring not only performance but also complexity, recent work has started to turn LLMs into data generators rather than task inferencers, where another affordable task model is trained for efficient deployment and inference. However, such an approach has primarily been applied to natural language tasks, and has not yet been explored for symbolic language tasks with complex structured outputs (e.g., semantic parsing and code generation). In this paper, we propose SymGen, which utilizes LLMs for generating various annotation-expensive symbolic language data. SymGen consists of an informative prompt to steer generation and an agreement-based verifier to improve data correctness. We conduct extensive experiments on six symbolic language tasks across various settings. Compared with the LLMs, we demonstrate that the 1{\%}-sized task model can achieve comparable or better performance, largely cutting inference and deployment costs. We also show that generated data with only a few human demonstrations can be as effective as over 10 times the amount of human-annotated data when training the task model, saving a considerable amount of annotation effort. SymGen takes a step toward data generation for annotation-expensive complex tasks, and we release the code at URL. | [
"Ye, Jiacheng",
"Li, Chengzu",
"Kong, Lingpeng",
"Yu, Tao"
] | Generating Data for Symbolic Language with Large Language Models | emnlp-main.523 | 2305.13917 | [
"https://github.com/hkunlp/symgen"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.524.bib | https://aclanthology.org/2023.emnlp-main.524/ | @inproceedings{saxena-etal-2023-idtraffickers,
title = "{IDT}raffickers: An Authorship Attribution Dataset to link and connect Potential Human-Trafficking Operations on Text Escort Advertisements",
author = "Saxena, Vageesh and
Ashpole, Benjamin and
van Dijck, Gijs and
Spanakis, Gerasimos",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.524",
doi = "10.18653/v1/2023.emnlp-main.524",
pages = "8444--8464",
abstract = "Human trafficking (HT) is a pervasive global issue affecting vulnerable individuals, violating their fundamental human rights. Investigations reveal that a significant number of HT cases are associated with online advertisements (ads), particularly in escort markets. Consequently, identifying and connecting HT vendors has become increasingly challenging for Law Enforcement Agencies (LEAs). To address this issue, we introduce IDTraffickers, an extensive dataset consisting of 87,595 text ads and 5,244 vendor labels to enable the verification and identification of potential HT vendors on online escort markets. To establish a benchmark for authorship identification, we train a DeCLUTR-small model, achieving a macro-F1 score of 0.8656 in a closed-set classification environment. Next, we leverage the style representations extracted from the trained classifier to conduct authorship verification, resulting in a mean r-precision score of 0.8852 in an open-set ranking environment. Finally, to encourage further research and ensure responsible data sharing, we plan to release IDTraffickers for the authorship attribution task to researchers under specific conditions, considering the sensitive nature of the data. We believe that the availability of our dataset and benchmarks will empower future researchers to utilize our findings, thereby facilitating the effective linkage of escort ads and the development of more robust approaches for identifying HT indicators.",
}
| Human trafficking (HT) is a pervasive global issue affecting vulnerable individuals, violating their fundamental human rights. Investigations reveal that a significant number of HT cases are associated with online advertisements (ads), particularly in escort markets. Consequently, identifying and connecting HT vendors has become increasingly challenging for Law Enforcement Agencies (LEAs). To address this issue, we introduce IDTraffickers, an extensive dataset consisting of 87,595 text ads and 5,244 vendor labels to enable the verification and identification of potential HT vendors on online escort markets. To establish a benchmark for authorship identification, we train a DeCLUTR-small model, achieving a macro-F1 score of 0.8656 in a closed-set classification environment. Next, we leverage the style representations extracted from the trained classifier to conduct authorship verification, resulting in a mean r-precision score of 0.8852 in an open-set ranking environment. Finally, to encourage further research and ensure responsible data sharing, we plan to release IDTraffickers for the authorship attribution task to researchers under specific conditions, considering the sensitive nature of the data. We believe that the availability of our dataset and benchmarks will empower future researchers to utilize our findings, thereby facilitating the effective linkage of escort ads and the development of more robust approaches for identifying HT indicators. | [
"Saxena, Vageesh",
"Ashpole, Benjamin",
"van Dijck, Gijs",
"Spanakis, Gerasimos"
] | IDTraffickers: An Authorship Attribution Dataset to link and connect Potential Human-Trafficking Operations on Text Escort Advertisements | emnlp-main.524 | 2310.05484 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
|
https://aclanthology.org/2023.emnlp-main.525.bib | https://aclanthology.org/2023.emnlp-main.525/ | @inproceedings{cabello-etal-2023-evaluating,
title = "Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models",
author = "Cabello, Laura and
Bugliarello, Emanuele and
Brandl, Stephanie and
Elliott, Desmond",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.525",
doi = "10.18653/v1/2023.emnlp-main.525",
pages = "8465--8483",
abstract = "Pretrained machine learning models are known to perpetuate and even amplify existing biases in data, which can result in unfair outcomes that ultimately impact user experience. Therefore, it is crucial to understand the mechanisms behind those prejudicial biases to ensure that model performance does not result in discriminatory behaviour toward certain groups or populations. In this work, we define gender bias as our case study. We quantify bias amplification in pretraining and after fine-tuning on three families of vision-and-language models. We investigate the connection, if any, between the two learning stages, and evaluate how bias amplification reflects on model performance. Overall, we find that bias amplification in pretraining and after fine-tuning are independent. We then examine the effect of continued pretraining on gender-neutral data, finding that this reduces group disparities, i.e., promotes fairness, on VQAv2 and retrieval tasks without significantly compromising task performance.",
}
| Pretrained machine learning models are known to perpetuate and even amplify existing biases in data, which can result in unfair outcomes that ultimately impact user experience. Therefore, it is crucial to understand the mechanisms behind those prejudicial biases to ensure that model performance does not result in discriminatory behaviour toward certain groups or populations. In this work, we define gender bias as our case study. We quantify bias amplification in pretraining and after fine-tuning on three families of vision-and-language models. We investigate the connection, if any, between the two learning stages, and evaluate how bias amplification reflects on model performance. Overall, we find that bias amplification in pretraining and after fine-tuning are independent. We then examine the effect of continued pretraining on gender-neutral data, finding that this reduces group disparities, i.e., promotes fairness, on VQAv2 and retrieval tasks without significantly compromising task performance. | [
"Cabello, Laura",
"Bugliarello, Emanuele",
"Br",
"l, Stephanie",
"Elliott, Desmond"
] | Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models | emnlp-main.525 | 2310.17530 | [
"https://github.com/coastalcph/gender-neutral-vl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.526.bib | https://aclanthology.org/2023.emnlp-main.526/ | @inproceedings{fan-etal-2023-improving,
title = "Improving Dialogue Discourse Parsing via Reply-to Structures of Addressee Recognition",
author = "Fan, Yaxin and
Jiang, Feng and
Li, Peifeng and
Kong, Fang and
Zhu, Qiaoming",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.526",
doi = "10.18653/v1/2023.emnlp-main.526",
pages = "8484--8495",
abstract = "Dialogue discourse parsing aims to reflect the relation-based structure of dialogue by establishing discourse links according to discourse relations. To alleviate data sparsity, previous studies have adopted multitasking approaches to jointly learn dialogue discourse parsing with related tasks (e.g., reading comprehension) that require additional human annotation, thus limiting their generality. In this paper, we propose a multitasking framework that integrates dialogue discourse parsing with its neighboring task addressee recognition. Addressee recognition reveals the reply-to structure that partially overlaps with the relation-based structure, which can be exploited to facilitate relation-based structure learning. To this end, we first proposed a reinforcement learning agent to identify training examples from addressee recognition that are most helpful for dialog discourse parsing. Then, a task-aware structure transformer is designed to capture the shared and private dialogue structure of different tasks, thereby further promoting dialogue discourse parsing. Experimental results on both the Molweni and STAC datasets show that our proposed method can outperform the SOTA baselines. The code will be available at https://github.com/yxfanSuda/RLTST.",
}
| Dialogue discourse parsing aims to reflect the relation-based structure of dialogue by establishing discourse links according to discourse relations. To alleviate data sparsity, previous studies have adopted multitasking approaches to jointly learn dialogue discourse parsing with related tasks (e.g., reading comprehension) that require additional human annotation, thus limiting their generality. In this paper, we propose a multitasking framework that integrates dialogue discourse parsing with its neighboring task, addressee recognition. Addressee recognition reveals the reply-to structure that partially overlaps with the relation-based structure, which can be exploited to facilitate relation-based structure learning. To this end, we first propose a reinforcement learning agent to identify training examples from addressee recognition that are most helpful for dialogue discourse parsing. Then, a task-aware structure transformer is designed to capture the shared and private dialogue structure of different tasks, thereby further promoting dialogue discourse parsing. Experimental results on both the Molweni and STAC datasets show that our proposed method can outperform the SOTA baselines. The code will be available at https://github.com/yxfanSuda/RLTST. | [
"Fan, Yaxin",
"Jiang, Feng",
"Li, Peifeng",
"Kong, Fang",
"Zhu, Qiaoming"
] | Improving Dialogue Discourse Parsing via Reply-to Structures of Addressee Recognition | emnlp-main.526 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
||
https://aclanthology.org/2023.emnlp-main.527.bib | https://aclanthology.org/2023.emnlp-main.527/ | @inproceedings{jang-lukasiewicz-2023-improving,
title = "Improving Language Models{'} Meaning Understanding and Consistency by Learning Conceptual Roles from Dictionary",
author = "Jang, Myeongjun and
Lukasiewicz, Thomas",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.527",
doi = "10.18653/v1/2023.emnlp-main.527",
pages = "8496--8510",
abstract = "The non-humanlike behaviour of contemporary pre-trained language models (PLMs) is a leading cause undermining their trustworthiness. A striking phenomenon of such faulty behaviours is the generation of inconsistent predictions, which produces logically contradictory results, such as generating different predictions for texts delivering the same meaning or violating logical properties. Previous studies exploited data augmentation or implemented specialised loss functions to alleviate the issue. However, their usage is limited, because they consume expensive training resources for large-sized PLMs and can only handle a certain consistency type. To this end, we propose a practical approach that alleviates the inconsistent behaviour issue by fundamentally improving PLMs{'} meaning awareness. Based on the conceptual role theory, our method allows PLMs to capture accurate meaning by learning precise interrelationships between concepts from word-definition pairs in a dictionary. Next, we propose an efficient parameter integration technique that updates only a few additional parameters to combine the learned interrelationship with PLMs{'} pre-trained knowledge. Our experimental results reveal that the approach can concurrently improve multiple types of consistency, enables efficient knowledge integration, and easily applies to other languages.",
}
| The non-humanlike behaviour of contemporary pre-trained language models (PLMs) is a leading cause undermining their trustworthiness. A striking phenomenon of such faulty behaviours is the generation of inconsistent predictions, which produces logically contradictory results, such as generating different predictions for texts delivering the same meaning or violating logical properties. Previous studies exploited data augmentation or implemented specialised loss functions to alleviate the issue. However, their usage is limited, because they consume expensive training resources for large-sized PLMs and can only handle a certain consistency type. To this end, we propose a practical approach that alleviates the inconsistent behaviour issue by fundamentally improving PLMs{'} meaning awareness. Based on the conceptual role theory, our method allows PLMs to capture accurate meaning by learning precise interrelationships between concepts from word-definition pairs in a dictionary. Next, we propose an efficient parameter integration technique that updates only a few additional parameters to combine the learned interrelationship with PLMs{'} pre-trained knowledge. Our experimental results reveal that the approach can concurrently improve multiple types of consistency, enables efficient knowledge integration, and easily applies to other languages. | [
"Jang, Myeongjun",
"Lukasiewicz, Thomas"
] | Improving Language Models' Meaning Understanding and Consistency by Learning Conceptual Roles from Dictionary | emnlp-main.527 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
||
https://aclanthology.org/2023.emnlp-main.528.bib | https://aclanthology.org/2023.emnlp-main.528/ | @inproceedings{ghosh-etal-2023-dale,
title = "{DALE}: Generative Data Augmentation for Low-Resource Legal {NLP}",
author = "Ghosh, Sreyan and
Evuru, Chandra Kiran Reddy and
Kumar, Sonal and
Ramaneswaran, S and
Sakshi, S and
Tyagi, Utkarsh and
Manocha, Dinesh",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.528",
doi = "10.18653/v1/2023.emnlp-main.528",
pages = "8511--8565",
abstract = "We present DALE, a novel and effective generative Data Augmentation framework for low-resource LEgal NLP. DALE addresses the challenges existing frameworks pose in generating effective data augmentations of legal documents - legal language, with its specialized vocabulary and complex semantics, morphology, and syntax, does not benefit from data augmentations that merely rephrase the source sentence. To address this, DALE, built on an Encoder-Decoder Language Model, is pre-trained on a novel unsupervised text denoising objective based on selective masking - our masking strategy exploits the domain-specific language characteristics of templatized legal documents to mask collocated spans of text. Denoising these spans help DALE acquire broad legal knowledge and develop the ability to generate coherent and diverse augmentations with novel contexts. Finally, DALE performs conditional generation to generate synthetic augmentations for low-resource Legal NLP tasks. We demonstrate the effectiveness of DALE on 13 datasets spanning 6 tasks and 4 low-resource settings. DALE outperforms all our baselines, including LLMs, qualitatively and quantitatively, with absolute improvements of 1{\%}-50{\%}.",
}
| We present DALE, a novel and effective generative Data Augmentation framework for low-resource LEgal NLP. DALE addresses the challenges existing frameworks pose in generating effective data augmentations of legal documents - legal language, with its specialized vocabulary and complex semantics, morphology, and syntax, does not benefit from data augmentations that merely rephrase the source sentence. To address this, DALE, built on an Encoder-Decoder Language Model, is pre-trained on a novel unsupervised text denoising objective based on selective masking - our masking strategy exploits the domain-specific language characteristics of templatized legal documents to mask collocated spans of text. Denoising these spans helps DALE acquire broad legal knowledge and develop the ability to generate coherent and diverse augmentations with novel contexts. Finally, DALE performs conditional generation to generate synthetic augmentations for low-resource Legal NLP tasks. We demonstrate the effectiveness of DALE on 13 datasets spanning 6 tasks and 4 low-resource settings. DALE outperforms all our baselines, including LLMs, qualitatively and quantitatively, with absolute improvements of 1{\%}-50{\%}. | [
"Ghosh, Sreyan",
"Evuru, Ch",
"ra Kiran Reddy",
"Kumar, Sonal",
"Ramaneswaran, S",
"Sakshi, S",
"Tyagi, Utkarsh",
"Manocha, Dinesh"
] | DALE: Generative Data Augmentation for Low-Resource Legal NLP | emnlp-main.528 | 2310.15799 | [
"https://github.com/sreyan88/dale"
] | https://huggingface.co/papers/2310.15799 | 3 | 0 | 0 | 7 | [
"ckevuru/DALE"
] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.529.bib | https://aclanthology.org/2023.emnlp-main.529/ | @inproceedings{ma-etal-2023-fedid,
title = "{F}ed{ID}: Federated Interactive Distillation for Large-Scale Pretraining Language Models",
author = "Ma, Xinge and
Liu, Jiangming and
Wang, Jin and
Zhang, Xuejie",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.529",
doi = "10.18653/v1/2023.emnlp-main.529",
pages = "8566--8577",
abstract = "The growing concerns and regulations surrounding the protection of user data privacy have necessitated decentralized training paradigms. To this end, federated learning (FL) is widely studied in user-related natural language processing (NLP). However, it suffers from several critical limitations including extensive communication overhead, inability to handle heterogeneity, and vulnerability to white-box inference attacks. Federated distillation (FD) is proposed to alleviate these limitations, but its performance is faded by confirmation bias. To tackle this issue, we propose Federated Interactive Distillation (FedID), which utilizes a small amount of labeled data retained by the server to further rectify the local models during knowledge transfer. Additionally, based on the GLUE benchmark, we develop a benchmarking framework across multiple tasks with diverse data distributions to contribute to the research of FD in NLP community. Experiments show that our proposed FedID framework achieves the best results in homogeneous and heterogeneous federated scenarios. The code for this paper is available at: https://github.com/maxinge8698/FedID.",
}
| The growing concerns and regulations surrounding the protection of user data privacy have necessitated decentralized training paradigms. To this end, federated learning (FL) is widely studied in user-related natural language processing (NLP). However, it suffers from several critical limitations, including extensive communication overhead, inability to handle heterogeneity, and vulnerability to white-box inference attacks. Federated distillation (FD) has been proposed to alleviate these limitations, but its performance is degraded by confirmation bias. To tackle this issue, we propose Federated Interactive Distillation (FedID), which utilizes a small amount of labeled data retained by the server to further rectify the local models during knowledge transfer. Additionally, based on the GLUE benchmark, we develop a benchmarking framework across multiple tasks with diverse data distributions to contribute to the research of FD in the NLP community. Experiments show that our proposed FedID framework achieves the best results in homogeneous and heterogeneous federated scenarios. The code for this paper is available at: https://github.com/maxinge8698/FedID. | [
"Ma, Xinge",
"Liu, Jiangming",
"Wang, Jin",
"Zhang, Xuejie"
] | FedID: Federated Interactive Distillation for Large-Scale Pretraining Language Models | emnlp-main.529 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.530.bib | https://aclanthology.org/2023.emnlp-main.530/ | @inproceedings{havrilla-etal-2023-trlx,
title = "trl{X}: A Framework for Large Scale Reinforcement Learning from Human Feedback",
author = "Havrilla, Alexander and
Zhuravinskyi, Maksym and
Phung, Duy and
Tiwari, Aman and
Tow, Jonathan and
Biderman, Stella and
Anthony, Quentin and
Castricato, Louis",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.530",
doi = "10.18653/v1/2023.emnlp-main.530",
pages = "8578--8595",
abstract = "Reinforcement learning from human feedback (\textbf{RLHF}) utilizes human feedback to better align large language models with human preferences via online optimization against a learned reward model. Current RLHF paradigms rely on Proximal Policy Optimization (\textbf{PPO}), which quickly becomes a challenge to implement and scale up to large architectures. To address this difficulty we present the \textbf{AutoRLHF} library as a feature complete open-source framework for RLHF fine-tuning of models up to and exceeding 70 billion parameters. To do so we implement support for multiple types of distributed training including distributed data parallel, model sharded, as well as tensor, sequential, and pipeline parallelism. Additionally, we implement compute and memory saving features, giving AutoRLHF the flexibility to support users with a wide range of compute resources. This includes offline RL methods like Implicit Language Q Learning (\textbf{ILQL}) as a compute efficient alternative to PPO. We find offline fine-tuning offers competitive performance relative to online algorithms while being easier to implement, train, and scale. To evaluate our framework we train RLHF models on two separate well-known tasks using publicly available human preference data. Models trained with AutoRLHF achieve preference win-rates over baselines at rates comparable to the original works.",
}
| Reinforcement learning from human feedback (\textbf{RLHF}) utilizes human feedback to better align large language models with human preferences via online optimization against a learned reward model. Current RLHF paradigms rely on Proximal Policy Optimization (\textbf{PPO}), which quickly becomes a challenge to implement and scale up to large architectures. To address this difficulty, we present the \textbf{AutoRLHF} library as a feature-complete open-source framework for RLHF fine-tuning of models up to and exceeding 70 billion parameters. To do so, we implement support for multiple types of distributed training, including distributed data parallel, model sharded, as well as tensor, sequential, and pipeline parallelism. Additionally, we implement compute- and memory-saving features, giving AutoRLHF the flexibility to support users with a wide range of compute resources. This includes offline RL methods like Implicit Language Q Learning (\textbf{ILQL}) as a compute-efficient alternative to PPO. We find offline fine-tuning offers competitive performance relative to online algorithms while being easier to implement, train, and scale. To evaluate our framework, we train RLHF models on two separate well-known tasks using publicly available human preference data. Models trained with AutoRLHF achieve preference win-rates over baselines at rates comparable to the original works. | [
"Havrilla, Alex",
"er",
"Zhuravinskyi, Maksym",
"Phung, Duy",
"Tiwari, Aman",
"Tow, Jonathan",
"Biderman, Stella",
"Anthony, Quentin",
"Castricato, Louis"
] | trlX: A Framework for Large Scale Reinforcement Learning from Human Feedback | emnlp-main.530 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.531.bib | https://aclanthology.org/2023.emnlp-main.531/ | @inproceedings{garcia-ferrero-etal-2023-dataset,
title = "This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models",
author = "Garc{\'\i}a-Ferrero, Iker and
Altuna, Bego{\~n}a and
Alvez, Javier and
Gonzalez-Dios, Itziar and
Rigau, German",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.531",
doi = "10.18653/v1/2023.emnlp-main.531",
pages = "8596--8615",
abstract = "Although large language models (LLMs) have apparently acquired a certain level of grammatical knowledge and the ability to make generalizations, they fail to interpret negation, a crucial step in Natural Language Processing. We try to clarify the reasons for the sub-optimal performance of LLMs understanding negation. We introduce a large semi-automatically generated dataset of circa 400,000 descriptive sentences about commonsense knowledge that can be true or false in which negation is present in about 2/3 of the corpus in different forms. We have used our dataset with the largest available open LLMs in a zero-shot approach to grasp their generalization and inference capability and we have also fine-tuned some of the models to assess whether the understanding of negation can be trained. Our findings show that, while LLMs are proficient at classifying affirmative sentences, they struggle with negative sentences and lack a deep understanding of negation, often relying on superficial cues. Although fine-tuning the models on negative sentences improves their performance, the lack of generalization in handling negation is persistent, highlighting the ongoing challenges of LLMs regarding negation understanding and generalization. The dataset and code are publicly available.",
}
| Although large language models (LLMs) have apparently acquired a certain level of grammatical knowledge and the ability to make generalizations, they fail to interpret negation, a crucial step in Natural Language Processing. We try to clarify the reasons for the sub-optimal performance of LLMs understanding negation. We introduce a large semi-automatically generated dataset of circa 400,000 descriptive sentences about commonsense knowledge that can be true or false in which negation is present in about 2/3 of the corpus in different forms. We have used our dataset with the largest available open LLMs in a zero-shot approach to grasp their generalization and inference capability and we have also fine-tuned some of the models to assess whether the understanding of negation can be trained. Our findings show that, while LLMs are proficient at classifying affirmative sentences, they struggle with negative sentences and lack a deep understanding of negation, often relying on superficial cues. Although fine-tuning the models on negative sentences improves their performance, the lack of generalization in handling negation is persistent, highlighting the ongoing challenges of LLMs regarding negation understanding and generalization. The dataset and code are publicly available. | [
"Garc{\\'\\i}a-Ferrero, Iker",
"Altuna, Bego{\\~n}a",
"Alvez, Javier",
"Gonzalez-Dios, Itziar",
"Rigau, German"
] | This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models | emnlp-main.531 | 2310.15941 | [
"https://github.com/hitz-zentroa/this-is-not-a-dataset"
] | https://huggingface.co/papers/2310.15941 | 1 | 6 | 0 | 5 | [] | [
"HiTZ/This-is-not-a-dataset"
] | [] | 1 | Oral |
https://aclanthology.org/2023.emnlp-main.532.bib | https://aclanthology.org/2023.emnlp-main.532/ | @inproceedings{li-etal-2023-mt2,
title = "{MT}2: Towards a Multi-Task Machine Translation Model with Translation-Specific In-Context Learning",
author = "Li, Chunyou and
Liu, Mingtong and
Zhang, Hongxiao and
Chen, Yufeng and
Xu, Jinan and
Zhou, Ming",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.532",
doi = "10.18653/v1/2023.emnlp-main.532",
pages = "8616--8627",
abstract = "Sentence-level translation, document-level translation, translation memory, and terminology constrained translation play an important role in machine translation. Most of the previous work uses separate models or methods to solve these tasks, which is not conducive to knowledge transfer of different tasks and increases the complexity of system construction. In this work, we explore the potential of pre-trained language model in machine translation tasks and propose a Multi-Task Machine Translation (MT2) model to integrate these translation tasks. We design a novel translation-specific In-Context Learning (ICL) paradigm for model training, in which all of the translation tasks can be modeled as context-learning tasks that integrate contextual information for performance improvement. Specifically, we propose a retrieval and alignment method to obtain a large scale context-enhancement training data, then we train the model in an in-context learning manner. Furthermore, we adopt two context-dependent training strategies to encourage the model to better understand and utilize contextual information for translation. Extensive experiments on translation memory, terminology constrained translation, document-level translation, and few-shot domain-adaptation tasks demonstrate the superior performance of our model, verifying the effectiveness of our proposed approach.",
}
| Sentence-level translation, document-level translation, translation memory, and terminology constrained translation play an important role in machine translation. Most of the previous work uses separate models or methods to solve these tasks, which is not conducive to knowledge transfer of different tasks and increases the complexity of system construction. In this work, we explore the potential of pre-trained language models in machine translation tasks and propose a Multi-Task Machine Translation (MT2) model to integrate these translation tasks. We design a novel translation-specific In-Context Learning (ICL) paradigm for model training, in which all of the translation tasks can be modeled as context-learning tasks that integrate contextual information for performance improvement. Specifically, we propose a retrieval and alignment method to obtain large-scale context-enhancement training data, and then we train the model in an in-context learning manner. Furthermore, we adopt two context-dependent training strategies to encourage the model to better understand and utilize contextual information for translation. Extensive experiments on translation memory, terminology constrained translation, document-level translation, and few-shot domain-adaptation tasks demonstrate the superior performance of our model, verifying the effectiveness of our proposed approach. | [
"Li, Chunyou",
"Liu, Mingtong",
"Zhang, Hongxiao",
"Chen, Yufeng",
"Xu, Jinan",
"Zhou, Ming"
] | MT2: Towards a Multi-Task Machine Translation Model with Translation-Specific In-Context Learning | emnlp-main.532 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.533.bib | https://aclanthology.org/2023.emnlp-main.533/ | @inproceedings{rucker-akbik-2023-cleanconll,
title = "{C}lean{C}o{NLL}: A Nearly Noise-Free Named Entity Recognition Dataset",
author = {R{\"u}cker, Susanna and
Akbik, Alan},
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.533",
doi = "10.18653/v1/2023.emnlp-main.533",
pages = "8628--8645",
abstract = "The CoNLL-03 corpus is arguably the most well-known and utilized benchmark dataset for named entity recognition (NER). However, prior works found significant numbers of annotation errors, incompleteness, and inconsistencies in the data. This poses challenges to objectively comparing NER approaches and analyzing their errors, as current state-of-the-art models achieve F1-scores that are comparable to or even exceed the estimated noise level in CoNLL-03. To address this issue, we present a comprehensive relabeling effort assisted by automatic consistency checking that corrects 7.0{\%} of all labels in the English CoNLL-03. Our effort adds a layer of entity linking annotation both for better explainability of NER labels and as additional safeguard of annotation quality. Our experimental evaluation finds not only that state-of-the-art approaches reach significantly higher F1-scores (97.1{\%}) on our data, but crucially that the share of correct predictions falsely counted as errors due to annotation noise drops from 47{\%} to 6{\%}. This indicates that our resource is well suited to analyze the remaining errors made by state-of-the-art models, and that the theoretical upper bound even on high resource, coarse-grained NER is not yet reached. To facilitate such analysis, we make CleanCoNLL publicly available to the research community.",
}
| The CoNLL-03 corpus is arguably the most well-known and utilized benchmark dataset for named entity recognition (NER). However, prior works found significant numbers of annotation errors, incompleteness, and inconsistencies in the data. This poses challenges to objectively comparing NER approaches and analyzing their errors, as current state-of-the-art models achieve F1-scores that are comparable to or even exceed the estimated noise level in CoNLL-03. To address this issue, we present a comprehensive relabeling effort assisted by automatic consistency checking that corrects 7.0{\%} of all labels in the English CoNLL-03. Our effort adds a layer of entity linking annotation both for better explainability of NER labels and as an additional safeguard of annotation quality. Our experimental evaluation finds not only that state-of-the-art approaches reach significantly higher F1-scores (97.1{\%}) on our data, but crucially that the share of correct predictions falsely counted as errors due to annotation noise drops from 47{\%} to 6{\%}. This indicates that our resource is well suited to analyze the remaining errors made by state-of-the-art models, and that the theoretical upper bound even on high-resource, coarse-grained NER is not yet reached. To facilitate such analysis, we make CleanCoNLL publicly available to the research community. | [
"R{\\\"u}cker, Susanna",
"Akbik, Alan"
] | CleanCoNLL: A Nearly Noise-Free Named Entity Recognition Dataset | emnlp-main.533 | 2310.16225 | [
"https://github.com/flairnlp/cleanconll"
] | https://huggingface.co/papers/2310.16225 | 0 | 4 | 2 | 2 | [
"stefan-it/flair-clean-conll-1",
"stefan-it/flair-clean-conll-2",
"stefan-it/flair-clean-conll-3",
"stefan-it/flair-clean-conll-4",
"stefan-it/flair-clean-conll-5"
] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.534.bib | https://aclanthology.org/2023.emnlp-main.534/ | @inproceedings{lim-lauw-2023-disentangling,
title = "Disentangling Transformer Language Models as Superposed Topic Models",
author = "Lim, Jia Peng and
Lauw, Hady",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.534",
doi = "10.18653/v1/2023.emnlp-main.534",
pages = "8646--8666",
abstract = "Topic Modelling is an established research area where the quality of a given topic is measured using coherence metrics. Often, we infer topics from Neural Topic Models (NTM) by interpreting their decoder weights, consisting of top-activated words projected from individual neurons. Transformer-based Language Models (TLM) similarly consist of decoder weights. However, due to its hypothesised superposition properties, the final logits originating from the residual path are considered uninterpretable. Therefore, we posit that we can interpret TLM as superposed NTM by proposing a novel weight-based, model-agnostic and corpus-agnostic approach to search and disentangle decoder-only TLM, potentially mapping individual neurons to multiple coherent topics. Our results show that it is empirically feasible to disentangle coherent topics from GPT-2 models using the Wikipedia corpus. We validate this approach for GPT-2 models using Zero-Shot Topic Modelling. Finally, we extend the proposed approach to disentangle and analyse LLaMA models.",
}
| Topic Modelling is an established research area where the quality of a given topic is measured using coherence metrics. Often, we infer topics from Neural Topic Models (NTM) by interpreting their decoder weights, consisting of top-activated words projected from individual neurons. Transformer-based Language Models (TLM) similarly consist of decoder weights. However, due to their hypothesised superposition properties, the final logits originating from the residual path are considered uninterpretable. Therefore, we posit that we can interpret TLM as superposed NTM by proposing a novel weight-based, model-agnostic and corpus-agnostic approach to search and disentangle decoder-only TLM, potentially mapping individual neurons to multiple coherent topics. Our results show that it is empirically feasible to disentangle coherent topics from GPT-2 models using the Wikipedia corpus. We validate this approach for GPT-2 models using Zero-Shot Topic Modelling. Finally, we extend the proposed approach to disentangle and analyse LLaMA models. | [
"Lim, Jia Peng",
"Lauw, Hady"
] | Disentangling Transformer Language Models as Superposed Topic Models | emnlp-main.534 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.535.bib | https://aclanthology.org/2023.emnlp-main.535/ | @inproceedings{jain-lapata-2023-conversational,
title = "Conversational Semantic Parsing using Dynamic Context Graphs",
author = "Jain, Parag and
Lapata, Mirella",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.535",
doi = "10.18653/v1/2023.emnlp-main.535",
pages = "8667--8679",
abstract = "In this paper we consider the task of conversational semantic parsing over general purpose knowledge graphs (KGs) with millions of entities, and thousands of relation-types. We focus on models which are capable of interactively mapping user utterances into executable logical forms (e.g., Sparql) in the context of the conversational history. Our key idea is to represent information about an utterance and its context via a subgraph which is created dynamically, i.e., the number of nodes varies per utterance. Rather than treating the subgraph as a sequence, we exploit its underlying structure and encode it with a graph neural network which further allows us to represent a large number of (unseen) nodes. Experimental results show that dynamic context modeling is superior to static approaches, delivering performance improvements across the board (i.e., for simple and complex questions). Our results further confirm that modeling the structure of context is better at processing discourse information, (i.e., at handling ellipsis and resolving coreference) and longer interactions.",
}
| In this paper we consider the task of conversational semantic parsing over general purpose knowledge graphs (KGs) with millions of entities, and thousands of relation-types. We focus on models which are capable of interactively mapping user utterances into executable logical forms (e.g., Sparql) in the context of the conversational history. Our key idea is to represent information about an utterance and its context via a subgraph which is created dynamically, i.e., the number of nodes varies per utterance. Rather than treating the subgraph as a sequence, we exploit its underlying structure and encode it with a graph neural network which further allows us to represent a large number of (unseen) nodes. Experimental results show that dynamic context modeling is superior to static approaches, delivering performance improvements across the board (i.e., for simple and complex questions). Our results further confirm that modeling the structure of context is better at processing discourse information, (i.e., at handling ellipsis and resolving coreference) and longer interactions. | [
"Jain, Parag",
"Lapata, Mirella"
] | Conversational Semantic Parsing using Dynamic Context Graphs | emnlp-main.535 | 2305.06164 | [
"https://github.com/parajain/dynamic_context"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.536.bib | https://aclanthology.org/2023.emnlp-main.536/ | @inproceedings{madusanka-etal-2023-quantifiers,
title = "Not all quantifiers are equal: Probing Transformer-based language models{'} understanding of generalised quantifiers",
author = "Madusanka, Tharindu and
Zahid, Iqra and
Li, Hao and
Pratt-Hartmann, Ian and
Batista-Navarro, Riza",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.536",
doi = "10.18653/v1/2023.emnlp-main.536",
pages = "8680--8692",
abstract = "How do different generalised quantifiers affect the behaviour of transformer-based language models (TLMs)? The recent popularity of TLMs and the central role generalised quantifiers have traditionally played in linguistics and logic bring this question into particular focus. The current research investigating this subject has not utilised a task defined purely in a logical sense, and thus, has not captured the underlying logical significance of generalised quantifiers. Consequently, they have not answered the aforementioned question faithfully or adequately. Therefore, we investigate how different generalised quantifiers affect TLMs by employing a textual entailment problem defined in a purely logical sense, namely, model-checking with natural language. Our approach permits the automatic construction of datasets with respect to which we can assess the ability of TLMs to learn the meanings of generalised quantifiers. Our investigation reveals that TLMs generally can comprehend the logical semantics of the most common generalised quantifiers, but that distinct quantifiers influence TLMs in varying ways.",
}
| How do different generalised quantifiers affect the behaviour of transformer-based language models (TLMs)? The recent popularity of TLMs and the central role generalised quantifiers have traditionally played in linguistics and logic bring this question into particular focus. The current research investigating this subject has not utilised a task defined purely in a logical sense, and thus, has not captured the underlying logical significance of generalised quantifiers. Consequently, they have not answered the aforementioned question faithfully or adequately. Therefore, we investigate how different generalised quantifiers affect TLMs by employing a textual entailment problem defined in a purely logical sense, namely, model-checking with natural language. Our approach permits the automatic construction of datasets with respect to which we can assess the ability of TLMs to learn the meanings of generalised quantifiers. Our investigation reveals that TLMs generally can comprehend the logical semantics of the most common generalised quantifiers, but that distinct quantifiers influence TLMs in varying ways. | [
"Madusanka, Tharindu",
"Zahid, Iqra",
"Li, Hao",
"Pratt-Hartmann, Ian",
"Batista-Navarro, Riza"
] | Not all quantifiers are equal: Probing Transformer-based language models' understanding of generalised quantifiers | emnlp-main.536 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.537.bib | https://aclanthology.org/2023.emnlp-main.537/ | @inproceedings{zhao-etal-2023-structure,
title = "Structure-aware Knowledge Graph-to-text Generation with Planning Selection and Similarity Distinction",
author = "Zhao, Feng and
Zou, Hongzhi and
Yan, Cheng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.537",
doi = "10.18653/v1/2023.emnlp-main.537",
pages = "8693--8703",
abstract = "The knowledge graph-to-text (KG-to-text) generation task aims to synthesize coherent and engaging sentences that accurately convey the complex information derived from an input knowledge graph. One of the primary challenges in this task is bridging the gap between the diverse structures of the KG and the target text, while preserving the details of the input KG. To address this, we propose a novel approach that efficiently integrates graph structure-aware modules with pre-trained language models. Unlike conventional techniques, which only consider direct connections between first-order neighbors, our method delves deeper by incorporating Relative Distance Encoding as a bias within the graph structure-aware module. This enables our model to better capture the intricate topology information present in the KG. To further elevate the fidelity of the generated text, Planning Selection and Similarity Distinction are introduced. Our approach filters the most relevant linearized sequences by employing a planning scorer, while simultaneously distinguishing similar input KGs through contrastive learning techniques. Experiments on two datasets demonstrate the superiority of our model.",
}
| The knowledge graph-to-text (KG-to-text) generation task aims to synthesize coherent and engaging sentences that accurately convey the complex information derived from an input knowledge graph. One of the primary challenges in this task is bridging the gap between the diverse structures of the KG and the target text, while preserving the details of the input KG. To address this, we propose a novel approach that efficiently integrates graph structure-aware modules with pre-trained language models. Unlike conventional techniques, which only consider direct connections between first-order neighbors, our method delves deeper by incorporating Relative Distance Encoding as a bias within the graph structure-aware module. This enables our model to better capture the intricate topology information present in the KG. To further elevate the fidelity of the generated text, Planning Selection and Similarity Distinction are introduced. Our approach filters the most relevant linearized sequences by employing a planning scorer, while simultaneously distinguishing similar input KGs through contrastive learning techniques. Experiments on two datasets demonstrate the superiority of our model. | [
"Zhao, Feng",
"Zou, Hongzhi",
"Yan, Cheng"
] | Structure-aware Knowledge Graph-to-text Generation with Planning Selection and Similarity Distinction | emnlp-main.537 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
||
https://aclanthology.org/2023.emnlp-main.538.bib | https://aclanthology.org/2023.emnlp-main.538/ | @inproceedings{deng-etal-2023-soul,
title = "{SOUL}: Towards Sentiment and Opinion Understanding of Language",
author = "Deng, Yue and
Zhang, Wenxuan and
Pan, Sinno and
Bing, Lidong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.538",
doi = "10.18653/v1/2023.emnlp-main.538",
pages = "8704--8711",
abstract = "Sentiment analysis is a well-established natural language processing task, with sentiment polarity classification being one of its most popular and representative tasks. However, despite the success of pre-trained language models in this area, they often fall short of capturing the broader complexities of sentiment analysis. To address this issue, we propose a new task called Sentiment and Opinion Understanding of Language (SOUL). SOUL aims to evaluate sentiment understanding through two subtasks: Review Comprehension (RC) and Justification Generation (JG). RC seeks to validate statements that focus on subjective information based on a review text, while JG requires models to provide explanations for their sentiment predictions. To enable comprehensive evaluation, we annotate a new dataset comprising 15,028 statements from 3,638 reviews. Experimental results indicate that SOUL is a challenging task for both small and large language models, with a performance gap of up to 27{\%} when compared to human performance. Furthermore, evaluations conducted with both human experts and GPT-4 highlight the limitations of the small language model in generating reasoning-based justifications. These findings underscore the challenging nature of the SOUL task for existing models, emphasizing the need for further advancements in sentiment analysis to address its complexities. The new dataset and code are available at \url{https://github.com/DAMO-NLP-SG/SOUL}.",
}
| Sentiment analysis is a well-established natural language processing task, with sentiment polarity classification being one of its most popular and representative tasks. However, despite the success of pre-trained language models in this area, they often fall short of capturing the broader complexities of sentiment analysis. To address this issue, we propose a new task called Sentiment and Opinion Understanding of Language (SOUL). SOUL aims to evaluate sentiment understanding through two subtasks: Review Comprehension (RC) and Justification Generation (JG). RC seeks to validate statements that focus on subjective information based on a review text, while JG requires models to provide explanations for their sentiment predictions. To enable comprehensive evaluation, we annotate a new dataset comprising 15,028 statements from 3,638 reviews. Experimental results indicate that SOUL is a challenging task for both small and large language models, with a performance gap of up to 27{\%} when compared to human performance. Furthermore, evaluations conducted with both human experts and GPT-4 highlight the limitations of the small language model in generating reasoning-based justifications. These findings underscore the challenging nature of the SOUL task for existing models, emphasizing the need for further advancements in sentiment analysis to address its complexities. The new dataset and code are available at \url{https://github.com/DAMO-NLP-SG/SOUL}. | [
"Deng, Yue",
"Zhang, Wenxuan",
"Pan, Sinno",
"Bing, Lidong"
] | SOUL: Towards Sentiment and Opinion Understanding of Language | emnlp-main.538 | 2310.17924 | [
"https://github.com/damo-nlp-sg/soul"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.539.bib | https://aclanthology.org/2023.emnlp-main.539/ | @inproceedings{goanta-etal-2023-regulation,
title = "Regulation and {NLP} ({R}eg{NLP}): Taming Large Language Models",
author = "Goanta, Catalina and
Aletras, Nikolaos and
Chalkidis, Ilias and
Ranchord{\'a}s, Sofia and
Spanakis, Gerasimos",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.539",
doi = "10.18653/v1/2023.emnlp-main.539",
pages = "8712--8724",
abstract = "The scientific innovation in Natural Language Processing (NLP) and more broadly in artificial intelligence (AI) is at its fastest pace to date. As large language models (LLMs) unleash a new era of automation, important debates emerge regarding the benefits and risks of their development, deployment and use. Currently, these debates have been dominated by often polarized narratives mainly led by the AI Safety and AI Ethics movements. This polarization, often amplified by social media, is swaying political agendas on AI regulation and governance and posing issues of regulatory capture. Capture occurs when the regulator advances the interests of the industry it is supposed to regulate, or of special interest groups rather than pursuing the general public interest. Meanwhile in NLP research, attention has been increasingly paid to the discussion of regulating risks and harms. This often happens without systematic methodologies or sufficient rooting in the disciplines that inspire an extended scope of NLP research, jeopardizing the scientific integrity of these endeavors. Regulation studies are a rich source of knowledge on how to systematically deal with risk and uncertainty, as well as with scientific evidence, to evaluate and compare regulatory options. This resource has largely remained untapped so far. In this paper, we argue how NLP research on these topics can benefit from proximity to regulatory studies and adjacent fields. We do so by discussing basic tenets of regulation, and risk and uncertainty, and by highlighting the shortcomings of current NLP discussions dealing with risk assessment. Finally, we advocate for the development of a new multidisciplinary research space on regulation and NLP (RegNLP), focused on connecting scientific knowledge to regulatory processes based on systematic methodologies.",
}
| The scientific innovation in Natural Language Processing (NLP) and more broadly in artificial intelligence (AI) is at its fastest pace to date. As large language models (LLMs) unleash a new era of automation, important debates emerge regarding the benefits and risks of their development, deployment and use. Currently, these debates have been dominated by often polarized narratives mainly led by the AI Safety and AI Ethics movements. This polarization, often amplified by social media, is swaying political agendas on AI regulation and governance and posing issues of regulatory capture. Capture occurs when the regulator advances the interests of the industry it is supposed to regulate, or of special interest groups rather than pursuing the general public interest. Meanwhile, in NLP research, attention has been increasingly paid to the discussion of regulating risks and harms. This often happens without systematic methodologies or sufficient rooting in the disciplines that inspire an extended scope of NLP research, jeopardizing the scientific integrity of these endeavors. Regulation studies are a rich source of knowledge on how to systematically deal with risk and uncertainty, as well as with scientific evidence, to evaluate and compare regulatory options. This resource has largely remained untapped so far. In this paper, we argue that NLP research on these topics can benefit from proximity to regulatory studies and adjacent fields. We do so by discussing basic tenets of regulation, and risk and uncertainty, and by highlighting the shortcomings of current NLP discussions dealing with risk assessment. Finally, we advocate for the development of a new multidisciplinary research space on regulation and NLP (RegNLP), focused on connecting scientific knowledge to regulatory processes based on systematic methodologies. | [
"Goanta, Catalina",
"Aletras, Nikolaos",
"Chalkidis, Ilias",
"Ranchord{\\'a}s, Sofia",
"Spanakis, Gerasimos"
] | Regulation and NLP (RegNLP): Taming Large Language Models | emnlp-main.539 | 2310.05553 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.540.bib | https://aclanthology.org/2023.emnlp-main.540/ | @inproceedings{he-etal-2023-medeval,
title = "{M}ed{E}val: A Multi-Level, Multi-Task, and Multi-Domain Medical Benchmark for Language Model Evaluation",
author = "He, Zexue and
Wang, Yu and
Yan, An and
Liu, Yao and
Chang, Eric and
Gentili, Amilcare and
McAuley, Julian and
Hsu, Chun-Nan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.540",
doi = "10.18653/v1/2023.emnlp-main.540",
pages = "8725--8744",
abstract = "Curated datasets for healthcare are often limited due to the need of human annotations from experts. In this paper, we present MedEval, a multi-level, multi-task, and multi-domain medical benchmark to facilitate the development of language models for healthcare. MedEval is comprehensive and consists of data from several healthcare systems and spans 35 human body regions from 8 examination modalities. With 22,779 collected sentences and 21,228 reports, we provide expert annotations at multiple levels, offering a granular potential usage of the data and supporting a wide range of tasks. Moreover, we systematically evaluated 10 generic and domain-specific language models under zero-shot and finetuning settings, from domain-adapted baselines in healthcare to general-purposed state-of-the-art large language models (e.g., ChatGPT). Our evaluations reveal varying effectiveness of the two categories of language models across different tasks, from which we notice the importance of instruction tuning for few-shot usage of large language models. Our investigation paves the way toward benchmarking language models for healthcare and provides valuable insights into the strengths and limitations of adopting large language models in medical domains, informing their practical applications and future advancements.",
}
| Curated datasets for healthcare are often limited due to the need for human annotations from experts. In this paper, we present MedEval, a multi-level, multi-task, and multi-domain medical benchmark to facilitate the development of language models for healthcare. MedEval is comprehensive and consists of data from several healthcare systems and spans 35 human body regions from 8 examination modalities. With 22,779 collected sentences and 21,228 reports, we provide expert annotations at multiple levels, offering a granular potential usage of the data and supporting a wide range of tasks. Moreover, we systematically evaluated 10 generic and domain-specific language models under zero-shot and finetuning settings, from domain-adapted baselines in healthcare to general-purpose state-of-the-art large language models (e.g., ChatGPT). Our evaluations reveal varying effectiveness of the two categories of language models across different tasks, from which we notice the importance of instruction tuning for few-shot usage of large language models. Our investigation paves the way toward benchmarking language models for healthcare and provides valuable insights into the strengths and limitations of adopting large language models in medical domains, informing their practical applications and future advancements. | [
"He, Zexue",
"Wang, Yu",
"Yan, An",
"Liu, Yao",
"Chang, Eric",
"Gentili, Amilcare",
"McAuley, Julian",
"Hsu, Chun-Nan"
] | MedEval: A Multi-Level, Multi-Task, and Multi-Domain Medical Benchmark for Language Model Evaluation | emnlp-main.540 | 2310.14088 | [
""
] | https://huggingface.co/papers/2310.14088 | 1 | 1 | 0 | 8 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.541.bib | https://aclanthology.org/2023.emnlp-main.541/ | @inproceedings{baumann-etal-2023-seeing,
title = "Seeing through the mess: evolutionary dynamics of lexical polysemy",
author = "Baumann, Andreas and
Stephan, Andreas and
Roth, Benjamin",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.541",
doi = "10.18653/v1/2023.emnlp-main.541",
pages = "8745--8762",
abstract = "Evidently, words can have multiple senses. For example, the word mess refers to a place to have food or to a confusing situation. How exactly multiple senses emerge is less clear. In this work, we propose and analyze a mathematical model of the evolution of lexical meaning to investigate mechanisms leading to polysemy. This model features factors that have been discussed to impact the semantic processing and transmission of words: word frequency, non-conformism, and semantic discriminability. We formally derive conditions under which a sense of a word tends to diversify itself into multiple senses that coexist stably. The model predicts that diversification is promoted by low frequency, a strong bias for non-conformist usage, and high semantic discriminability. We statistically validate these predictions with historical language data covering semantic developments of a set of English words. Multiple alternative measures are used to operationalize each variable involved, and we confirm the predicted tendencies for twelve combinations of measures.",
}
| Evidently, words can have multiple senses. For example, the word mess refers to a place to have food or to a confusing situation. How exactly multiple senses emerge is less clear. In this work, we propose and analyze a mathematical model of the evolution of lexical meaning to investigate mechanisms leading to polysemy. This model features factors that have been discussed to impact the semantic processing and transmission of words: word frequency, non-conformism, and semantic discriminability. We formally derive conditions under which a sense of a word tends to diversify itself into multiple senses that coexist stably. The model predicts that diversification is promoted by low frequency, a strong bias for non-conformist usage, and high semantic discriminability. We statistically validate these predictions with historical language data covering semantic developments of a set of English words. Multiple alternative measures are used to operationalize each variable involved, and we confirm the predicted tendencies for twelve combinations of measures. | [
"Baumann, Andreas",
"Stephan, Andreas",
"Roth, Benjamin"
] | Seeing through the mess: evolutionary dynamics of lexical polysemy | emnlp-main.541 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.542.bib | https://aclanthology.org/2023.emnlp-main.542/ | @inproceedings{cheng-etal-2023-embedded,
title = "Are Embedded Potatoes Still Vegetables? On the Limitations of {W}ord{N}et Embeddings for Lexical Semantics",
author = "Cheng, Xuyou and
Schlichtkrull, Michael and
Emerson, Guy",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.542",
doi = "10.18653/v1/2023.emnlp-main.542",
pages = "8763--8775",
abstract = "Knowledge Base Embedding (KBE) models have been widely used to encode structured information from knowledge bases, including WordNet. However, the existing literature has predominantly focused on link prediction as the evaluation task, often neglecting exploration of the models{'} semantic capabilities. In this paper, we investigate the potential disconnect between the performance of KBE models of WordNet on link prediction and their ability to encode semantic information, highlighting the limitations of current evaluation protocols. Our findings reveal that some top-performing KBE models on the WN18RR benchmark exhibit subpar results on two semantic tasks and two downstream tasks. These results demonstrate the inadequacy of link prediction benchmarks for evaluating the semantic capabilities of KBE models, suggesting the need for a more targeted assessment approach.",
}
| Knowledge Base Embedding (KBE) models have been widely used to encode structured information from knowledge bases, including WordNet. However, the existing literature has predominantly focused on link prediction as the evaluation task, often neglecting exploration of the models{'} semantic capabilities. In this paper, we investigate the potential disconnect between the performance of KBE models of WordNet on link prediction and their ability to encode semantic information, highlighting the limitations of current evaluation protocols. Our findings reveal that some top-performing KBE models on the WN18RR benchmark exhibit subpar results on two semantic tasks and two downstream tasks. These results demonstrate the inadequacy of link prediction benchmarks for evaluating the semantic capabilities of KBE models, suggesting the need for a more targeted assessment approach. | [
"Cheng, Xuyou",
"Schlichtkrull, Michael",
"Emerson, Guy"
] | Are Embedded Potatoes Still Vegetables? On the Limitations of WordNet Embeddings for Lexical Semantics | emnlp-main.542 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.543.bib | https://aclanthology.org/2023.emnlp-main.543/ | @inproceedings{sottana-etal-2023-evaluation,
title = "Evaluation Metrics in the Era of {GPT}-4: Reliably Evaluating Large Language Models on Sequence to Sequence Tasks",
author = "Sottana, Andrea and
Liang, Bin and
Zou, Kai and
Yuan, Zheng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.543",
doi = "10.18653/v1/2023.emnlp-main.543",
pages = "8776--8788",
abstract = "Large Language Models (LLMs) evaluation is a patchy and inconsistent landscape, and it is becoming clear that the quality of automatic evaluation metrics is not keeping up with the pace of development of generative models. We aim to improve the understanding of current models{'} performance by providing a preliminary and hybrid evaluation on a range of open and closed-source generative LLMs on three NLP benchmarks: text summarisation, text simplification and grammatical error correction (GEC), using both automatic and human evaluation. We also explore the potential of the recently released GPT-4 to act as an evaluator. We find that ChatGPT consistently outperforms many other popular models according to human reviewers on the majority of metrics, while scoring much more poorly when using classic automatic evaluation metrics. We also find that human reviewers rate the gold reference as much worse than the best models{'} outputs, indicating the poor quality of many popular benchmarks. Finally, we find that GPT-4 is capable of ranking models{'} outputs in a way which aligns reasonably closely to human judgement despite task-specific variations, with a lower alignment in the GEC task.",
}
| The evaluation of Large Language Models (LLMs) is a patchy and inconsistent landscape, and it is becoming clear that the quality of automatic evaluation metrics is not keeping up with the pace of development of generative models. We aim to improve the understanding of current models{'} performance by providing a preliminary and hybrid evaluation on a range of open and closed-source generative LLMs on three NLP benchmarks: text summarisation, text simplification and grammatical error correction (GEC), using both automatic and human evaluation. We also explore the potential of the recently released GPT-4 to act as an evaluator. We find that ChatGPT consistently outperforms many other popular models according to human reviewers on the majority of metrics, while scoring much more poorly when using classic automatic evaluation metrics. We also find that human reviewers rate the gold reference as much worse than the best models{'} outputs, indicating the poor quality of many popular benchmarks. Finally, we find that GPT-4 is capable of ranking models{'} outputs in a way which aligns reasonably closely with human judgement despite task-specific variations, with a lower alignment in the GEC task. | [
"Sottana, Andrea",
"Liang, Bin",
"Zou, Kai",
"Yuan, Zheng"
] | Evaluation Metrics in the Era of GPT-4: Reliably Evaluating Large Language Models on Sequence to Sequence Tasks | emnlp-main.543 | 2310.13800 | [
"https://github.com/protagolabs/seq2seq_llm_evaluation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.544.bib | https://aclanthology.org/2023.emnlp-main.544/ | @inproceedings{wagner-etal-2023-event,
title = "Event-Location Tracking in Narratives: A Case Study on Holocaust Testimonies",
author = "Wagner, Eitan and
Keydar, Renana and
Abend, Omri",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.544",
doi = "10.18653/v1/2023.emnlp-main.544",
pages = "8789--8805",
abstract = "This work focuses on the spatial dimension of narrative understanding and presents the task of event-location tracking in narrative texts. The task intends to extract the sequence of locations where the narrative is set through its progression. We present several architectures for the task that seeks to model the global structure of the sequence, with varying levels of context awareness. We compare these methods to several baselines, including the use of strong methods applied over narrow contexts. We also develop methods for the generation of location embeddings and show that learning to predict a sequence of continuous embeddings, rather than a string of locations, is advantageous in terms of performance. We focus on the test case of Holocaust survivor testimonies. We argue for the moral and historical importance of studying this dataset in computational means and that it provides a unique case of a large set of narratives with a relatively restricted set of location trajectories. Our results show that models that are aware of the larger context of the narrative can generate more accurate location chains. We further corroborate the effectiveness of our methods by showing similar trends from experiments on an additional domain.",
}
| This work focuses on the spatial dimension of narrative understanding and presents the task of event-location tracking in narrative texts. The task intends to extract the sequence of locations where the narrative is set through its progression. We present several architectures for the task that seek to model the global structure of the sequence, with varying levels of context awareness. We compare these methods to several baselines, including the use of strong methods applied over narrow contexts. We also develop methods for the generation of location embeddings and show that learning to predict a sequence of continuous embeddings, rather than a string of locations, is advantageous in terms of performance. We focus on the test case of Holocaust survivor testimonies. We argue for the moral and historical importance of studying this dataset by computational means and that it provides a unique case of a large set of narratives with a relatively restricted set of location trajectories. Our results show that models that are aware of the larger context of the narrative can generate more accurate location chains. We further corroborate the effectiveness of our methods by showing similar trends from experiments on an additional domain. | [
"Wagner, Eitan",
"Keydar, Renana",
"Abend, Omri"
] | Event-Location Tracking in Narratives: A Case Study on Holocaust Testimonies | emnlp-main.544 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
||
https://aclanthology.org/2023.emnlp-main.545.bib | https://aclanthology.org/2023.emnlp-main.545/ | @inproceedings{hwang-etal-2023-dialogizer,
title = "Dialogizer: Context-aware Conversational-{QA} Dataset Generation from Textual Sources",
author = "Hwang, Yerin and
Kim, Yongil and
Bae, Hyunkyung and
Lee, Hwanhee and
Bang, Jeesoo and
Jung, Kyomin",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.545",
doi = "10.18653/v1/2023.emnlp-main.545",
pages = "8806--8828",
abstract = "To address the data scarcity issue in Conversational question answering (ConvQA), a dialog inpainting method, which utilizes documents to generate ConvQA datasets, has been proposed. However, the original dialog inpainting model is trained solely on the dialog reconstruction task, resulting in the generation of questions with low contextual relevance due to insufficient learning of question-answer alignment. To overcome this limitation, we propose a novel framework called Dialogizer, which has the capability to automatically generate ConvQA datasets with high contextual relevance from textual sources. The framework incorporates two training tasks: question-answer matching (QAM) and topic-aware dialog generation (TDG). Moreover, re-ranking is conducted during the inference phase based on the contextual relevance of the generated questions. Using our framework, we produce four ConvQA datasets by utilizing documents from multiple domains as the primary source. Through automatic evaluation using diverse metrics, as well as human evaluation, we validate that our proposed framework exhibits the ability to generate datasets of higher quality compared to the baseline dialog inpainting model.",
}
| To address the data scarcity issue in Conversational question answering (ConvQA), a dialog inpainting method, which utilizes documents to generate ConvQA datasets, has been proposed. However, the original dialog inpainting model is trained solely on the dialog reconstruction task, resulting in the generation of questions with low contextual relevance due to insufficient learning of question-answer alignment. To overcome this limitation, we propose a novel framework called Dialogizer, which has the capability to automatically generate ConvQA datasets with high contextual relevance from textual sources. The framework incorporates two training tasks: question-answer matching (QAM) and topic-aware dialog generation (TDG). Moreover, re-ranking is conducted during the inference phase based on the contextual relevance of the generated questions. Using our framework, we produce four ConvQA datasets by utilizing documents from multiple domains as the primary source. Through automatic evaluation using diverse metrics, as well as human evaluation, we validate that our proposed framework exhibits the ability to generate datasets of higher quality compared to the baseline dialog inpainting model. | [
"Hwang, Yerin",
"Kim, Yongil",
"Bae, Hyunkyung",
"Lee, Hwanhee",
"Bang, Jeesoo",
"Jung, Kyomin"
] | Dialogizer: Context-aware Conversational-QA Dataset Generation from Textual Sources | emnlp-main.545 | 2311.07589 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.546.bib | https://aclanthology.org/2023.emnlp-main.546/ | @inproceedings{feng-2023-learning,
title = "Learning to Predict Task Transferability via Soft Prompt",
author = "Feng, Lingyun",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.546",
doi = "10.18653/v1/2023.emnlp-main.546",
pages = "8829--8844",
abstract = "Fine-tuning pretrained language models on helpful intermediate tasks often greatly improves the performance of target tasks. However, how to efficiently find the source tasks that can successfully transfer still remains under-explored. In this work, we propose to learn an affinity scoring function to predict transferability between tasks. Specifically, we conduct prompt tuning and regard soft prompts as task embeddings that summarize task-specific information. Then we randomly sample task pairs to train an affinity scoring function. The goal is to predict the transfer gain (i.e., affinity) between a task pair, by conditioning on their task embeddings. Once the scoring function is trained, given a novel target task, we use it to predict the most transferable source tasks, without a brute-force search for all possible source-target pairs. Experimental results across 50 tasks show that our method efficiently identifies beneficial tasks for transfer learning.",
}
| Fine-tuning pretrained language models on helpful intermediate tasks often greatly improves the performance of target tasks. However, how to efficiently find the source tasks that can successfully transfer still remains under-explored. In this work, we propose to learn an affinity scoring function to predict transferability between tasks. Specifically, we conduct prompt tuning and regard soft prompts as task embeddings that summarize task-specific information. Then we randomly sample task pairs to train an affinity scoring function. The goal is to predict the transfer gain (i.e., affinity) between a task pair, by conditioning on their task embeddings. Once the scoring function is trained, given a novel target task, we use it to predict the most transferable source tasks, without a brute-force search for all possible source-target pairs. Experimental results across 50 tasks show that our method efficiently identifies beneficial tasks for transfer learning. | [
"Feng, Lingyun"
] | Learning to Predict Task Transferability via Soft Prompt | emnlp-main.546 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
||
https://aclanthology.org/2023.emnlp-main.547.bib | https://aclanthology.org/2023.emnlp-main.547/ | @inproceedings{zhu-etal-2023-chain,
title = "Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering",
author = "Zhu, Wang and
Thomason, Jesse and
Jia, Robin",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.547",
doi = "10.18653/v1/2023.emnlp-main.547",
pages = "8845--8860",
abstract = "We propose Chain-of-Questions, a framework that trains a model to robustly answer multistep questions by generating and answering sub-questions. We obtain supervision for sub-questions from human-annotated question decomposition meaning representation (QDMR), but QDMR does not include annotated answers to sub-questions. To overcome this technical challenge, we treat sub-answers as latent variables and infer them with a novel dynamic mixture of Hard-EM and MAPO. Chain-of-Questions is effective and robust, greatly outperforming strong neuro-symbolic methods by 9.0 F1 on a DROP contrast set and GPT-3.5 by 24.3 F1 on a HotpotQA adversarial set.",
}
| We propose Chain-of-Questions, a framework that trains a model to robustly answer multistep questions by generating and answering sub-questions. We obtain supervision for sub-questions from human-annotated question decomposition meaning representation (QDMR), but QDMR does not include annotated answers to sub-questions. To overcome this technical challenge, we treat sub-answers as latent variables and infer them with a novel dynamic mixture of Hard-EM and MAPO. Chain-of-Questions is effective and robust, greatly outperforming strong neuro-symbolic methods by 9.0 F1 on a DROP contrast set and GPT-3.5 by 24.3 F1 on a HotpotQA adversarial set. | [
"Zhu, Wang",
"Thomason, Jesse",
"Jia, Robin"
] | Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering | emnlp-main.547 | 2305.14901 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.548.bib | https://aclanthology.org/2023.emnlp-main.548/ | @inproceedings{zhu-etal-2023-mirror,
title = "Mirror: A Universal Framework for Various Information Extraction Tasks",
author = "Zhu, Tong and
Ren, Junfei and
Yu, Zijian and
Wu, Mengsong and
Zhang, Guoliang and
Qu, Xiaoye and
Chen, Wenliang and
Wang, Zhefeng and
Huai, Baoxing and
Zhang, Min",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.548",
doi = "10.18653/v1/2023.emnlp-main.548",
pages = "8861--8876",
abstract = "Sharing knowledge between information extraction tasks has always been a challenge due to the diverse data formats and task variations. Meanwhile, this divergence leads to information waste and increases difficulties in building complex applications in real scenarios. Recent studies often formulate IE tasks as a triplet extraction problem. However, such a paradigm does not support multi-span and n-ary extraction, leading to weak versatility. To this end, we reorganize IE problems into unified multi-slot tuples and propose a universal framework for various IE tasks, namely Mirror. Specifically, we recast existing IE tasks as a multi-span cyclic graph extraction problem and devise a non-autoregressive graph decoding algorithm to extract all spans in a single step. It is worth noting that this graph structure is incredibly versatile, and it supports not only complex IE tasks, but also machine reading comprehension and classification tasks. We manually construct a corpus containing 57 datasets for model pretraining, and conduct experiments on 30 datasets across 8 downstream tasks. The experimental results demonstrate that our model has decent compatibility and outperforms or reaches competitive performance with SOTA systems under few-shot and zero-shot settings. The code, model weights, and pretraining corpus are available at https://github.com/Spico197/Mirror .",
}
| Sharing knowledge between information extraction tasks has always been a challenge due to the diverse data formats and task variations. Meanwhile, this divergence leads to information waste and increases difficulties in building complex applications in real scenarios. Recent studies often formulate IE tasks as a triplet extraction problem. However, such a paradigm does not support multi-span and n-ary extraction, leading to weak versatility. To this end, we reorganize IE problems into unified multi-slot tuples and propose a universal framework for various IE tasks, namely Mirror. Specifically, we recast existing IE tasks as a multi-span cyclic graph extraction problem and devise a non-autoregressive graph decoding algorithm to extract all spans in a single step. It is worth noting that this graph structure is incredibly versatile, and it supports not only complex IE tasks, but also machine reading comprehension and classification tasks. We manually construct a corpus containing 57 datasets for model pretraining, and conduct experiments on 30 datasets across 8 downstream tasks. The experimental results demonstrate that our model has decent compatibility and outperforms or reaches competitive performance with SOTA systems under few-shot and zero-shot settings. The code, model weights, and pretraining corpus are available at https://github.com/Spico197/Mirror . | [
"Zhu, Tong",
"Ren, Junfei",
"Yu, Zijian",
"Wu, Mengsong",
"Zhang, Guoliang",
"Qu, Xiaoye",
"Chen, Wenliang",
"Wang, Zhefeng",
"Huai, Baoxing",
"Zhang, Min"
] | Mirror: A Universal Framework for Various Information Extraction Tasks | emnlp-main.548 | 2311.05419 | [
"https://github.com/Spico197/Mirror"
] | https://huggingface.co/papers/2311.05419 | 1 | 0 | 0 | 10 | [
"Spico/mirror-chinese-mrcqa-alpha"
] | [] | [
"Spico/Mirror"
] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.549.bib | https://aclanthology.org/2023.emnlp-main.549/ | @inproceedings{handa-etal-2023-mistakes,
title = "{``}Mistakes Help Us Grow{''}: Facilitating and Evaluating Growth Mindset Supportive Language in Classrooms",
author = "Handa, Kunal and
Clapper, Margarett and
Boyle, Jessica and
Wang, Rose and
Yang, Diyi and
Yeager, David and
Demszky, Dorottya",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.549",
doi = "10.18653/v1/2023.emnlp-main.549",
pages = "8877--8897",
abstract = "Teachers{'} growth mindset supportive language (GMSL){---}rhetoric emphasizing that one{'}s skills can be improved over time{---}has been shown to significantly reduce disparities in academic achievement and enhance students{'} learning outcomes. Although teachers espouse growth mindset principles, most find it difficult to adopt GMSL in their practice due the lack of effective coaching in this area. We explore whether large language models (LLMs) can provide automated, personalized coaching to support teachers{'} use of GMSL. We establish an effective coaching tool to reframe unsupportive utterances to GMSL by developing (i) a parallel dataset containing GMSL-trained teacher reframings of unsupportive statements with an accompanying annotation guide, (ii) a GMSL prompt framework to revise teachers{'} unsupportive language, and (iii) an evaluation framework grounded in psychological theory for evaluating GMSL with the help of students and teachers. We conduct a large-scale evaluation involving 174 teachers and 1,006 students, finding that both teachers and students perceive GMSL-trained teacher and model reframings as more effective in fostering a growth mindset and promoting challenge-seeking behavior, among other benefits. We also find that model-generated reframings outperform those from the GMSL-trained teachers. These results show promise for harnessing LLMs to provide automated GMSL feedback for teachers and, more broadly, LLMs{'} potentiality for supporting students{'} learning in the classroom. Our findings also demonstrate the benefit of large-scale human evaluations when applying LLMs in educational domains.",
}
| Teachers{'} growth mindset supportive language (GMSL){---}rhetoric emphasizing that one{'}s skills can be improved over time{---}has been shown to significantly reduce disparities in academic achievement and enhance students{'} learning outcomes. Although teachers espouse growth mindset principles, most find it difficult to adopt GMSL in their practice due to the lack of effective coaching in this area. We explore whether large language models (LLMs) can provide automated, personalized coaching to support teachers{'} use of GMSL. We establish an effective coaching tool to reframe unsupportive utterances to GMSL by developing (i) a parallel dataset containing GMSL-trained teacher reframings of unsupportive statements with an accompanying annotation guide, (ii) a GMSL prompt framework to revise teachers{'} unsupportive language, and (iii) an evaluation framework grounded in psychological theory for evaluating GMSL with the help of students and teachers. We conduct a large-scale evaluation involving 174 teachers and 1,006 students, finding that both teachers and students perceive GMSL-trained teacher and model reframings as more effective in fostering a growth mindset and promoting challenge-seeking behavior, among other benefits. We also find that model-generated reframings outperform those from the GMSL-trained teachers. These results show promise for harnessing LLMs to provide automated GMSL feedback for teachers and, more broadly, LLMs{'} potential for supporting students{'} learning in the classroom. Our findings also demonstrate the benefit of large-scale human evaluations when applying LLMs in educational domains. | [
"H",
"a, Kunal",
"Clapper, Margarett",
"Boyle, Jessica",
"Wang, Rose",
"Yang, Diyi",
"Yeager, David",
"Demszky, Dorottya"
] | “Mistakes Help Us Grow”: Facilitating and Evaluating Growth Mindset Supportive Language in Classrooms | emnlp-main.549 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.550.bib | https://aclanthology.org/2023.emnlp-main.550/ | @inproceedings{cao-etal-2023-unnatural,
title = "Unnatural Error Correction: {GPT}-4 Can Almost Perfectly Handle Unnatural Scrambled Text",
author = "Cao, Qi and
Kojima, Takeshi and
Matsuo, Yutaka and
Iwasawa, Yusuke",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.550",
doi = "10.18653/v1/2023.emnlp-main.550",
pages = "8898--8913",
abstract = "While Large Language Models (LLMs) have achieved remarkable performance in many tasks, much about their inner workings remains unclear. In this study, we present novel experimental insights into the resilience of LLMs, particularly GPT-4, when subjected to extensive character-level permutations. To investigate this, we first propose the Scrambled Bench, a suite designed to measure the capacity of LLMs to handle scrambled input, in terms of both recovering scrambled sentences and answering questions given scrambled context. The experimental results indicate that multiple advanced LLMs demonstrate the capability akin to typoglycemia, a phenomenon where humans can understand the meaning of words even when the letters within those words are scrambled, as long as the first and last letters remain in place. More surprisingly, we found that only GPT-4 nearly flawlessly processes inputs with unnatural errors, a task that poses significant challenges for other LLMs and often even for humans. Specifically, GPT-4 can almost perfectly reconstruct the original sentences from scrambled ones, decreasing the edit distance by 95{\%}, even when all letters within each word are entirely scrambled. It is counter-intuitive that LLMs can exhibit such resilience despite severe disruption to input tokenization caused by scrambled text.",
}
| While Large Language Models (LLMs) have achieved remarkable performance in many tasks, much about their inner workings remains unclear. In this study, we present novel experimental insights into the resilience of LLMs, particularly GPT-4, when subjected to extensive character-level permutations. To investigate this, we first propose the Scrambled Bench, a suite designed to measure the capacity of LLMs to handle scrambled input, in terms of both recovering scrambled sentences and answering questions given scrambled context. The experimental results indicate that multiple advanced LLMs demonstrate the capability akin to typoglycemia, a phenomenon where humans can understand the meaning of words even when the letters within those words are scrambled, as long as the first and last letters remain in place. More surprisingly, we found that only GPT-4 nearly flawlessly processes inputs with unnatural errors, a task that poses significant challenges for other LLMs and often even for humans. Specifically, GPT-4 can almost perfectly reconstruct the original sentences from scrambled ones, decreasing the edit distance by 95{\%}, even when all letters within each word are entirely scrambled. It is counter-intuitive that LLMs can exhibit such resilience despite severe disruption to input tokenization caused by scrambled text. | [
"Cao, Qi",
"Kojima, Takeshi",
"Matsuo, Yutaka",
"Iwasawa, Yusuke"
] | Unnatural Error Correction: GPT-4 Can Almost Perfectly Handle Unnatural Scrambled Text | emnlp-main.550 | 2311.18805 | [
"https://github.com/ccqq77/unnatural-error-correction"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.551.bib | https://aclanthology.org/2023.emnlp-main.551/ | @inproceedings{qiu-etal-2023-detecting,
title = "Detecting and Mitigating Hallucinations in Multilingual Summarisation",
author = "Qiu, Yifu and
Ziser, Yftah and
Korhonen, Anna and
Ponti, Edoardo and
Cohen, Shay",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.551",
doi = "10.18653/v1/2023.emnlp-main.551",
pages = "8914--8932",
abstract = "Hallucinations pose a significant challenge to the reliability of neural models for abstractive summarisation. While automatically generated summaries may be fluent, they often lack faithfulness to the original document. This issue becomes even more pronounced in low-resource languages, where summarisation requires cross-lingual transfer. With the existing faithful metrics focusing on English, even measuring the extent of this phenomenon in cross-lingual settings is hard. To address this, we first develop a novel metric, mFACT, evaluating the faithfulness of non-English summaries, leveraging translation-based transfer from multiple English faithfulness metrics. Through extensive experiments in multiple languages, we demonstrate that mFACT is best suited to detect hallucinations compared to alternative metrics. With mFACT, we assess a broad range of multilingual large language models, and find that they all tend to hallucinate often in languages different from English. We then propose a simple but effective method to reduce hallucinations in cross-lingual transfer, which weighs the loss of each training example by its faithfulness score. This method drastically increases both performance and faithfulness according to both automatic and human evaluation when compared to strong baselines for cross-lingual transfer such as MAD-X. Our code and dataset are available at https://github.com/yfqiu-nlp/mfact-summ.",
}
| Hallucinations pose a significant challenge to the reliability of neural models for abstractive summarisation. While automatically generated summaries may be fluent, they often lack faithfulness to the original document. This issue becomes even more pronounced in low-resource languages, where summarisation requires cross-lingual transfer. With existing faithfulness metrics focusing on English, even measuring the extent of this phenomenon in cross-lingual settings is hard. To address this, we first develop a novel metric, mFACT, evaluating the faithfulness of non-English summaries, leveraging translation-based transfer from multiple English faithfulness metrics. Through extensive experiments in multiple languages, we demonstrate that mFACT is best suited to detect hallucinations compared to alternative metrics. With mFACT, we assess a broad range of multilingual large language models, and find that they all tend to hallucinate often in languages different from English. We then propose a simple but effective method to reduce hallucinations in cross-lingual transfer, which weighs the loss of each training example by its faithfulness score. This method drastically increases both performance and faithfulness according to both automatic and human evaluation when compared to strong baselines for cross-lingual transfer such as MAD-X. Our code and dataset are available at https://github.com/yfqiu-nlp/mfact-summ. | [
"Qiu, Yifu",
"Ziser, Yftah",
"Korhonen, Anna",
"Ponti, Edoardo",
"Cohen, Shay"
] | Detecting and Mitigating Hallucinations in Multilingual Summarisation | emnlp-main.551 | 2305.13632 | [
"https://github.com/yfqiu-nlp/mfact-summ"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.552.bib | https://aclanthology.org/2023.emnlp-main.552/ | @inproceedings{kodner-etal-2023-exploring,
title = "Exploring Linguistic Probes for Morphological Generalization",
author = "Kodner, Jordan and
Khalifa, Salam and
Payne, Sarah Ruth Brogden",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.552",
doi = "10.18653/v1/2023.emnlp-main.552",
pages = "8933--8941",
abstract = "Modern work on the cross-linguistic computational modeling of morphological inflection has typically employed language-independent data splitting algorithms. In this paper, we supplement that approach with language-specific probes designed to test aspects of morphological generalization. Testing these probes on three morphologically distinct languages, English, Spanish, and Swahili, we find evidence that three leading morphological inflection systems employ distinct generalization strategies over conjugational classes and feature sets on both orthographic and phonologically transcribed inputs.",
}
| Modern work on the cross-linguistic computational modeling of morphological inflection has typically employed language-independent data splitting algorithms. In this paper, we supplement that approach with language-specific probes designed to test aspects of morphological generalization. Testing these probes on three morphologically distinct languages, English, Spanish, and Swahili, we find evidence that three leading morphological inflection systems employ distinct generalization strategies over conjugational classes and feature sets on both orthographic and phonologically transcribed inputs. | [
"Kodner, Jordan",
"Khalifa, Salam",
"Payne, Sarah Ruth Brogden"
] | Exploring Linguistic Probes for Morphological Generalization | emnlp-main.552 | 2310.13686 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
|
https://aclanthology.org/2023.emnlp-main.553.bib | https://aclanthology.org/2023.emnlp-main.553/ | @inproceedings{lou-tu-2023-amr,
title = "{AMR} Parsing with Causal Hierarchical Attention and Pointers",
author = "Lou, Chao and
Tu, Kewei",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.553",
doi = "10.18653/v1/2023.emnlp-main.553",
pages = "8942--8955",
abstract = "Translation-based AMR parsers have recently gained popularity due to their simplicity and effectiveness. They predict linearized graphs as free texts, avoiding explicit structure modeling. However, this simplicity neglects structural locality in AMR graphs and introduces unnecessary tokens to represent coreferences. In this paper, we introduce new target forms of AMR parsing and a novel model, CHAP, which is equipped with causal hierarchical attention and the pointer mechanism, enabling the integration of structures into the Transformer decoder. We empirically explore various alternative modeling options. Experiments show that our model outperforms baseline models on four out of five benchmarks in the setting of no additional data.",
}
| Translation-based AMR parsers have recently gained popularity due to their simplicity and effectiveness. They predict linearized graphs as free texts, avoiding explicit structure modeling. However, this simplicity neglects structural locality in AMR graphs and introduces unnecessary tokens to represent coreferences. In this paper, we introduce new target forms of AMR parsing and a novel model, CHAP, which is equipped with causal hierarchical attention and the pointer mechanism, enabling the integration of structures into the Transformer decoder. We empirically explore various alternative modeling options. Experiments show that our model outperforms baseline models on four out of five benchmarks in the setting of no additional data. | [
"Lou, Chao",
"Tu, Kewei"
] | AMR Parsing with Causal Hierarchical Attention and Pointers | emnlp-main.553 | 2310.11964 | [
"https://github.com/louchao98/chap_amr_parser"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.554.bib | https://aclanthology.org/2023.emnlp-main.554/ | @inproceedings{lin-gu-2023-flats,
title = "{FL}at{S}: Principled Out-of-Distribution Detection with Feature-Based Likelihood Ratio Score",
author = "Lin, Haowei and
Gu, Yuntian",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.554",
doi = "10.18653/v1/2023.emnlp-main.554",
pages = "8956--8963",
abstract = "Detecting out-of-distribution (OOD) instances is crucial for NLP models in practical applications. Although numerous OOD detection methods exist, most of them are empirical. Backed by theoretical analysis, this paper advocates for the measurement of the {``}OOD-ness{''} of a test case $\boldsymbol{x}$ through the \textit{likelihood ratio} between out-distribution $\mathcal P_{\textit{out}}$ and in-distribution $\mathcal P_{\textit{in}}$. We argue that the state-of-the-art (SOTA) feature-based OOD detection methods, such as Maha and KNN, are suboptimal since they only estimate in-distribution density $p_{\textit{in}}(\boldsymbol{x})$. To address this issue, we propose \textbf{FLATS}, a principled solution for OOD detection based on likelihood ratio. Moreover, we demonstrate that FLATS can serve as a general framework capable of enhancing other OOD detection methods by incorporating out-distribution density $p_{\textit{out}}(\boldsymbol{x})$ estimation. Experiments show that FLATS establishes a new SOTA on popular benchmarks.",
}
| Detecting out-of-distribution (OOD) instances is crucial for NLP models in practical applications. Although numerous OOD detection methods exist, most of them are empirical. Backed by theoretical analysis, this paper advocates for the measurement of the {``}OOD-ness{''} of a test case $\boldsymbol{x}$ through the \textit{likelihood ratio} between out-distribution $\mathcal P_{\textit{out}}$ and in-distribution $\mathcal P_{\textit{in}}$. We argue that the state-of-the-art (SOTA) feature-based OOD detection methods, such as Maha and KNN, are suboptimal since they only estimate in-distribution density $p_{\textit{in}}(\boldsymbol{x})$. To address this issue, we propose \textbf{FLATS}, a principled solution for OOD detection based on likelihood ratio. Moreover, we demonstrate that FLATS can serve as a general framework capable of enhancing other OOD detection methods by incorporating out-distribution density $p_{\textit{out}}(\boldsymbol{x})$ estimation. Experiments show that FLATS establishes a new SOTA on popular benchmarks. | [
"Lin, Haowei",
"Gu, Yuntian"
] | FLatS: Principled Out-of-Distribution Detection with Feature-Based Likelihood Ratio Score | emnlp-main.554 | 2310.05083 | [
"https://github.com/linhaowei1/flats"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.555.bib | https://aclanthology.org/2023.emnlp-main.555/ | @inproceedings{zheng-etal-2023-self,
title = "Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks",
author = "Zheng, Haoqi and
Zhong, Qihuang and
Ding, Liang and
Tian, Zhiliang and
Niu, Xin and
Wang, Changjian and
Li, Dongsheng and
Tao, Dacheng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.555",
doi = "10.18653/v1/2023.emnlp-main.555",
pages = "8964--8974",
abstract = "Text classification tasks often encounter few-shot scenarios with limited labeled data, and addressing data scarcity is crucial. Data augmentation with mixup merges sample pairs to generate new pseudos, which can relieve the data deficiency issue in text classification. However, the quality of pseudo-samples generated by mixup exhibits significant variations. Most of the mixup methods fail to consider the varying degree of learning difficulty in different stages of training. And mixup generates new samples with one-hot labels, which encourages the model to produce a high prediction score for the correct class that is much larger than other classes, resulting in the model{'}s over-confidence. In this paper, we propose a self-evolution learning (SE) based mixup approach for data augmentation in text classification, which can generate more adaptive and model-friendly pseudo samples for the model training. SE caters to the growth of the model learning ability and adapts to the ability when generating training samples. To alleviate the model over-confidence, we introduce an instance-specific label smoothing regularization approach, which linearly interpolates the model{'}s output and one-hot labels of the original samples to generate new soft labels for label mixing up. Through experimental analysis, experiments show that our SE brings consistent and significant improvements upon different mixup methods. In-depth analyses demonstrate that SE enhances the model{'}s generalization ability.",
}
| Text classification tasks often encounter few-shot scenarios with limited labeled data, and addressing data scarcity is crucial. Data augmentation with mixup merges sample pairs to generate new pseudo-samples, which can relieve the data deficiency issue in text classification. However, the quality of pseudo-samples generated by mixup exhibits significant variations. Most of the mixup methods fail to consider the varying degree of learning difficulty in different stages of training. Moreover, mixup generates new samples with one-hot labels, which encourages the model to produce a high prediction score for the correct class that is much larger than other classes, resulting in the model{'}s over-confidence. In this paper, we propose a self-evolution learning (SE) based mixup approach for data augmentation in text classification, which can generate more adaptive and model-friendly pseudo-samples for the model training. SE caters to the growth of the model learning ability and adapts to the ability when generating training samples. To alleviate the model over-confidence, we introduce an instance-specific label smoothing regularization approach, which linearly interpolates the model{'}s output and one-hot labels of the original samples to generate new soft labels for label mixing up. Experiments show that our SE brings consistent and significant improvements upon different mixup methods. In-depth analyses demonstrate that SE enhances the model{'}s generalization ability. | [
"Zheng, Haoqi",
"Zhong, Qihuang",
"Ding, Liang",
"Tian, Zhiliang",
"Niu, Xin",
"Wang, Changjian",
"Li, Dongsheng",
"Tao, Dacheng"
] | Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks | emnlp-main.555 | 2305.13547 | [
""
] | https://huggingface.co/papers/2305.13547 | 1 | 1 | 0 | 7 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.556.bib | https://aclanthology.org/2023.emnlp-main.556/ | @inproceedings{chan-etal-2023-ic3,
title = "{IC}3: Image Captioning by Committee Consensus",
author = "Chan, David and
Myers, Austin and
Vijayanarasimhan, Sudheendra and
Ross, David and
Canny, John",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.556",
doi = "10.18653/v1/2023.emnlp-main.556",
pages = "8975--9003",
abstract = "If you ask a human to describe an image, they might do so in a thousand different ways. Traditionally, image captioning models are trained to generate a single {``}best{'} (most like a reference) image caption. Unfortunately, doing so encourages captions that are {``}informationally impoverished,{'} and focus on only a subset of the possible details, while ignoring other potentially useful information in the scene. In this work, we introduce a simple, yet novel, method: {``}Image Captioning by Committee Consensus{'} (IC3), designed to generate a single caption that captures high-level details from several annotator viewpoints. Humans rate captions produced by IC3 at least as helpful as baseline SOTA models more than two thirds of the time, and IC3 can improve the performance of SOTA automated recall systems by up to 84{\%}, outperforming single human-generated reference captions, and indicating significant improvements over SOTA approaches for visual description. Code is available at [https://davidmchan.github.io/caption-by-committee/](https://davidmchan.github.io/caption-by-committee/)",
}
| If you ask a human to describe an image, they might do so in a thousand different ways. Traditionally, image captioning models are trained to generate a single {``}best{''} (most like a reference) image caption. Unfortunately, doing so encourages captions that are {``}informationally impoverished,{''} and focus on only a subset of the possible details, while ignoring other potentially useful information in the scene. In this work, we introduce a simple, yet novel, method: {``}Image Captioning by Committee Consensus{''} (IC3), designed to generate a single caption that captures high-level details from several annotator viewpoints. Humans rate captions produced by IC3 at least as helpful as baseline SOTA models more than two thirds of the time, and IC3 can improve the performance of SOTA automated recall systems by up to 84{\%}, outperforming single human-generated reference captions, and indicating significant improvements over SOTA approaches for visual description. Code is available at [https://davidmchan.github.io/caption-by-committee/](https://davidmchan.github.io/caption-by-committee/) | [
"Chan, David",
"Myers, Austin",
"Vijayanarasimhan, Sudheendra",
"Ross, David",
"Canny, John"
] | IC3: Image Captioning by Committee Consensus | emnlp-main.556 | 2302.01328 | [
"https://github.com/davidmchan/caption-by-committee"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.557.bib | https://aclanthology.org/2023.emnlp-main.557/ | @inproceedings{manakul-etal-2023-selfcheckgpt,
title = "{S}elf{C}heck{GPT}: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models",
author = "Manakul, Potsawee and
Liusie, Adian and
Gales, Mark",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.557",
doi = "10.18653/v1/2023.emnlp-main.557",
pages = "9004--9017",
abstract = "Generative Large Language Models (LLMs) such as GPT-3 are capable of generating highly fluent responses to a wide variety of user prompts. However, LLMs are known to hallucinate facts and make non-factual statements which can undermine trust in their output. Existing fact-checking approaches either require access to the output probability distribution (which may not be available for systems such as ChatGPT) or external databases that are interfaced via separate, often complex, modules. In this work, we propose {``}SelfCheckGPT{''}, a simple sampling-based approach that can be used to fact-check the responses of black-box models in a zero-resource fashion, i.e. without an external database. SelfCheckGPT leverages the simple idea that if an LLM has knowledge of a given concept, sampled responses are likely to be similar and contain consistent facts. However, for hallucinated facts, stochastically sampled responses are likely to diverge and contradict one another. We investigate this approach by using GPT-3 to generate passages about individuals from the WikiBio dataset, and manually annotate the factuality of the generated passages. We demonstrate that SelfCheckGPT can: i) detect non-factual and factual sentences; and ii) rank passages in terms of factuality. We compare our approach to several baselines and show that our approach has considerably higher AUC-PR scores in sentence-level hallucination detection and higher correlation scores in passage-level factuality assessment compared to grey-box methods.",
}
| Generative Large Language Models (LLMs) such as GPT-3 are capable of generating highly fluent responses to a wide variety of user prompts. However, LLMs are known to hallucinate facts and make non-factual statements which can undermine trust in their output. Existing fact-checking approaches either require access to the output probability distribution (which may not be available for systems such as ChatGPT) or external databases that are interfaced via separate, often complex, modules. In this work, we propose {``}SelfCheckGPT{''}, a simple sampling-based approach that can be used to fact-check the responses of black-box models in a zero-resource fashion, i.e. without an external database. SelfCheckGPT leverages the simple idea that if an LLM has knowledge of a given concept, sampled responses are likely to be similar and contain consistent facts. However, for hallucinated facts, stochastically sampled responses are likely to diverge and contradict one another. We investigate this approach by using GPT-3 to generate passages about individuals from the WikiBio dataset, and manually annotate the factuality of the generated passages. We demonstrate that SelfCheckGPT can: i) detect non-factual and factual sentences; and ii) rank passages in terms of factuality. We compare our approach to several baselines and show that our approach has considerably higher AUC-PR scores in sentence-level hallucination detection and higher correlation scores in passage-level factuality assessment compared to grey-box methods. | [
"Manakul, Potsawee",
"Liusie, Adian",
"Gales, Mark"
] | SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models | emnlp-main.557 | 2303.08896 | [
"https://github.com/potsawee/selfcheckgpt"
] | https://huggingface.co/papers/2303.08896 | 1 | 4 | 0 | 3 | [
"potsawee/longformer-large-4096-answerable-squad2",
"bond005/xlm-roberta-xl-hallucination-detector",
"lIlBrother/ko-answerable"
] | [
"potsawee/wiki_bio_gpt3_hallucination"
] | [
"mithril-security/hallucination_detector"
] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.558.bib | https://aclanthology.org/2023.emnlp-main.558/ | @inproceedings{maheshwari-etal-2023-fair,
title = "Fair Without Leveling Down: A New Intersectional Fairness Definition",
author = "Maheshwari, Gaurav and
Bellet, Aur{\'e}lien and
Denis, Pascal and
Keller, Mikaela",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.558",
doi = "10.18653/v1/2023.emnlp-main.558",
pages = "9018--9032",
abstract = "In this work, we consider the problem of intersectional group fairness in the classification setting, where the objective is to learn discrimination-free models in the presence of several intersecting sensitive groups. First, we illustrate various shortcomings of existing fairness measures commonly used to capture intersectional fairness. Then, we propose a new definition called the $\alpha$-Intersectional Fairness, which combines the absolute and the relative performance across sensitive groups and can be seen as a generalization of the notion of differential fairness. We highlight several desirable properties of the proposed definition and analyze its relation to other fairness measures. Finally, we benchmark multiple popular in-processing fair machine learning approaches using our new fairness definition and show that they do not achieve any improvement over a simple baseline. Our results reveal that the increase in fairness measured by previous definitions hides a {``}leveling down{''} effect, i.e., degrading the best performance over groups rather than improving the worst one.",
}
| In this work, we consider the problem of intersectional group fairness in the classification setting, where the objective is to learn discrimination-free models in the presence of several intersecting sensitive groups. First, we illustrate various shortcomings of existing fairness measures commonly used to capture intersectional fairness. Then, we propose a new definition called the $\alpha$-Intersectional Fairness, which combines the absolute and the relative performance across sensitive groups and can be seen as a generalization of the notion of differential fairness. We highlight several desirable properties of the proposed definition and analyze its relation to other fairness measures. Finally, we benchmark multiple popular in-processing fair machine learning approaches using our new fairness definition and show that they do not achieve any improvement over a simple baseline. Our results reveal that the increase in fairness measured by previous definitions hides a {``}leveling down{''} effect, i.e., degrading the best performance over groups rather than improving the worst one. | [
"Maheshwari, Gaurav",
"Bellet, Aur{\\'e}lien",
"Denis, Pascal",
"Keller, Mikaela"
] | Fair Without Leveling Down: A New Intersectional Fairness Definition | emnlp-main.558 | 2305.12495 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.559.bib | https://aclanthology.org/2023.emnlp-main.559/ | @inproceedings{faysse-etal-2023-revisiting,
title = "Revisiting Instruction Fine-tuned Model Evaluation to Guide Industrial Applications",
author = "Faysse, Manuel and
Viaud, Gautier and
Hudelot, C{\'e}line and
Colombo, Pierre",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.559",
doi = "10.18653/v1/2023.emnlp-main.559",
pages = "9033--9048",
abstract = "Instruction Fine-Tuning (IFT) is a powerful paradigm that strengthens the zero-shot capabilities of Large Language Models (LLMs), but in doing so induces new evaluation metric requirements. We show LLM-based metrics to be well adapted to these requirements, and leverage them to conduct an investigation of task-specialization strategies, quantifying the trade-offs that emerge in practical industrial settings. Our findings offer practitioners actionable insights for real-world IFT model deployment.",
}
| Instruction Fine-Tuning (IFT) is a powerful paradigm that strengthens the zero-shot capabilities of Large Language Models (LLMs), but in doing so induces new evaluation metric requirements. We show LLM-based metrics to be well adapted to these requirements, and leverage them to conduct an investigation of task-specialization strategies, quantifying the trade-offs that emerge in practical industrial settings. Our findings offer practitioners actionable insights for real-world IFT model deployment. | [
"Faysse, Manuel",
"Viaud, Gautier",
"Hudelot, C{\\'e}line",
"Colombo, Pierre"
] | Revisiting Instruction Fine-tuned Model Evaluation to Guide Industrial Applications | emnlp-main.559 | 2310.14103 | [
"https://github.com/manuelfay/ifteval"
] | https://huggingface.co/papers/2310.14103 | 1 | 1 | 1 | 4 | [] | [
"manu/IFTEval"
] | [] | 1 | Oral |
https://aclanthology.org/2023.emnlp-main.560.bib | https://aclanthology.org/2023.emnlp-main.560/ | @inproceedings{indurthi-etal-2023-clad,
title = "{CLAD}-{ST}: Contrastive Learning with Adversarial Data for Robust Speech Translation",
author = "Indurthi, Sathish and
Chollampatt, Shamil and
Agrawal, Ravi and
Turchi, Marco",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.560",
doi = "10.18653/v1/2023.emnlp-main.560",
pages = "9049--9056",
abstract = "The cascaded approach continues to be the most popular choice for speech translation (ST). This approach consists of an automatic speech recognition (ASR) model and a machine translation (MT) model that are used in a pipeline to translate speech in one language to text in another language. MT models are often trained on the well-formed text and therefore lack robustness while translating noisy ASR outputs in the cascaded approach, degrading the overall translation quality significantly. We address this robustness problem in downstream MT models by forcing the MT encoder to bring the representations of a noisy input closer to its clean version in the semantic space. This is achieved by introducing a contrastive learning method that leverages adversarial examples in the form of ASR outputs paired with their corresponding human transcripts to optimize the network parameters. In addition, a curriculum learning strategy is then used to stabilize the training by alternating the standard MT log-likelihood loss and the contrastive losses. Our approach achieves significant gains of up to 3 BLEU scores in English-German and English-French speech translation without hurting the translation quality on clean text.",
}
| The cascaded approach continues to be the most popular choice for speech translation (ST). This approach consists of an automatic speech recognition (ASR) model and a machine translation (MT) model that are used in a pipeline to translate speech in one language to text in another language. MT models are often trained on well-formed text and therefore lack robustness while translating noisy ASR outputs in the cascaded approach, degrading the overall translation quality significantly. We address this robustness problem in downstream MT models by forcing the MT encoder to bring the representations of a noisy input closer to its clean version in the semantic space. This is achieved by introducing a contrastive learning method that leverages adversarial examples in the form of ASR outputs paired with their corresponding human transcripts to optimize the network parameters. In addition, a curriculum learning strategy is then used to stabilize the training by alternating the standard MT log-likelihood loss and the contrastive losses. Our approach achieves significant gains of up to 3 BLEU points in English-German and English-French speech translation without hurting the translation quality on clean text. | [
"Indurthi, Sathish",
"Chollampatt, Shamil",
"Agrawal, Ravi",
"Turchi, Marco"
] | CLAD-ST: Contrastive Learning with Adversarial Data for Robust Speech Translation | emnlp-main.560 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.561.bib | https://aclanthology.org/2023.emnlp-main.561/ | @inproceedings{zhao-etal-2023-m2df,
title = "{M}2{DF}: Multi-grained Multi-curriculum Denoising Framework for Multimodal Aspect-based Sentiment Analysis",
author = "Zhao, Fei and
Li, Chunhui and
Wu, Zhen and
Ouyang, Yawen and
Zhang, Jianbing and
Dai, Xinyu",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.561",
doi = "10.18653/v1/2023.emnlp-main.561",
pages = "9057--9070",
abstract = "Multimodal Aspect-based Sentiment Analysis (MABSA) is a fine-grained Sentiment Analysis task, which has attracted growing research interests recently. Existing work mainly utilizes image information to improve the performance of MABSA task. However, most of the studies overestimate the importance of images since there are many noise images unrelated to the text in the dataset, which will have a negative impact on model learning. Although some work attempts to filter low-quality noise images by setting thresholds, relying on thresholds will inevitably filter out a lot of useful image information. Therefore, in this work, we focus on whether the negative impact of noisy images can be reduced without modifying the data. To achieve this goal, we borrow the idea of Curriculum Learning and propose a Multi-grained Multi-curriculum Denoising Framework (M2DF), which can achieve denoising by adjusting the order of training data. Extensive experimental results show that our framework consistently outperforms state-of-the-art work on three sub-tasks of MABSA.",
}
| Multimodal Aspect-based Sentiment Analysis (MABSA) is a fine-grained Sentiment Analysis task, which has attracted growing research interest recently. Existing work mainly utilizes image information to improve the performance of the MABSA task. However, most of the studies overestimate the importance of images since there are many noisy images unrelated to the text in the dataset, which will have a negative impact on model learning. Although some work attempts to filter low-quality noisy images by setting thresholds, relying on thresholds will inevitably filter out a lot of useful image information. Therefore, in this work, we focus on whether the negative impact of noisy images can be reduced without modifying the data. To achieve this goal, we borrow the idea of Curriculum Learning and propose a Multi-grained Multi-curriculum Denoising Framework (M2DF), which can achieve denoising by adjusting the order of training data. Extensive experimental results show that our framework consistently outperforms state-of-the-art work on three sub-tasks of MABSA. | [
"Zhao, Fei",
"Li, Chunhui",
"Wu, Zhen",
"Ouyang, Yawen",
"Zhang, Jianbing",
"Dai, Xinyu"
] | M2DF: Multi-grained Multi-curriculum Denoising Framework for Multimodal Aspect-based Sentiment Analysis | emnlp-main.561 | 2310.14605 | [
"https://github.com/grandchicken/m2df"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.562.bib | https://aclanthology.org/2023.emnlp-main.562/ | @inproceedings{chen-etal-2023-detection,
title = "Detection of Multiple Mental Disorders from Social Media with Two-Stream Psychiatric Experts",
author = "Chen, Siyuan and
Zhang, Zhiling and
Wu, Mengyue and
Zhu, Kenny",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.562",
doi = "10.18653/v1/2023.emnlp-main.562",
pages = "9071--9084",
abstract = "Existing Mental Disease Detection (MDD) research largely studies the detection of a single disorder, overlooking the fact that mental diseases might occur in tandem. Many approaches are not backed by domain knowledge (e.g., psychiatric symptoms) and thus fail to produce interpretable results. To tackle these issues, we propose an MDD framework that is capable of learning the shared clues of all diseases, while also capturing the specificity of each single disease. The two-stream architecture which simultaneously processes text and symptom features can combine the strength of both modalities and offer knowledge-based explainability. Experiments on the detection of 7 diseases show that our model can boost detection performance by more than 10{\%}, especially in relatively rare classes.",
}
| Existing Mental Disease Detection (MDD) research largely studies the detection of a single disorder, overlooking the fact that mental diseases might occur in tandem. Many approaches are not backed by domain knowledge (e.g., psychiatric symptoms) and thus fail to produce interpretable results. To tackle these issues, we propose an MDD framework that is capable of learning the shared clues of all diseases, while also capturing the specificity of each single disease. The two-stream architecture which simultaneously processes text and symptom features can combine the strength of both modalities and offer knowledge-based explainability. Experiments on the detection of 7 diseases show that our model can boost detection performance by more than 10{\%}, especially in relatively rare classes. | [
"Chen, Siyuan",
"Zhang, Zhiling",
"Wu, Mengyue",
"Zhu, Kenny"
] | Detection of Multiple Mental Disorders from Social Media with Two-Stream Psychiatric Experts | emnlp-main.562 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
||
https://aclanthology.org/2023.emnlp-main.563.bib | https://aclanthology.org/2023.emnlp-main.563/ | @inproceedings{alajrami-etal-2023-understanding,
title = "Understanding the Role of Input Token Characters in Language Models: How Does Information Loss Affect Performance?",
author = "Alajrami, Ahmed and
Margatina, Katerina and
Aletras, Nikolaos",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.563",
doi = "10.18653/v1/2023.emnlp-main.563",
pages = "9085--9108",
abstract = "Understanding how and what pre-trained language models (PLMs) learn about language is an open challenge in natural language processing. Previous work has focused on identifying whether they capture semantic and syntactic information, and how the data or the pre-training objective affects their performance. However, to the best of our knowledge, no previous work has specifically examined how information loss in input token characters affects the performance of PLMs. In this study, we address this gap by pre-training language models using small subsets of characters from individual tokens. Surprisingly, we find that pre-training even under extreme settings, i.e. using only one character of each token, the performance retention in standard NLU benchmarks and probing tasks compared to full-token models is high. For instance, a model pre-trained only on single first characters from tokens achieves performance retention of approximately 90{\%} and 77{\%} of the full-token model in SuperGLUE and GLUE tasks, respectively.",
}
| Understanding how and what pre-trained language models (PLMs) learn about language is an open challenge in natural language processing. Previous work has focused on identifying whether they capture semantic and syntactic information, and how the data or the pre-training objective affects their performance. However, to the best of our knowledge, no previous work has specifically examined how information loss in input token characters affects the performance of PLMs. In this study, we address this gap by pre-training language models using small subsets of characters from individual tokens. Surprisingly, we find that even when pre-training under extreme settings, i.e. using only one character of each token, performance retention in standard NLU benchmarks and probing tasks compared to full-token models remains high. For instance, a model pre-trained only on single first characters from tokens achieves performance retention of approximately 90{\%} and 77{\%} of the full-token model in SuperGLUE and GLUE tasks, respectively. | [
"Alajrami, Ahmed",
"Margatina, Katerina",
"Aletras, Nikolaos"
] | Understanding the Role of Input Token Characters in Language Models: How Does Information Loss Affect Performance? | emnlp-main.563 | 2310.17271 | [
""
] | https://huggingface.co/papers/2310.17271 | 0 | 1 | 1 | 3 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.564.bib | https://aclanthology.org/2023.emnlp-main.564/ | @inproceedings{li-etal-2023-improved,
title = "Improved Unsupervised {C}hinese Word Segmentation Using Pre-trained Knowledge and Pseudo-labeling Transfer",
author = "Li, Hsiu-Wen and
Lin, Ying-Jia and
Li, Yi-Ting and
Lin, Chun and
Kao, Hung-Yu",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.564",
doi = "10.18653/v1/2023.emnlp-main.564",
pages = "9109--9118",
abstract = "Unsupervised Chinese word segmentation (UCWS) has made progress by incorporating linguistic knowledge from pre-trained language models using parameter-free probing techniques. However, such approaches suffer from increased training time due to the need for multiple inferences using a pre-trained language model to perform word segmentation. This work introduces a novel way to enhance UCWS performance while maintaining training efficiency. Our proposed method integrates the segmentation signal from the unsupervised segmental language model to the pre-trained BERT classifier under a pseudo-labeling framework. Experimental results demonstrate that our approach achieves state-of-the-art performance on the eight UCWS tasks while considerably reducing the training time compared to previous approaches.",
}
| Unsupervised Chinese word segmentation (UCWS) has made progress by incorporating linguistic knowledge from pre-trained language models using parameter-free probing techniques. However, such approaches suffer from increased training time due to the need for multiple inferences using a pre-trained language model to perform word segmentation. This work introduces a novel way to enhance UCWS performance while maintaining training efficiency. Our proposed method integrates the segmentation signal from the unsupervised segmental language model into the pre-trained BERT classifier under a pseudo-labeling framework. Experimental results demonstrate that our approach achieves state-of-the-art performance on the eight UCWS tasks while considerably reducing the training time compared to previous approaches. | [
"Li, Hsiu-Wen",
"Lin, Ying-Jia",
"Li, Yi-Ting",
"Lin, Chun",
"Kao, Hung-Yu"
] | Improved Unsupervised Chinese Word Segmentation Using Pre-trained Knowledge and Pseudo-labeling Transfer | emnlp-main.564 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
||
https://aclanthology.org/2023.emnlp-main.565.bib | https://aclanthology.org/2023.emnlp-main.565/ | @inproceedings{tang-etal-2023-easyquant,
title = "{E}asy{Q}uant: An Efficient Data-free Quantization Algorithm for {LLM}s",
author = "Tang, Hanlin and
Sun, Yifu and
Wu, Decheng and
Liu, Kai and
Zhu, Jianchen and
Kang, Zhanhui",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.565",
doi = "10.18653/v1/2023.emnlp-main.565",
pages = "9119--9128",
abstract = "Large language models (LLMs) have proven to be very superior to conventional methods in various tasks. However, their expensive computations and high memory requirements are prohibitive for deployment. Model quantization is an effective method for reducing this overhead. The problem is that in most previous works, the quantized model was calibrated using few samples from the training data, which might affect the generalization of the quantized LLMs to unknown cases and tasks. Hence in this work, we explore an important question: Can we design a data-independent quantization method for LLMs to guarantee its generalization performance? In this work, we propose EasyQuant, a training-free and data-independent weight-only quantization algorithm for LLMs. Our observation indicates that two factors: outliers in the weight and quantization ranges, are essential for reducing the quantization error. Therefore, in EasyQuant, we leave the outliers (less than 1{\%}) unchanged and optimize the quantization range to reduce the reconstruction error. With these methods, we surprisingly find that EasyQuant achieves comparable performance to the original model. Since EasyQuant does not depend on any training data, the generalization performance of quantized LLMs is safely guaranteed. Moreover, EasyQuant can be implemented in parallel so that the quantized model could be attained in a few minutes even for LLMs over 100B. To our best knowledge, we are the first work that achieves almost lossless quantization performance for LLMs under a data-independent setting and our algorithm runs over 10 times faster than the data-dependent methods.",
}
| Large language models (LLMs) have proven to be superior to conventional methods in various tasks. However, their expensive computations and high memory requirements are prohibitive for deployment. Model quantization is an effective method for reducing this overhead. The problem is that in most previous works, the quantized model was calibrated using few samples from the training data, which might affect the generalization of the quantized LLMs to unknown cases and tasks. Hence in this work, we explore an important question: Can we design a data-independent quantization method for LLMs to guarantee their generalization performance? In this work, we propose EasyQuant, a training-free and data-independent weight-only quantization algorithm for LLMs. Our observation indicates that two factors, outliers in the weights and the quantization ranges, are essential for reducing the quantization error. Therefore, in EasyQuant, we leave the outliers (less than 1{\%}) unchanged and optimize the quantization range to reduce the reconstruction error. With these methods, we surprisingly find that EasyQuant achieves comparable performance to the original model. Since EasyQuant does not depend on any training data, the generalization performance of quantized LLMs is safely guaranteed. Moreover, EasyQuant can be implemented in parallel so that the quantized model could be attained in a few minutes even for LLMs over 100B. To the best of our knowledge, we are the first work that achieves almost lossless quantization performance for LLMs under a data-independent setting, and our algorithm runs over 10 times faster than the data-dependent methods. | [
"Tang, Hanlin",
"Sun, Yifu",
"Wu, Decheng",
"Liu, Kai",
"Zhu, Jianchen",
"Kang, Zhanhui"
] | EasyQuant: An Efficient Data-free Quantization Algorithm for LLMs | emnlp-main.565 | 2403.02775 | [
""
] | https://huggingface.co/papers/2403.02775 | 1 | 11 | 3 | 6 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.566.bib | https://aclanthology.org/2023.emnlp-main.566/ | @inproceedings{atzeni-etal-2023-polar,
title = "Polar Ducks and Where to Find Them: Enhancing Entity Linking with Duck Typing and Polar Box Embeddings",
author = "Atzeni, Mattia and
Plekhanov, Mikhail and
Dreyer, Frederic and
Kassner, Nora and
Merello, Simone and
Martin, Louis and
Cancedda, Nicola",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.566",
doi = "10.18653/v1/2023.emnlp-main.566",
pages = "9129--9146",
abstract = "Entity linking methods based on dense retrieval are widely adopted in large-scale applications for their efficiency, but they can fall short of generative models, as they are sensitive to the structure of the embedding space. To address this issue, this paper introduces DUCK, an approach to infusing structural information in the space of entity representations, using prior knowledge of entity types. Inspired by duck typing in programming languages, we define the type of an entity based on its relations with other entities in a knowledge graph. Then, porting the concept of box embeddings to spherical polar coordinates, we represent relations as boxes on the hypersphere. We optimize the model to place entities inside the boxes corresponding to their relations, thereby clustering together entities of similar type. Our experiments show that our method sets new state-of-the-art results on standard entity-disambiguation benchmarks. It improves the performance of the model by up to 7.9 F1 points, outperforms other type-aware approaches, and matches the results of generative models with 18 times more parameters.",
}
| Entity linking methods based on dense retrieval are widely adopted in large-scale applications for their efficiency, but they can fall short of generative models, as they are sensitive to the structure of the embedding space. To address this issue, this paper introduces DUCK, an approach to infusing structural information in the space of entity representations, using prior knowledge of entity types. Inspired by duck typing in programming languages, we define the type of an entity based on its relations with other entities in a knowledge graph. Then, porting the concept of box embeddings to spherical polar coordinates, we represent relations as boxes on the hypersphere. We optimize the model to place entities inside the boxes corresponding to their relations, thereby clustering together entities of similar type. Our experiments show that our method sets new state-of-the-art results on standard entity-disambiguation benchmarks. It improves the performance of the model by up to 7.9 F1 points, outperforms other type-aware approaches, and matches the results of generative models with 18 times more parameters. | [
"Atzeni, Mattia",
"Plekhanov, Mikhail",
"Dreyer, Frederic",
"Kassner, Nora",
"Merello, Simone",
"Martin, Louis",
"Cancedda, Nicola"
] | Polar Ducks and Where to Find Them: Enhancing Entity Linking with Duck Typing and Polar Box Embeddings | emnlp-main.566 | 2305.12027 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.567.bib | https://aclanthology.org/2023.emnlp-main.567/ | @inproceedings{wang-etal-2023-aprompt,
title = "{AP}rompt: Attention Prompt Tuning for Efficient Adaptation of Pre-trained Language Models",
author = "Wang, Qifan and
Mao, Yuning and
Wang, Jingang and
Yu, Hanchao and
Nie, Shaoliang and
Wang, Sinong and
Feng, Fuli and
Huang, Lifu and
Quan, Xiaojun and
Xu, Zenglin and
Liu, Dongfang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.567",
doi = "10.18653/v1/2023.emnlp-main.567",
pages = "9147--9160",
abstract = "With the continuous growth of large language models, the process of fine-tuning these models for new tasks has become increasingly parameter-intensive. Prompt tuning, a method that involves tuning a small set of soft prompts, has emerged as an effective and efficient approach for adapting large pre-trained language models. However, most existing prompt tuning approaches only introduce prompts at the input layer, limiting their performance and leaving large rooms for improvement. In this work, we propose a novel Attention Prompt tuning method, namely APrompt, for efficient adaptation of pre-trained language models. We first demonstrate that existing prompt tuning can be considered as a special case of attention prompt tuning. We then formally introduce APrompt, which incorporates query, key, and value prompts into the attention layer to guide the attention computation during fine-tuning. Experimental results on the SuperGLUE benchmark consistently demonstrate that our proposed approach outperforms state-of-the-art baselines and full fine-tuning method with pre-trained models at different scales. In addition, a comprehensive set of ablation studies validate the effectiveness of the prompt design, as well as the efficiency of our approach.",
}
| With the continuous growth of large language models, the process of fine-tuning these models for new tasks has become increasingly parameter-intensive. Prompt tuning, a method that involves tuning a small set of soft prompts, has emerged as an effective and efficient approach for adapting large pre-trained language models. However, most existing prompt tuning approaches only introduce prompts at the input layer, limiting their performance and leaving large room for improvement. In this work, we propose a novel Attention Prompt tuning method, namely APrompt, for efficient adaptation of pre-trained language models. We first demonstrate that existing prompt tuning can be considered as a special case of attention prompt tuning. We then formally introduce APrompt, which incorporates query, key, and value prompts into the attention layer to guide the attention computation during fine-tuning. Experimental results on the SuperGLUE benchmark consistently demonstrate that our proposed approach outperforms state-of-the-art baselines and the full fine-tuning method with pre-trained models at different scales. In addition, a comprehensive set of ablation studies validates the effectiveness of the prompt design, as well as the efficiency of our approach. | [
"Wang, Qifan",
"Mao, Yuning",
"Wang, Jingang",
"Yu, Hanchao",
"Nie, Shaoliang",
"Wang, Sinong",
"Feng, Fuli",
"Huang, Lifu",
"Quan, Xiaojun",
"Xu, Zenglin",
"Liu, Dongfang"
] | APrompt: Attention Prompt Tuning for Efficient Adaptation of Pre-trained Language Models | emnlp-main.567 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
||
https://aclanthology.org/2023.emnlp-main.568.bib | https://aclanthology.org/2023.emnlp-main.568/ | @inproceedings{kamath-etal-2023-whats,
title = "What{'}s {``}up{''} with vision-language models? Investigating their struggle with spatial reasoning",
author = "Kamath, Amita and
Hessel, Jack and
Chang, Kai-Wei",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.568",
doi = "10.18653/v1/2023.emnlp-main.568",
pages = "9161--9175",
abstract = "Recent vision-language (VL) models are powerful, but can they reliably distinguish {``}right{''} from {``}left{''}? We curate three new corpora to quantify model comprehension of such basic spatial relations. These tests isolate spatial reasoning more precisely than existing datasets like VQAv2, e.g., our What{'}sUp benchmark contains sets of photographs varying only the spatial relations of objects, keeping their identity fixed (see Figure 1: models must comprehend not only the usual case of a dog under a table, but also, the same dog on top of the same table). We evaluate 18 VL models, finding that all perform poorly, e.g., BLIP finetuned on VQAv2, which nears human parity on VQAv2, achieves 56{\%} accuracy on our benchmarks vs. humans at 99{\%}. We conclude by studying causes of this surprising behavior, finding: 1) that popular vision-language pretraining corpora like LAION-2B contain little reliable data for learning spatial relationships; and 2) that basic modeling interventions like up-weighting preposition-containing instances or fine-tuning on our corpora are not sufficient to address the challenges our benchmarks pose. We are hopeful that these corpora will facilitate further research, and we release our data and code at https://github.com/amitakamath/whatsup{\_}vlms.",
}
| Recent vision-language (VL) models are powerful, but can they reliably distinguish {``}right{''} from {``}left{''}? We curate three new corpora to quantify model comprehension of such basic spatial relations. These tests isolate spatial reasoning more precisely than existing datasets like VQAv2, e.g., our What{'}sUp benchmark contains sets of photographs varying only the spatial relations of objects, keeping their identity fixed (see Figure 1: models must comprehend not only the usual case of a dog under a table, but also, the same dog on top of the same table). We evaluate 18 VL models, finding that all perform poorly, e.g., BLIP finetuned on VQAv2, which nears human parity on VQAv2, achieves 56{\%} accuracy on our benchmarks vs. humans at 99{\%}. We conclude by studying causes of this surprising behavior, finding: 1) that popular vision-language pretraining corpora like LAION-2B contain little reliable data for learning spatial relationships; and 2) that basic modeling interventions like up-weighting preposition-containing instances or fine-tuning on our corpora are not sufficient to address the challenges our benchmarks pose. We are hopeful that these corpora will facilitate further research, and we release our data and code at https://github.com/amitakamath/whatsup{\_}vlms. | [
"Kamath, Amita",
"Hessel, Jack",
"Chang, Kai-Wei"
] | What's “up” with vision-language models? Investigating their struggle with spatial reasoning | emnlp-main.568 | [
"https://github.com/amitakamath/whatsup_vlms"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.569.bib | https://aclanthology.org/2023.emnlp-main.569/ | @inproceedings{wang-etal-2023-ibadr,
title = "{IBADR}: an Iterative Bias-Aware Dataset Refinement Framework for Debiasing {NLU} models",
author = "Wang, Xiaoyue and
Liu, Xin and
Wang, Lijie and
Wang, Yaoxiang and
Su, Jinsong and
Wu, Hua",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.569",
doi = "10.18653/v1/2023.emnlp-main.569",
pages = "9176--9186",
abstract = "As commonly-used methods for debiasing natural language understanding (NLU) models, dataset refinement approaches heavily rely on manual data analysis, and thus maybe unable to cover all the potential biased features. In this paper, we propose IBADR, an Iterative Bias-Aware Dataset Refinement framework, which debiases NLU models without predefining biased features. We maintain an iteratively expanded sample pool. Specifically, at each iteration, we first train a shallow model to quantify the bias degree of samples in the pool. Then, we pair each sample with a bias indicator representing its bias degree, and use these extended samples to train a sample generator. In this way, this generator can effectively learn the correspondence relationship between bias indicators and samples. Furthermore, we employ the generator to produce pseudo samples with fewer biased features by feeding specific bias indicators. Finally, we incorporate the generated pseudo samples into the pool. Experimental results and in-depth analyses on two NLU tasks show that IBADR not only significantly outperforms existing dataset refinement approaches, achieving SOTA, but also is compatible with model-centric methods.",
}
| As commonly-used methods for debiasing natural language understanding (NLU) models, dataset refinement approaches heavily rely on manual data analysis, and thus may be unable to cover all potentially biased features. In this paper, we propose IBADR, an Iterative Bias-Aware Dataset Refinement framework, which debiases NLU models without predefining biased features. We maintain an iteratively expanded sample pool. Specifically, at each iteration, we first train a shallow model to quantify the bias degree of samples in the pool. Then, we pair each sample with a bias indicator representing its bias degree, and use these extended samples to train a sample generator. In this way, this generator can effectively learn the correspondence relationship between bias indicators and samples. Furthermore, we employ the generator to produce pseudo samples with fewer biased features by feeding specific bias indicators. Finally, we incorporate the generated pseudo samples into the pool. Experimental results and in-depth analyses on two NLU tasks show that IBADR not only significantly outperforms existing dataset refinement approaches, achieving SOTA, but also is compatible with model-centric methods. | [
"Wang, Xiaoyue",
"Liu, Xin",
"Wang, Lijie",
"Wang, Yaoxiang",
"Su, Jinsong",
"Wu, Hua"
] | IBADR: an Iterative Bias-Aware Dataset Refinement Framework for Debiasing NLU models | emnlp-main.569 | 2311.00292 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.570.bib | https://aclanthology.org/2023.emnlp-main.570/ | @inproceedings{huang-etal-2023-learning-preference,
title = "Learning Preference Model for {LLM}s via Automatic Preference Data Generation",
author = "Huang, Shijia and
Zhao, Jianqiao and
Li, Yanyang and
Wang, Liwei",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.570",
doi = "10.18653/v1/2023.emnlp-main.570",
pages = "9187--9199",
abstract = "Despite the advanced capacities of the state-of-the-art large language models (LLMs), they suffer from issues of hallucination, stereotype, etc. Preference models play an important role in LLM alignment, yet training preference models predominantly rely on human-annotated data. This reliance limits their versatility and scalability. In this paper, we propose learning the preference model for LLMs via automatic preference data generation (AutoPM). Our approach involves both In-Breadth Data Generation, which elicits pairwise preference data from LLMs following the helpful-honest-harmless (HHH) criteria, and In-Depth Data Generation, which enriches the dataset with responses spanning a wide quality range. With HHH-guided preference data, our approach simultaneously enables the LLMs to learn human preferences and align with human values. Quantitative assessments on five benchmark datasets demonstrate the reliability and potential of AutoPM, pointing out a more general and scalable way to improve LLM performance.",
}
| Despite the advanced capacities of the state-of-the-art large language models (LLMs), they suffer from issues such as hallucination and stereotyping. Preference models play an important role in LLM alignment, yet training preference models predominantly relies on human-annotated data. This reliance limits their versatility and scalability. In this paper, we propose learning the preference model for LLMs via automatic preference data generation (AutoPM). Our approach involves both In-Breadth Data Generation, which elicits pairwise preference data from LLMs following the helpful-honest-harmless (HHH) criteria, and In-Depth Data Generation, which enriches the dataset with responses spanning a wide quality range. With HHH-guided preference data, our approach simultaneously enables the LLMs to learn human preferences and align with human values. Quantitative assessments on five benchmark datasets demonstrate the reliability and potential of AutoPM, pointing out a more general and scalable way to improve LLM performance. | [
"Huang, Shijia",
"Zhao, Jianqiao",
"Li, Yanyang",
"Wang, Liwei"
] | Learning Preference Model for LLMs via Automatic Preference Data Generation | emnlp-main.570 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.571.bib | https://aclanthology.org/2023.emnlp-main.571/ | @inproceedings{stap-monz-2023-multilingual,
title = "Multilingual $k$-Nearest-Neighbor Machine Translation",
author = "Stap, David and
Monz, Christof",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.571",
doi = "10.18653/v1/2023.emnlp-main.571",
pages = "9200--9208",
abstract = "\textit{k}-nearest-neighbor machine translation has demonstrated remarkable improvements in machine translation quality by creating a datastore of cached examples. However, these improvements have been limited to high-resource language pairs, with large datastores, and remain a challenge for low-resource languages. In this paper, we address this issue by combining representations from multiple languages into a single datastore. Our results consistently demonstrate substantial improvements not only in low-resource translation quality (up to $+3.6$ BLEU), but also for high-resource translation quality (up to $+0.5$ BLEU). Our experiments show that it is possible to create multilingual datastores that are a quarter of the size, achieving a 5.3x speed improvement, by using linguistic similarities for datastore creation.",
}
| \textit{k}-nearest-neighbor machine translation has demonstrated remarkable improvements in machine translation quality by creating a datastore of cached examples. However, these improvements have been limited to high-resource language pairs, with large datastores, and remain a challenge for low-resource languages. In this paper, we address this issue by combining representations from multiple languages into a single datastore. Our results consistently demonstrate substantial improvements not only in low-resource translation quality (up to $+3.6$ BLEU), but also for high-resource translation quality (up to $+0.5$ BLEU). Our experiments show that it is possible to create multilingual datastores that are a quarter of the size, achieving a 5.3x speed improvement, by using linguistic similarities for datastore creation. | [
"Stap, David",
"Monz, Christof"
] | Multilingual k-Nearest-Neighbor Machine Translation | emnlp-main.571 | 2310.14644 | [
"https://github.com/davidstap/multilingual-knn-mt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.572.bib | https://aclanthology.org/2023.emnlp-main.572/ | @inproceedings{miletic-etal-2023-understanding,
title = "Understanding Computational Models of Semantic Change: New Insights from the Speech Community",
author = "Mileti{\'c}, Filip and
Przewozny-Desriaux, Anne and
Tanguy, Ludovic",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.572",
doi = "10.18653/v1/2023.emnlp-main.572",
pages = "9209--9220",
abstract = "We investigate the descriptive relevance of widely used semantic change models in linguistic descriptions of present-day speech communities. We focus on the sociolinguistic issue of contact-induced semantic shifts in Quebec English, and analyze 40 target words using type-level and token-level word embeddings, empirical linguistic properties, and {--} crucially {--} acceptability ratings and qualitative remarks by 15 speakers from Montreal. Our results confirm the overall relevance of the computational approaches, but also highlight practical issues and the complementary nature of different semantic change estimates. To our knowledge, this is the first study to substantively engage with the speech community being described using semantic change models.",
}
| We investigate the descriptive relevance of widely used semantic change models in linguistic descriptions of present-day speech communities. We focus on the sociolinguistic issue of contact-induced semantic shifts in Quebec English, and analyze 40 target words using type-level and token-level word embeddings, empirical linguistic properties, and {--} crucially {--} acceptability ratings and qualitative remarks by 15 speakers from Montreal. Our results confirm the overall relevance of the computational approaches, but also highlight practical issues and the complementary nature of different semantic change estimates. To our knowledge, this is the first study to substantively engage with the speech community being described using semantic change models. | [
"Mileti{\\'c}, Filip",
"Przewozny-Desriaux, Anne",
"Tanguy, Ludovic"
] | Understanding Computational Models of Semantic Change: New Insights from the Speech Community | emnlp-main.572 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
||
https://aclanthology.org/2023.emnlp-main.573.bib | https://aclanthology.org/2023.emnlp-main.573/ | @inproceedings{nguyen-okazaki-2023-causal,
title = "Causal Reasoning through Two Cognition Layers for Improving Generalization in Visual Question Answering",
author = "Nguyen, Trang and
Okazaki, Naoaki",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.573",
doi = "10.18653/v1/2023.emnlp-main.573",
pages = "9221--9236",
abstract = "Generalization in Visual Question Answering (VQA) requires models to answer questions about images with contexts beyond the training distribution. Existing attempts primarily refine unimodal aspects, overlooking enhancements in multimodal aspects. Besides, diverse interpretations of the input lead to various modes of answer generation, highlighting the role of causal reasoning between interpreting and answering steps in VQA. Through this lens, we propose Cognitive pathways VQA (CopVQA) improving the multimodal predictions by emphasizing causal reasoning factors. CopVQA first operates a pool of pathways that capture diverse causal reasoning flows through interpreting and answering stages. Mirroring human cognition, we decompose the responsibility of each stage into distinct experts and a cognition-enabled component (CC). The two CCs strategically execute one expert for each stage at a time. Finally, we prioritize answer predictions governed by pathways involving both CCs while disregarding answers produced by either CC, thereby emphasizing causal reasoning and supporting generalization. Our experiments on real-life and medical data consistently verify that CopVQA improves VQA performance and generalization across baselines and domains. Notably, CopVQA achieves a new state-of-the-art (SOTA) on the PathVQA dataset and comparable accuracy to the current SOTA on VQA-CPv2, VQAv2, and VQA- RAD, with one-fourth of the model size.",
}
| Generalization in Visual Question Answering (VQA) requires models to answer questions about images with contexts beyond the training distribution. Existing attempts primarily refine unimodal aspects, overlooking enhancements in multimodal aspects. Besides, diverse interpretations of the input lead to various modes of answer generation, highlighting the role of causal reasoning between interpreting and answering steps in VQA. Through this lens, we propose Cognitive pathways VQA (CopVQA), which improves multimodal predictions by emphasizing causal reasoning factors. CopVQA first operates a pool of pathways that capture diverse causal reasoning flows through interpreting and answering stages. Mirroring human cognition, we decompose the responsibility of each stage into distinct experts and a cognition-enabled component (CC). The two CCs strategically execute one expert for each stage at a time. Finally, we prioritize answer predictions governed by pathways involving both CCs while disregarding answers produced by either CC, thereby emphasizing causal reasoning and supporting generalization. Our experiments on real-life and medical data consistently verify that CopVQA improves VQA performance and generalization across baselines and domains. Notably, CopVQA achieves a new state-of-the-art (SOTA) on the PathVQA dataset and comparable accuracy to the current SOTA on VQA-CPv2, VQAv2, and VQA-RAD, with one-fourth of the model size. | [
"Nguyen, Trang",
"Okazaki, Naoaki"
] | Causal Reasoning through Two Cognition Layers for Improving Generalization in Visual Question Answering | emnlp-main.573 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.574.bib | https://aclanthology.org/2023.emnlp-main.574/ | @inproceedings{jiang-etal-2023-structgpt,
title = "{S}truct{GPT}: A General Framework for Large Language Model to Reason over Structured Data",
author = "Jiang, Jinhao and
Zhou, Kun and
Dong, Zican and
Ye, Keming and
Zhao, Xin and
Wen, Ji-Rong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.574",
doi = "10.18653/v1/2023.emnlp-main.574",
pages = "9237--9251",
abstract = "In this paper, we aim to improve the reasoning ability of large language models (LLMs) over structured data in a unified way. Inspired by the studies on tool augmentation for LLMs, we develop an Iterative Reading-then-Reasoning (IRR) framework to solve question answering tasks based on structured data, called StructGPT. In this framework, we construct the specialized interfaces to collect relevant evidence from structured data (i.e., reading), and let LLMs concentrate on the reasoning task based on the collected information (i.e., reasoning). Specially, we propose an invoking-linearization-generation procedure to support LLMs in reasoning on the structured data with the help of the interfaces. By iterating this procedure with provided interfaces, our approach can gradually approach the target answers to a given query. Experiments conducted on three types of structured data show that StructGPT greatly improves the performance of LLMs, under the few-shot and zero-shot settings.",
}
| In this paper, we aim to improve the reasoning ability of large language models (LLMs) over structured data in a unified way. Inspired by the studies on tool augmentation for LLMs, we develop an Iterative Reading-then-Reasoning (IRR) framework to solve question answering tasks based on structured data, called StructGPT. In this framework, we construct specialized interfaces to collect relevant evidence from structured data (i.e., reading), and let LLMs concentrate on the reasoning task based on the collected information (i.e., reasoning). Specifically, we propose an invoking-linearization-generation procedure to support LLMs in reasoning on the structured data with the help of the interfaces. By iterating this procedure with provided interfaces, our approach can gradually approach the target answers to a given query. Experiments conducted on three types of structured data show that StructGPT greatly improves the performance of LLMs, under the few-shot and zero-shot settings. | [
"Jiang, Jinhao",
"Zhou, Kun",
"Dong, Zican",
"Ye, Keming",
"Zhao, Xin",
"Wen, Ji-Rong"
] | StructGPT: A General Framework for Large Language Model to Reason over Structured Data | emnlp-main.574 | 2305.09645 | [
"https://github.com/rucaibox/structgpt"
] | https://huggingface.co/papers/2305.09645 | 0 | 1 | 0 | 6 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.575.bib | https://aclanthology.org/2023.emnlp-main.575/ | @inproceedings{thalken-etal-2023-modeling,
title = "Modeling Legal Reasoning: {LM} Annotation at the Edge of Human Agreement",
author = "Thalken, Rosamond and
Stiglitz, Edward and
Mimno, David and
Wilkens, Matthew",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.575",
doi = "10.18653/v1/2023.emnlp-main.575",
pages = "9252--9265",
abstract = "Generative language models (LMs) are increasingly used for document class-prediction tasks and promise enormous improvements in cost and efficiency. Existing research often examines simple classification tasks, but the capability of LMs to classify on complex or specialized tasks is less well understood. We consider a highly complex task that is challenging even for humans: the classification of legal reasoning according to jurisprudential philosophy. Using a novel dataset of historical United States Supreme Court opinions annotated by a team of domain experts, we systematically test the performance of a variety of LMs. We find that generative models perform poorly when given instructions (i.e. prompts) equal to the instructions presented to human annotators through our codebook. Our strongest results derive from fine-tuning models on the annotated dataset; the best performing model is an in-domain model, LEGAL-BERT. We apply predictions from this fine-tuned model to study historical trends in jurisprudence, an exercise that both aligns with prominent qualitative historical accounts and points to areas of possible refinement in those accounts. Our findings generally sound a note of caution in the use of generative LMs on complex tasks without fine-tuning and point to the continued relevance of human annotation-intensive classification methods.",
}
| Generative language models (LMs) are increasingly used for document class-prediction tasks and promise enormous improvements in cost and efficiency. Existing research often examines simple classification tasks, but the capability of LMs to classify on complex or specialized tasks is less well understood. We consider a highly complex task that is challenging even for humans: the classification of legal reasoning according to jurisprudential philosophy. Using a novel dataset of historical United States Supreme Court opinions annotated by a team of domain experts, we systematically test the performance of a variety of LMs. We find that generative models perform poorly when given instructions (i.e. prompts) equal to the instructions presented to human annotators through our codebook. Our strongest results derive from fine-tuning models on the annotated dataset; the best performing model is an in-domain model, LEGAL-BERT. We apply predictions from this fine-tuned model to study historical trends in jurisprudence, an exercise that both aligns with prominent qualitative historical accounts and points to areas of possible refinement in those accounts. Our findings generally sound a note of caution in the use of generative LMs on complex tasks without fine-tuning and point to the continued relevance of human annotation-intensive classification methods. | [
"Thalken, Rosamond",
"Stiglitz, Edward",
"Mimno, David",
"Wilkens, Matthew"
] | Modeling Legal Reasoning: LM Annotation at the Edge of Human Agreement | emnlp-main.575 | 2310.18440 | [
"https://github.com/rosthalken/legal-interpretation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.576.bib | https://aclanthology.org/2023.emnlp-main.576/ | @inproceedings{raman-etal-2023-model,
title = "Model-tuning Via Prompts Makes {NLP} Models Adversarially Robust",
author = "Raman, Mrigank and
Maini, Pratyush and
Kolter, J and
Lipton, Zachary and
Pruthi, Danish",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.576",
doi = "10.18653/v1/2023.emnlp-main.576",
pages = "9266--9286",
abstract = "In recent years, NLP practitioners have converged on the following practice: (i) import an off-the-shelf pretrained (masked) language model; (ii) append a multilayer perceptron atop the CLS token{'}s hidden representation (with randomly initialized weights); and (iii) fine-tune the entire model on a downstream task (MLP-FT). This procedure has produced massive gains on standard NLP benchmarks, but these models remain brittle, even to mild adversarial perturbations. In this work, we demonstrate surprising gains in adversarial robustness enjoyed by Model-tuning Via Prompts (MVP), an alternative method of adapting to downstream tasks. Rather than appending an MLP head to make output prediction, MVP appends a prompt template to the input, and makes prediction via text infilling/completion. Across 5 NLP datasets, 4 adversarial attacks, and 3 different models, MVP improves performance against adversarial substitutions by an average of 8{\%} over standard methods and even outperforms adversarial training-based state-of-art defenses by 3.5{\%}. By combining MVP with adversarial training, we achieve further improvements in adversarial robustness while maintaining performance on unperturbed examples. Finally, we conduct ablations to investigate the mechanism underlying these gains. Notably, we find that the main causes of vulnerability of MLP-FT can be attributed to the misalignment between pre-training and fine-tuning tasks, and the randomly initialized MLP parameters.",
}
| In recent years, NLP practitioners have converged on the following practice: (i) import an off-the-shelf pretrained (masked) language model; (ii) append a multilayer perceptron atop the CLS token{'}s hidden representation (with randomly initialized weights); and (iii) fine-tune the entire model on a downstream task (MLP-FT). This procedure has produced massive gains on standard NLP benchmarks, but these models remain brittle, even to mild adversarial perturbations. In this work, we demonstrate surprising gains in adversarial robustness enjoyed by Model-tuning Via Prompts (MVP), an alternative method of adapting to downstream tasks. Rather than appending an MLP head to make output prediction, MVP appends a prompt template to the input, and makes prediction via text infilling/completion. Across 5 NLP datasets, 4 adversarial attacks, and 3 different models, MVP improves performance against adversarial substitutions by an average of 8{\%} over standard methods and even outperforms adversarial training-based state-of-art defenses by 3.5{\%}. By combining MVP with adversarial training, we achieve further improvements in adversarial robustness while maintaining performance on unperturbed examples. Finally, we conduct ablations to investigate the mechanism underlying these gains. Notably, we find that the main causes of vulnerability of MLP-FT can be attributed to the misalignment between pre-training and fine-tuning tasks, and the randomly initialized MLP parameters. | [
"Raman, Mrigank",
"Maini, Pratyush",
"Kolter, J",
"Lipton, Zachary",
"Pruthi, Danish"
] | Model-tuning Via Prompts Makes NLP Models Adversarially Robust | emnlp-main.576 | 2303.07320 | [
"https://github.com/acmi-lab/mvp"
] | https://huggingface.co/papers/2303.07320 | 0 | 0 | 0 | 5 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.577.bib | https://aclanthology.org/2023.emnlp-main.577/ | @inproceedings{lee-etal-2023-learning-co,
title = "Learning Co-Speech Gesture for Multimodal Aphasia Type Detection",
author = "Lee, Daeun and
Son, Sejung and
Jeon, Hyolim and
Kim, Seungbae and
Han, Jinyoung",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.577",
doi = "10.18653/v1/2023.emnlp-main.577",
pages = "9287--9303",
abstract = "Aphasia, a language disorder resulting from brain damage, requires accurate identification of specific aphasia types, such as Broca{'}s and Wernicke{'}s aphasia, for effective treatment. However, little attention has been paid to developing methods to detect different types of aphasia. Recognizing the importance of analyzing co-speech gestures for distinguish aphasia types, we propose a multimodal graph neural network for aphasia type detection using speech and corresponding gesture patterns. By learning the correlation between the speech and gesture modalities for each aphasia type, our model can generate textual representations sensitive to gesture information, leading to accurate aphasia type detection. Extensive experiments demonstrate the superiority of our approach over existing methods, achieving state-of-the-art results (F1 84.2{\%}). We also show that gesture features outperform acoustic features, highlighting the significance of gesture expression in detecting aphasia types. We provide the codes for reproducibility purposes.",
}
| Aphasia, a language disorder resulting from brain damage, requires accurate identification of specific aphasia types, such as Broca{'}s and Wernicke{'}s aphasia, for effective treatment. However, little attention has been paid to developing methods to detect different types of aphasia. Recognizing the importance of analyzing co-speech gestures for distinguishing aphasia types, we propose a multimodal graph neural network for aphasia type detection using speech and corresponding gesture patterns. By learning the correlation between the speech and gesture modalities for each aphasia type, our model can generate textual representations sensitive to gesture information, leading to accurate aphasia type detection. Extensive experiments demonstrate the superiority of our approach over existing methods, achieving state-of-the-art results (F1 84.2{\%}). We also show that gesture features outperform acoustic features, highlighting the significance of gesture expression in detecting aphasia types. We provide the code for reproducibility purposes. | [
"Lee, Daeun",
"Son, Sejung",
"Jeon, Hyolim",
"Kim, Seungbae",
"Han, Jinyoung"
] | Learning Co-Speech Gesture for Multimodal Aphasia Type Detection | emnlp-main.577 | 2310.11710 | [
"https://github.com/dsail-skku/multimodal-aphasia-type-detection_emnlp_2023"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
|
https://aclanthology.org/2023.emnlp-main.578.bib | https://aclanthology.org/2023.emnlp-main.578/ | @inproceedings{li-etal-2023-stinmatch,
title = "{STINM}atch: Semi-Supervised Semantic-Topological Iteration Network for Financial Risk Detection via News Label Diffusion",
author = "Li, Xurui and
Qin, Yue and
Zhu, Rui and
Lin, Tianqianjin and
Fan, Yongming and
Kang, Yangyang and
Song, Kaisong and
Zhao, Fubang and
Sun, Changlong and
Tang, Haixu and
Liu, Xiaozhong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.578",
doi = "10.18653/v1/2023.emnlp-main.578",
pages = "9304--9315",
abstract = "Commercial news provide rich semantics and timely information for automated financial risk detection. However, unaffordable large-scale annotation as well as training data sparseness barrier the full exploitation of commercial news in risk detection. To address this problem, we propose a semi-supervised Semantic-Topological Iteration Network, STINMatch, along with a news-enterprise knowledge graph (NEKG) to endorse the risk detection enhancement. The proposed model incorporates a label correlation matrix and interactive consistency regularization techniques into the iterative joint learning framework of text and graph modules. The carefully designed framework takes full advantage of the labeled and unlabeled data as well as their interrelations, enabling deep label diffusion coordination between article-level semantics and label correlations following the topological structure. Extensive experiments demonstrate the superior effectiveness and generalization ability of STINMatch.",
}
| Commercial news provides rich semantics and timely information for automated financial risk detection. However, unaffordable large-scale annotation and training data sparseness hinder the full exploitation of commercial news in risk detection. To address this problem, we propose a semi-supervised Semantic-Topological Iteration Network, STINMatch, along with a news-enterprise knowledge graph (NEKG) to support enhanced risk detection. The proposed model incorporates a label correlation matrix and interactive consistency regularization techniques into the iterative joint learning framework of text and graph modules. The carefully designed framework takes full advantage of the labeled and unlabeled data as well as their interrelations, enabling deep label diffusion coordination between article-level semantics and label correlations following the topological structure. Extensive experiments demonstrate the superior effectiveness and generalization ability of STINMatch. | [
"Li, Xurui",
"Qin, Yue",
"Zhu, Rui",
"Lin, Tianqianjin",
"Fan, Yongming",
"Kang, Yangyang",
"Song, Kaisong",
"Zhao, Fubang",
"Sun, Changlong",
"Tang, Haixu",
"Liu, Xiaozhong"
] | STINMatch: Semi-Supervised Semantic-Topological Iteration Network for Financial Risk Detection via News Label Diffusion | emnlp-main.578 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.579.bib | https://aclanthology.org/2023.emnlp-main.579/ | @inproceedings{raman-etal-2023-centering,
title = "Centering the Margins: Outlier-Based Identification of Harmed Populations in Toxicity Detection",
author = "Raman, Vyoma and
Fleisig, Eve and
Klein, Dan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.579",
doi = "10.18653/v1/2023.emnlp-main.579",
pages = "9316--9329",
abstract = "The impact of AI models on marginalized communities has traditionally been measured by identifying performance differences between specified demographic subgroups. Though this approach aims to center vulnerable groups, it risks obscuring patterns of harm faced by intersectional subgroups or shared across multiple groups. To address this, we draw on theories of marginalization from disability studies and related disciplines, which state that people farther from the norm face greater adversity, to consider the {``}margins{''} in the domain of toxicity detection. We operationalize the {``}margins{''} of a dataset by employing outlier detection to identify text about people with demographic attributes distant from the {``}norm{''}. We find that model performance is consistently worse for demographic outliers, with mean squared error (MSE) between outliers and non-outliers up to 70.4{\%} worse across toxicity types. It is also worse for text outliers, with a MSE up to 68.4{\%} higher for outliers than non-outliers. We also find text and demographic outliers to be particularly susceptible to errors in the classification of severe toxicity and identity attacks. Compared to analysis of disparities using traditional demographic breakdowns, we find that our outlier analysis frequently surfaces greater harms faced by a larger, more intersectional group, which suggests that outlier analysis is particularly beneficial for identifying harms against those groups.",
}
| The impact of AI models on marginalized communities has traditionally been measured by identifying performance differences between specified demographic subgroups. Though this approach aims to center vulnerable groups, it risks obscuring patterns of harm faced by intersectional subgroups or shared across multiple groups. To address this, we draw on theories of marginalization from disability studies and related disciplines, which state that people farther from the norm face greater adversity, to consider the {``}margins{''} in the domain of toxicity detection. We operationalize the {``}margins{''} of a dataset by employing outlier detection to identify text about people with demographic attributes distant from the {``}norm{''}. We find that model performance is consistently worse for demographic outliers, with mean squared error (MSE) between outliers and non-outliers up to 70.4{\%} worse across toxicity types. It is also worse for text outliers, with an MSE up to 68.4{\%} higher for outliers than non-outliers. We also find text and demographic outliers to be particularly susceptible to errors in the classification of severe toxicity and identity attacks. Compared to analysis of disparities using traditional demographic breakdowns, we find that our outlier analysis frequently surfaces greater harms faced by a larger, more intersectional group, which suggests that outlier analysis is particularly beneficial for identifying harms against those groups. | [
"Raman, Vyoma",
"Fleisig, Eve",
"Klein, Dan"
] | Centering the Margins: Outlier-Based Identification of Harmed Populations in Toxicity Detection | emnlp-main.579 | 2305.14735 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.580.bib | https://aclanthology.org/2023.emnlp-main.580/ | @inproceedings{noble-ilinykh-2023-describe,
title = "Describe Me an Auklet: Generating Grounded Perceptual Category Descriptions",
author = "Noble, Bill and
Ilinykh, Nikolai",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.580",
doi = "10.18653/v1/2023.emnlp-main.580",
pages = "9330--9347",
abstract = "Human speakers can generate descriptions of perceptual concepts, abstracted from the instance-level. Moreover, such descriptions can be used by other speakers to learn provisional representations of those concepts. Learning and using abstract perceptual concepts is under-investigated in the language-and-vision field. The problem is also highly relevant to the field of representation learning in multi-modal NLP. In this paper, we introduce a framework for testing category-level perceptual grounding in multi-modal language models. In particular, we train separate neural networks to **generate** and **interpret** descriptions of visual categories. We measure the *communicative success* of the two models with the zero-shot classification performance of the interpretation model, which we argue is an indicator of perceptual grounding. Using this framework, we compare the performance of *prototype*- and *exemplar*-based representations. Finally, we show that communicative success exposes performance issues in the generation model, not captured by traditional intrinsic NLG evaluation metrics, and argue that these issues stem from a failure to properly ground language in vision at the category level.",
}
| Human speakers can generate descriptions of perceptual concepts, abstracted from the instance-level. Moreover, such descriptions can be used by other speakers to learn provisional representations of those concepts. Learning and using abstract perceptual concepts is under-investigated in the language-and-vision field. The problem is also highly relevant to the field of representation learning in multi-modal NLP. In this paper, we introduce a framework for testing category-level perceptual grounding in multi-modal language models. In particular, we train separate neural networks to **generate** and **interpret** descriptions of visual categories. We measure the *communicative success* of the two models with the zero-shot classification performance of the interpretation model, which we argue is an indicator of perceptual grounding. Using this framework, we compare the performance of *prototype*- and *exemplar*-based representations. Finally, we show that communicative success exposes performance issues in the generation model, not captured by traditional intrinsic NLG evaluation metrics, and argue that these issues stem from a failure to properly ground language in vision at the category level. | [
"Noble, Bill",
"Ilinykh, Nikolai"
] | Describe Me an Auklet: Generating Grounded Perceptual Category Descriptions | emnlp-main.580 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
||
https://aclanthology.org/2023.emnlp-main.581.bib | https://aclanthology.org/2023.emnlp-main.581/ | @inproceedings{stammbach-etal-2023-revisiting,
title = "Revisiting Automated Topic Model Evaluation with Large Language Models",
author = "Stammbach, Dominik and
Zouhar, Vil{\'e}m and
Hoyle, Alexander and
Sachan, Mrinmaya and
Ash, Elliott",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.581",
doi = "10.18653/v1/2023.emnlp-main.581",
pages = "9348--9357",
abstract = "Topic models help us make sense of large text collections. Automatically evaluating their output and determining the optimal number of topics are both longstanding challenges, with no effective automated solutions to date. This paper proposes using large language models (LLMs) for these tasks. We find that LLMs appropriately assess the resulting topics, correlating more strongly with human judgments than existing automated metrics. However, the setup of the evaluation task is crucial {---} LLMs perform better on coherence ratings of word sets than on intrustion detection. We find that LLMs can also assist us in guiding us towards a reasonable number of topics. In actual applications, topic models are typically used to answer a research question related to a collection of texts. We can incorporate this research question in the prompt to the LLM, which helps estimating the optimal number of topics.",
}
| Topic models help us make sense of large text collections. Automatically evaluating their output and determining the optimal number of topics are both longstanding challenges, with no effective automated solutions to date. This paper proposes using large language models (LLMs) for these tasks. We find that LLMs appropriately assess the resulting topics, correlating more strongly with human judgments than existing automated metrics. However, the setup of the evaluation task is crucial {---} LLMs perform better on coherence ratings of word sets than on intrusion detection. We find that LLMs can also guide us towards a reasonable number of topics. In actual applications, topic models are typically used to answer a research question related to a collection of texts. We can incorporate this research question in the prompt to the LLM, which helps estimate the optimal number of topics. | [
"Stammbach, Dominik",
"Zouhar, Vil{\\'e}m",
"Hoyle, Alex",
"er",
"Sachan, Mrinmaya",
"Ash, Elliott"
] | Revisiting Automated Topic Model Evaluation with Large Language Models | emnlp-main.581 | 2305.12152 | [
"https://github.com/dominiksinsaarland/evaluating-topic-model-output"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.582.bib | https://aclanthology.org/2023.emnlp-main.582/ | @inproceedings{zhao-etal-2023-orchid,
title = "{ORCHID}: A {C}hinese Debate Corpus for Target-Independent Stance Detection and Argumentative Dialogue Summarization",
author = "Zhao, Xiutian and
Wang, Ke and
Peng, Wei",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.582",
doi = "10.18653/v1/2023.emnlp-main.582",
pages = "9358--9375",
abstract = "Dialogue agents have been receiving increasing attention for years, and this trend has been further boosted by the recent progress of large language models (LLMs). Stance detection and dialogue summarization are two core tasks of dialogue agents in application scenarios that involve argumentative dialogues. However, research on these tasks is limited by the insufficiency of public datasets, especially for non-English languages. To address this language resource gap in Chinese, we present ORCHID (Oral Chinese Debate), the first Chinese dataset for benchmarking target-independent stance detection and debate summarization. Our dataset consists of 1,218 real-world debates that were conducted in Chinese on 476 unique topics, containing 2,436 stance-specific summaries and 14,133 fully annotated utterances. Besides providing a versatile testbed for future research, we also conduct an empirical study on the dataset and propose an integrated task. The results show the challenging nature of the dataset and suggest a potential of incorporating stance detection in summarization for argumentative dialogue.",
}
| Dialogue agents have been receiving increasing attention for years, and this trend has been further boosted by the recent progress of large language models (LLMs). Stance detection and dialogue summarization are two core tasks of dialogue agents in application scenarios that involve argumentative dialogues. However, research on these tasks is limited by the insufficiency of public datasets, especially for non-English languages. To address this language resource gap in Chinese, we present ORCHID (Oral Chinese Debate), the first Chinese dataset for benchmarking target-independent stance detection and debate summarization. Our dataset consists of 1,218 real-world debates that were conducted in Chinese on 476 unique topics, containing 2,436 stance-specific summaries and 14,133 fully annotated utterances. Besides providing a versatile testbed for future research, we also conduct an empirical study on the dataset and propose an integrated task. The results show the challenging nature of the dataset and suggest the potential of incorporating stance detection into summarization for argumentative dialogue. | [
"Zhao, Xiutian",
"Wang, Ke",
"Peng, Wei"
] | ORCHID: A Chinese Debate Corpus for Target-Independent Stance Detection and Argumentative Dialogue Summarization | emnlp-main.582 | 2410.13667 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.583.bib | https://aclanthology.org/2023.emnlp-main.583/ | @inproceedings{dikkala-etal-2023-benefits,
title = "On the Benefits of Learning to Route in Mixture-of-Experts Models",
author = "Dikkala, Nishanth and
Ghosh, Nikhil and
Meka, Raghu and
Panigrahy, Rina and
Vyas, Nikhil and
Wang, Xin",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.583",
doi = "10.18653/v1/2023.emnlp-main.583",
pages = "9376--9396",
abstract = "Mixture-of-Expert (MoE) Transformer models, such as the Switch Transformer, allow us to successfully scale up model sizes while keeping the amount of compute time fixed. Prior work has established the computational efficiency benefits of using these models. A core component of these models is a router that routes input tokens to different experts in a layer. We show theoretical and empirical evidence that the router{'}s ability to route tokens intelligently confers a significant advantage to MoE models. We study synthetic settings where the input data is distributed in clusters and show theoretically and empirically that the router learns to route the inputs according to these clusters. Then we perform experiments on real data using the T5X library, where we observe that a trainable router confers a non-trivial benefit instead of a non-trainable router.",
}
| Mixture-of-Expert (MoE) Transformer models, such as the Switch Transformer, allow us to successfully scale up model sizes while keeping the amount of compute time fixed. Prior work has established the computational efficiency benefits of using these models. A core component of these models is a router that routes input tokens to different experts in a layer. We show theoretical and empirical evidence that the router{'}s ability to route tokens intelligently confers a significant advantage to MoE models. We study synthetic settings where the input data is distributed in clusters and show theoretically and empirically that the router learns to route the inputs according to these clusters. Then we perform experiments on real data using the T5X library, where we observe that a trainable router confers a non-trivial benefit over a non-trainable router. | [
"Dikkala, Nishanth",
"Ghosh, Nikhil",
"Meka, Raghu",
"Panigrahy, Rina",
"Vyas, Nikhil",
"Wang, Xin"
] | On the Benefits of Learning to Route in Mixture-of-Experts Models | emnlp-main.583 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.584.bib | https://aclanthology.org/2023.emnlp-main.584/ | @inproceedings{clark-etal-2023-seahorse,
title = "{SEAHORSE}: A Multilingual, Multifaceted Dataset for Summarization Evaluation",
author = "Clark, Elizabeth and
Rijhwani, Shruti and
Gehrmann, Sebastian and
Maynez, Joshua and
Aharoni, Roee and
Nikolaev, Vitaly and
Sellam, Thibault and
Siddhant, Aditya and
Das, Dipanjan and
Parikh, Ankur",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.584",
doi = "10.18653/v1/2023.emnlp-main.584",
pages = "9397--9413",
abstract = "Reliable automatic evaluation of summarization systems is challenging due to the multifaceted and subjective nature of the task. This is especially the case for languages other than English, where human evaluations are scarce. In this work, we introduce SEAHORSE, a dataset for multilingual, multifaceted summarization evaluation. SEAHORSE consists of 96K summaries with human ratings along 6 dimensions of text quality: comprehensibility, repetition, grammar, attribution, main ideas, and conciseness, covering 6 languages, 9 systems, and 4 datasets. As a result of its size and scope, SEAHORSE can serve both as a benchmark to evaluate learnt metrics, as well as a large-scale resource for training such metrics. We show that metrics trained with SEAHORSE achieve strong performance on the out-of-domain meta-evaluation benchmarks TRUE (Honovich et al., 2022) and mFACE (Aharoni et al., 2022). We make the SEAHORSE dataset and metrics publicly available for future research on multilingual and multifaceted summarization evaluation.",
}
| Reliable automatic evaluation of summarization systems is challenging due to the multifaceted and subjective nature of the task. This is especially the case for languages other than English, where human evaluations are scarce. In this work, we introduce SEAHORSE, a dataset for multilingual, multifaceted summarization evaluation. SEAHORSE consists of 96K summaries with human ratings along 6 dimensions of text quality: comprehensibility, repetition, grammar, attribution, main ideas, and conciseness, covering 6 languages, 9 systems, and 4 datasets. As a result of its size and scope, SEAHORSE can serve both as a benchmark to evaluate learnt metrics, as well as a large-scale resource for training such metrics. We show that metrics trained with SEAHORSE achieve strong performance on the out-of-domain meta-evaluation benchmarks TRUE (Honovich et al., 2022) and mFACE (Aharoni et al., 2022). We make the SEAHORSE dataset and metrics publicly available for future research on multilingual and multifaceted summarization evaluation. | [
"Clark, Elizabeth",
"Rijhwani, Shruti",
"Gehrmann, Sebastian",
"Maynez, Joshua",
"Aharoni, Roee",
"Nikolaev, Vitaly",
"Sellam, Thibault",
"Siddhant, Aditya",
"Das, Dipanjan",
"Parikh, Ankur"
] | SEAHORSE: A Multilingual, Multifaceted Dataset for Summarization Evaluation | emnlp-main.584 | 2305.13194 | [
""
] | https://huggingface.co/papers/2305.13194 | 1 | 0 | 0 | 10 | [
"google/seahorse-xxl-q1",
"google/seahorse-xxl-q6",
"google/seahorse-large-q6",
"google/seahorse-xxl-q2",
"google/seahorse-xxl-q5",
"google/seahorse-xxl-q3",
"google/seahorse-xxl-q4",
"google/seahorse-large-q2",
"google/seahorse-large-q1",
"google/seahorse-large-q4",
"google/seahorse-large-q3",
"google/seahorse-large-q5"
] | [
"tasksource/seahorse_summarization_evaluation"
] | [
"mereojb/google-seahorse-large-q6"
] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.585.bib | https://aclanthology.org/2023.emnlp-main.585/ | @inproceedings{wang-etal-2023-query2doc,
title = "Query2doc: Query Expansion with Large Language Models",
author = "Wang, Liang and
Yang, Nan and
Wei, Furu",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.585",
doi = "10.18653/v1/2023.emnlp-main.585",
pages = "9414--9423",
abstract = "This paper introduces a simple yet effective query expansion approach, denoted as query2doc, to improve both sparse and dense retrieval systems. The proposed method first generates pseudo-documents by few-shot prompting large language models (LLMs), and then expands the query with generated pseudo documents. LLMs are trained on web-scale text corpora and are adept at knowledge memorization. The pseudo-documents from LLMs often contain highly relevant information that can aid in query disambiguation and guide the retrievers. Experimental results demonstrate that query2doc boosts the performance of BM25 by 3{\%} to 15{\%} on ad-hoc IR datasets, such as MS-MARCO and TREC DL, without any model fine-tuning. Furthermore, our method also benefits state-of-the-art dense retrievers in terms of both in-domain and out-of-domain results.",
}
| This paper introduces a simple yet effective query expansion approach, denoted as query2doc, to improve both sparse and dense retrieval systems. The proposed method first generates pseudo-documents by few-shot prompting large language models (LLMs), and then expands the query with generated pseudo documents. LLMs are trained on web-scale text corpora and are adept at knowledge memorization. The pseudo-documents from LLMs often contain highly relevant information that can aid in query disambiguation and guide the retrievers. Experimental results demonstrate that query2doc boosts the performance of BM25 by 3{\%} to 15{\%} on ad-hoc IR datasets, such as MS-MARCO and TREC DL, without any model fine-tuning. Furthermore, our method also benefits state-of-the-art dense retrievers in terms of both in-domain and out-of-domain results. | [
"Wang, Liang",
"Yang, Nan",
"Wei, Furu"
] | Query2doc: Query Expansion with Large Language Models | emnlp-main.585 | 2303.07678 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
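The query2doc procedure described in the record above is mechanically simple to reproduce: few-shot prompt an LLM for a pseudo-document that answers the query, then concatenate the (repeated) query with that pseudo-document before sparse retrieval. The sketch below illustrates the flow with the `rank_bm25` package; the LLM call is stubbed out, and the five-fold query repetition used to keep the short query from being drowned out by the longer pseudo-document should be treated as an assumption here.

```python
# Hedged sketch of query2doc-style expansion for a BM25 retriever.
# `generate_pseudo_doc` stands in for any few-shot prompted LLM; treat the
# 5x query repetition as an assumption about balancing query vs. expansion.
from rank_bm25 import BM25Okapi

def generate_pseudo_doc(query: str) -> str:
    # Placeholder for a few-shot LLM call, e.g. a prompt of the form
    # "Write a passage that answers the given query.\nQuery: ...\nPassage:"
    return "Hypothetical passage answering: " + query

def expand_query(query: str, n_repeats: int = 5) -> str:
    pseudo_doc = generate_pseudo_doc(query)
    return " ".join([query] * n_repeats + [pseudo_doc])

corpus = [
    "BM25 is a bag-of-words ranking function used by search engines.",
    "Dense retrievers embed queries and documents into a shared vector space.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

expanded = expand_query("what is bm25")
scores = bm25.get_scores(expanded.lower().split())
print(sorted(zip(scores, corpus), reverse=True)[0][1])  # best-matching doc
```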
https://aclanthology.org/2023.emnlp-main.586.bib | https://aclanthology.org/2023.emnlp-main.586/ | @inproceedings{xue-etal-2023-need,
title = "We Need to Talk About Reproducibility in {NLP} Model Comparison",
author = "Xue, Yan and
Cao, Xuefei and
Yang, Xingli and
Wang, Yu and
Wang, Ruibo and
Li, Jihong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.586",
doi = "10.18653/v1/2023.emnlp-main.586",
pages = "9424--9434",
abstract = "NLPers frequently face reproducibility crisis in a comparison of various models of a real-world NLP task. Many studies have empirically showed that the standard splits tend to produce low reproducible and unreliable conclusions, and they attempted to improve the splits by using more random repetitions. However, the improvement on the reproducibility in a comparison of NLP models is limited attributed to a lack of investigation on the relationship between the reproducibility and the estimator induced by a splitting strategy. In this paper, we formulate the reproducibility in a model comparison into a probabilistic function with regard to a conclusion. Furthermore, we theoretically illustrate that the reproducibility is qualitatively dominated by the signal-to-noise ratio (SNR) of a model performance estimator obtained on a corpus splitting strategy. Specifically, a higher value of the SNR of an estimator probably indicates a better reproducibility. On the basis of the theoretical motivations, we develop a novel mixture estimator of the performance of an NLP model with a regularized corpus splitting strategy based on a blocked $3\times 2$ cross-validation. We conduct numerical experiments on multiple NLP tasks to show that the proposed estimator achieves a high SNR, and it substantially increases the reproducibility. Therefore, we recommend the NLP practitioners to use the proposed method to compare NLP models instead of the methods based on the widely-used standard splits and the random splits with multiple repetitions.",
}
 | NLPers frequently face a reproducibility crisis when comparing various models on a real-world NLP task. Many studies have empirically shown that standard splits tend to produce poorly reproducible and unreliable conclusions, and they have attempted to improve the splits by using more random repetitions. However, the improvement in reproducibility when comparing NLP models is limited, owing to a lack of investigation into the relationship between reproducibility and the estimator induced by a splitting strategy. In this paper, we formulate the reproducibility of a model comparison as a probabilistic function with regard to a conclusion. Furthermore, we theoretically illustrate that reproducibility is qualitatively dominated by the signal-to-noise ratio (SNR) of a model performance estimator obtained under a corpus splitting strategy. Specifically, a higher SNR of an estimator probably indicates better reproducibility. On the basis of these theoretical motivations, we develop a novel mixture estimator of the performance of an NLP model with a regularized corpus splitting strategy based on a blocked $3\times 2$ cross-validation. We conduct numerical experiments on multiple NLP tasks to show that the proposed estimator achieves a high SNR and substantially increases reproducibility. Therefore, we recommend that NLP practitioners use the proposed method to compare NLP models instead of methods based on the widely used standard splits or random splits with multiple repetitions. | [
"Xue, Yan",
"Cao, Xuefei",
"Yang, Xingli",
"Wang, Yu",
"Wang, Ruibo",
"Li, Jihong"
] | We Need to Talk About Reproducibility in NLP Model Comparison | emnlp-main.586 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
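For readers who want the computational core of the row above: a 3×2 cross-validation repeats a 2-fold split three times, and the signal-to-noise ratio of the resulting six scores (mean over standard deviation) is what the authors argue governs reproducibility. The sketch below shows that skeleton only; the paper's blocked, regularized splitting and mixture estimator are more involved, so treat this as an illustrative baseline, not their method.

```python
# Hedged sketch: a plain 3x2 cross-validation estimate and its SNR
# (mean / std of the six fold scores). The paper's blocked splits and
# mixture estimator are more sophisticated; this is only the skeleton.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def three_by_two_cv_snr(model_factory, X, y, seed=0):
    rng = np.random.RandomState(seed)
    scores = []
    for _ in range(3):  # three independent 2-fold splits
        kf = KFold(n_splits=2, shuffle=True, random_state=rng.randint(10**6))
        for train_idx, test_idx in kf.split(X):
            model = model_factory()
            model.fit(X[train_idx], y[train_idx])
            scores.append(model.score(X[test_idx], y[test_idx]))
    scores = np.array(scores)  # six scores in total
    return scores.mean(), scores.mean() / scores.std(ddof=1)

X = np.random.randn(200, 5)
y = (X[:, 0] > 0).astype(int)
print(three_by_two_cv_snr(LogisticRegression, X, y))  # (accuracy, SNR)
```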
https://aclanthology.org/2023.emnlp-main.587.bib | https://aclanthology.org/2023.emnlp-main.587/ | @inproceedings{wan-etal-2023-explore,
title = "Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration",
author = "Wan, Fanqi and
Huang, Xinting and
Yang, Tao and
Quan, Xiaojun and
Bi, Wei and
Shi, Shuming",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.587",
doi = "10.18653/v1/2023.emnlp-main.587",
pages = "9435--9454",
abstract = "Instruction-tuning can be substantially optimized through enhanced diversity, resulting in models capable of handling a broader spectrum of tasks. However, existing data employed for such tuning often exhibit an inadequate coverage of individual domains, limiting the scope for nuanced comprehension and interactions within these areas. To address this deficiency, we propose Explore-Instruct, a novel approach to enhance the data coverage to be used in domain-specific instruction-tuning through active exploration via Large Language Models (LLMs). Built upon representative domain use cases, Explore-Instruct explores a multitude of variations or possibilities by implementing a search algorithm to obtain diversified and domain-focused instruction-tuning data. Our data-centric analysis validates the effectiveness of this proposed approach in improving domain-specific instruction coverage. Moreover, our model{'}s performance demonstrates considerable advancements over multiple baselines, including those utilizing domain-specific data enhancement. Our findings offer a promising opportunity to improve instruction coverage, especially in domain-specific contexts, thereby advancing the development of adaptable language models. Our code, model weights, and data are public at \url{https://github.com/fanqiwan/Explore-Instruct}.",
}
| Instruction-tuning can be substantially optimized through enhanced diversity, resulting in models capable of handling a broader spectrum of tasks. However, existing data employed for such tuning often exhibit an inadequate coverage of individual domains, limiting the scope for nuanced comprehension and interactions within these areas. To address this deficiency, we propose Explore-Instruct, a novel approach to enhance the data coverage to be used in domain-specific instruction-tuning through active exploration via Large Language Models (LLMs). Built upon representative domain use cases, Explore-Instruct explores a multitude of variations or possibilities by implementing a search algorithm to obtain diversified and domain-focused instruction-tuning data. Our data-centric analysis validates the effectiveness of this proposed approach in improving domain-specific instruction coverage. Moreover, our model{'}s performance demonstrates considerable advancements over multiple baselines, including those utilizing domain-specific data enhancement. Our findings offer a promising opportunity to improve instruction coverage, especially in domain-specific contexts, thereby advancing the development of adaptable language models. Our code, model weights, and data are public at \url{https://github.com/fanqiwan/Explore-Instruct}. | [
"Wan, Fanqi",
"Huang, Xinting",
"Yang, Tao",
"Quan, Xiaojun",
"Bi, Wei",
"Shi, Shuming"
] | Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration | emnlp-main.587 | 2310.09168 | [
"https://github.com/fanqiwan/explore-instruct"
] | https://huggingface.co/papers/2310.09168 | 2 | 2 | 0 | 6 | [
"Wanfq/Explore-LM-Ext-7B-Brainstorming",
"Wanfq/Explore-LM-Ext-7B-Rewriting",
"Wanfq/Explore-LM-7B-Math",
"Wanfq/Explore-LM-7B-Rewriting",
"Wanfq/Explore-LM-Ext-7B-Math",
"Wanfq/Explore-LM-7B-Brainstorming"
] | [
"Wanfq/Explore_Instruct_Brainstorming_16k",
"Wanfq/Explore_Instruct_Rewriting_32k",
"Wanfq/Explore_Instruct_Rewriting_10k",
"Wanfq/Explore_Instruct_Math_10k",
"Wanfq/Explore_Instruct_Math_64k",
"Wanfq/Explore_Instruct_Brainstorming_10k"
] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.588.bib | https://aclanthology.org/2023.emnlp-main.588/ | @inproceedings{irie-etal-2023-practical,
title = "Practical Computational Power of Linear Transformers and Their Recurrent and Self-Referential Extensions",
author = {Irie, Kazuki and
Csord{\'a}s, R{\'o}bert and
Schmidhuber, J{\"u}rgen},
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.588",
doi = "10.18653/v1/2023.emnlp-main.588",
pages = "9455--9465",
abstract = "Recent studies of the computational power of recurrent neural networks (RNNs) reveal a hierarchy of RNN architectures, given real-time and finite-precision assumptions. Here we study auto-regressive Transformers with linearised attention, a.k.a. linear Transformers (LTs) or Fast Weight Programmers (FWPs). LTs are special in the sense that they are equivalent to RNN-like sequence processors with a fixed-size state, while they can also be expressed as the now-popular self-attention networks. We show that many well-known results for the standard Transformer directly transfer to LTs/FWPs. Our formal language recognition experiments demonstrate how recently proposed FWP extensions such as recurrent FWPs and self-referential weight matrices successfully overcome certain limitations of the LT, e.g., allowing for generalisation on the parity problem. Our code is public.",
}
| Recent studies of the computational power of recurrent neural networks (RNNs) reveal a hierarchy of RNN architectures, given real-time and finite-precision assumptions. Here we study auto-regressive Transformers with linearised attention, a.k.a. linear Transformers (LTs) or Fast Weight Programmers (FWPs). LTs are special in the sense that they are equivalent to RNN-like sequence processors with a fixed-size state, while they can also be expressed as the now-popular self-attention networks. We show that many well-known results for the standard Transformer directly transfer to LTs/FWPs. Our formal language recognition experiments demonstrate how recently proposed FWP extensions such as recurrent FWPs and self-referential weight matrices successfully overcome certain limitations of the LT, e.g., allowing for generalisation on the parity problem. Our code is public. | [
"Irie, Kazuki",
"Csord{\\'a}s, R{\\'o}bert",
"Schmidhuber, J{\\\"u}rgen"
] | Practical Computational Power of Linear Transformers and Their Recurrent and Self-Referential Extensions | emnlp-main.588 | 2310.16076 | [
"https://github.com/idsia/fwp-formal-lang"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.589.bib | https://aclanthology.org/2023.emnlp-main.589/ | @inproceedings{majumder-etal-2023-interfair,
title = "{I}nter{F}air: Debiasing with Natural Language Feedback for Fair Interpretable Predictions",
author = "Majumder, Bodhisattwa and
He, Zexue and
McAuley, Julian",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.589",
doi = "10.18653/v1/2023.emnlp-main.589",
pages = "9466--9471",
abstract = "Debiasing methods in NLP models traditionally focus on isolating information related to a sensitive attribute (e.g., gender or race). We instead argue that a favorable debiasing method should use sensitive information {`}fairly,{'} with explanations, rather than blindly eliminating it. This fair balance is often subjective and can be challenging to achieve algorithmically. We explore two interactive setups with a frozen predictive model and show that users able to provide feedback can achieve a better and \textit{fairer} balance between task performance and bias mitigation. In one setup, users, by interacting with test examples, further decreased bias in the explanations (5-8{\%}) while maintaining the same prediction accuracy. In the other setup, human feedback was able to disentangle associated bias and predictive information from the input leading to superior bias mitigation and improved task performance (4-5{\%}) simultaneously.",
}
| Debiasing methods in NLP models traditionally focus on isolating information related to a sensitive attribute (e.g., gender or race). We instead argue that a favorable debiasing method should use sensitive information {`}fairly,{'} with explanations, rather than blindly eliminating it. This fair balance is often subjective and can be challenging to achieve algorithmically. We explore two interactive setups with a frozen predictive model and show that users able to provide feedback can achieve a better and \textit{fairer} balance between task performance and bias mitigation. In one setup, users, by interacting with test examples, further decreased bias in the explanations (5-8{\%}) while maintaining the same prediction accuracy. In the other setup, human feedback was able to disentangle associated bias and predictive information from the input leading to superior bias mitigation and improved task performance (4-5{\%}) simultaneously. | [
"Majumder, Bodhisattwa",
"He, Zexue",
"McAuley, Julian"
] | InterFair: Debiasing with Natural Language Feedback for Fair Interpretable Predictions | emnlp-main.589 | 2210.07440 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.590.bib | https://aclanthology.org/2023.emnlp-main.590/ | @inproceedings{pu-etal-2023-just,
title = "Just Adjust One Prompt: Enhancing In-Context Dialogue Scoring via Constructing the Optimal Subgraph of Demonstrations and Prompts",
author = "Pu, Jiashu and
Cheng, Ling and
Fan, Lu and
Lv, Tangjie and
Zhang, Rongsheng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.590",
doi = "10.18653/v1/2023.emnlp-main.590",
pages = "9472--9496",
abstract = "The use of modern Large Language Models (LLMs) as chatbots still has some problems such as hallucinations and lack of empathy. Identifying these issues can help improve chatbot performance. The community has been continually iterating on reference-free dialogue evaluation methods based on large language models (LLMs) that can be readily applied. However, many of these LLM-based metrics require selecting specific datasets and developing specialized training tasks for different evaluation dimensions (e.g., coherence, informative). The developing step can be time-consuming and may need to be repeated for new evaluation dimensions. To enable efficient and flexible adaptation to diverse needs of dialogue evaluation, we propose a dimension-agnostic scoring method that leverages the in-context learning (ICL) capability of LLMs to learn from human scoring to the fullest extent. Our method has three key features. To begin with, rather than manual prompt crafting, we propose automatically generating prompts, allowing the LLM to observe human labels and summarize the most suitable prompt. Additionally, since the LLM has a token limit and ICL is sensitive to demonstration variations, we train a selector to finely customize demonstrations and prompts for each dialogue input. Finally, during inference, we propose to request the LLM multiple times with a subgraph of demonstrations and prompts that are diverse and suitable to maximize ICL from various human scoring. We validate the efficacy of our method on five datasets, even with a small amount of annotated data, our method outperforms all strong baselines. Code is available at https://github.com/iamlxb3/EMNLP2023-ADOROR.",
}
 | The use of modern Large Language Models (LLMs) as chatbots still has some problems such as hallucinations and lack of empathy. Identifying these issues can help improve chatbot performance. The community has been continually iterating on reference-free dialogue evaluation methods based on large language models (LLMs) that can be readily applied. However, many of these LLM-based metrics require selecting specific datasets and developing specialized training tasks for different evaluation dimensions (e.g., coherence, informativeness). The development step can be time-consuming and may need to be repeated for new evaluation dimensions. To enable efficient and flexible adaptation to diverse needs of dialogue evaluation, we propose a dimension-agnostic scoring method that leverages the in-context learning (ICL) capability of LLMs to learn from human scoring to the fullest extent. Our method has three key features. To begin with, rather than manual prompt crafting, we propose automatically generating prompts, allowing the LLM to observe human labels and summarize the most suitable prompt. Additionally, since the LLM has a token limit and ICL is sensitive to demonstration variations, we train a selector to finely customize demonstrations and prompts for each dialogue input. Finally, during inference, we propose to request the LLM multiple times with a subgraph of demonstrations and prompts that are diverse and suitable to maximize ICL from various human scoring. We validate the efficacy of our method on five datasets; even with a small amount of annotated data, our method outperforms all strong baselines. Code is available at https://github.com/iamlxb3/EMNLP2023-ADOROR. | [
"Pu, Jiashu",
"Cheng, Ling",
"Fan, Lu",
"Lv, Tangjie",
"Zhang, Rongsheng"
] | Just Adjust One Prompt: Enhancing In-Context Dialogue Scoring via Constructing the Optimal Subgraph of Demonstrations and Prompts | emnlp-main.590 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
||
https://aclanthology.org/2023.emnlp-main.591.bib | https://aclanthology.org/2023.emnlp-main.591/ | @inproceedings{nikolaev-etal-2023-multilingual,
title = "Multilingual estimation of political-party positioning: From label aggregation to long-input Transformers",
author = "Nikolaev, Dmitry and
Ceron, Tanise and
Pad{\'o}, Sebastian",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.591",
doi = "10.18653/v1/2023.emnlp-main.591",
pages = "9497--9511",
abstract = "Scaling analysis is a technique in computational political science that assigns a political actor (e.g. politician or party) a score on a predefined scale based on a (typically long) body of text (e.g. a parliamentary speech or an election manifesto). For example, political scientists have often used the left{--}right scale to systematically analyse political landscapes of different countries. NLP methods for automatic scaling analysis can find broad application provided they (i) are able to deal with long texts and (ii) work robustly across domains and languages. In this work, we implement and compare two approaches to automatic scaling analysis of political-party manifestos: label aggregation, a pipeline strategy relying on annotations of individual statements from the manifestos, and long-input-Transformer-based models, which compute scaling values directly from raw text. We carry out the analysis of the Comparative Manifestos Project dataset across 41 countries and 27 languages and find that the task can be efficiently solved by state-of-the-art models, with label aggregation producing the best results.",
}
| Scaling analysis is a technique in computational political science that assigns a political actor (e.g. politician or party) a score on a predefined scale based on a (typically long) body of text (e.g. a parliamentary speech or an election manifesto). For example, political scientists have often used the left{--}right scale to systematically analyse political landscapes of different countries. NLP methods for automatic scaling analysis can find broad application provided they (i) are able to deal with long texts and (ii) work robustly across domains and languages. In this work, we implement and compare two approaches to automatic scaling analysis of political-party manifestos: label aggregation, a pipeline strategy relying on annotations of individual statements from the manifestos, and long-input-Transformer-based models, which compute scaling values directly from raw text. We carry out the analysis of the Comparative Manifestos Project dataset across 41 countries and 27 languages and find that the task can be efficiently solved by state-of-the-art models, with label aggregation producing the best results. | [
"Nikolaev, Dmitry",
"Ceron, Tanise",
"Pad{\\'o}, Sebastian"
] | Multilingual estimation of political-party positioning: From label aggregation to long-input Transformers | emnlp-main.591 | 2310.12575 | [
"https://github.com/macleginn/party-positioning-code"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
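As context for the label-aggregation strategy in the record above: in the Comparative Manifesto Project tradition, statement-level left/right codes are commonly collapsed into a RILE-style scale value, (right − left) / total. The sketch below shows that aggregation with the statement classifier stubbed out; the paper's exact aggregation may differ, so read it as an illustration of the pipeline's final step.

```python
# Hedged sketch of label aggregation for party positioning: statement-level
# left/right labels collapse into a RILE-style score in [-1, 1],
# (right - left) / total. The classifier producing the labels is stubbed.
def rile_score(statement_labels: list[str]) -> float:
    right = sum(lab == "right" for lab in statement_labels)
    left = sum(lab == "left" for lab in statement_labels)
    return (right - left) / len(statement_labels)

# Toy manifesto: labels would normally come from a statement classifier.
manifesto_labels = ["right", "neutral", "left", "right", "right", "neutral"]
print(rile_score(manifesto_labels))  # 0.333... -> leans right
```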
https://aclanthology.org/2023.emnlp-main.592.bib | https://aclanthology.org/2023.emnlp-main.592/ | @inproceedings{li-etal-2023-art,
title = "{ART}: rule b{A}sed futu{R}e-inference deduc{T}ion",
author = "Li, Mengze and
Zhao, Tianqi and
Jionghao, Bai and
He, Baoyi and
Miao, Jiaxu and
Ji, Wei and
Lv, Zheqi and
Zhao, Zhou and
Zhang, Shengyu and
Zhang, Wenqiao and
Wu, Fei",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.592",
doi = "10.18653/v1/2023.emnlp-main.592",
pages = "9512--9522",
abstract = "Deductive reasoning is a crucial cognitive ability of humanity, allowing us to derive valid conclusions from premises and observations. However, existing works mainly focus on language-based premises and generally neglect deductive reasoning from visual observations. In this work, we introduce rule bAsed futuRe-inference deducTion (ART), which aims at deducing the correct future event based on the visual phenomenon (a video) and the rule-based premises, along with an explanation of the reasoning process. To advance this field, we construct a large-scale densely annotated dataset (Video-ART), where the premises, future event candidates, the reasoning process explanation, and auxiliary commonsense knowledge (e.g., actions and appearance) are annotated by annotators. Upon Video-ART, we develop a strong baseline named ARTNet. In essence, guided by commonsense knowledge, ARTNet learns to identify the target video character and perceives its visual clues related to the future event. Then, ARTNet rigorously applies the given premises to conduct reasoning from the identified information to future events, through a non-parametric rule reasoning network and a reasoning-path review module. Empirical studies validate the rationality of ARTNet in deductive reasoning upon visual observations and the effectiveness over existing works.",
}
| Deductive reasoning is a crucial cognitive ability of humanity, allowing us to derive valid conclusions from premises and observations. However, existing works mainly focus on language-based premises and generally neglect deductive reasoning from visual observations. In this work, we introduce rule bAsed futuRe-inference deducTion (ART), which aims at deducing the correct future event based on the visual phenomenon (a video) and the rule-based premises, along with an explanation of the reasoning process. To advance this field, we construct a large-scale densely annotated dataset (Video-ART), where the premises, future event candidates, the reasoning process explanation, and auxiliary commonsense knowledge (e.g., actions and appearance) are annotated by annotators. Upon Video-ART, we develop a strong baseline named ARTNet. In essence, guided by commonsense knowledge, ARTNet learns to identify the target video character and perceives its visual clues related to the future event. Then, ARTNet rigorously applies the given premises to conduct reasoning from the identified information to future events, through a non-parametric rule reasoning network and a reasoning-path review module. Empirical studies validate the rationality of ARTNet in deductive reasoning upon visual observations and the effectiveness over existing works. | [
"Li, Mengze",
"Zhao, Tianqi",
"Jionghao, Bai",
"He, Baoyi",
"Miao, Jiaxu",
"Ji, Wei",
"Lv, Zheqi",
"Zhao, Zhou",
"Zhang, Shengyu",
"Zhang, Wenqiao",
"Wu, Fei"
] | ART: rule bAsed futuRe-inference deducTion | emnlp-main.592 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.593.bib | https://aclanthology.org/2023.emnlp-main.593/ | @inproceedings{prato-etal-2023-epik,
title = "{E}pi{K}-Eval: Evaluation for Language Models as Epistemic Models",
author = "Prato, Gabriele and
Huang, Jerry and
Parthasarathi, Prasanna and
Sodhani, Shagun and
Chandar, Sarath",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.593",
doi = "10.18653/v1/2023.emnlp-main.593",
pages = "9523--9557",
abstract = "In the age of artificial intelligence, the role of large language models (LLMs) is becoming increasingly central. Despite their growing prevalence, their capacity to consolidate knowledge from different training documents{---}a crucial ability in numerous applications{---}remains unexplored. This paper presents the first study examining the capability of LLMs to effectively combine such information within their parameter space. We introduce EpiK-Eval, a novel question-answering benchmark tailored to evaluate LLMs{'} proficiency in formulating a coherent and consistent knowledge representation from segmented narratives. Evaluations across various LLMs reveal significant weaknesses in this domain. We contend that these shortcomings stem from the intrinsic nature of prevailing training objectives. Consequently, we advocate for refining the approach towards knowledge consolidation, as it harbors the potential to dramatically improve their overall effectiveness and performance. The findings from this study offer insights for developing more robust and reliable LLMs. Our code and benchmark are available at https://github.com/chandar-lab/EpiK-Eval",
}
| In the age of artificial intelligence, the role of large language models (LLMs) is becoming increasingly central. Despite their growing prevalence, their capacity to consolidate knowledge from different training documents{---}a crucial ability in numerous applications{---}remains unexplored. This paper presents the first study examining the capability of LLMs to effectively combine such information within their parameter space. We introduce EpiK-Eval, a novel question-answering benchmark tailored to evaluate LLMs{'} proficiency in formulating a coherent and consistent knowledge representation from segmented narratives. Evaluations across various LLMs reveal significant weaknesses in this domain. We contend that these shortcomings stem from the intrinsic nature of prevailing training objectives. Consequently, we advocate for refining the approach towards knowledge consolidation, as it harbors the potential to dramatically improve their overall effectiveness and performance. The findings from this study offer insights for developing more robust and reliable LLMs. Our code and benchmark are available at https://github.com/chandar-lab/EpiK-Eval | [
"Prato, Gabriele",
"Huang, Jerry",
"Parthasarathi, Prasanna",
"Sodhani, Shagun",
"Ch",
"ar, Sarath"
] | EpiK-Eval: Evaluation for Language Models as Epistemic Models | emnlp-main.593 | 2310.15372 | [
"https://github.com/chandar-lab/epik-eval"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
|
https://aclanthology.org/2023.emnlp-main.594.bib | https://aclanthology.org/2023.emnlp-main.594/ | @inproceedings{xu-etal-2023-dissonance,
title = "From Dissonance to Insights: Dissecting Disagreements in Rationale Construction for Case Outcome Classification",
author = "Xu, Shanshan and
T.y.s.s, Santosh and
Ichim, Oana and
Risini, Isabella and
Plank, Barbara and
Grabmair, Matthias",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.594",
doi = "10.18653/v1/2023.emnlp-main.594",
pages = "9558--9576",
abstract = "In legal NLP, Case Outcome Classification (COC) must not only be accurate but also trustworthy and explainable. Existing work in explainable COC has been limited to annotations by a single expert. However, it is well-known that lawyers may disagree in their assessment of case facts. We hence collect a novel dataset RaVE: Rationale Variation in ECHR, which is obtained from two experts in the domain of international human rights law, for whom we observe weak agreement. We study their disagreements and build a two-level task-independent taxonomy, supplemented with COC-specific subcategories. To our knowledge, this is the first work in the legal NLP that focuses on human label variation. We quantitatively assess different taxonomy categories and find that disagreements mainly stem from underspecification of the legal context, which poses challenges given the typically limited granularity and noise in COC metadata. We further assess the explainablility of state-of-the-art COC models on RaVE and observe limited agreement between models and experts. Overall, our case study reveals hitherto underappreciated complexities in creating benchmark datasets in legal NLP that revolve around identifying aspects of a case{'}s facts supposedly relevant for its outcome.",
}
 | In legal NLP, Case Outcome Classification (COC) must not only be accurate but also trustworthy and explainable. Existing work in explainable COC has been limited to annotations by a single expert. However, it is well-known that lawyers may disagree in their assessment of case facts. We hence collect a novel dataset RaVE: Rationale Variation in ECHR, which is obtained from two experts in the domain of international human rights law, for whom we observe weak agreement. We study their disagreements and build a two-level task-independent taxonomy, supplemented with COC-specific subcategories. To our knowledge, this is the first work in legal NLP that focuses on human label variation. We quantitatively assess different taxonomy categories and find that disagreements mainly stem from underspecification of the legal context, which poses challenges given the typically limited granularity and noise in COC metadata. We further assess the explainability of state-of-the-art COC models on RaVE and observe limited agreement between models and experts. Overall, our case study reveals hitherto underappreciated complexities in creating benchmark datasets in legal NLP that revolve around identifying aspects of a case{'}s facts supposedly relevant for its outcome. | [
"Xu, Shanshan",
"T.y.s.s, Santosh",
"Ichim, Oana",
"Risini, Isabella",
"Plank, Barbara",
"Grabmair, Matthias"
] | From Dissonance to Insights: Dissecting Disagreements in Rationale Construction for Case Outcome Classification | emnlp-main.594 | 2310.11878 | [
""
] | https://huggingface.co/papers/2310.11878 | 0 | 0 | 0 | 6 | [] | [
"sxu/RaVE_emnlp23"
] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.595.bib | https://aclanthology.org/2023.emnlp-main.595/ | @inproceedings{li-etal-2023-bilingual,
title = "On Bilingual Lexicon Induction with Large Language Models",
author = "Li, Yaoyiran and
Korhonen, Anna and
Vuli{\'c}, Ivan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.595",
doi = "10.18653/v1/2023.emnlp-main.595",
pages = "9577--9599",
abstract = "Bilingual Lexicon Induction (BLI) is a core task in multilingual NLP that still, to a large extent, relies on calculating cross-lingual word representations. Inspired by the global paradigm shift in NLP towards Large Language Models (LLMs), we examine the potential of the latest generation of LLMs for the development of bilingual lexicons. We ask the following research question: Is it possible to prompt and fine-tune multilingual LLMs (mLLMs) for BLI, and how does this approach compare against and complement current BLI approaches? To this end, we systematically study 1) zero-shot prompting for unsupervised BLI and 2) few-shot in-context prompting with a set of seed translation pairs, both without any LLM fine-tuning, as well as 3) standard BLI-oriented fine-tuning of smaller LLMs. We experiment with 18 open-source text-to-text mLLMs of different sizes (from 0.3B to 13B parameters) on two standard BLI benchmarks covering a range of typologically diverse languages. Our work is the first to demonstrate strong BLI capabilities of text-to-text mLLMs. The results reveal that few-shot prompting with in-context examples from nearest neighbours achieves the best performance, establishing new state-of-the-art BLI scores for many language pairs. We also conduct a series of in-depth analyses and ablation studies, providing more insights on BLI with (m)LLMs, also along with their limitations.",
}
| Bilingual Lexicon Induction (BLI) is a core task in multilingual NLP that still, to a large extent, relies on calculating cross-lingual word representations. Inspired by the global paradigm shift in NLP towards Large Language Models (LLMs), we examine the potential of the latest generation of LLMs for the development of bilingual lexicons. We ask the following research question: Is it possible to prompt and fine-tune multilingual LLMs (mLLMs) for BLI, and how does this approach compare against and complement current BLI approaches? To this end, we systematically study 1) zero-shot prompting for unsupervised BLI and 2) few-shot in-context prompting with a set of seed translation pairs, both without any LLM fine-tuning, as well as 3) standard BLI-oriented fine-tuning of smaller LLMs. We experiment with 18 open-source text-to-text mLLMs of different sizes (from 0.3B to 13B parameters) on two standard BLI benchmarks covering a range of typologically diverse languages. Our work is the first to demonstrate strong BLI capabilities of text-to-text mLLMs. The results reveal that few-shot prompting with in-context examples from nearest neighbours achieves the best performance, establishing new state-of-the-art BLI scores for many language pairs. We also conduct a series of in-depth analyses and ablation studies, providing more insights on BLI with (m)LLMs, also along with their limitations. | [
"Li, Yaoyiran",
"Korhonen, Anna",
"Vuli{\\'c}, Ivan"
] | On Bilingual Lexicon Induction with Large Language Models | emnlp-main.595 | 2310.13995 | [
"https://github.com/cambridgeltl/prompt4bli"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
|
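The few-shot prompting setup that the abstract above reports as strongest is easy to picture: seed translation pairs become in-context demonstrations, and the model completes the translation of the query word. The template below is illustrative only, not the exact prompt from the paper.

```python
# Hedged sketch of few-shot prompting for bilingual lexicon induction (BLI):
# seed translation pairs become in-context examples and the model completes
# the translation of the target word. The template is illustrative, not the
# exact one used by Li et al.
def bli_prompt(seed_pairs, src_word, src_lang="German", tgt_lang="English"):
    lines = [
        f"The {tgt_lang} translation of the {src_lang} word '{s}' is '{t}'."
        for s, t in seed_pairs
    ]
    lines.append(
        f"The {tgt_lang} translation of the {src_lang} word '{src_word}' is '"
    )
    return "\n".join(lines)

seeds = [("Hund", "dog"), ("Katze", "cat"), ("Haus", "house")]
prompt = bli_prompt(seeds, "Buch")
print(prompt)  # feed this to any text-to-text LLM and read the completion
```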
https://aclanthology.org/2023.emnlp-main.596.bib | https://aclanthology.org/2023.emnlp-main.596/ | @inproceedings{seegmiller-preum-2023-statistical,
title = "Statistical Depth for Ranking and Characterizing Transformer-Based Text Embeddings",
author = "Seegmiller, Parker and
Preum, Sarah",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.596",
doi = "10.18653/v1/2023.emnlp-main.596",
pages = "9600--9611",
abstract = "The popularity of transformer-based text embeddings calls for better statistical tools for measuring distributions of such embeddings. One such tool would be a method for ranking texts within a corpus by centrality, i.e. assigning each text a number signifying how representative that text is of the corpus as a whole. However, an intrinsic center-outward ordering of high-dimensional text representations is not trivial. A $\textit{statistical depth}$ is a function for ranking $k$-dimensional objects by measuring centrality with respect to some observed $k$-dimensional distribution. We adopt a statistical depth to measure distributions of transformer-based text embeddings, $\textit{transformer-based text embedding (TTE) depth}$, and introduce the practical use of this depth for both modeling and distributional inference in NLP pipelines. We first define TTE depth and an associated rank sum test for determining whether two corpora differ significantly in embedding space. We then use TTE depth for the task of in-context learning prompt selection, showing that this approach reliably improves performance over statistical baseline approaches across six text classification tasks. Finally, we use TTE depth and the associated rank sum test to characterize the distributions of synthesized and human-generated corpora, showing that five recent synthetic data augmentation processes cause a measurable distributional shift away from associated human-generated text.",
}
| The popularity of transformer-based text embeddings calls for better statistical tools for measuring distributions of such embeddings. One such tool would be a method for ranking texts within a corpus by centrality, i.e. assigning each text a number signifying how representative that text is of the corpus as a whole. However, an intrinsic center-outward ordering of high-dimensional text representations is not trivial. A $\textit{statistical depth}$ is a function for ranking $k$-dimensional objects by measuring centrality with respect to some observed $k$-dimensional distribution. We adopt a statistical depth to measure distributions of transformer-based text embeddings, $\textit{transformer-based text embedding (TTE) depth}$, and introduce the practical use of this depth for both modeling and distributional inference in NLP pipelines. We first define TTE depth and an associated rank sum test for determining whether two corpora differ significantly in embedding space. We then use TTE depth for the task of in-context learning prompt selection, showing that this approach reliably improves performance over statistical baseline approaches across six text classification tasks. Finally, we use TTE depth and the associated rank sum test to characterize the distributions of synthesized and human-generated corpora, showing that five recent synthetic data augmentation processes cause a measurable distributional shift away from associated human-generated text. | [
"Seegmiller, Parker",
"Preum, Sarah"
] | Statistical Depth for Ranking and Characterizing Transformer-Based Text Embeddings | emnlp-main.596 | 2310.15010 | [
"https://github.com/pkseeg/tte_depth"
] | https://huggingface.co/papers/2310.15010 | 0 | 0 | 0 | 2 | [] | [] | [] | 1 | Oral |
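To make the idea in the record above concrete: a statistical depth assigns each text a centrality score with respect to the corpus's embedding distribution, inducing a center-outward ranking. The sketch below uses mean cosine similarity to the rest of the corpus as a stand-in depth; the paper's TTE depth is a specific statistical depth, so treat this proxy as illustrative rather than their definition.

```python
# Hedged sketch of a centrality ranking in the spirit of TTE depth: score
# each text by how central its embedding is within the corpus. The
# mean-cosine-similarity proxy below is an illustrative stand-in, not the
# paper's exact depth function.
import numpy as np
from sentence_transformers import SentenceTransformer

texts = [
    "The weather today is mild and sunny.",
    "It is sunny and warm outside today.",
    "Quantum computing uses qubits instead of bits.",
]
model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(texts, normalize_embeddings=True)  # unit-norm rows

sims = emb @ emb.T  # pairwise cosine similarities
depth = (sims.sum(axis=1) - 1.0) / (len(texts) - 1)  # exclude self-similarity

for d, t in sorted(zip(depth, texts), reverse=True):
    print(f"{d:.3f}  {t}")  # most central (representative) text first
```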
https://aclanthology.org/2023.emnlp-main.597.bib | https://aclanthology.org/2023.emnlp-main.597/ | @inproceedings{zhang-etal-2023-crash,
title = "{CR}a{S}h: Clustering, Removing, and Sharing Enhance Fine-tuning without Full Large Language Model",
author = "Zhang, Kaiyan and
Ding, Ning and
Qi, Biqing and
Zhu, Xuekai and
Long, Xinwei and
Zhou, Bowen",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.597",
doi = "10.18653/v1/2023.emnlp-main.597",
pages = "9612--9637",
abstract = "Instruction tuning has recently been recognized as an effective way of aligning Large Language Models (LLMs) to enhance their generalization ability across various tasks. However, when tuning publicly accessible, centralized LLMs with private instruction data, privacy concerns are inevitable. While direct transfer of parameterized modules between models is a plausible approach to address this, its implications and effectiveness need further exploration. This paper focuses on Offsite-Tuning (OFT), a representative technique that transfers transformer blocks between centralized LLMs and downstream emulators. Given the limited understanding of the underlying mechanism of OFT, we perform an empirical analysis on LLMs from the perspectives of representation and functional similarity. Interestingly, our findings reveal a unique modular structure within the layers of LLMs that appears to emerge as the model size expands. Simultaneously, we note subtle but potentially significant changes in representation and intermediate predictions across the layers. Inspired by these observations, we propose CRaSh, involving Clustering, Removing, and Sharing, a training-free strategy to derive improved emulators from LLMs. CRaSh significantly boosts performance of OFT with billions of parameters. Furthermore, we investigate the optimal solutions yielded by fine-tuning with and without full model through the lens of loss landscape. Our findings demonstrate a linear connectivity among these optima falling over the same basin, thereby highlighting the effectiveness of CRaSh and OFT.",
}
| Instruction tuning has recently been recognized as an effective way of aligning Large Language Models (LLMs) to enhance their generalization ability across various tasks. However, when tuning publicly accessible, centralized LLMs with private instruction data, privacy concerns are inevitable. While direct transfer of parameterized modules between models is a plausible approach to address this, its implications and effectiveness need further exploration. This paper focuses on Offsite-Tuning (OFT), a representative technique that transfers transformer blocks between centralized LLMs and downstream emulators. Given the limited understanding of the underlying mechanism of OFT, we perform an empirical analysis on LLMs from the perspectives of representation and functional similarity. Interestingly, our findings reveal a unique modular structure within the layers of LLMs that appears to emerge as the model size expands. Simultaneously, we note subtle but potentially significant changes in representation and intermediate predictions across the layers. Inspired by these observations, we propose CRaSh, involving Clustering, Removing, and Sharing, a training-free strategy to derive improved emulators from LLMs. CRaSh significantly boosts performance of OFT with billions of parameters. Furthermore, we investigate the optimal solutions yielded by fine-tuning with and without full model through the lens of loss landscape. Our findings demonstrate a linear connectivity among these optima falling over the same basin, thereby highlighting the effectiveness of CRaSh and OFT. | [
"Zhang, Kaiyan",
"Ding, Ning",
"Qi, Biqing",
"Zhu, Xuekai",
"Long, Xinwei",
"Zhou, Bowen"
] | CRaSh: Clustering, Removing, and Sharing Enhance Fine-tuning without Full Large Language Model | emnlp-main.597 | 2310.15477 | [
"https://github.com/tsinghuac3i/crash"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
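A rough picture of the Clustering step in CRaSh, per the abstract above: group transformer blocks whose hidden states look alike, then keep (and share) one block per group to form a smaller emulator. The greedy cosine-similarity clustering below is an assumption-laden illustration, not the paper's exact procedure.

```python
# Hedged sketch of the Clustering/Removing/Sharing idea behind CRaSh: merge
# consecutive transformer blocks with similar hidden states, keeping one
# shared block per cluster. Similarity measure and threshold are assumptions.
import torch
import torch.nn.functional as F

def cluster_layers(hidden_states: list[torch.Tensor], threshold: float = 0.95):
    """Greedily merge consecutive layers whose mean-pooled hidden states
    have cosine similarity above `threshold`."""
    clusters, current = [], [0]
    pooled = [h.mean(dim=(0, 1)) for h in hidden_states]  # one vector per layer
    for i in range(1, len(pooled)):
        sim = F.cosine_similarity(pooled[i - 1], pooled[i], dim=0).item()
        if sim >= threshold:
            current.append(i)
        else:
            clusters.append(current)
            current = [i]
    clusters.append(current)
    return clusters  # e.g. [[0, 1], [2], [3, 4, 5]] -> keep one block each

# Toy usage with random "hidden states" from a 6-layer model:
states = [torch.randn(2, 8, 16) for _ in range(6)]
print(cluster_layers(states, threshold=-1.0))  # merges all layers: [[0..5]]
```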
https://aclanthology.org/2023.emnlp-main.598.bib | https://aclanthology.org/2023.emnlp-main.598/ | @inproceedings{kumar-etal-2023-multilingual,
title = "From Multilingual Complexity to Emotional Clarity: Leveraging Commonsense to Unveil Emotions in Code-Mixed Dialogues",
author = "Kumar, Shivani and
S, Ramaneswaran and
Akhtar, Md and
Chakraborty, Tanmoy",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.598",
doi = "10.18653/v1/2023.emnlp-main.598",
pages = "9638--9652",
abstract = "Understanding emotions during conversation is a fundamental aspect of human communication, driving NLP research for Emotion Recognition in Conversation (ERC). While considerable research has focused on discerning emotions of individual speakers in monolingual dialogues, understanding the emotional dynamics in code-mixed conversations has received relatively less attention. This motivates our undertaking of ERC for code-mixed conversations in this study. Recognizing that emotional intelligence encompasses a comprehension of worldly knowledge, we propose an innovative approach that integrates commonsense information with dialogue context to facilitate a deeper understanding of emotions. To achieve this, we devise an efficient pipeline that extracts relevant commonsense from existing knowledge graphs based on the code-mixed input. Subsequently, we develop an advanced fusion technique that seamlessly combines the acquired commonsense information with the dialogue representation obtained from a dedicated dialogue understanding module. Our comprehensive experimentation showcases the substantial performance improvement obtained through the systematic incorporation of commonsense in ERC. Both quantitative assessments and qualitative analyses further corroborate the validity of our hypothesis, reaffirming the pivotal role of commonsense integration in enhancing ERC.",
}
| Understanding emotions during conversation is a fundamental aspect of human communication, driving NLP research for Emotion Recognition in Conversation (ERC). While considerable research has focused on discerning emotions of individual speakers in monolingual dialogues, understanding the emotional dynamics in code-mixed conversations has received relatively less attention. This motivates our undertaking of ERC for code-mixed conversations in this study. Recognizing that emotional intelligence encompasses a comprehension of worldly knowledge, we propose an innovative approach that integrates commonsense information with dialogue context to facilitate a deeper understanding of emotions. To achieve this, we devise an efficient pipeline that extracts relevant commonsense from existing knowledge graphs based on the code-mixed input. Subsequently, we develop an advanced fusion technique that seamlessly combines the acquired commonsense information with the dialogue representation obtained from a dedicated dialogue understanding module. Our comprehensive experimentation showcases the substantial performance improvement obtained through the systematic incorporation of commonsense in ERC. Both quantitative assessments and qualitative analyses further corroborate the validity of our hypothesis, reaffirming the pivotal role of commonsense integration in enhancing ERC. | [
"Kumar, Shivani",
"S, Ramaneswaran",
"Akhtar, Md",
"Chakraborty, Tanmoy"
] | From Multilingual Complexity to Emotional Clarity: Leveraging Commonsense to Unveil Emotions in Code-Mixed Dialogues | emnlp-main.598 | 2310.13080 | [
"https://github.com/lcs2-iiitd/emnlp-coffee"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
|
https://aclanthology.org/2023.emnlp-main.599.bib | https://aclanthology.org/2023.emnlp-main.599/ | @inproceedings{herrera-berg-etal-2023-large,
title = "Large Language Models are biased to overestimate profoundness",
author = "Herrera-Berg, Eugenio and
Browne, Tom{\'a}s and
Le{\'o}n-Villagr{\'a}, Pablo and
Vives, Marc-Llu{\'\i}s and
Calderon, Cristian",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.599",
doi = "10.18653/v1/2023.emnlp-main.599",
pages = "9653--9661",
abstract = "Recent advancements in natural language processing by large language models (LLMs), such as GPT-4, have been suggested to approach Artificial General Intelligence. And yet, it is still under dispute whether LLMs possess similar reasoning abilities to humans. This study evaluates GPT-4 and various other LLMs in judging the profoundness of mundane, motivational, and pseudo-profound statements. We found a significant statement-to-statement correlation between the LLMs and humans, irrespective of the type of statements and the prompting technique used. However, LLMs systematically overestimate the profoundness of nonsensical statements, with the exception of Tk-instruct, which uniquely underestimates the profoundness of statements. Only few-shot learning prompts, as opposed to chain-of-thought prompting, draw LLMs ratings closer to humans. Furthermore, this work provides insights into the potential biases induced by Reinforcement Learning from Human Feedback (RLHF), inducing an increase in the bias to overestimate the profoundness of statements.",
}
 | Recent advancements in natural language processing by large language models (LLMs), such as GPT-4, have been suggested to approach Artificial General Intelligence. And yet, it is still under dispute whether LLMs possess similar reasoning abilities to humans. This study evaluates GPT-4 and various other LLMs in judging the profoundness of mundane, motivational, and pseudo-profound statements. We found a significant statement-to-statement correlation between the LLMs and humans, irrespective of the type of statements and the prompting technique used. However, LLMs systematically overestimate the profoundness of nonsensical statements, with the exception of Tk-instruct, which uniquely underestimates the profoundness of statements. Only few-shot learning prompts, as opposed to chain-of-thought prompting, draw LLM ratings closer to those of humans. Furthermore, this work provides insights into the potential biases induced by Reinforcement Learning from Human Feedback (RLHF), which increases the bias to overestimate the profoundness of statements. | [
"Herrera-Berg, Eugenio",
"Browne, Tom{\\'a}s",
"Le{\\'o}n-Villagr{\\'a}, Pablo",
"Vives, Marc-Llu{\\'\\i}s",
"Calderon, Cristian"
] | Large Language Models are biased to overestimate profoundness | emnlp-main.599 | 2310.14422 | [
"https://github.com/ouhenio/llms-overstimate-profoundness"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.600.bib | https://aclanthology.org/2023.emnlp-main.600/ | @inproceedings{laban-etal-2023-summedits,
title = "{S}umm{E}dits: Measuring {LLM} Ability at Factual Reasoning Through The Lens of Summarization",
author = "Laban, Philippe and
Kryscinski, Wojciech and
Agarwal, Divyansh and
Fabbri, Alexander and
Xiong, Caiming and
Joty, Shafiq and
Wu, Chien-Sheng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.600",
doi = "10.18653/v1/2023.emnlp-main.600",
pages = "9662--9676",
abstract = "With the recent appearance of LLMs in practical settings, having methods that can effectively detect factual inconsistencies is crucial to reduce the propagation of misinformation and improve trust in model outputs. When testing on existing factual consistency benchmarks, we find that a few large language models (LLMs) perform competitively on classification benchmarks for factual inconsistency detection compared to traditional non-LLM methods. However, a closer analysis reveals issues with existing evaluation benchmarks, affecting evaluation precision. To address this, we propose a new protocol for inconsistency detection benchmark creation and implement it in a 10-domain benchmark called SummEdits. This new benchmark is 20 times more cost-effective per sample than previous benchmarks and highly reproducible, as we estimate inter-annotator agreement at about 0.9. Most LLMs struggle on SummEdits, with performance close to random chance. The best-performing model, GPT-4, is still 8{\%} below estimated human performance, highlighting the gaps in LLMs{'} ability to reason about facts and detect inconsistencies when they occur.",
}
| With the recent appearance of LLMs in practical settings, having methods that can effectively detect factual inconsistencies is crucial to reduce the propagation of misinformation and improve trust in model outputs. When testing on existing factual consistency benchmarks, we find that a few large language models (LLMs) perform competitively on classification benchmarks for factual inconsistency detection compared to traditional non-LLM methods. However, a closer analysis reveals issues with existing evaluation benchmarks, affecting evaluation precision. To address this, we propose a new protocol for inconsistency detection benchmark creation and implement it in a 10-domain benchmark called SummEdits. This new benchmark is 20 times more cost-effective per sample than previous benchmarks and highly reproducible, as we estimate inter-annotator agreement at about 0.9. Most LLMs struggle on SummEdits, with performance close to random chance. The best-performing model, GPT-4, is still 8{\%} below estimated human performance, highlighting the gaps in LLMs{'} ability to reason about facts and detect inconsistencies when they occur. | [
"Laban, Philippe",
"Kryscinski, Wojciech",
"Agarwal, Divyansh",
"Fabbri, Alex",
"er",
"Xiong, Caiming",
"Joty, Shafiq",
"Wu, Chien-Sheng"
] | SummEdits: Measuring LLM Ability at Factual Reasoning Through The Lens of Summarization | emnlp-main.600 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |