Datasets:

| Column | Type | Values / Range |
|---|---|---|
| bibtex_url | string | length 41–53 |
| proceedings | string | length 38–50 |
| bibtext | string | length 528–3.02k |
| abstract | string | length 17–2.35k |
| authors | sequence | 1–44 items |
| title | string | length 18–190 |
| id | string | length 7–19 |
| arxiv_id | string | length 0–10 |
| GitHub | sequence | 1–1 items |
| paper_page | string | 528 distinct values |
| n_linked_authors | int64 | -1 to 15 |
| upvotes | int64 | -1 to 77 |
| num_comments | int64 | -1 to 10 |
| n_authors | int64 | -1 to 52 |
| Models | sequence | 0–100 items |
| Datasets | sequence | 0–15 items |
| Spaces | sequence | 0–46 items |
| paper_page_exists_pre_conf | int64 | 0 or 1 |
| type | string | 2 classes (Oral, Poster) |

In the integer columns, -1 appears to mark records without a Hugging Face paper page, for which the engagement statistics are unavailable.
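Given this schema, the following is a minimal sketch of loading and inspecting the records with the `datasets` library. The repository id is a placeholder, not the dataset's actual location.

```python
# Minimal sketch: load and inspect this paper-metadata dataset.
# NOTE: the repository id below is a placeholder -- substitute the actual
# Hugging Face dataset id hosting these EMNLP 2023 records.
from datasets import load_dataset

ds = load_dataset("username/emnlp-2023-papers", split="train")  # hypothetical repo id

# Inspect the declared schema at runtime; it should mirror the table above.
print(ds.features)

# A sentinel of -1 in the int64 columns (n_linked_authors, upvotes,
# num_comments, n_authors) marks records with no linked paper page.
row = ds[0]
print(row["title"], row["type"], row["upvotes"])
```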
https://aclanthology.org/2023.emnlp-main.201.bib
https://aclanthology.org/2023.emnlp-main.201/
@inproceedings{goyal-etal-2023-else, title = "What Else Do {I} Need to Know? The Effect of Background Information on Users{'} Reliance on {QA} Systems", author = "Goyal, Navita and Briakou, Eleftheria and Liu, Amanda and Baumler, Connor and Bonial, Claire and Micher, Jeffrey and Voss, Clare and Carpuat, Marine and Daum{\'e} III, Hal", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.201", doi = "10.18653/v1/2023.emnlp-main.201", pages = "3313--3330", abstract = "NLP systems have shown impressive performance at answering questions by retrieving relevant context. However, with the increasingly large models, it is impossible and often undesirable to constrain models{'} knowledge or reasoning to only the retrieved context. This leads to a mismatch between the information that \textit{the models} access to derive the answer and the information that is available to \textit{the user} to assess the model predicted answer. In this work, we study how users interact with QA systems in the absence of sufficient information to assess their predictions. Further, we ask whether adding the requisite background helps mitigate users{'} over-reliance on predictions. Our study reveals that users rely on model predictions even in the absence of sufficient information needed to assess the model{'}s correctness. Providing the relevant background, however, helps users better catch model errors, reducing over-reliance on incorrect predictions. On the flip side, background information also increases users{'} confidence in their accurate as well as inaccurate judgments. Our work highlights that supporting users{'} verification of QA predictions is an important, yet challenging, problem.", }
NLP systems have shown impressive performance at answering questions by retrieving relevant context. However, with the increasingly large models, it is impossible and often undesirable to constrain models{'} knowledge or reasoning to only the retrieved context. This leads to a mismatch between the information that \textit{the models} access to derive the answer and the information that is available to \textit{the user} to assess the model predicted answer. In this work, we study how users interact with QA systems in the absence of sufficient information to assess their predictions. Further, we ask whether adding the requisite background helps mitigate users{'} over-reliance on predictions. Our study reveals that users rely on model predictions even in the absence of sufficient information needed to assess the model{'}s correctness. Providing the relevant background, however, helps users better catch model errors, reducing over-reliance on incorrect predictions. On the flip side, background information also increases users{'} confidence in their accurate as well as inaccurate judgments. Our work highlights that supporting users{'} verification of QA predictions is an important, yet challenging, problem.
[ "Goyal, Navita", "Briakou, Eleftheria", "Liu, Am", "a", "Baumler, Connor", "Bonial, Claire", "Micher, Jeffrey", "Voss, Clare", "Carpuat, Marine", "Daum{\\'e} III, Hal" ]
What Else Do I Need to Know? The Effect of Background Information on Users' Reliance on QA Systems
emnlp-main.201
2305.14331
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
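Each record's bibtext field is a self-contained BibTeX entry like the one above. Below is a sketch of parsing it into structured fields, assuming the third-party `bibtexparser` package (v1 API); `row` comes from the loading sketch earlier, and any BibTeX parser would do.

```python
# Sketch: parse a record's `bibtext` field into key/value pairs.
# Assumes the third-party `bibtexparser` package (v1 API).
import bibtexparser

db = bibtexparser.loads(row["bibtext"])
entry = db.entries[0]
print(entry["ID"])     # e.g. "goyal-etal-2023-else"
print(entry["doi"])    # e.g. "10.18653/v1/2023.emnlp-main.201"
print(entry["pages"])  # e.g. "3313--3330"
```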
https://aclanthology.org/2023.emnlp-main.202.bib
https://aclanthology.org/2023.emnlp-main.202/
@inproceedings{surikuchi-etal-2023-groovist, title = "{GROOV}i{ST}: A Metric for Grounding Objects in Visual Storytelling", author = "Surikuchi, Aditya and Pezzelle, Sandro and Fern{\'a}ndez, Raquel", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.202", doi = "10.18653/v1/2023.emnlp-main.202", pages = "3331--3339", abstract = "A proper evaluation of stories generated for a sequence of images{---}the task commonly referred to as visual storytelling{---}must consider multiple aspects, such as coherence, grammatical correctness, and visual grounding. In this work, we focus on evaluating the degree of grounding, that is, the extent to which a story is about the entities shown in the images. We analyze current metrics, both designed for this purpose and for general vision-text alignment. Given their observed shortcomings, we propose a novel evaluation tool, GROOViST, that accounts for cross-modal dependencies, \textit{temporal misalignments} (the fact that the order in which entities appear in the story and the image sequence may not match), and human intuitions on visual grounding. An additional advantage of GROOViST is its modular design, where the contribution of each component can be assessed and interpreted individually.", }
A proper evaluation of stories generated for a sequence of images{---}the task commonly referred to as visual storytelling{---}must consider multiple aspects, such as coherence, grammatical correctness, and visual grounding. In this work, we focus on evaluating the degree of grounding, that is, the extent to which a story is about the entities shown in the images. We analyze current metrics, both designed for this purpose and for general vision-text alignment. Given their observed shortcomings, we propose a novel evaluation tool, GROOViST, that accounts for cross-modal dependencies, \textit{temporal misalignments} (the fact that the order in which entities appear in the story and the image sequence may not match), and human intuitions on visual grounding. An additional advantage of GROOViST is its modular design, where the contribution of each component can be assessed and interpreted individually.
[ "Surikuchi, Aditya", "Pezzelle, S", "ro", "Fern{\\'a}ndez, Raquel" ]
GROOViST: A Metric for Grounding Objects in Visual Storytelling
emnlp-main.202
2310.17770
[ "https://github.com/akskuchi/groovist" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.203.bib
https://aclanthology.org/2023.emnlp-main.203/
@inproceedings{zhang-etal-2023-vibe, title = "{VIBE}: Topic-Driven Temporal Adaptation for {T}witter Classification", author = "Zhang, Yuji and Li, Jing and Li, Wenjie", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.203", doi = "10.18653/v1/2023.emnlp-main.203", pages = "3340--3354", abstract = "Language features are evolving in real-world social media, resulting in the deteriorating performance of text classification in dynamics. To address this challenge, we study temporal adaptation, where models trained on past data are tested in the future. Most prior work focused on continued pretraining or knowledge updating, which may compromise their performance on noisy social media data. To tackle this issue, we reflect feature change via modeling latent topic evolution and propose a novel model, VIBE: Variational Information Bottleneck for Evolutions. Concretely, we first employ two Information Bottleneck (IB) regularizers to distinguish past and future topics. Then, the distinguished topics work as adaptive features via multi-task training with timestamp and class label prediction. In adaptive learning, VIBE utilizes retrieved unlabeled data from online streams created posterior to training data time. Substantial Twitter experiments on three classification tasks show that our model, with only 3{\%} of data, significantly outperforms previous state-of-the-art continued-pretraining methods.", }
Language features are evolving in real-world social media, resulting in the deteriorating performance of text classification in dynamics. To address this challenge, we study temporal adaptation, where models trained on past data are tested in the future. Most prior work focused on continued pretraining or knowledge updating, which may compromise their performance on noisy social media data. To tackle this issue, we reflect feature change via modeling latent topic evolution and propose a novel model, VIBE: Variational Information Bottleneck for Evolutions. Concretely, we first employ two Information Bottleneck (IB) regularizers to distinguish past and future topics. Then, the distinguished topics work as adaptive features via multi-task training with timestamp and class label prediction. In adaptive learning, VIBE utilizes retrieved unlabeled data from online streams created posterior to training data time. Substantial Twitter experiments on three classification tasks show that our model, with only 3{\%} of data, significantly outperforms previous state-of-the-art continued-pretraining methods.
[ "Zhang, Yuji", "Li, Jing", "Li, Wenjie" ]
VIBE: Topic-Driven Temporal Adaptation for Twitter Classification
emnlp-main.203
2310.10191
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.204.bib
https://aclanthology.org/2023.emnlp-main.204/
@inproceedings{sohn-etal-2023-tod, title = "{TOD}-Flow: Modeling the Structure of Task-Oriented Dialogues", author = "Sohn, Sungryull and Lyu, Yiwei and Liu, Anthony and Logeswaran, Lajanugen and Kim, Dong-Ki and Shim, Dongsub and Lee, Honglak", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.204", doi = "10.18653/v1/2023.emnlp-main.204", pages = "3355--3371", abstract = "Task-Oriented Dialogue (TOD) systems have become crucial components in interactive artificial intelligence applications. While recent advances have capitalized on pre-trained language models (PLMs), they exhibit limitations regarding transparency and controllability. To address these challenges, we propose a novel approach focusing on inferring the TOD-flow graph from dialogue data annotated with dialog acts, uncovering the underlying task structure in the form of a graph. The inferred TOD-flow graph can be easily integrated with any dialogue model to improve its prediction performance, transparency, and controllability. Our TOD-flow graph learns what a model can, should, and should not predict, effectively reducing the search space and providing a rationale for the model{'}s prediction. We show that the proposed TOD-flow graph better resemble human-annotated graphs compared to prior approaches. Furthermore, when combined with several dialogue policies and end-to-end dialogue models, we demonstrate that our approach significantly improves dialog act classification and end-to-end response generation performance in the MultiWOZ and SGD benchmarks.", }
Task-Oriented Dialogue (TOD) systems have become crucial components in interactive artificial intelligence applications. While recent advances have capitalized on pre-trained language models (PLMs), they exhibit limitations regarding transparency and controllability. To address these challenges, we propose a novel approach focusing on inferring the TOD-flow graph from dialogue data annotated with dialog acts, uncovering the underlying task structure in the form of a graph. The inferred TOD-flow graph can be easily integrated with any dialogue model to improve its prediction performance, transparency, and controllability. Our TOD-flow graph learns what a model can, should, and should not predict, effectively reducing the search space and providing a rationale for the model{'}s prediction. We show that the proposed TOD-flow graph better resembles human-annotated graphs than prior approaches. Furthermore, when combined with several dialogue policies and end-to-end dialogue models, we demonstrate that our approach significantly improves dialog act classification and end-to-end response generation performance on the MultiWOZ and SGD benchmarks.
[ "Sohn, Sungryull", "Lyu, Yiwei", "Liu, Anthony", "Logeswaran, Lajanugen", "Kim, Dong-Ki", "Shim, Dongsub", "Lee, Honglak" ]
TOD-Flow: Modeling the Structure of Task-Oriented Dialogues
emnlp-main.204
2312.04668
[ "https://github.com/srsohn/tod-flow" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.205.bib
https://aclanthology.org/2023.emnlp-main.205/
@inproceedings{pan-etal-2023-topwords, title = "{T}op{WORDS}-Poetry: Simultaneous Text Segmentation and Word Discovery for Classical {C}hinese Poetry via {B}ayesian Inference", author = "Pan, Changzai and Li, Feiyue and Deng, Ke", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.205", doi = "10.18653/v1/2023.emnlp-main.205", pages = "3372--3386", abstract = "As a precious cultural heritage of human beings, classical Chinese poetry has a very unique writing style and often contains special words that rarely appear in general Chinese texts, posting critical challenges for natural language processing. Little effort has been made in the literature for processing texts from classical Chinese poetry. This study fills in this gap with TopWORDS-Poetry, an unsupervised method that can achieve reliable text segmentation and word discovery for classical Chinese poetry simultaneously without pre-given vocabulary or training corpus. Experimental studies confirm that TopWORDS-Poetry can successfully recognize unique poetry words, such as named entities and literary allusions, from metrical poems of \textit{Complete Tang Poetry} and segment these poetry lines into sequences of meaningful words with high quality.", }
As a precious cultural heritage of human beings, classical Chinese poetry has a very unique writing style and often contains special words that rarely appear in general Chinese texts, posing critical challenges for natural language processing. Little effort has been made in the literature for processing texts from classical Chinese poetry. This study fills this gap with TopWORDS-Poetry, an unsupervised method that can achieve reliable text segmentation and word discovery for classical Chinese poetry simultaneously without a pre-given vocabulary or training corpus. Experimental studies confirm that TopWORDS-Poetry can successfully recognize unique poetry words, such as named entities and literary allusions, from metrical poems of \textit{Complete Tang Poetry} and segment these poetry lines into sequences of meaningful words with high quality.
[ "Pan, Changzai", "Li, Feiyue", "Deng, Ke" ]
TopWORDS-Poetry: Simultaneous Text Segmentation and Word Discovery for Classical Chinese Poetry via Bayesian Inference
emnlp-main.205
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.206.bib
https://aclanthology.org/2023.emnlp-main.206/
@inproceedings{yao-etal-2023-knowledge, title = "Knowledge Rumination for Pre-trained Language Models", author = "Yao, Yunzhi and Wang, Peng and Mao, Shengyu and Tan, Chuanqi and Huang, Fei and Chen, Huajun and Zhang, Ningyu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.206", doi = "10.18653/v1/2023.emnlp-main.206", pages = "3387--3404", abstract = "Previous studies have revealed that vanilla pre-trained language models (PLMs) lack the capacity to handle knowledge-intensive NLP tasks alone; thus, several works have attempted to integrate external knowledge into PLMs. However, despite the promising outcome, we empirically observe that PLMs may have already encoded rich knowledge in their pre-trained parameters but fails to fully utilize them when applying to knowledge-intensive tasks. In this paper, we propose a new paradigm dubbed \textbf{Knowledge Rumination} to help the pre-trained language model utilize that related latent knowledge without retrieving them from the external corpus. By simply adding a prompt like \textit{{``}As far as I know{''}} to the PLMs, we try to review related latent knowledge and inject them back into the model for knowledge consolidation. We apply the proposed knowledge rumination to various language models, including RoBERTa, DeBERTa, and GPT-3. Experimental results on six commonsense reasoning tasks and GLUE benchmarks demonstrate the effectiveness of our proposed approach, which proves that the knowledge stored in PLMs can be better exploited to enhance performance.", }
Previous studies have revealed that vanilla pre-trained language models (PLMs) lack the capacity to handle knowledge-intensive NLP tasks alone; thus, several works have attempted to integrate external knowledge into PLMs. However, despite the promising outcome, we empirically observe that PLMs may have already encoded rich knowledge in their pre-trained parameters but fail to fully utilize it when applied to knowledge-intensive tasks. In this paper, we propose a new paradigm dubbed \textbf{Knowledge Rumination} to help the pre-trained language model utilize that related latent knowledge without retrieving it from an external corpus. By simply adding a prompt like \textit{{``}As far as I know{''}} to the PLMs, we try to review related latent knowledge and inject it back into the model for knowledge consolidation. We apply the proposed knowledge rumination to various language models, including RoBERTa, DeBERTa, and GPT-3. Experimental results on six commonsense reasoning tasks and GLUE benchmarks demonstrate the effectiveness of our proposed approach, which proves that the knowledge stored in PLMs can be better exploited to enhance performance.
[ "Yao, Yunzhi", "Wang, Peng", "Mao, Shengyu", "Tan, Chuanqi", "Huang, Fei", "Chen, Huajun", "Zhang, Ningyu" ]
Knowledge Rumination for Pre-trained Language Models
emnlp-main.206
2305.08732
[ "https://github.com/zjunlp/knowledge-rumination" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.207.bib
https://aclanthology.org/2023.emnlp-main.207/
@inproceedings{wu-lu-2023-struct, title = "Struct-{XLM}: A Structure Discovery Multilingual Language Model for Enhancing Cross-lingual Transfer through Reinforcement Learning", author = "Wu, Linjuan and Lu, Weiming", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.207", doi = "10.18653/v1/2023.emnlp-main.207", pages = "3405--3419", abstract = "Cross-lingual transfer learning heavily relies on well-aligned cross-lingual representations. The syntactic structure is recognized as beneficial for cross-lingual transfer, but limited researches utilize it for aligning representation in multilingual pre-trained language models (PLMs). Additionally, existing methods require syntactic labels that are difficult to obtain and of poor quality for low-resource languages. To address this gap, we propose Struct-XLM, a novel multilingual language model that leverages reinforcement learning (RL) to autonomously discover universal syntactic structures for improving the cross-lingual representation alignment of PLM. Struct-XLM integrates a policy network (PNet) and a translation ranking task. The PNet is designed to discover structural information and integrate it into the last layer of the PLM through the structural multi-head attention module to obtain structural representation. The translation ranking task obtains a delayed reward based on the structural representation to optimize the PNet while improving the alignment of cross-lingual representation. Experiments show the effectiveness of the proposed approach for enhancing cross-lingual transfer of multilingual PLM on the XTREME benchmark.", }
Cross-lingual transfer learning heavily relies on well-aligned cross-lingual representations. Syntactic structure is recognized as beneficial for cross-lingual transfer, but little research has utilized it for aligning representations in multilingual pre-trained language models (PLMs). Additionally, existing methods require syntactic labels that are difficult to obtain and of poor quality for low-resource languages. To address this gap, we propose Struct-XLM, a novel multilingual language model that leverages reinforcement learning (RL) to autonomously discover universal syntactic structures for improving the cross-lingual representation alignment of PLMs. Struct-XLM integrates a policy network (PNet) and a translation ranking task. The PNet is designed to discover structural information and integrate it into the last layer of the PLM through a structural multi-head attention module to obtain structural representations. The translation ranking task obtains a delayed reward based on the structural representation to optimize the PNet while improving the alignment of cross-lingual representations. Experiments show the effectiveness of the proposed approach for enhancing cross-lingual transfer of multilingual PLMs on the XTREME benchmark.
[ "Wu, Linjuan", "Lu, Weiming" ]
Struct-XLM: A Structure Discovery Multilingual Language Model for Enhancing Cross-lingual Transfer through Reinforcement Learning
emnlp-main.207
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.208.bib
https://aclanthology.org/2023.emnlp-main.208/
@inproceedings{huang-etal-2023-adasent, title = "{A}da{S}ent: Efficient Domain-Adapted Sentence Embeddings for Few-Shot Classification", author = "Huang, Yongxin and Wang, Kexin and Dutta, Sourav and Patel, Raj and Glava{\v{s}}, Goran and Gurevych, Iryna", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.208", doi = "10.18653/v1/2023.emnlp-main.208", pages = "3420--3434", abstract = "Recent work has found that few-shot sentence classification based on pre-trained Sentence Encoders (SEs) is efficient, robust, and effective. In this work, we investigate strategies for domain-specialization in the context of few-shot sentence classification with SEs. We first establish that unsupervised Domain-Adaptive Pre-Training (DAPT) of a base Pre-trained Language Model (PLM) (i.e., not an SE) substantially improves the accuracy of few-shot sentence classification by up to 8.4 points. However, applying DAPT on SEs, on the one hand, disrupts the effects of their (general-domain) Sentence Embedding Pre-Training (SEPT). On the other hand, applying general-domain SEPT on top of a domain-adapted base PLM (i.e., after DAPT) is effective but inefficient, since the computationally expensive SEPT needs to be executed on top of a DAPT-ed PLM of each domain. As a solution, we propose AdaSent, which decouples SEPT from DAPT by training a SEPT adapter on the base PLM. The adapter can be inserted into DAPT-ed PLMs from any domain. We demonstrate AdaSent{'}s effectiveness in extensive experiments on 17 different few-shot sentence classification datasets. AdaSent matches or surpasses the performance of full SEPT on DAPT-ed PLM, while substantially reducing the training costs. The code for AdaSent is available.", }
Recent work has found that few-shot sentence classification based on pre-trained Sentence Encoders (SEs) is efficient, robust, and effective. In this work, we investigate strategies for domain-specialization in the context of few-shot sentence classification with SEs. We first establish that unsupervised Domain-Adaptive Pre-Training (DAPT) of a base Pre-trained Language Model (PLM) (i.e., not an SE) substantially improves the accuracy of few-shot sentence classification by up to 8.4 points. However, applying DAPT on SEs, on the one hand, disrupts the effects of their (general-domain) Sentence Embedding Pre-Training (SEPT). On the other hand, applying general-domain SEPT on top of a domain-adapted base PLM (i.e., after DAPT) is effective but inefficient, since the computationally expensive SEPT needs to be executed on top of a DAPT-ed PLM of each domain. As a solution, we propose AdaSent, which decouples SEPT from DAPT by training a SEPT adapter on the base PLM. The adapter can be inserted into DAPT-ed PLMs from any domain. We demonstrate AdaSent{'}s effectiveness in extensive experiments on 17 different few-shot sentence classification datasets. AdaSent matches or surpasses the performance of full SEPT on DAPT-ed PLM, while substantially reducing the training costs. The code for AdaSent is available.
[ "Huang, Yongxin", "Wang, Kexin", "Dutta, Sourav", "Patel, Raj", "Glava{\\v{s}}, Goran", "Gurevych, Iryna" ]
AdaSent: Efficient Domain-Adapted Sentence Embeddings for Few-Shot Classification
emnlp-main.208
2311.00408
[ "https://github.com/ukplab/adasent" ]
https://huggingface.co/papers/2311.00408
1
0
0
6
[ "yoh/distilroberta-base-sept-adapter" ]
[]
[]
1
Oral
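The AdaSent record above illustrates the paper_page convention: it is the Hugging Face papers URL keyed by the record's arxiv_id (here 2311.00408). A small sketch of reconstructing it, with a hypothetical helper name:

```python
# Sketch: the `paper_page` values observed in these records follow the pattern
# https://huggingface.co/papers/<arxiv_id>; rebuild the URL from `arxiv_id`.
def paper_page_url(arxiv_id: str) -> str | None:
    """Return the Hugging Face paper page URL, or None if there is no arXiv id."""
    return f"https://huggingface.co/papers/{arxiv_id}" if arxiv_id else None

print(paper_page_url("2311.00408"))  # matches the AdaSent record above
print(paper_page_url(""))            # records without an arXiv id -> None
```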
https://aclanthology.org/2023.emnlp-main.209.bib
https://aclanthology.org/2023.emnlp-main.209/
@inproceedings{li-etal-2023-interview, title = "Interview Evaluation: A Novel Approach for Automatic Evaluation of Conversational Question Answering Models", author = "Li, Xibo and Zou, Bowei and Fan, Yifan and Li, Yanling and Aw, Ai Ti and Hong, Yu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.209", doi = "10.18653/v1/2023.emnlp-main.209", pages = "3435--3446", abstract = "Conversational Question Answering (CQA) aims to provide natural language answers to users in information-seeking dialogues. Existing CQA benchmarks often evaluate models using pre-collected human-human conversations. However, replacing the model-predicted dialogue history with ground truth compromises the naturalness and sustainability of CQA evaluation. While previous studies proposed using predicted history and rewriting techniques to address unresolved coreferences and incoherencies, this approach renders the question self-contained from the conversation. In this paper, we propose a novel automatic evaluation approach, interview evaluation. Specifically, ChatGPT acts as the interviewer (Q agent) with a set of carefully designed prompts, and the CQA model under test serves as the interviewee (A agent). During the interview evaluation, questions are dynamically generated by the Q agent to guide the A agent in predicting the correct answer through an interactive process. We evaluated four different models on QuAC and two models on CoQA in our experiments. The experiment results demonstrate that our interview evaluation has advantages over previous CQA evaluation approaches, particularly in terms of naturalness and coherence. The source code is made publicly available.", }
Conversational Question Answering (CQA) aims to provide natural language answers to users in information-seeking dialogues. Existing CQA benchmarks often evaluate models using pre-collected human-human conversations. However, replacing the model-predicted dialogue history with ground truth compromises the naturalness and sustainability of CQA evaluation. While previous studies proposed using predicted history and rewriting techniques to address unresolved coreferences and incoherencies, this approach renders the question self-contained from the conversation. In this paper, we propose a novel automatic evaluation approach, interview evaluation. Specifically, ChatGPT acts as the interviewer (Q agent) with a set of carefully designed prompts, and the CQA model under test serves as the interviewee (A agent). During the interview evaluation, questions are dynamically generated by the Q agent to guide the A agent in predicting the correct answer through an interactive process. We evaluated four different models on QuAC and two models on CoQA in our experiments. The experiment results demonstrate that our interview evaluation has advantages over previous CQA evaluation approaches, particularly in terms of naturalness and coherence. The source code is made publicly available.
[ "Li, Xibo", "Zou, Bowei", "Fan, Yifan", "Li, Yanling", "Aw, Ai Ti", "Hong, Yu" ]
Interview Evaluation: A Novel Approach for Automatic Evaluation of Conversational Question Answering Models
emnlp-main.209
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.210.bib
https://aclanthology.org/2023.emnlp-main.210/
@inproceedings{wilkens-etal-2023-tcfle, title = "{TCFLE}-8: a Corpus of Learner Written Productions for {F}rench as a Foreign Language and its Application to Automated Essay Scoring", author = "Wilkens, Rodrigo and Pintard, Alice and Alfter, David and Folny, Vincent and Fran{\c{c}}ois, Thomas", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.210", doi = "10.18653/v1/2023.emnlp-main.210", pages = "3447--3465", abstract = "Automated Essay Scoring (AES) aims to automatically assess the quality of essays. Automation enables large-scale assessment, improvements in consistency, reliability, and standardization. Those characteristics are of particular relevance in the context of language certification exams. However, a major bottleneck in the development of AES systems is the availability of corpora, which, unfortunately, are scarce, especially for languages other than English. In this paper, we aim to foster the development of AES for French by providing the TCFLE-8 corpus, a corpus of 6.5k essays collected in the context of the \textit{Test de Connaissance du Fran{\c{c}}ais} (TCF - French Knowledge Test) certification exam. We report the strict quality procedure that led to the scoring of each essay by at least two raters according to the CEFR levels and to the creation of a balanced corpus. In addition, we describe how linguistic properties of the essays relate to the learners{'} proficiency in TCFLE-8. We also advance the state-of-the-art performance for the AES task in French by experimenting with two strong baselines (i.e. RoBERTa and feature-based). Finally, we discuss the challenges of AES using TCFLE-8.", }
Automated Essay Scoring (AES) aims to automatically assess the quality of essays. Automation enables large-scale assessment, improvements in consistency, reliability, and standardization. Those characteristics are of particular relevance in the context of language certification exams. However, a major bottleneck in the development of AES systems is the availability of corpora, which, unfortunately, are scarce, especially for languages other than English. In this paper, we aim to foster the development of AES for French by providing the TCFLE-8 corpus, a corpus of 6.5k essays collected in the context of the \textit{Test de Connaissance du Fran{\c{c}}ais} (TCF - French Knowledge Test) certification exam. We report the strict quality procedure that led to the scoring of each essay by at least two raters according to the CEFR levels and to the creation of a balanced corpus. In addition, we describe how linguistic properties of the essays relate to the learners{'} proficiency in TCFLE-8. We also advance the state-of-the-art performance for the AES task in French by experimenting with two strong baselines (i.e. RoBERTa and feature-based). Finally, we discuss the challenges of AES using TCFLE-8.
[ "Wilkens, Rodrigo", "Pintard, Alice", "Alfter, David", "Folny, Vincent", "Fran{\\c{c}}ois, Thomas" ]
TCFLE-8: a Corpus of Learner Written Productions for French as a Foreign Language and its Application to Automated Essay Scoring
emnlp-main.210
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.211.bib
https://aclanthology.org/2023.emnlp-main.211/
@inproceedings{heineman-etal-2023-dancing, title = "Dancing Between Success and Failure: Edit-level Simplification Evaluation using {SALSA}", author = "Heineman, David and Dou, Yao and Maddela, Mounica and Xu, Wei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.211", doi = "10.18653/v1/2023.emnlp-main.211", pages = "3466--3495", abstract = "Large language models (e.g., GPT-4) are uniquely capable of producing highly rated text simplification, yet current human evaluation methods fail to provide a clear understanding of systems{'} specific strengths and weaknesses. To address this limitation, we introduce SALSA, an edit-based human annotation framework that enables holistic and fine-grained text simplification evaluation. We develop twenty one linguistically grounded edit types, covering the full spectrum of success and failure across dimensions of conceptual, syntactic and lexical simplicity. Using SALSA, we collect 19K edit annotations on 840 simplifications, revealing discrepancies in the distribution of simplification strategies performed by fine-tuned models, prompted LLMs and humans, and find GPT-3.5 performs more quality edits than humans, but still exhibits frequent errors. Using our fine-grained annotations, we develop LENS-SALSA, a reference-free automatic simplification metric, trained to predict sentence- and word-level quality simultaneously. Additionally, we introduce word-level quality estimation for simplification and report promising baseline results. Our data, new metric, and annotation toolkit are available at https://salsa-eval.com.", }
Large language models (e.g., GPT-4) are uniquely capable of producing highly rated text simplification, yet current human evaluation methods fail to provide a clear understanding of systems{'} specific strengths and weaknesses. To address this limitation, we introduce SALSA, an edit-based human annotation framework that enables holistic and fine-grained text simplification evaluation. We develop twenty one linguistically grounded edit types, covering the full spectrum of success and failure across dimensions of conceptual, syntactic and lexical simplicity. Using SALSA, we collect 19K edit annotations on 840 simplifications, revealing discrepancies in the distribution of simplification strategies performed by fine-tuned models, prompted LLMs and humans, and find GPT-3.5 performs more quality edits than humans, but still exhibits frequent errors. Using our fine-grained annotations, we develop LENS-SALSA, a reference-free automatic simplification metric, trained to predict sentence- and word-level quality simultaneously. Additionally, we introduce word-level quality estimation for simplification and report promising baseline results. Our data, new metric, and annotation toolkit are available at https://salsa-eval.com.
[ "Heineman, David", "Dou, Yao", "Maddela, Mounica", "Xu, Wei" ]
Dancing Between Success and Failure: Edit-level Simplification Evaluation using SALSA
emnlp-main.211
2305.14458
[ "" ]
https://huggingface.co/papers/2305.14458
0
0
0
4
[ "davidheineman/lens-salsa" ]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.212.bib
https://aclanthology.org/2023.emnlp-main.212/
@inproceedings{casola-etal-2023-confidence, title = "Confidence-based Ensembling of Perspective-aware Models", author = "Casola, Silvia and Lo, Soda Marem and Basile, Valerio and Frenda, Simona and Cignarella, Alessandra and Patti, Viviana and Bosco, Cristina", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.212", doi = "10.18653/v1/2023.emnlp-main.212", pages = "3496--3507", abstract = "Research in the field of NLP has recently focused on the variability that people show in selecting labels when performing an annotation task. Exploiting disagreements in annotations has been shown to offer advantages for accurate modelling and fair evaluation. In this paper, we propose a strongly perspectivist model for supervised classification of natural language utterances. Our approach combines the predictions of several perspective-aware models using key information of their individual confidence to capture the subjectivity encoded in the annotation of linguistic phenomena. We validate our method through experiments on two case studies, irony and hate speech detection, in in-domain and cross-domain settings. The results show that confidence-based ensembling of perspective-aware models seems beneficial for classification performance in all scenarios. In addition, we demonstrate the effectiveness of our method with automatically extracted perspectives from annotations when the annotators{'} metadata are not available.", }
Research in the field of NLP has recently focused on the variability that people show in selecting labels when performing an annotation task. Exploiting disagreements in annotations has been shown to offer advantages for accurate modelling and fair evaluation. In this paper, we propose a strongly perspectivist model for supervised classification of natural language utterances. Our approach combines the predictions of several perspective-aware models using key information of their individual confidence to capture the subjectivity encoded in the annotation of linguistic phenomena. We validate our method through experiments on two case studies, irony and hate speech detection, in in-domain and cross-domain settings. The results show that confidence-based ensembling of perspective-aware models seems beneficial for classification performance in all scenarios. In addition, we demonstrate the effectiveness of our method with automatically extracted perspectives from annotations when the annotators{'} metadata are not available.
[ "Casola, Silvia", "Lo, Soda Marem", "Basile, Valerio", "Frenda, Simona", "Cignarella, Aless", "ra", "Patti, Viviana", "Bosco, Cristina" ]
Confidence-based Ensembling of Perspective-aware Models
emnlp-main.212
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.213.bib
https://aclanthology.org/2023.emnlp-main.213/
@inproceedings{wang-etal-2023-tovilag, title = "{T}o{V}i{L}a{G}: Your Visual-Language Generative Model is Also An Evildoer", author = "Wang, Xinpeng and Yi, Xiaoyuan and Jiang, Han and Zhou, Shanlin and Wei, Zhihua and Xie, Xing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.213", doi = "10.18653/v1/2023.emnlp-main.213", pages = "3508--3533", abstract = "Recent large-scale Visual-Language Generative Models (VLGMs) have achieved unprecedented improvement in multimodal image/text generation. However, these models might also generate toxic content, e.g., offensive text and pornography images, raising significant ethical risks. Despite exhaustive studies on toxic degeneration of language models, this problem remains largely unexplored within the context of visual-language generation. This work delves into the propensity for toxicity generation and susceptibility to toxic data across various VLGMs. For this purpose, we built ToViLaG, a dataset comprising 32K co-toxic/mono-toxic text-image pairs and 1K innocuous but evocative text that tends to stimulate toxicity. Furthermore, we propose WInToRe, a novel toxicity metric tailored to visual-language generation, which theoretically reflects different aspects of toxicity considering both input and output. On such a basis, we benchmarked the toxicity of a diverse spectrum of VLGMs and discovered that some models do more evil than expected while some are more vulnerable to infection, underscoring the necessity of VLGMs detoxification. Therefore, we develop an innovative bottleneck-based detoxification method. Our method could reduce toxicity while maintaining comparable generation quality, providing a promising initial solution to this line of research.", }
Recent large-scale Visual-Language Generative Models (VLGMs) have achieved unprecedented improvement in multimodal image/text generation. However, these models might also generate toxic content, e.g., offensive text and pornography images, raising significant ethical risks. Despite exhaustive studies on toxic degeneration of language models, this problem remains largely unexplored within the context of visual-language generation. This work delves into the propensity for toxicity generation and susceptibility to toxic data across various VLGMs. For this purpose, we built ToViLaG, a dataset comprising 32K co-toxic/mono-toxic text-image pairs and 1K innocuous but evocative text that tends to stimulate toxicity. Furthermore, we propose WInToRe, a novel toxicity metric tailored to visual-language generation, which theoretically reflects different aspects of toxicity considering both input and output. On such a basis, we benchmarked the toxicity of a diverse spectrum of VLGMs and discovered that some models do more evil than expected while some are more vulnerable to infection, underscoring the necessity of VLGMs detoxification. Therefore, we develop an innovative bottleneck-based detoxification method. Our method could reduce toxicity while maintaining comparable generation quality, providing a promising initial solution to this line of research.
[ "Wang, Xinpeng", "Yi, Xiaoyuan", "Jiang, Han", "Zhou, Shanlin", "Wei, Zhihua", "Xie, Xing" ]
ToViLaG: Your Visual-Language Generative Model is Also An Evildoer
emnlp-main.213
2312.11523
[ "https://github.com/victorup/ToViLaG" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.214.bib
https://aclanthology.org/2023.emnlp-main.214/
@inproceedings{wan-etal-2023-gpt, title = "{GPT}-{RE}: In-context Learning for Relation Extraction using Large Language Models", author = "Wan, Zhen and Cheng, Fei and Mao, Zhuoyuan and Liu, Qianying and Song, Haiyue and Li, Jiwei and Kurohashi, Sadao", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.214", doi = "10.18653/v1/2023.emnlp-main.214", pages = "3534--3547", abstract = "In spite of the potential for ground-breaking achievements offered by large language models (LLMs) (e.g., GPT-3) via in-context learning (ICL), they still lag significantly behind fully-supervised baselines (e.g., fine-tuned BERT) in relation extraction (RE). This is due to the two major shortcomings of ICL for RE: (1) low relevance regarding entity and relation in existing sentence-level demonstration retrieval approaches for ICL; and (2) the lack of explaining input-label mappings of demonstrations leading to poor ICL effectiveness. In this paper, we propose GPT-RE to successfully address the aforementioned issues by (1) incorporating task-aware representations in demonstration retrieval; and (2) enriching the demonstrations with gold label-induced reasoning logic. We evaluate GPT-RE on four widely-used RE datasets, and observe that GPT-RE achieves improvements over not only existing GPT-3 baselines, but also fully-supervised baselines as in Figure 1. Specifically, GPT-RE achieves SOTA performances on the Semeval and SciERC datasets, and competitive performances on the TACRED and ACE05 datasets. Additionally, a critical issue of LLMs revealed by previous work, the strong inclination to wrongly classify NULL examples into other pre-defined labels, is substantially alleviated by our method. We show an empirical analysis.", }
In spite of the potential for ground-breaking achievements offered by large language models (LLMs) (e.g., GPT-3) via in-context learning (ICL), they still lag significantly behind fully-supervised baselines (e.g., fine-tuned BERT) in relation extraction (RE). This is due to the two major shortcomings of ICL for RE: (1) low relevance regarding entity and relation in existing sentence-level demonstration retrieval approaches for ICL; and (2) the lack of explaining input-label mappings of demonstrations leading to poor ICL effectiveness. In this paper, we propose GPT-RE to successfully address the aforementioned issues by (1) incorporating task-aware representations in demonstration retrieval; and (2) enriching the demonstrations with gold label-induced reasoning logic. We evaluate GPT-RE on four widely-used RE datasets, and observe that GPT-RE achieves improvements over not only existing GPT-3 baselines, but also fully-supervised baselines as in Figure 1. Specifically, GPT-RE achieves SOTA performances on the Semeval and SciERC datasets, and competitive performances on the TACRED and ACE05 datasets. Additionally, a critical issue of LLMs revealed by previous work, the strong inclination to wrongly classify NULL examples into other pre-defined labels, is substantially alleviated by our method. We show an empirical analysis.
[ "Wan, Zhen", "Cheng, Fei", "Mao, Zhuoyuan", "Liu, Qianying", "Song, Haiyue", "Li, Jiwei", "Kurohashi, Sadao" ]
GPT-RE: In-context Learning for Relation Extraction using Large Language Models
emnlp-main.214
2305.02105
[ "https://github.com/yukinowan/gpt-re" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.215.bib
https://aclanthology.org/2023.emnlp-main.215/
@inproceedings{ch-wang-etal-2023-sociocultural, title = "Sociocultural Norm Similarities and Differences via Situational Alignment and Explainable Textual Entailment", author = "CH-Wang, Sky and Saakyan, Arkadiy and Li, Oliver and Yu, Zhou and Muresan, Smaranda", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.215", doi = "10.18653/v1/2023.emnlp-main.215", pages = "3548--3564", abstract = "Designing systems that can reason across cultures requires that they are grounded in the norms of the contexts in which they operate. However, current research on developing computational models of social norms has primarily focused on American society. Here, we propose a novel approach to discover and compare descriptive social norms across Chinese and American cultures. We demonstrate our approach by leveraging discussions on a Chinese Q{\&}A platform{---}Zhihu{---}and the existing SocialChemistry dataset as proxies for contrasting cultural axes, align social situations cross-culturally, and extract social norms from texts using in-context learning. Embedding Chain-of-Thought prompting in a human-AI collaborative framework, we build a high-quality dataset of 3,069 social norms aligned with social situations across Chinese and American cultures alongside corresponding free-text explanations. To test the ability of models to reason about social norms across cultures, we introduce the task of explainable social norm entailment, showing that existing models under 3B parameters have significant room for improvement in both automatic and human evaluation. Further analysis of cross-cultural norm differences based on our dataset shows empirical alignment with the social orientations framework, revealing several situational and descriptive nuances in norms across these cultures.", }
Designing systems that can reason across cultures requires that they are grounded in the norms of the contexts in which they operate. However, current research on developing computational models of social norms has primarily focused on American society. Here, we propose a novel approach to discover and compare descriptive social norms across Chinese and American cultures. We demonstrate our approach by leveraging discussions on a Chinese Q{\&}A platform{---}Zhihu{---}and the existing SocialChemistry dataset as proxies for contrasting cultural axes, align social situations cross-culturally, and extract social norms from texts using in-context learning. Embedding Chain-of-Thought prompting in a human-AI collaborative framework, we build a high-quality dataset of 3,069 social norms aligned with social situations across Chinese and American cultures alongside corresponding free-text explanations. To test the ability of models to reason about social norms across cultures, we introduce the task of explainable social norm entailment, showing that existing models under 3B parameters have significant room for improvement in both automatic and human evaluation. Further analysis of cross-cultural norm differences based on our dataset shows empirical alignment with the social orientations framework, revealing several situational and descriptive nuances in norms across these cultures.
[ "CH-Wang, Sky", "Saakyan, Arkadiy", "Li, Oliver", "Yu, Zhou", "Muresan, Smar", "a" ]
Sociocultural Norm Similarities and Differences via Situational Alignment and Explainable Textual Entailment
emnlp-main.215
2305.14492
[ "https://github.com/asaakyan/socnormnli" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.216.bib
https://aclanthology.org/2023.emnlp-main.216/
@inproceedings{zhou-etal-2023-inform, title = "{INFORM} : Information e{N}tropy based multi-step reasoning {FOR} large language Models", author = "Zhou, Chuyue and You, Wangjie and Li, Juntao and Ye, Jing and Chen, Kehai and Zhang, Min", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.216", doi = "10.18653/v1/2023.emnlp-main.216", pages = "3565--3576", abstract = "Large language models (LLMs) have demonstrated exceptional performance in reasoning tasks with dedicated Chain-of-Thought (CoT) prompts. Further enhancing CoT prompts with exquisite exemplars can significantly improve reasoning performance.However, the effectiveness of CoT prompts may fluctuate dramatically with different choices of in-context examples. Additionally, manual construction of rationale steps can be time-consuming, presenting challenges for the widespread adoption of CoT prompting. In this work, we propose a novel approach by introducing information entropy (IE) as a criteria on for CoT prompt selection. We extend this criterion to the CoT generation and inference stages, automatically generating CoT prompts with higher information entropy scores and adaptively determining the number of samples. These three stages together form our proposed information- entropy-based multi-step reasoning for large language models, named INFORM. Our experiments across seven reasoning benchmarks utilizing two language models(GPT-3.5-Turbo and text-davinci-003) demonstrate the superiority of INFORM both in performance and efficiency.", }
Large language models (LLMs) have demonstrated exceptional performance in reasoning tasks with dedicated Chain-of-Thought (CoT) prompts. Further enhancing CoT prompts with exquisite exemplars can significantly improve reasoning performance. However, the effectiveness of CoT prompts may fluctuate dramatically with different choices of in-context examples. Additionally, manual construction of rationale steps can be time-consuming, presenting challenges for the widespread adoption of CoT prompting. In this work, we propose a novel approach by introducing information entropy (IE) as a criterion for CoT prompt selection. We extend this criterion to the CoT generation and inference stages, automatically generating CoT prompts with higher information entropy scores and adaptively determining the number of samples. These three stages together form our proposed information-entropy-based multi-step reasoning for large language models, named INFORM. Our experiments across seven reasoning benchmarks utilizing two language models (GPT-3.5-Turbo and text-davinci-003) demonstrate the superiority of INFORM in both performance and efficiency.
[ "Zhou, Chuyue", "You, Wangjie", "Li, Juntao", "Ye, Jing", "Chen, Kehai", "Zhang, Min" ]
INFORM : Information eNtropy based multi-step reasoning FOR large language Models
emnlp-main.216
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.217.bib
https://aclanthology.org/2023.emnlp-main.217/
@inproceedings{li-etal-2023-adaptive-gating, title = "Adaptive Gating in Mixture-of-Experts based Language Models", author = "Li, Jiamin and Su, Qiang and Yang, Yitao and Jiang, Yimin and Wang, Cong and Xu, Hong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.217", doi = "10.18653/v1/2023.emnlp-main.217", pages = "3577--3587", abstract = "Large language models have demonstrated exceptional language understanding capabilities in many NLP tasks. Sparsely activated mixture-of-experts (MoE) has emerged as a promising solution for scaling models while maintaining a constant number of computational operations. Existing MoE models adopt a fixed gating network where each token is computed by the same number of experts. This contradicts our intuition that the tokens in each sequence vary in terms of their linguistic complexity and, consequently, require different computational costs. Little is discussed in prior research on the trade-off between computation per token and model performance. This paper introduces adaptive gating in MoE, a flexible training strategy that allows tokens to be processed by a variable number of experts based on expert probability distribution. Adaptive gating preserves sparsity while improving training efficiency. We further draw upon curriculum learning to better align the order of training samples and maximize the training time savings. Extensive experiments on diverse NLP tasks show that adaptive gating reduces at most 22.5{\%} training time while maintaining inference quality. Moreover, we conduct a comprehensive analysis of the gating decisions and present our insights on which tokens are inherently difficult to process, depending on the specific language task.", }
Large language models have demonstrated exceptional language understanding capabilities in many NLP tasks. Sparsely activated mixture-of-experts (MoE) has emerged as a promising solution for scaling models while maintaining a constant number of computational operations. Existing MoE models adopt a fixed gating network where each token is computed by the same number of experts. This contradicts our intuition that the tokens in each sequence vary in terms of their linguistic complexity and, consequently, require different computational costs. Little is discussed in prior research on the trade-off between computation per token and model performance. This paper introduces adaptive gating in MoE, a flexible training strategy that allows tokens to be processed by a variable number of experts based on expert probability distribution. Adaptive gating preserves sparsity while improving training efficiency. We further draw upon curriculum learning to better align the order of training samples and maximize the training time savings. Extensive experiments on diverse NLP tasks show that adaptive gating reduces at most 22.5{\%} training time while maintaining inference quality. Moreover, we conduct a comprehensive analysis of the gating decisions and present our insights on which tokens are inherently difficult to process, depending on the specific language task.
[ "Li, Jiamin", "Su, Qiang", "Yang, Yitao", "Jiang, Yimin", "Wang, Cong", "Xu, Hong" ]
Adaptive Gating in Mixture-of-Experts based Language Models
emnlp-main.217
2310.07188
[ "" ]
https://huggingface.co/papers/2310.07188
1
2
0
6
[]
[]
[]
1
Poster
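As an illustration of the adaptive gating idea summarized in the abstract above, where each token is routed to a variable number of experts depending on how peaked the gate's probability distribution is, here is a minimal PyTorch sketch. The threshold rule and all names are assumptions of this sketch, not the authors' released implementation.

```python
# Illustrative sketch (not the paper's code): route a token to 1 expert when
# the gate is confident, otherwise to the top-2 experts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGate(nn.Module):
    def __init__(self, d_model: int, n_experts: int, threshold: float = 0.5):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.threshold = threshold  # hypothetical cutoff on the top-1 probability

    def forward(self, x: torch.Tensor):
        # x: (n_tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)
        top2_p, top2_idx = probs.topk(2, dim=-1)        # (n_tokens, 2)
        confident = top2_p[:, 0] > self.threshold       # (n_tokens,)
        one_hot = torch.stack(
            [torch.ones_like(top2_p[:, 0]), torch.zeros_like(top2_p[:, 1])], dim=-1
        )
        weights = torch.where(
            confident.unsqueeze(-1),
            one_hot,                                    # confident: single expert
            top2_p / top2_p.sum(dim=-1, keepdim=True),  # otherwise: top-2 mix
        )
        return top2_idx, weights  # expert ids and mixing weights per token
```

Tokens whose gate is already confident cost one expert's compute; ambiguous tokens are spread over two, which is how sparsity is preserved while per-token compute adapts.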
https://aclanthology.org/2023.emnlp-main.218.bib
https://aclanthology.org/2023.emnlp-main.218/
@inproceedings{valentini-etal-2023-automatic, title = "On the Automatic Generation and Simplification of Children{'}s Stories", author = "Valentini, Maria and Weber, Jennifer and Salcido, Jesus and Wright, T{\'e}a and Colunga, Eliana and von der Wense, Katharina", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.218", doi = "10.18653/v1/2023.emnlp-main.218", pages = "3588--3598", abstract = "With recent advances in large language models (LLMs), the concept of automatically generating children{'}s educational materials has become increasingly realistic. Working toward the goal of age-appropriate simplicity in generated educational texts, we first examine the ability of several popular LLMs to generate stories with properly adjusted lexical and readability levels. We find that, in spite of the growing capabilities of LLMs, they do not yet possess the ability to limit their vocabulary to levels appropriate for younger age groups. As a second experiment, we explore the ability of state-of-the-art lexical simplification models to generalize to the domain of children{'}s stories and, thus, create an efficient pipeline for their automatic generation. In order to test these models, we develop a dataset of child-directed lexical simplification instances, with examples taken from the LLM-generated stories in our first experiment. We find that, while the strongest-performing current lexical simplification models do not perform as well on material designed for children due to their reliance on large language models behind the scenes, some models that still achieve fairly strong results on general data can mimic or even improve their performance on children-directed data with proper fine-tuning, which we conduct using our newly created child-directed simplification dataset.", }
With recent advances in large language models (LLMs), the concept of automatically generating children{'}s educational materials has become increasingly realistic. Working toward the goal of age-appropriate simplicity in generated educational texts, we first examine the ability of several popular LLMs to generate stories with properly adjusted lexical and readability levels. We find that, in spite of the growing capabilities of LLMs, they do not yet possess the ability to limit their vocabulary to levels appropriate for younger age groups. As a second experiment, we explore the ability of state-of-the-art lexical simplification models to generalize to the domain of children{'}s stories and, thus, create an efficient pipeline for their automatic generation. In order to test these models, we develop a dataset of child-directed lexical simplification instances, with examples taken from the LLM-generated stories in our first experiment. We find that, while the strongest-performing current lexical simplification models do not perform as well on material designed for children due to their reliance on large language models behind the scenes, some models that still achieve fairly strong results on general data can mimic or even improve their performance on children-directed data with proper fine-tuning, which we conduct using our newly created child-directed simplification dataset.
[ "Valentini, Maria", "Weber, Jennifer", "Salcido, Jesus", "Wright, T{\\'e}a", "Colunga, Eliana", "von der Wense, Katharina" ]
On the Automatic Generation and Simplification of Children's Stories
emnlp-main.218
2310.18502
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.219.bib
https://aclanthology.org/2023.emnlp-main.219/
@inproceedings{wei-etal-2023-decompositions, title = "When Do Decompositions Help for Machine Reading?", author = "Wei, Kangda and Lawrie, Dawn and Van Durme, Benjamin and Chen, Yunmo and Weller, Orion", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.219", doi = "10.18653/v1/2023.emnlp-main.219", pages = "3599--3606", abstract = "Answering complex questions often requires multi-step reasoning in order to obtain the final answer. Most research into decompositions of complex questions involves open-domain systems, which have shown success in using these decompositions for improved retrieval. In the machine reading setting, however, work to understand when decompositions are helpful is understudied. We conduct experiments on decompositions in machine reading to unify recent work in this space, using a range of models and datasets. We find that decompositions can be helpful in zero or limited-data settings, giving several points of improvement in exact match. However, we also show that when models are given access to around a few hundred or more examples, decompositions are not helpful (and can actually be detrimental). Thus, our analysis implies that models can learn decompositions implicitly even with limited data.", }
Answering complex questions often requires multi-step reasoning in order to obtain the final answer. Most research into decompositions of complex questions involves open-domain systems, which have shown success in using these decompositions for improved retrieval. In the machine reading setting, however, work to understand when decompositions are helpful is understudied. We conduct experiments on decompositions in machine reading to unify recent work in this space, using a range of models and datasets. We find that decompositions can be helpful in zero or limited-data settings, giving several points of improvement in exact match. However, we also show that when models are given access to around a few hundred or more examples, decompositions are not helpful (and can actually be detrimental). Thus, our analysis implies that models can learn decompositions implicitly even with limited data.
[ "Wei, Kangda", "Lawrie, Dawn", "Van Durme, Benjamin", "Chen, Yunmo", "Weller, Orion" ]
When Do Decompositions Help for Machine Reading?
emnlp-main.219
2212.10019
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.220.bib
https://aclanthology.org/2023.emnlp-main.220/
@inproceedings{slobodkin-etal-2023-curious, title = "The Curious Case of Hallucinatory (Un)answerability: Finding Truths in the Hidden States of Over-Confident Large Language Models", author = "Slobodkin, Aviv and Goldman, Omer and Caciularu, Avi and Dagan, Ido and Ravfogel, Shauli", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.220", doi = "10.18653/v1/2023.emnlp-main.220", pages = "3607--3625", abstract = "Large language models (LLMs) have been shown to possess impressive capabilities, while also raising crucial concerns about the faithfulness of their responses. A primary issue arising in this context is the management of (un)answerable queries by LLMs, which often results in hallucinatory behavior due to overconfidence. In this paper, we explore the behavior of LLMs when presented with (un)answerable queries. We ask: do models \textit{represent} the fact that the question is (un)answerable when generating a hallucinatory answer? Our results show strong indications that such models encode the answerability of an input query, with the representation of the first decoded token often being a strong indicator. These findings shed new light on the spatial organization within the latent representations of LLMs, unveiling previously unexplored facets of these models. Moreover, they pave the way for the development of improved decoding techniques with better adherence to factual generation, particularly in scenarios where query (un)answerability is a concern.", }
Large language models (LLMs) have been shown to possess impressive capabilities, while also raising crucial concerns about the faithfulness of their responses. A primary issue arising in this context is the management of (un)answerable queries by LLMs, which often results in hallucinatory behavior due to overconfidence. In this paper, we explore the behavior of LLMs when presented with (un)answerable queries. We ask: do models \textit{represent} the fact that the question is (un)answerable when generating a hallucinatory answer? Our results show strong indications that such models encode the answerability of an input query, with the representation of the first decoded token often being a strong indicator. These findings shed new light on the spatial organization within the latent representations of LLMs, unveiling previously unexplored facets of these models. Moreover, they pave the way for the development of improved decoding techniques with better adherence to factual generation, particularly in scenarios where query (un)answerability is a concern.
[ "Slobodkin, Aviv", "Goldman, Omer", "Caciularu, Avi", "Dagan, Ido", "Ravfogel, Shauli" ]
The Curious Case of Hallucinatory (Un)answerability: Finding Truths in the Hidden States of Over-Confident Large Language Models
emnlp-main.220
2310.11877
[ "https://github.com/lovodkin93/unanswerability" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
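A minimal sketch of the kind of probing setup the abstract implies: fit a linear probe on the hidden state of the first decoded token to read out answerability. The probe choice and variable names are assumptions, not the paper's exact protocol.

```python
# Hedged illustration: a linear probe over first-token hidden states.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_answerability_probe(first_token_states: np.ndarray, labels: np.ndarray):
    # first_token_states: (n_queries, d_model) hidden states of the first
    # decoded token; labels: 1 = query was answerable, 0 = unanswerable
    probe = LogisticRegression(max_iter=1000)
    probe.fit(first_token_states, labels)
    return probe  # probe.predict_proba(...) reads answerability off the states
```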
https://aclanthology.org/2023.emnlp-main.221.bib
https://aclanthology.org/2023.emnlp-main.221/
@inproceedings{spangher-etal-2023-identifying, title = "Identifying Informational Sources in News Articles", author = "Spangher, Alexander and Peng, Nanyun and Ferrara, Emilio and May, Jonathan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.221", doi = "10.18653/v1/2023.emnlp-main.221", pages = "3626--3639", abstract = "News articles are driven by the informational sources journalists use in reporting. Modeling when, how and why sources get used together in stories can help us better understand the information we consume and even help journalists with the task of producing it. In this work, we take steps toward this goal by constructing the largest and widest-ranging annotated dataset, to date, of informational sources used in news writing. We first show that our dataset can be used to train high-performing models for information detection and source attribution. Then, we introduce a novel task, source prediction, to study the compositionality of sources in news articles {--} i.e. how they are chosen to complement each other. We show good modeling performance on this task, indicating that there is a pattern to the way different sources are used \textit{together} in news storytelling. This insight opens the door for a focus on sources in narrative science (i.e. planning-based language generation) and computational journalism (i.e. a source-recommendation system to aid journalists writing stories). All data and model code can be found at https://github.com/alex2awesome/source-exploration.", }
News articles are driven by the informational sources journalists use in reporting. Modeling when, how and why sources get used together in stories can help us better understand the information we consume and even help journalists with the task of producing it. In this work, we take steps toward this goal by constructing the largest and widest-ranging annotated dataset, to date, of informational sources used in news writing. We first show that our dataset can be used to train high-performing models for information detection and source attribution. Then, we introduce a novel task, source prediction, to study the compositionality of sources in news articles {--} i.e. how they are chosen to complement each other. We show good modeling performance on this task, indicating that there is a pattern to the way different sources are used \textit{together} in news storytelling. This insight opens the door for a focus on sources in narrative science (i.e. planning-based language generation) and computational journalism (i.e. a source-recommendation system to aid journalists writing stories). All data and model code can be found at https://github.com/alex2awesome/source-exploration.
[ "Spangher, Alex", "er", "Peng, Nanyun", "Ferrara, Emilio", "May, Jonathan" ]
Identifying Informational Sources in News Articles
emnlp-main.221
2305.14904
[ "https://github.com/alex2awesome/source-exploration" ]
https://huggingface.co/papers/2305.14904
0
0
0
4
[ "alex2awesome/quote-detection__roberta-base-sentence", "alex2awesome/quote-detection__roberta-base-sentence-v2", "alex2awesome/quote-attribution__qa-model-v2", "alex2awesome/quote-detection__roberta-base-sentence-v3", "alex2awesome/quote-attribution__qa-model-v3" ]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.222.bib
https://aclanthology.org/2023.emnlp-main.222/
@inproceedings{shah-etal-2023-retrofitting, title = "Retrofitting Light-weight Language Models for Emotions using Supervised Contrastive Learning", author = "Shah, Sapan and Reddy, Sreedhar and Bhattacharyya, Pushpak", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.222", doi = "10.18653/v1/2023.emnlp-main.222", pages = "3640--3654", abstract = "We present a novel retrofitting method to induce emotion aspects into pre-trained language models (PLMs) such as BERT and RoBERTa. Our method updates pre-trained network weights using contrastive learning so that the text fragments exhibiting similar emotions are encoded nearby in the representation space, and the fragments with different emotion content are pushed apart. While doing so, it also ensures that the linguistic knowledge already present in PLMs is not inadvertently perturbed. The language models retrofitted by our method, i.e., BERTEmo and RoBERTaEmo, produce emotion-aware text representations, as evaluated through different clustering and retrieval metrics. For the downstream tasks on sentiment analysis and sarcasm detection, they perform better than their pre-trained counterparts (about 1{\%} improvement in F1-score) and other existing approaches. Additionally, a more significant boost in performance is observed for the retrofitted models over pre-trained ones in few-shot learning setting.", }
We present a novel retrofitting method to induce emotion aspects into pre-trained language models (PLMs) such as BERT and RoBERTa. Our method updates pre-trained network weights using contrastive learning so that the text fragments exhibiting similar emotions are encoded nearby in the representation space, and the fragments with different emotion content are pushed apart. While doing so, it also ensures that the linguistic knowledge already present in PLMs is not inadvertently perturbed. The language models retrofitted by our method, i.e., BERTEmo and RoBERTaEmo, produce emotion-aware text representations, as evaluated through different clustering and retrieval metrics. For the downstream tasks on sentiment analysis and sarcasm detection, they perform better than their pre-trained counterparts (about 1{\%} improvement in F1-score) and other existing approaches. Additionally, a more significant boost in performance is observed for the retrofitted models over pre-trained ones in few-shot learning setting.
[ "Shah, Sapan", "Reddy, Sreedhar", "Bhattacharyya, Pushpak" ]
Retrofitting Light-weight Language Models for Emotions using Supervised Contrastive Learning
emnlp-main.222
2310.18930
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
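The retrofitting objective described above is supervised contrastive learning over emotion labels. Below is a minimal sketch of a standard supervised contrastive loss in the style of Khosla et al. (2020); this is the generic loss, not necessarily the paper's exact formulation.

```python
# Generic supervised contrastive loss: pull same-emotion fragments together,
# push different-emotion fragments apart.
import torch

def supcon_loss(features: torch.Tensor, labels: torch.Tensor, tau: float = 0.1):
    # features: (n, d) L2-normalized text representations; labels: (n,) emotion ids
    n = features.shape[0]
    sim = features @ features.T / tau                   # pairwise similarities
    self_mask = torch.eye(n, dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf"))     # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # Mean log-probability over each anchor's same-label positives.
    loss = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()
```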
https://aclanthology.org/2023.emnlp-main.223.bib
https://aclanthology.org/2023.emnlp-main.223/
@inproceedings{yang-etal-2023-longtriever, title = "Longtriever: a Pre-trained Long Text Encoder for Dense Document Retrieval", author = "Yang, Junhan and Liu, Zheng and Li, Chaozhuo and Sun, Guangzhong and Xie, Xing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.223", doi = "10.18653/v1/2023.emnlp-main.223", pages = "3655--3665", abstract = "Pre-trained language models (PLMs) have achieved the preeminent position in dense retrieval due to their powerful capacity in modeling intrinsic semantics. However, most existing PLM-based retrieval models encounter substantial computational costs and are infeasible for processing long documents. In this paper, a novel retrieval model Longtriever is proposed to embrace three core challenges of long document retrieval: substantial computational cost, incomprehensive document understanding, and scarce annotations. Longtriever splits long documents into short blocks and then efficiently models the local semantics within a block and the global context semantics across blocks in a tightly-coupled manner. A pre-training phase is further proposed to empower Longtriever to achieve a better understanding of underlying semantic correlations. Experimental results on two popular benchmark datasets demonstrate the superiority of our proposal.", }
Pre-trained language models (PLMs) have achieved the preeminent position in dense retrieval due to their powerful capacity in modeling intrinsic semantics. However, most existing PLM-based retrieval models encounter substantial computational costs and are infeasible for processing long documents. In this paper, a novel retrieval model Longtriever is proposed to embrace three core challenges of long document retrieval: substantial computational cost, incomprehensive document understanding, and scarce annotations. Longtriever splits long documents into short blocks and then efficiently models the local semantics within a block and the global context semantics across blocks in a tightly-coupled manner. A pre-training phase is further proposed to empower Longtriever to achieve a better understanding of underlying semantic correlations. Experimental results on two popular benchmark datasets demonstrate the superiority of our proposal.
[ "Yang, Junhan", "Liu, Zheng", "Li, Chaozhuo", "Sun, Guangzhong", "Xie, Xing" ]
Longtriever: a Pre-trained Long Text Encoder for Dense Document Retrieval
emnlp-main.223
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.224.bib
https://aclanthology.org/2023.emnlp-main.224/
@inproceedings{liu-etal-2023-revisiting-de, title = "Revisiting De-Identification of Electronic Medical Records: Evaluation of Within- and Cross-Hospital Generalization", author = "Liu, Yiyang and Li, Jinpeng and Zhu, Enwei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.224", doi = "10.18653/v1/2023.emnlp-main.224", pages = "3666--3674", abstract = "The de-identification task aims to detect and remove the protected health information from electronic medical records (EMRs). Previous studies generally focus on the within-hospital setting and achieve great successes, while the cross-hospital setting has been overlooked. This study introduces a new de-identification dataset comprising EMRs from three hospitals in China, creating a benchmark for evaluating both within- and cross-hospital generalization. We find significant domain discrepancy between hospitals. A model with almost perfect within-hospital performance struggles when transferred across hospitals. Further experiments show that pretrained language models and some domain generalization methods can alleviate this problem. We believe that our data and findings will encourage investigations on the generalization of medical NLP models.", }
The de-identification task aims to detect and remove the protected health information from electronic medical records (EMRs). Previous studies generally focus on the within-hospital setting and achieve great successes, while the cross-hospital setting has been overlooked. This study introduces a new de-identification dataset comprising EMRs from three hospitals in China, creating a benchmark for evaluating both within- and cross-hospital generalization. We find significant domain discrepancy between hospitals. A model with almost perfect within-hospital performance struggles when transferred across hospitals. Further experiments show that pretrained language models and some domain generalization methods can alleviate this problem. We believe that our data and findings will encourage investigations on the generalization of medical NLP models.
[ "Liu, Yiyang", "Li, Jinpeng", "Zhu, Enwei" ]
Revisiting De-Identification of Electronic Medical Records: Evaluation of Within- and Cross-Hospital Generalization
emnlp-main.224
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.225.bib
https://aclanthology.org/2023.emnlp-main.225/
@inproceedings{juneja-etal-2023-small, title = "Small Language Models Fine-tuned to Coordinate Larger Language Models improve Complex Reasoning", author = "Juneja, Gurusha and Dutta, Subhabrata and Chakrabarti, Soumen and Manchanda, Sunny and Chakraborty, Tanmoy", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.225", doi = "10.18653/v1/2023.emnlp-main.225", pages = "3675--3691", abstract = "Large Language Models (LLMs) prompted to generate chain-of-thought (CoT) exhibit impressive reasoning capabilities. Recent attempts at prompt decomposition toward solving complex, multi-step reasoning problems depend on the ability of the LLM to simultaneously decompose and solve the problem. A significant disadvantage is that foundational LLMs are typically not available for fine-tuning, making adaptation computationally prohibitive. We believe (and demonstrate) that problem decomposition and solution generation are distinct capabilities, better addressed in separate modules, than by one monolithic LLM. We introduce DaSLaM, which uses a decomposition generator to decompose complex problems into subproblems that require fewer reasoning steps. These subproblems are answered by a solver. We use a relatively small (13B parameters) LM as the decomposition generator, which we train using policy gradient optimization to interact with a solver LM (regarded as black-box) and guide it through subproblems, thereby rendering our method solver-agnostic. Evaluation on multiple different reasoning datasets reveals that with our method, a 175 billion parameter LM (text-davinci-003) can produce competitive or even better performance, compared to its orders-of-magnitude larger successor, GPT-4. Additionally, we show that DaSLaM is not limited by the solver{'}s capabilities as a function of scale; e.g., solver LMs with diverse sizes give significant performance improvement with our solver-agnostic decomposition technique. Exhaustive ablation studies evince the superiority of our modular finetuning technique over exorbitantly large decomposer LLMs, based on prompting alone.", }
Large Language Models (LLMs) prompted to generate chain-of-thought (CoT) exhibit impressive reasoning capabilities. Recent attempts at prompt decomposition toward solving complex, multi-step reasoning problems depend on the ability of the LLM to simultaneously decompose and solve the problem. A significant disadvantage is that foundational LLMs are typically not available for fine-tuning, making adaptation computationally prohibitive. We believe (and demonstrate) that problem decomposition and solution generation are distinct capabilities, better addressed in separate modules, than by one monolithic LLM. We introduce DaSLaM, which uses a decomposition generator to decompose complex problems into subproblems that require fewer reasoning steps. These subproblems are answered by a solver. We use a relatively small (13B parameters) LM as the decomposition generator, which we train using policy gradient optimization to interact with a solver LM (regarded as black-box) and guide it through subproblems, thereby rendering our method solver-agnostic. Evaluation on multiple different reasoning datasets reveals that with our method, a 175 billion parameter LM (text-davinci-003) can produce competitive or even better performance, compared to its orders-of-magnitude larger successor, GPT-4. Additionally, we show that DaSLaM is not limited by the solver{'}s capabilities as a function of scale; e.g., solver LMs with diverse sizes give significant performance improvement with our solver-agnostic decomposition technique. Exhaustive ablation studies evince the superiority of our modular finetuning technique over exorbitantly large decomposer LLMs, based on prompting alone.
[ "Juneja, Gurusha", "Dutta, Subhabrata", "Chakrabarti, Soumen", "Manch", "a, Sunny", "Chakraborty, Tanmoy" ]
Small Language Models Fine-tuned to Coordinate Larger Language Models improve Complex Reasoning
emnlp-main.225
2310.18338
[ "https://github.com/lcs2-iiitd/daslam" ]
https://huggingface.co/papers/2310.18338
0
1
0
5
[]
[]
[]
1
Poster
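A hedged sketch of the decomposer/solver interaction the DaSLaM abstract describes: a small fine-tuned LM proposes subproblems, a black-box solver LM answers each in turn, and the final answer conditions on the accumulated subanswers. Prompt formats and function names here are illustrative assumptions, not the authors' released code.

```python
# Conceptual loop only; the paper additionally trains the decomposer with
# policy gradients against the solver's behavior, which is omitted here.
def solve_with_decomposition(decomposer, solver, question: str) -> str:
    # decomposer(question) -> list of subquestion strings (small 13B-class LM)
    # solver(prompt) -> completion string (large LM treated as a black box)
    context = question
    for subq in decomposer(question):
        answer = solver(f"{context}\nQ: {subq}\nA:")
        context += f"\nQ: {subq}\nA: {answer}"
    return solver(f"{context}\nTherefore, the final answer is:")
```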
https://aclanthology.org/2023.emnlp-main.226.bib
https://aclanthology.org/2023.emnlp-main.226/
@inproceedings{xu-etal-2023-language-representation, title = "Language Representation Projection: Can We Transfer Factual Knowledge across Languages in Multilingual Language Models?", author = "Xu, Shaoyang and Li, Junzhuo and Xiong, Deyi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.226", doi = "10.18653/v1/2023.emnlp-main.226", pages = "3692--3702", abstract = "Multilingual pretrained language models serve as repositories of multilingual factual knowledge. Nevertheless, a substantial performance gap of factual knowledge probing exists between high-resource languages and low-resource languages, suggesting limited implicit factual knowledge transfer across languages in multilingual pretrained language models. This paper investigates the feasibility of explicitly transferring relatively rich factual knowledge from English to non-English languages. To accomplish this, we propose two parameter-free $\textbf{L}$anguage $\textbf{R}$epresentation $\textbf{P}$rojection modules (LRP2). The first module converts non-English representations into English-like equivalents, while the second module reverts English-like representations back into representations of the corresponding non-English language. Experimental results on the mLAMA dataset demonstrate that LRP2 significantly improves factual knowledge retrieval accuracy and facilitates knowledge transferability across diverse non-English languages. We further investigate the working mechanism of LRP2 from the perspectives of representation space and cross-lingual knowledge neuron.", }
Multilingual pretrained language models serve as repositories of multilingual factual knowledge. Nevertheless, a substantial performance gap of factual knowledge probing exists between high-resource languages and low-resource languages, suggesting limited implicit factual knowledge transfer across languages in multilingual pretrained language models. This paper investigates the feasibility of explicitly transferring relatively rich factual knowledge from English to non-English languages. To accomplish this, we propose two parameter-free $\textbf{L}$anguage $\textbf{R}$epresentation $\textbf{P}$rojection modules (LRP2). The first module converts non-English representations into English-like equivalents, while the second module reverts English-like representations back into representations of the corresponding non-English language. Experimental results on the mLAMA dataset demonstrate that LRP2 significantly improves factual knowledge retrieval accuracy and facilitates knowledge transferability across diverse non-English languages. We further investigate the working mechanism of LRP2 from the perspectives of representation space and cross-lingual knowledge neuron.
[ "Xu, Shaoyang", "Li, Junzhuo", "Xiong, Deyi" ]
Language Representation Projection: Can We Transfer Factual Knowledge across Languages in Multilingual Language Models?
emnlp-main.226
2311.03788
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
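A schematic sketch of a parameter-free representation projection in the spirit of LRP2: shift a non-English hidden state toward the English region of the representation space using per-language mean vectors, then shift back at a later layer. Where the shifts are applied and how the statistics are computed are assumptions of this sketch, not the paper's exact recipe.

```python
# Parameter-free shifts between language regions of the hidden space.
import torch

def to_english_like(h: torch.Tensor, mu_src: torch.Tensor, mu_en: torch.Tensor):
    # h: (seq_len, d_model) hidden states at some layer; mu_src / mu_en:
    # (d_model,) mean hidden states over sample sentences in each language.
    return h - mu_src + mu_en

def back_to_source(h: torch.Tensor, mu_src: torch.Tensor, mu_en: torch.Tensor):
    # Invert the shift at a later layer so decoding continues in the source language.
    return h - mu_en + mu_src
```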
https://aclanthology.org/2023.emnlp-main.227.bib
https://aclanthology.org/2023.emnlp-main.227/
@inproceedings{michaelov-etal-2023-structural, title = "Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models", author = "Michaelov, James and Arnett, Catherine and Chang, Tyler and Bergen, Ben", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.227", doi = "10.18653/v1/2023.emnlp-main.227", pages = "3703--3720", abstract = "Abstract grammatical knowledge{---}of parts of speech and grammatical patterns{---}is key to the capacity for linguistic generalization in humans. But how abstract is grammatical knowledge in large language models? In the human literature, compelling evidence for grammatical abstraction comes from structural priming. A sentence that shares the same grammatical structure as a preceding sentence is processed and produced more readily. Because confounds exist when using stimuli in a single language, evidence of abstraction is even more compelling from crosslingual structural priming, where use of a syntactic structure in one language primes an analogous structure in another language. We measure crosslingual structural priming in large language models, comparing model behavior to human experimental results from eight crosslingual experiments covering six languages, and four monolingual structural priming experiments in three non-English languages. We find evidence for abstract monolingual and crosslingual grammatical representations in the models that function similarly to those found in humans. These results demonstrate that grammatical representations in multilingual language models are not only similar across languages, but they can causally influence text produced in different languages.", }
Abstract grammatical knowledge{---}of parts of speech and grammatical patterns{---}is key to the capacity for linguistic generalization in humans. But how abstract is grammatical knowledge in large language models? In the human literature, compelling evidence for grammatical abstraction comes from structural priming. A sentence that shares the same grammatical structure as a preceding sentence is processed and produced more readily. Because confounds exist when using stimuli in a single language, evidence of abstraction is even more compelling from crosslingual structural priming, where use of a syntactic structure in one language primes an analogous structure in another language. We measure crosslingual structural priming in large language models, comparing model behavior to human experimental results from eight crosslingual experiments covering six languages, and four monolingual structural priming experiments in three non-English languages. We find evidence for abstract monolingual and crosslingual grammatical representations in the models that function similarly to those found in humans. These results demonstrate that grammatical representations in multilingual language models are not only similar across languages, but they can causally influence text produced in different languages.
[ "Michaelov, James", "Arnett, Catherine", "Chang, Tyler", "Bergen, Ben" ]
Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models
emnlp-main.227
2311.09194
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.228.bib
https://aclanthology.org/2023.emnlp-main.228/
@inproceedings{jiang-etal-2023-reasoninglm, title = "{R}easoning{LM}: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph", author = "Jiang, Jinhao and Zhou, Kun and Zhao, Xin and Li, Yaliang and Wen, Ji-Rong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.228", doi = "10.18653/v1/2023.emnlp-main.228", pages = "3721--3735", abstract = "Question Answering over Knowledge Graph (KGQA) aims to seek answer entities for the natural language question from a large-scale Knowledge Graph (KG). To better perform reasoning on KG, recent work typically adopts a pre-trained language model (PLM) to model the question, and a graph neural network (GNN) based module to perform multi-hop reasoning on the KG. Despite the effectiveness, due to the divergence in model architecture, the PLM and GNN are not closely integrated, limiting the knowledge sharing and fine-grained feature interactions. To solve it, we aim to simplify the above two-module approach, and develop a more capable PLM that can directly support subgraph reasoning for KGQA, namely ReasoningLM. In our approach, we propose a subgraph-aware self-attention mechanism to imitate the GNN for performing structured reasoning, and also adopt an adaptation tuning strategy to adapt the model parameters with 20,000 subgraphs with synthesized questions. After adaptation, the PLM can be parameter-efficient fine-tuned on downstream tasks. Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data. Our codes and data are publicly available at https://github.com/RUCAIBox/ReasoningLM.", }
Question Answering over Knowledge Graph (KGQA) aims to seek answer entities for the natural language question from a large-scale Knowledge Graph (KG). To better perform reasoning on KG, recent work typically adopts a pre-trained language model (PLM) to model the question, and a graph neural network (GNN) based module to perform multi-hop reasoning on the KG. Despite the effectiveness, due to the divergence in model architecture, the PLM and GNN are not closely integrated, limiting the knowledge sharing and fine-grained feature interactions. To solve it, we aim to simplify the above two-module approach, and develop a more capable PLM that can directly support subgraph reasoning for KGQA, namely ReasoningLM. In our approach, we propose a subgraph-aware self-attention mechanism to imitate the GNN for performing structured reasoning, and also adopt an adaptation tuning strategy to adapt the model parameters with 20,000 subgraphs with synthesized questions. After adaptation, the PLM can be parameter-efficient fine-tuned on downstream tasks. Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data. Our codes and data are publicly available at https://github.com/RUCAIBox/ReasoningLM.
[ "Jiang, Jinhao", "Zhou, Kun", "Zhao, Xin", "Li, Yaliang", "Wen, Ji-Rong" ]
ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph
emnlp-main.228
2401.00158
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
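One plausible reading of "subgraph-aware self-attention" is an attention mask that lets question tokens attend globally while serialized subgraph nodes attend only along graph edges; the sketch below builds such a mask. This is an assumption-laden illustration, not the paper's published architecture.

```python
# Build a boolean attention mask over [question tokens | subgraph nodes].
import torch

def build_attention_mask(n_question: int, adjacency: torch.Tensor) -> torch.Tensor:
    # adjacency: (n_nodes, n_nodes) bool edge matrix of the serialized subgraph
    n_nodes = adjacency.shape[0]
    n = n_question + n_nodes
    mask = torch.zeros(n, n, dtype=torch.bool)
    mask[:n_question, :] = True                    # question attends to everything
    mask[n_question:, :n_question] = True          # nodes attend to the question
    mask[n_question:, n_question:] = adjacency | torch.eye(n_nodes, dtype=torch.bool)
    return mask  # True = attention allowed
```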
https://aclanthology.org/2023.emnlp-main.229.bib
https://aclanthology.org/2023.emnlp-main.229/
@inproceedings{urrutia-etal-2023-deep, title = "Deep Natural Language Feature Learning for Interpretable Prediction", author = "Urrutia, Felipe and Calderon, Cristian and Barriere, Valentin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.229", doi = "10.18653/v1/2023.emnlp-main.229", pages = "3736--3763", abstract = "We propose a general method to break down a main complex task into a set of intermediary easier sub-tasks, which are formulated in natural language as binary questions related to the final target task. Our method allows for representing each example by a vector consisting of the answers to these questions. We call this representation Natural Language Learned Features (NLLF). NLLF is generated by a small transformer language model (e.g., BERT) that has been trained in a Natural Language Inference (NLI) fashion, using weak labels automatically obtained from a Large Language Model (LLM). We show that the LLM normally struggles for the main task using in-context learning, but can handle these easiest subtasks and produce useful weak labels to train a BERT. The NLI-like training of the BERT allows for tackling zero-shot inference with any binary question, and not necessarily the ones seen during the training. We show that this NLLF vector not only helps to reach better performances by enhancing any classifier, but that it can be used as input of an easy-to-interpret machine learning model like a decision tree. This decision tree is interpretable but also reaches high performances, surpassing those of a pre-trained transformer in some cases. We have successfully applied this method to two completely different tasks: detecting incoherence in students{'} answers to open-ended mathematics exam questions, and screening abstracts for a systematic literature review of scientific papers on climate change and agroecology.", }
We propose a general method to break down a main complex task into a set of intermediary easier sub-tasks, which are formulated in natural language as binary questions related to the final target task. Our method allows for representing each example by a vector consisting of the answers to these questions. We call this representation Natural Language Learned Features (NLLF). NLLF is generated by a small transformer language model (e.g., BERT) that has been trained in a Natural Language Inference (NLI) fashion, using weak labels automatically obtained from a Large Language Model (LLM). We show that the LLM normally struggles for the main task using in-context learning, but can handle these easiest subtasks and produce useful weak labels to train a BERT. The NLI-like training of the BERT allows for tackling zero-shot inference with any binary question, and not necessarily the ones seen during the training. We show that this NLLF vector not only helps to reach better performances by enhancing any classifier, but that it can be used as input of an easy-to-interpret machine learning model like a decision tree. This decision tree is interpretable but also reaches high performances, surpassing those of a pre-trained transformer in some cases. We have successfully applied this method to two completely different tasks: detecting incoherence in students{'} answers to open-ended mathematics exam questions, and screening abstracts for a systematic literature review of scientific papers on climate change and agroecology.
[ "Urrutia, Felipe", "Calderon, Cristian", "Barriere, Valentin" ]
Deep Natural Language Feature Learning for Interpretable Prediction
emnlp-main.229
2311.05754
[ "https://github.com/furrutiav/nllf-emnlp-2023" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
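A minimal sketch of the NLLF pipeline the abstract outlines: score each example against a set of binary natural-language questions with an NLI-style scorer, then fit an interpretable classifier on the resulting feature vectors. The scorer interface and the tree depth are illustrative assumptions.

```python
# Turn binary-question answers into features for an interpretable model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def nllf_features(nli_score, texts, questions):
    # nli_score(text, question) -> probability that the answer to the binary
    # question is "yes" for this text (an NLI-trained BERT in the paper).
    return np.array([[nli_score(t, q) for q in questions] for t in texts])

def train_interpretable_model(features: np.ndarray, labels):
    tree = DecisionTreeClassifier(max_depth=4)  # shallow, hence human-readable
    tree.fit(features, labels)
    return tree
```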
https://aclanthology.org/2023.emnlp-main.230.bib
https://aclanthology.org/2023.emnlp-main.230/
@inproceedings{esiobu-etal-2023-robbie, title = "{ROBBIE}: Robust Bias Evaluation of Large Generative Language Models", author = "Esiobu, David and Tan, Xiaoqing and Hosseini, Saghar and Ung, Megan and Zhang, Yuchen and Fernandes, Jude and Dwivedi-Yu, Jane and Presani, Eleonora and Williams, Adina and Smith, Eric", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.230", doi = "10.18653/v1/2023.emnlp-main.230", pages = "3764--3814", abstract = "As generative large language models (LLMs) grow more performant and prevalent, we must develop comprehensive enough tools to measure and improve their fairness. Different prompt-based datasets can be used to measure social bias across multiple text domains and demographic axes, meaning that testing LLMs on more datasets can potentially help us characterize their biases more fully, and better ensure equal and equitable treatment of marginalized demographic groups. In this work, our focus is two-fold: (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity metrics across 12 demographic axes and 5 families of generative LLMs. Out of those 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in the paper. The comparison of those benchmarks gives us insights about the bias and toxicity of the compared models. Therefore, we explore the frequency of demographic terms in common LLM pre-training corpora and how this may relate to model biases. (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity mitigation techniques perform across our suite of measurements. ROBBIE aims to provide insights for practitioners while deploying a model, emphasizing the need to not only measure potential harms, but also understand how they arise by characterizing the data, mitigate harms once found, and balance any trade-offs. We open-source our analysis code in hopes of encouraging broader measurements of bias in future LLMs.", }
As generative large language models (LLMs) grow more performant and prevalent, we must develop comprehensive enough tools to measure and improve their fairness. Different prompt-based datasets can be used to measure social bias across multiple text domains and demographic axes, meaning that testing LLMs on more datasets can potentially help us characterize their biases more fully, and better ensure equal and equitable treatment of marginalized demographic groups. In this work, our focus is two-fold: (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity metrics across 12 demographic axes and 5 families of generative LLMs. Out of those 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in the paper. The comparison of those benchmarks gives us insights about the bias and toxicity of the compared models. Therefore, we explore the frequency of demographic terms in common LLM pre-training corpora and how this may relate to model biases. (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity mitigation techniques perform across our suite of measurements. ROBBIE aims to provide insights for practitioners while deploying a model, emphasizing the need to not only measure potential harms, but also understand how they arise by characterizing the data, mitigate harms once found, and balance any trade-offs. We open-source our analysis code in hopes of encouraging broader measurements of bias in future LLMs.
[ "Esiobu, David", "Tan, Xiaoqing", "Hosseini, Saghar", "Ung, Megan", "Zhang, Yuchen", "Fern", "es, Jude", "Dwivedi-Yu, Jane", "Presani, Eleonora", "Williams, Adina", "Smith, Eric" ]
ROBBIE: Robust Bias Evaluation of Large Generative Language Models
emnlp-main.230
2311.18140
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.231.bib
https://aclanthology.org/2023.emnlp-main.231/
@inproceedings{ohashi-higashinaka-2023-enhancing, title = "Enhancing Task-oriented Dialogue Systems with Generative Post-processing Networks", author = "Ohashi, Atsumoto and Higashinaka, Ryuichiro", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.231", doi = "10.18653/v1/2023.emnlp-main.231", pages = "3815--3828", abstract = "Recently, post-processing networks (PPNs), which modify the outputs of arbitrary modules including non-differentiable ones in task-oriented dialogue systems, have been proposed. PPNs have successfully improved the dialogue performance by post-processing natural language understanding (NLU), dialogue state tracking (DST), and dialogue policy (Policy) modules with a classification-based approach. However, they cannot be applied to natural language generation (NLG) modules because the post-processing of the utterance output by the NLG module requires a generative approach. In this study, we propose a new post-processing component for NLG, generative post-processing networks (GenPPNs). For optimizing GenPPNs via reinforcement learning, the reward function incorporates dialogue act contribution, a new measure to evaluate the contribution of GenPPN-generated utterances with regard to task completion in dialogue. Through simulation and human evaluation experiments based on the MultiWOZ dataset, we confirmed that GenPPNs improve the task completion performance of task-oriented dialogue systems.", }
Recently, post-processing networks (PPNs), which modify the outputs of arbitrary modules including non-differentiable ones in task-oriented dialogue systems, have been proposed. PPNs have successfully improved the dialogue performance by post-processing natural language understanding (NLU), dialogue state tracking (DST), and dialogue policy (Policy) modules with a classification-based approach. However, they cannot be applied to natural language generation (NLG) modules because the post-processing of the utterance output by the NLG module requires a generative approach. In this study, we propose a new post-processing component for NLG, generative post-processing networks (GenPPNs). For optimizing GenPPNs via reinforcement learning, the reward function incorporates dialogue act contribution, a new measure to evaluate the contribution of GenPPN-generated utterances with regard to task completion in dialogue. Through simulation and human evaluation experiments based on the MultiWOZ dataset, we confirmed that GenPPNs improve the task completion performance of task-oriented dialogue systems.
[ "Ohashi, Atsumoto", "Higashinaka, Ryuichiro" ]
Enhancing Task-oriented Dialogue Systems with Generative Post-processing Networks
emnlp-main.231
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.232.bib
https://aclanthology.org/2023.emnlp-main.232/
@inproceedings{chevalier-etal-2023-adapting, title = "Adapting Language Models to Compress Contexts", author = "Chevalier, Alexis and Wettig, Alexander and Ajith, Anirudh and Chen, Danqi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.232", doi = "10.18653/v1/2023.emnlp-main.232", pages = "3829--3846", abstract = "Transformer-based language models (LMs) are powerful and widely-applicable tools, but their usefulness is constrained by a finite context window and the expensive computational cost of processing long text documents. We propose to adapt pre-trained LMs into AutoCompressors. These language models are capable of compressing long contexts into summary vectors, which are then accessible to the model as soft prompts. Summary vectors are trained with an unsupervised objective, whereby long documents are processed in segments, and summary vectors from all previous segments are used in language modeling. We fine-tune OPT and Llama-2 models on sequences of up to 30,720 tokens and show that AutoCompressors can utilize long contexts to improve perplexity. We evaluate AutoCompressors on in-context learning by compressing task demonstrations and find that summary vectors are good substitutes for plain-text demonstrations, increasing accuracy while reducing inference costs. Finally, we explore the benefits of pre-computing summary vectors for large corpora by applying summary vectors to retrieval-augmented language modeling and a passage re-ranking task. Overall, AutoCompressors emerge as a simple and inexpensive solution to extend the context window of LMs while speeding up inference over long contexts.", }
Transformer-based language models (LMs) are powerful and widely-applicable tools, but their usefulness is constrained by a finite context window and the expensive computational cost of processing long text documents. We propose to adapt pre-trained LMs into AutoCompressors. These language models are capable of compressing long contexts into summary vectors, which are then accessible to the model as soft prompts. Summary vectors are trained with an unsupervised objective, whereby long documents are processed in segments, and summary vectors from all previous segments are used in language modeling. We fine-tune OPT and Llama-2 models on sequences of up to 30,720 tokens and show that AutoCompressors can utilize long contexts to improve perplexity. We evaluate AutoCompressors on in-context learning by compressing task demonstrations and find that summary vectors are good substitutes for plain-text demonstrations, increasing accuracy while reducing inference costs. Finally, we explore the benefits of pre-computing summary vectors for large corpora by applying summary vectors to retrieval-augmented language modeling and a passage re-ranking task. Overall, AutoCompressors emerge as a simple and inexpensive solution to extend the context window of LMs while speeding up inference over long contexts.
[ "Chevalier, Alexis", "Wettig, Alex", "er", "Ajith, Anirudh", "Chen, Danqi" ]
Adapting Language Models to Compress Contexts
emnlp-main.232
2305.14788
[ "https://github.com/princeton-nlp/autocompressors" ]
https://huggingface.co/papers/2305.14788
0
1
0
4
[ "princeton-nlp/RMT-2.7b-8k", "princeton-nlp/AutoCompressor-2.7b-30k", "princeton-nlp/AutoCompressor-2.7b-6k", "princeton-nlp/AutoCompressor-Llama-2-7b-6k", "princeton-nlp/RMT-1.3b-30k", "princeton-nlp/AutoCompressor-1.3b-30k", "princeton-nlp/FullAttention-2.7b-4k", "princeton-nlp/FullAttention-Llama-2-7b-6k" ]
[]
[]
1
Poster
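A conceptual sketch of the compression loop described above: a long document is processed in segments, and the hidden states at learned summary-token positions become soft-prompt vectors carried into the next segment. This illustrates the mechanism only; for the real API behind the released checkpoints listed above, see https://github.com/princeton-nlp/autocompressors.

```python
# Mechanism sketch, not the released API. Assumes a Hugging Face causal LM
# that accepts inputs_embeds (e.g., an OPT-family model).
import torch

def compress_document(model, summary_embeds, segments):
    # summary_embeds: (1, n_summary, d_model) learned summary-token embeddings
    # segments: iterable of (1, seg_len) token-id tensors for one long document
    n_summary = summary_embeds.shape[1]
    summary = None  # soft-prompt vectors carried across segments
    for seg_ids in segments:
        seg_emb = model.get_input_embeddings()(seg_ids)
        parts = ([summary] if summary is not None else []) + [seg_emb, summary_embeds]
        out = model(inputs_embeds=torch.cat(parts, dim=1), output_hidden_states=True)
        # Hidden states at the summary positions become the next soft prompt.
        summary = out.hidden_states[-1][:, -n_summary:, :]
    return summary  # compact soft prompt encoding the whole document
```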
https://aclanthology.org/2023.emnlp-main.233.bib
https://aclanthology.org/2023.emnlp-main.233/
@inproceedings{zhou-etal-2023-selective, title = "Selective Labeling: How to Radically Lower Data-Labeling Costs for Document Extraction Models", author = "Zhou, Yichao and Wendt, James Bradley and Potti, Navneet and Xie, Jing and Tata, Sandeep", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.233", doi = "10.18653/v1/2023.emnlp-main.233", pages = "3847--3860", abstract = "Building automatic extraction models for visually rich documents like invoices, receipts, bills, tax forms, etc. has received significant attention lately. A key bottleneck in developing extraction models for new document types is the cost of acquiring the several thousand high-quality labeled documents that are needed to train a model with acceptable accuracy. In this paper, we propose selective labeling as a solution to this problem. The key insight is to simplify the labeling task to provide {``}yes/no{''} labels for candidate extractions predicted by a model trained on partially labeled documents. We combine this with a custom active learning strategy to find the predictions that the model is most uncertain about. We show through experiments on document types drawn from 3 different domains that selective labeling can reduce the cost of acquiring labeled data by 10$\times$ with a negligible loss in accuracy.", }
Building automatic extraction models for visually rich documents like invoices, receipts, bills, tax forms, etc. has received significant attention lately. A key bottleneck in developing extraction models for new document types is the cost of acquiring the several thousand high-quality labeled documents that are needed to train a model with acceptable accuracy. In this paper, we propose selective labeling as a solution to this problem. The key insight is to simplify the labeling task to provide {``}yes/no{''} labels for candidate extractions predicted by a model trained on partially labeled documents. We combine this with a custom active learning strategy to find the predictions that the model is most uncertain about. We show through experiments on document types drawn from 3 different domains that selective labeling can reduce the cost of acquiring labeled data by 10$\times$ with a negligible loss in accuracy.
[ "Zhou, Yichao", "Wendt, James Bradley", "Potti, Navneet", "Xie, Jing", "Tata, S", "eep" ]
Selective Labeling: How to Radically Lower Data-Labeling Costs for Document Extraction Models
emnlp-main.233
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
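A small sketch of the selection step the abstract describes: rank candidate extractions by the model's uncertainty and spend the yes/no annotation budget on the least confident ones. Function names and the entropy criterion are illustrative assumptions, not the paper's exact strategy.

```python
# Uncertainty-first selection of candidates for binary yes/no labeling.
import math

def binary_entropy(p: float) -> float:
    # Uncertainty of the model's confidence p that a candidate is correct.
    p = min(max(p, 1e-6), 1.0 - 1e-6)
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

def select_for_labeling(candidates, budget: int):
    # candidates: iterable of (candidate_id, confidence) from the current model;
    # returns the ids whose yes/no labels annotators should provide next.
    ranked = sorted(candidates, key=lambda c: binary_entropy(c[1]), reverse=True)
    return [cid for cid, _ in ranked[:budget]]
```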
https://aclanthology.org/2023.emnlp-main.234.bib
https://aclanthology.org/2023.emnlp-main.234/
@inproceedings{chen-etal-2023-travel, title = "{TRAVEL}: Tag-Aware Conversational {FAQ} Retrieval via Reinforcement Learning", author = "Chen, Yue and Jin, Dingnan and Huang, Chen and Liu, Jia and Lei, Wenqiang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.234", doi = "10.18653/v1/2023.emnlp-main.234", pages = "3861--3872", abstract = "Efficiently retrieving FAQ questions that match users{'} intent is essential for online customer service. Existing methods aim to fully utilize the dynamic conversation context to enhance the semantic association between the user query and FAQ questions. However, the conversation context contains noise, e.g., users may click questions they don{'}t like, leading to inaccurate semantics modeling. To tackle this, we introduce tags of FAQ questions, which can help us eliminate irrelevant information. We later integrate them into a reinforcement learning framework and minimize the negative impact of irrelevant information in the dynamic conversation context. We experimentally demonstrate our efficiency and effectiveness on conversational FAQ retrieval compared to other baselines.", }
Efficiently retrieving FAQ questions that match users{'} intent is essential for online customer service. Existing methods aim to fully utilize the dynamic conversation context to enhance the semantic association between the user query and FAQ questions. However, the conversation context contains noise, e.g., users may click questions they don{'}t like, leading to inaccurate semantics modeling. To tackle this, we introduce tags of FAQ questions, which can help us eliminate irrelevant information. We later integrate them into a reinforcement learning framework and minimize the negative impact of irrelevant information in the dynamic conversation context. We experimentally demonstrate our efficiency and effectiveness on conversational FAQ retrieval compared to other baselines.
[ "Chen, Yue", "Jin, Dingnan", "Huang, Chen", "Liu, Jia", "Lei, Wenqiang" ]
TRAVEL: Tag-Aware Conversational FAQ Retrieval via Reinforcement Learning
emnlp-main.234
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.235.bib
https://aclanthology.org/2023.emnlp-main.235/
@inproceedings{cho-etal-2023-continual, title = "Continual Dialogue State Tracking via Example-Guided Question Answering", author = "Cho, Hyundong and Madotto, Andrea and Lin, Zhaojiang and Chandu, Khyathi and Kottur, Satwik and Xu, Jing and May, Jonathan and Sankar, Chinnadhurai", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.235", doi = "10.18653/v1/2023.emnlp-main.235", pages = "3873--3886", abstract = "Dialogue systems are frequently updated to accommodate new services, but naively updating them by continually training with data for new services results in diminishing performance on previously learnt services. Motivated by the insight that dialogue state tracking (DST), a crucial component of dialogue systems that estimates the user{'}s goal as a conversation proceeds, is a simple natural language understanding task, we propose reformulating it as a bundle of granular example-guided question answering tasks to minimize the task shift between services and thus benefit continual learning. Our approach alleviates service-specific memorization and teaches a model to contextualize the given question and example to extract the necessary information from the conversation. We find that a model with just 60M parameters can achieve a significant boost by learning to learn from in-context examples retrieved by a retriever trained to identify turns with similar dialogue state changes. Combining our method with dialogue-level memory replay, our approach attains state-of-the-art performance on DST continual learning metrics without relying on any complex regularization or parameter expansion methods.", }
Dialogue systems are frequently updated to accommodate new services, but naively updating them by continually training with data for new services results in diminishing performance on previously learnt services. Motivated by the insight that dialogue state tracking (DST), a crucial component of dialogue systems that estimates the user{'}s goal as a conversation proceeds, is a simple natural language understanding task, we propose reformulating it as a bundle of granular example-guided question answering tasks to minimize the task shift between services and thus benefit continual learning. Our approach alleviates service-specific memorization and teaches a model to contextualize the given question and example to extract the necessary information from the conversation. We find that a model with just 60M parameters can achieve a significant boost by learning to learn from in-context examples retrieved by a retriever trained to identify turns with similar dialogue state changes. Combining our method with dialogue-level memory replay, our approach attains state-of-the-art performance on DST continual learning metrics without relying on any complex regularization or parameter expansion methods.
[ "Cho, Hyundong", "Madotto, Andrea", "Lin, Zhaojiang", "Ch", "u, Khyathi", "Kottur, Satwik", "Xu, Jing", "May, Jonathan", "Sankar, Chinnadhurai" ]
Continual Dialogue State Tracking via Example-Guided Question Answering
emnlp-main.235
2305.13721
[ "https://github.com/facebookresearch/dst-egqa" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
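The continual-DST record above hinges on retrieving stored turns whose dialogue-state changes resemble the current one, to use as in-context examples. Here is a rough sketch of that retrieval step under the assumption that state changes are already embedded as vectors; the paper trains a dedicated retriever, whereas the embeddings below are random placeholders.

```python
# Sketch: fetch the nearest-neighbour turns of the current turn, by cosine
# similarity of their dialogue-state-change embeddings.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve_examples(query_vec, memory, k=2):
    """memory: list of (turn_text, state_change_vector)."""
    scored = sorted(memory, key=lambda m: cosine(query_vec, m[1]), reverse=True)
    return [text for text, _ in scored[:k]]

rng = np.random.default_rng(0)
memory = [(f"turn-{i}", rng.normal(size=16)) for i in range(100)]
print(retrieve_examples(rng.normal(size=16), memory, k=2))
```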
https://aclanthology.org/2023.emnlp-main.236.bib
https://aclanthology.org/2023.emnlp-main.236/
@inproceedings{mittal-etal-2023-lost, title = "Lost in Translation, Found in Spans: Identifying Claims in Multilingual Social Media", author = "Mittal, Shubham and Sundriyal, Megha and Nakov, Preslav", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.236", doi = "10.18653/v1/2023.emnlp-main.236", pages = "3887--3902", abstract = "Claim span identification (CSI) is an important step in fact-checking pipelines, aiming to identify text segments that contain a check-worthy claim or assertion in a social media post. Despite its importance to journalists and human fact-checkers, it remains a severely understudied problem, and the scarce research on this topic so far has only focused on English. Here we aim to bridge this gap by creating a novel dataset, X-CLAIM, consisting of 7K real-world claims collected from numerous social media platforms in five Indian languages and English. We report strong baselines with state-of-the-art encoder-only language models (e.g., XLM-R) and we demonstrate the benefits of training on multiple languages over alternative cross-lingual transfer methods such as zero-shot transfer, or training on translated data, from a high-resource language such as English. We evaluate generative large language models from the GPT series using prompting methods on the X-CLAIM dataset and we find that they underperform the smaller encoder-only language models for low-resource languages.", }
Claim span identification (CSI) is an important step in fact-checking pipelines, aiming to identify text segments that contain a check-worthy claim or assertion in a social media post. Despite its importance to journalists and human fact-checkers, it remains a severely understudied problem, and the scarce research on this topic so far has only focused on English. Here we aim to bridge this gap by creating a novel dataset, X-CLAIM, consisting of 7K real-world claims collected from numerous social media platforms in five Indian languages and English. We report strong baselines with state-of-the-art encoder-only language models (e.g., XLM-R) and we demonstrate the benefits of training on multiple languages over alternative cross-lingual transfer methods such as zero-shot transfer, or training on translated data, from a high-resource language such as English. We evaluate generative large language models from the GPT series using prompting methods on the X-CLAIM dataset and we find that they underperform the smaller encoder-only language models for low-resource languages.
[ "Mittal, Shubham", "Sundriyal, Megha", "Nakov, Preslav" ]
Lost in Translation, Found in Spans: Identifying Claims in Multilingual Social Media
emnlp-main.236
2310.18205
[ "https://github.com/mbzuai-nlp/x-claim" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
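The X-CLAIM record above benchmarks encoder-only models such as XLM-R on claim span identification. A minimal way to cast that task, sketched below, is BIO token classification; the checkpoint here is the generic `xlm-roberta-base` with a randomly initialized head, purely for illustration, since the trained models live in the linked repository.

```python
# Sketch: claim span identification as BIO token classification.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3  # O / B-CLAIM / I-CLAIM
)

text = "The new vaccine cures all variants, officials said on Monday."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits           # (1, seq_len, 3)
pred = logits.argmax(-1)[0].tolist()          # head is untrained here, so labels are arbitrary
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, pred)))
```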
https://aclanthology.org/2023.emnlp-main.237.bib
https://aclanthology.org/2023.emnlp-main.237/
@inproceedings{kim-etal-2023-covid, title = "{COVID}-19 Vaccine Misinformation in Middle Income Countries", author = "Kim, Jongin and Bak, Byeo Rhee and Agrawal, Aditya and Wu, Jiaxi and Wirtz, Veronika and Hong, Traci and Wijaya, Derry", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.237", doi = "10.18653/v1/2023.emnlp-main.237", pages = "3903--3915", abstract = "This paper introduces a multilingual dataset of COVID-19 vaccine misinformation, consisting of annotated tweets from three middle-income countries: Brazil, Indonesia, and Nigeria. The expertly curated dataset includes annotations for 5,952 tweets, assessing their relevance to COVID-19 vaccines, presence of misinformation, and the themes of the misinformation. To address challenges posed by domain specificity, the low-resource setting, and data imbalance, we adopt two approaches for developing COVID-19 vaccine misinformation detection models: domain-specific pre-training and text augmentation using a large language model. Our best misinformation detection models demonstrate improvements ranging from 2.7 to 15.9 percentage points in macro F1-score compared to the baseline models. Additionally, we apply our misinformation detection models in a large-scale study of 19 million unlabeled tweets from the three countries between 2020 and 2022, showcasing the practical application of our dataset and models for detecting and analyzing vaccine misinformation in multiple countries and languages. Our analysis indicates that percentage changes in the number of new COVID-19 cases are positively associated with COVID-19 vaccine misinformation rates in a staggered manner for Brazil and Indonesia, and there are significant positive associations between the misinformation rates across the three countries.", }
This paper introduces a multilingual dataset of COVID-19 vaccine misinformation, consisting of annotated tweets from three middle-income countries: Brazil, Indonesia, and Nigeria. The expertly curated dataset includes annotations for 5,952 tweets, assessing their relevance to COVID-19 vaccines, presence of misinformation, and the themes of the misinformation. To address challenges posed by domain specificity, the low-resource setting, and data imbalance, we adopt two approaches for developing COVID-19 vaccine misinformation detection models: domain-specific pre-training and text augmentation using a large language model. Our best misinformation detection models demonstrate improvements ranging from 2.7 to 15.9 percentage points in macro F1-score compared to the baseline models. Additionally, we apply our misinformation detection models in a large-scale study of 19 million unlabeled tweets from the three countries between 2020 and 2022, showcasing the practical application of our dataset and models for detecting and analyzing vaccine misinformation in multiple countries and languages. Our analysis indicates that percentage changes in the number of new COVID-19 cases are positively associated with COVID-19 vaccine misinformation rates in a staggered manner for Brazil and Indonesia, and there are significant positive associations between the misinformation rates across the three countries.
[ "Kim, Jongin", "Bak, Byeo Rhee", "Agrawal, Aditya", "Wu, Jiaxi", "Wirtz, Veronika", "Hong, Traci", "Wijaya, Derry" ]
COVID-19 Vaccine Misinformation in Middle Income Countries
emnlp-main.237
2311.18195
[ "https://github.com/zzoliman/covid-vaccine-misinfo-mic" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.238.bib
https://aclanthology.org/2023.emnlp-main.238/
@inproceedings{zhang-etal-2023-contrastive-learning, title = "Contrastive Learning of Sentence Embeddings from Scratch", author = "Zhang, Junlei and Lan, Zhenzhong and He, Junxian", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.238", doi = "10.18653/v1/2023.emnlp-main.238", pages = "3916--3932", abstract = "Contrastive learning has been the dominant approach to train state-of-the-art sentence embeddings. Previous studies have typically learned sentence embeddings either through the use of human-annotated natural language inference (NLI) data or via large-scale unlabeled sentences in an unsupervised manner. However, even in the case of unlabeled data, their acquisition presents challenges in certain domains due to copyright restrictions, data distribution issues, and messy formats, among other factors. To address these issues, we present SynCSE, a contrastive learning framework that trains sentence embeddings with synthetic data. Specifically, we explore utilizing large language models to synthesize the required data samples for contrastive learning, including (1) producing positive and negative annotations given unlabeled sentences (SynCSE-partial), and (2) generating sentences along with their corresponding annotations from scratch (SynCSE-scratch). Notably, SynCSE-scratch constitutes the first contrastive learning method to learn sentence embeddings from scratch without manually collecting any data sample. Experimental results on sentence similarity and reranking tasks indicate that both SynCSE-partial and SynCSE-scratch greatly outperform unsupervised baselines, and SynCSE-partial even achieves comparable performance to the supervised models in most settings.", }
Contrastive learning has been the dominant approach to train state-of-the-art sentence embeddings. Previous studies have typically learned sentence embeddings either through the use of human-annotated natural language inference (NLI) data or via large-scale unlabeled sentences in an unsupervised manner. However, even in the case of unlabeled data, their acquisition presents challenges in certain domains due to copyright restrictions, data distribution issues, and messy formats, among other factors. To address these issues, we present SynCSE, a contrastive learning framework that trains sentence embeddings with synthetic data. Specifically, we explore utilizing large language models to synthesize the required data samples for contrastive learning, including (1) producing positive and negative annotations given unlabeled sentences (SynCSE-partial), and (2) generating sentences along with their corresponding annotations from scratch (SynCSE-scratch). Notably, SynCSE-scratch constitutes the first contrastive learning method to learn sentence embeddings from scratch without manually collecting any data sample. Experimental results on sentence similarity and reranking tasks indicate that both SynCSE-partial and SynCSE-scratch greatly outperform unsupervised baselines, and SynCSE-partial even achieves comparable performance to the supervised models in most settings.
[ "Zhang, Junlei", "Lan, Zhenzhong", "He, Junxian" ]
Contrastive Learning of Sentence Embeddings from Scratch
emnlp-main.238
2305.15077
[ "https://github.com/sjtu-lit/syncse" ]
https://huggingface.co/papers/2305.15077
0
0
0
3
[ "hkust-nlp/SynCSE-scratch-RoBERTa-large", "hkust-nlp/SynCSE-partial-RoBERTa-base", "hkust-nlp/SynCSE-scratch-RoBERTa-base", "hkust-nlp/SynCSE-partial-RoBERTa-large" ]
[ "hkust-nlp/SynCSE-partial-NLI", "hkust-nlp/SynCSE-scratch-NLI" ]
[]
1
Poster
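Once the LLM has synthesized (anchor, positive, hard-negative) triples as in the SynCSE record above, the contrastive step itself is standard. The sketch below is a SimCSE-style InfoNCE loss over in-batch negatives plus the synthetic hard negative; tensor contents are random placeholders for real sentence embeddings, and the objective is a generic reading of the abstract, not the released training code.

```python
# Sketch: InfoNCE over synthetic contrastive triples.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, hard_negative, temperature=0.05):
    """All inputs: (batch, dim) sentence embeddings."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(hard_negative, dim=-1)
    pos_sim = a @ p.T / temperature                         # (batch, batch): in-batch negatives
    neg_sim = (a * n).sum(-1, keepdim=True) / temperature   # (batch, 1): hard negatives
    logits = torch.cat([pos_sim, neg_sim], dim=1)
    labels = torch.arange(a.size(0))                        # diagonal entries are the positives
    return F.cross_entropy(logits, labels)

batch, dim = 8, 768
loss = info_nce(torch.randn(batch, dim), torch.randn(batch, dim), torch.randn(batch, dim))
print(loss.item())
```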
https://aclanthology.org/2023.emnlp-main.239.bib
https://aclanthology.org/2023.emnlp-main.239/
@inproceedings{sandoval-etal-2023-rose, title = "A Rose by Any Other Name would not Smell as Sweet: Social Bias in Names Mistranslation", author = "Sandoval, Sandra and Zhao, Jieyu and Carpuat, Marine and Daum{\'e} III, Hal", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.239", doi = "10.18653/v1/2023.emnlp-main.239", pages = "3933--3945", abstract = "We ask the question: Are there widespread disparities in machine translations of names across race/ethnicity, and gender? We hypothesize that the translation quality of names and surrounding context will be lower for names associated with US racial and ethnic minorities due to these systems{'} tendencies to standardize language to predominant language patterns. We develop a dataset of names that are strongly demographically aligned and propose a translation evaluation procedure based on round-trip translation. We analyze the effect of name demographics on translation quality using generalized linear mixed effects models and find that the ability of translation systems to correctly translate female-associated names is significantly lower than male-associated names. This effect is particularly pronounced for female-associated names that are also associated with racial (Black) and ethnic (Hispanic) minorities. This disparity in translation quality between social groups for something as personal as someone{'}s name has significant implications for people{'}s professional, personal, and cultural identities, self-worth and ease of communication. Our findings suggest that more MT research is needed to improve the translation of names and to provide high-quality service for users regardless of gender, race, and ethnicity.", }
We ask the question: Are there widespread disparities in machine translations of names across race/ethnicity, and gender? We hypothesize that the translation quality of names and surrounding context will be lower for names associated with US racial and ethnic minorities due to these systems{'} tendencies to standardize language to predominant language patterns. We develop a dataset of names that are strongly demographically aligned and propose a translation evaluation procedure based on round-trip translation. We analyze the effect of name demographics on translation quality using generalized linear mixed effects models and find that the ability of translation systems to correctly translate female-associated names is significantly lower than male-associated names. This effect is particularly pronounced for female-associated names that are also associated with racial (Black) and ethnic (Hispanic) minorities. This disparity in translation quality between social groups for something as personal as someone{'}s name has significant implications for people{'}s professional, personal, and cultural identities, self-worth and ease of communication. Our findings suggest that more MT research is needed to improve the translation of names and to provide high-quality service for users regardless of gender, race, and ethnicity.
[ "S", "oval, S", "ra", "Zhao, Jieyu", "Carpuat, Marine", "Daum{\\'e} III, Hal" ]
A Rose by Any Other Name would not Smell as Sweet: Social Bias in Names Mistranslation
emnlp-main.239
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
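The record above evaluates name translation via round-trip translation. The skeleton below illustrates that protocol with a placeholder `translate` function (to be backed by any MT system) and a deliberately simple success criterion, whether the name survives the round trip; the paper's actual evaluation is richer, scoring the quality of the name and its surrounding context.

```python
# Sketch of round-trip evaluation for name translation.
def translate(text: str, src: str, tgt: str) -> str:
    raise NotImplementedError("plug in an MT system here")

def round_trip_name_preserved(sentence: str, name: str, pivot: str) -> bool:
    forward = translate(sentence, src="en", tgt=pivot)   # EN -> pivot language
    back = translate(forward, src=pivot, tgt="en")       # pivot language -> EN
    return name.lower() in back.lower()

# Usage, once `translate` is implemented:
# ok = round_trip_name_preserved("Latoya presented the quarterly report.",
#                                name="Latoya", pivot="es")
```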
https://aclanthology.org/2023.emnlp-main.240.bib
https://aclanthology.org/2023.emnlp-main.240/
@inproceedings{phang-etal-2023-investigating, title = "Investigating Efficiently Extending Transformers for Long Input Summarization", author = "Phang, Jason and Zhao, Yao and Liu, Peter", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.240", doi = "10.18653/v1/2023.emnlp-main.240", pages = "3946--3961", abstract = "While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs still poses a significant challenge. One such task is long input summarization, where inputs are longer than the maximum input context of most models. Through an extensive set of experiments, we investigate what model architectural changes and pretraining paradigms most efficiently adapt a pretrained Transformer for long input summarization. We find that a staggered, block-local Transformer with global encoder tokens strikes a good balance of performance and efficiency, and that an additional pretraining phase on long sequences meaningfully improves downstream summarization performance. Based on our findings, we introduce PEGASUS-X, an extension of the PEGASUS model with additional long input pretraining to handle inputs of up to 16K tokens, which achieves strong performance on long input summarization tasks comparable with much larger models.", }
While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs still poses a significant challenge. One such task is long input summarization, where inputs are longer than the maximum input context of most models. Through an extensive set of experiments, we investigate what model architectural changes and pretraining paradigms most efficiently adapt a pretrained Transformer for long input summarization. We find that a staggered, block-local Transformer with global encoder tokens strikes a good balance of performance and efficiency, and that an additional pretraining phase on long sequences meaningfully improves downstream summarization performance. Based on our findings, we introduce PEGASUS-X, an extension of the PEGASUS model with additional long input pretraining to handle inputs of up to 16K tokens, which achieves strong performance on long input summarization tasks comparable with much larger models.
[ "Phang, Jason", "Zhao, Yao", "Liu, Peter" ]
Investigating Efficiently Extending Transformers for Long Input Summarization
emnlp-main.240
2208.04347
[ "https://github.com/google-research/pegasus" ]
https://huggingface.co/papers/2208.04347
2
0
0
3
[ "UNIST-Eunchan/Research-Paper-Summarization-Pegasus-x-ArXiv" ]
[]
[ "micknikolic/pdf-abstract-summarizer", "UNIST-Eunchan/Paper-Abstract-Generator" ]
1
Oral
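The PEGASUS-X record above credits its efficiency to a staggered, block-local encoder with global tokens. The sketch below builds the corresponding attention mask: local tokens attend within fixed-size blocks, global tokens attend everywhere, and an offset stands in for the staggering between layers. This is an assumed reading of the architecture for illustration, not the released implementation.

```python
# Sketch: block-local attention mask with global tokens.
import torch

def block_local_mask(seq_len, block, n_global, offset=0):
    """Boolean (seq_len + n_global, seq_len + n_global) mask; True = attention
    allowed. Global tokens occupy the first n_global positions."""
    total = seq_len + n_global
    mask = torch.zeros(total, total, dtype=torch.bool)
    mask[:n_global, :] = True          # global tokens see everything
    mask[:, :n_global] = True          # everyone sees the global tokens
    block_id = (torch.arange(seq_len) + offset) // block
    same = block_id.unsqueeze(0) == block_id.unsqueeze(1)
    mask[n_global:, n_global:] = same  # local tokens see only their own block
    return mask

even_layer = block_local_mask(seq_len=16, block=4, n_global=2, offset=0)
odd_layer = block_local_mask(seq_len=16, block=4, n_global=2, offset=2)  # staggered
print(even_layer.shape, even_layer.sum().item(), odd_layer.sum().item())
```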
https://aclanthology.org/2023.emnlp-main.241.bib
https://aclanthology.org/2023.emnlp-main.241/
@inproceedings{guo-etal-2023-cs2w, title = "{CS}2{W}: A {C}hinese Spoken-to-Written Style Conversion Dataset with Multiple Conversion Types", author = "Guo, Zishan and Yu, Linhao and Xu, Minghui and Jin, Renren and Xiong, Deyi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.241", doi = "10.18653/v1/2023.emnlp-main.241", pages = "3962--3979", abstract = "Spoken texts (either manual or automatic transcriptions from automatic speech recognition (ASR)) often contain disfluencies and grammatical errors, which pose tremendous challenges to downstream tasks. Converting spoken into written language is hence desirable. Unfortunately, the availability of datasets for this is limited. To address this issue, we present CS2W, a Chinese Spoken-to-Written style conversion dataset comprising 7,237 spoken sentences extracted from transcribed conversational texts. Four types of conversion problems are covered in CS2W: disfluencies, grammatical errors, ASR transcription errors, and colloquial words. Our annotation convention, data, and code are publicly available at https://github.com/guozishan/CS2W.", }
Spoken texts (either manual or automatic transcriptions from automatic speech recognition (ASR)) often contain disfluencies and grammatical errors, which pose tremendous challenges to downstream tasks. Converting spoken into written language is hence desirable. Unfortunately, the availability of datasets for this is limited. To address this issue, we present CS2W, a Chinese Spoken-to-Written style conversion dataset comprising 7,237 spoken sentences extracted from transcribed conversational texts. Four types of conversion problems are covered in CS2W: disfluencies, grammatical errors, ASR transcription errors, and colloquial words. Our annotation convention, data, and code are publicly available at https://github.com/guozishan/CS2W.
[ "Guo, Zishan", "Yu, Linhao", "Xu, Minghui", "Jin, Renren", "Xiong, Deyi" ]
CS2W: A Chinese Spoken-to-Written Style Conversion Dataset with Multiple Conversion Types
emnlp-main.241
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.242.bib
https://aclanthology.org/2023.emnlp-main.242/
@inproceedings{ansell-etal-2023-unifying, title = "Unifying Cross-Lingual Transfer across Scenarios of Resource Scarcity", author = "Ansell, Alan and Parovi{\'c}, Marinela and Vuli{\'c}, Ivan and Korhonen, Anna and Ponti, Edoardo", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.242", doi = "10.18653/v1/2023.emnlp-main.242", pages = "3980--3995", abstract = "The scarcity of data in many of the world{'}s languages necessitates the transfer of knowledge from other, resource-rich languages. However, the level of scarcity varies significantly across multiple dimensions, including: i) the amount of task-specific data available in the source and target languages; ii) the amount of monolingual and parallel data available for both languages; and iii) the extent to which they are supported by pretrained multilingual and translation models. Prior work has largely treated these dimensions and the various techniques for dealing with them separately; in this paper, we offer a more integrated view by exploring how to deploy the arsenal of cross-lingual transfer tools across a range of scenarios, especially the most challenging, low-resource ones. To this end, we run experiments on the AmericasNLI and NusaX benchmarks over 20 languages, simulating a range of few-shot settings. The best configuration in our experiments employed parameter-efficient language and task adaptation of massively multilingual Transformers, trained simultaneously on source language data and both machine-translated and natural data for multiple target languages. In addition, we show that pre-trained translation models can be easily adapted to unseen languages, thus extending the range of our hybrid technique and translation-based transfer more broadly. Beyond new insights into the mechanisms of cross-lingual transfer, we hope our work will provide practitioners with a toolbox to integrate multiple techniques for different real-world scenarios. Our code is available at https://github.com/parovicm/unified-xlt.", }
The scarcity of data in many of the world{'}s languages necessitates the transfer of knowledge from other, resource-rich languages. However, the level of scarcity varies significantly across multiple dimensions, including: i) the amount of task-specific data available in the source and target languages; ii) the amount of monolingual and parallel data available for both languages; and iii) the extent to which they are supported by pretrained multilingual and translation models. Prior work has largely treated these dimensions and the various techniques for dealing with them separately; in this paper, we offer a more integrated view by exploring how to deploy the arsenal of cross-lingual transfer tools across a range of scenarios, especially the most challenging, low-resource ones. To this end, we run experiments on the AmericasNLI and NusaX benchmarks over 20 languages, simulating a range of few-shot settings. The best configuration in our experiments employed parameter-efficient language and task adaptation of massively multilingual Transformers, trained simultaneously on source language data and both machine-translated and natural data for multiple target languages. In addition, we show that pre-trained translation models can be easily adapted to unseen languages, thus extending the range of our hybrid technique and translation-based transfer more broadly. Beyond new insights into the mechanisms of cross-lingual transfer, we hope our work will provide practitioners with a toolbox to integrate multiple techniques for different real-world scenarios. Our code is available at https://github.com/parovicm/unified-xlt.
[ "Ansell, Alan", "Parovi{\\'c}, Marinela", "Vuli{\\'c}, Ivan", "Korhonen, Anna", "Ponti, Edoardo" ]
Unifying Cross-Lingual Transfer across Scenarios of Resource Scarcity
emnlp-main.242
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.243.bib
https://aclanthology.org/2023.emnlp-main.243/
@inproceedings{attanasio-etal-2023-tale, title = "A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation", author = "Attanasio, Giuseppe and Plaza del Arco, Flor Miriam and Nozza, Debora and Lauscher, Anne", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.243", doi = "10.18653/v1/2023.emnlp-main.243", pages = "3996--4014", abstract = "Recent instruction fine-tuned models can solve multiple NLP tasks when prompted to do so, with machine translation (MT) being a prominent use case. However, current research often focuses on standard performance benchmarks, leaving compelling fairness and ethical considerations behind. In MT, this might lead to misgendered translations, resulting, among other harms, in the perpetuation of stereotypes and prejudices. In this work, we address this gap by investigating whether and to what extent such models exhibit gender bias in machine translation and how we can mitigate it. Concretely, we compute established gender bias metrics on the WinoMT corpus from English to German and Spanish. We discover that IFT models default to male-inflected translations, even disregarding female occupational stereotypes. Next, using interpretability methods, we unveil that models systematically overlook the pronoun indicating the gender of a target occupation in misgendered translations. Finally, based on this finding, we propose an easy-to-implement and effective bias mitigation solution based on few-shot learning that leads to significantly fairer translations.", }
Recent instruction fine-tuned models can solve multiple NLP tasks when prompted to do so, with machine translation (MT) being a prominent use case. However, current research often focuses on standard performance benchmarks, leaving compelling fairness and ethical considerations behind. In MT, this might lead to misgendered translations, resulting, among other harms, in the perpetuation of stereotypes and prejudices. In this work, we address this gap by investigating whether and to what extent such models exhibit gender bias in machine translation and how we can mitigate it. Concretely, we compute established gender bias metrics on the WinoMT corpus from English to German and Spanish. We discover that IFT models default to male-inflected translations, even disregarding female occupational stereotypes. Next, using interpretability methods, we unveil that models systematically overlook the pronoun indicating the gender of a target occupation in misgendered translations. Finally, based on this finding, we propose an easy-to-implement and effective bias mitigation solution based on few-shot learning that leads to significantly fairer translations.
[ "Attanasio, Giuseppe", "Plaza del Arco, Flor Miriam", "Nozza, Debora", "Lauscher, Anne" ]
A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation
emnlp-main.243
2310.12127
[ "https://github.com/milanlproc/interpretability-mt-gender-bias" ]
https://huggingface.co/papers/2310.12127
1
1
0
4
[]
[ "MilaNLProc/a-tale-of-pronouns" ]
[]
1
Poster
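The record above mitigates misgendered translations with few-shot learning, steering the model toward the disambiguating pronoun. Below is a toy sketch of assembling such a prompt for English-to-German; the demonstration pairs are invented for illustration and are not the paper's actual exemplars.

```python
# Sketch: a few-shot prompt whose examples resolve occupation gender from the pronoun.
FEW_SHOT = [
    ("The developer argued with the designer because she liked the layout.",
     "Die Entwicklerin stritt mit dem Designer, weil sie das Layout mochte."),
    ("The mechanic greeted the teacher because he fixed her car.",
     "Der Mechaniker begrüßte die Lehrerin, weil er ihr Auto repariert hatte."),
]

def build_prompt(source: str) -> str:
    lines = ["Translate from English to German. Mind the pronouns."]
    for en, de in FEW_SHOT:
        lines.append(f"English: {en}\nGerman: {de}")
    lines.append(f"English: {source}\nGerman:")
    return "\n\n".join(lines)

print(build_prompt("The nurse scolded the CEO because she was late."))
```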
https://aclanthology.org/2023.emnlp-main.244.bib
https://aclanthology.org/2023.emnlp-main.244/
@inproceedings{jiang-etal-2023-disco, title = "{D}is{C}o: Distilled Student Models Co-training for Semi-supervised Text Mining", author = "Jiang, Weifeng and Mao, Qianren and Lin, Chenghua and Li, Jianxin and Deng, Ting and Yang, Weiyi and Wang, Zheng", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.244", doi = "10.18653/v1/2023.emnlp-main.244", pages = "4015--4030", abstract = "Many text mining models are constructed by fine-tuning a large deep pre-trained language model (PLM) in downstream tasks. However, a significant challenge that arises nowadays is how to maintain performance when we use a lightweight model with limited labeled samples. We present DisCo, a semi-supervised learning (SSL) framework for fine-tuning a cohort of small student models generated from a large PLM using knowledge distillation. Our key insight is to share complementary knowledge among distilled student cohorts to promote their SSL effectiveness. DisCo employs a novel co-training technique to optimize a cohort of multiple small student models by promoting knowledge sharing among students under diversified views: model views produced by different distillation strategies and data views produced by various input augmentations. We evaluate DisCo on both semi-supervised text classification and extractive summarization tasks. Experimental results show that DisCo can produce student models that are $7.6\times$ smaller and $4.8 \times$ faster in inference than the baseline PLMs while maintaining comparable performance. We also show that DisCo-generated student models outperform the similar-sized models elaborately tuned in distinct tasks.", }
Many text mining models are constructed by fine-tuning a large deep pre-trained language model (PLM) in downstream tasks. However, a significant challenge that arises nowadays is how to maintain performance when we use a lightweight model with limited labeled samples. We present DisCo, a semi-supervised learning (SSL) framework for fine-tuning a cohort of small student models generated from a large PLM using knowledge distillation. Our key insight is to share complementary knowledge among distilled student cohorts to promote their SSL effectiveness. DisCo employs a novel co-training technique to optimize a cohort of multiple small student models by promoting knowledge sharing among students under diversified views: model views produced by different distillation strategies and data views produced by various input augmentations. We evaluate DisCo on both semi-supervised text classification and extractive summarization tasks. Experimental results show that DisCo can produce student models that are $7.6\times$ smaller and $4.8 \times$ faster in inference than the baseline PLMs while maintaining comparable performance. We also show that DisCo-generated student models outperform the similar-sized models elaborately tuned in distinct tasks.
[ "Jiang, Weifeng", "Mao, Qianren", "Lin, Chenghua", "Li, Jianxin", "Deng, Ting", "Yang, Weiyi", "Wang, Zheng" ]
DisCo: Distilled Student Models Co-training for Semi-supervised Text Mining
emnlp-main.244
2305.12074
[ "https://github.com/litesslhub/disco" ]
https://huggingface.co/papers/2305.12074
0
0
0
7
[]
[]
[]
1
Poster
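The DisCo record above co-trains a cohort of distilled students that exchange knowledge across data views. The sketch below shows one simplified co-training signal between two students, each pseudo-labeling an unlabeled batch for its peer; distillation from the teacher PLM, extra model views, and the full cohort are omitted, and the linear students are stand-ins.

```python
# Sketch: two students teach each other via pseudo-labels on different views.
import torch
import torch.nn.functional as F

def cotraining_step(student_a, student_b, view_a, view_b):
    """view_a / view_b: two augmentations of the same unlabeled batch."""
    with torch.no_grad():
        pseudo_a = student_a(view_a).argmax(-1)   # A's labels supervise B
        pseudo_b = student_b(view_b).argmax(-1)   # B's labels supervise A
    loss_b = F.cross_entropy(student_b(view_a), pseudo_a)
    loss_a = F.cross_entropy(student_a(view_b), pseudo_b)
    return loss_a + loss_b

student_a = torch.nn.Linear(32, 4)   # stand-ins for distilled PLM students
student_b = torch.nn.Linear(32, 4)
x = torch.randn(8, 32)
loss = cotraining_step(student_a, student_b, x + 0.1 * torch.randn_like(x), x)
print(loss.item())
```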
https://aclanthology.org/2023.emnlp-main.245.bib
https://aclanthology.org/2023.emnlp-main.245/
@inproceedings{yin-etal-2023-dynosaur, title = "Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation", author = "Yin, Da and Liu, Xiao and Yin, Fan and Zhong, Ming and Bansal, Hritik and Han, Jiawei and Chang, Kai-Wei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.245", doi = "10.18653/v1/2023.emnlp-main.245", pages = "4031--4047", abstract = "Instruction tuning has emerged as a way to enhance the capabilities of large language models (LLMs) to comprehend instructions and generate appropriate responses. Existing methods either manually annotate or employ LLMs (e.g., the GPT series) to generate data for instruction tuning. However, they often overlook associating instructions with existing annotated datasets. In this paper, we propose Dynosaur, a dynamic growth paradigm for the automatic curation of instruction-tuning data. Based on the metadata of existing datasets, we use LLMs to automatically construct instruction-tuning data by identifying relevant data fields and generating appropriate instructions. By leveraging the existing annotated datasets, Dynosaur offers several advantages: 1) it reduces the API cost for generating instructions (e.g., it costs less than {\$}12 USD by calling GPT-3.5-turbo for generating 800K instruction tuning samples); 2) it provides high-quality data for instruction tuning (e.g., it performs better than Alpaca and Flan on Super-NI and Longform with comparable data sizes); and 3) it supports the continuous improvement of models by generating instruction-tuning data when a new annotated dataset becomes available. We further investigate a continual learning scheme for learning with the ever-growing instruction-tuning dataset, and demonstrate that replaying tasks with diverse instruction embeddings not only helps mitigate forgetting issues but generalizes to unseen tasks better. Code and data are available at https://github.com/WadeYin9712/Dynosaur.", }
Instruction tuning has emerged as a way to enhance the capabilities of large language models (LLMs) to comprehend instructions and generate appropriate responses. Existing methods either manually annotate or employ LLMs (e.g., the GPT series) to generate data for instruction tuning. However, they often overlook associating instructions with existing annotated datasets. In this paper, we propose Dynosaur, a dynamic growth paradigm for the automatic curation of instruction-tuning data. Based on the metadata of existing datasets, we use LLMs to automatically construct instruction-tuning data by identifying relevant data fields and generating appropriate instructions. By leveraging the existing annotated datasets, Dynosaur offers several advantages: 1) it reduces the API cost for generating instructions (e.g., it costs less than {\$}12 USD by calling GPT-3.5-turbo for generating 800K instruction tuning samples); 2) it provides high-quality data for instruction tuning (e.g., it performs better than Alpaca and Flan on Super-NI and Longform with comparable data sizes); and 3) it supports the continuous improvement of models by generating instruction-tuning data when a new annotated dataset becomes available. We further investigate a continual learning scheme for learning with the ever-growing instruction-tuning dataset, and demonstrate that replaying tasks with diverse instruction embeddings not only helps mitigate forgetting issues but generalizes to unseen tasks better. Code and data are available at https://github.com/WadeYin9712/Dynosaur.
[ "Yin, Da", "Liu, Xiao", "Yin, Fan", "Zhong, Ming", "Bansal, Hritik", "Han, Jiawei", "Chang, Kai-Wei" ]
Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation
emnlp-main.245
2305.14327
[ "https://github.com/wadeyin9712/dynosaur" ]
https://huggingface.co/papers/2305.14327
2
0
0
7
[]
[]
[]
1
Poster
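The Dynosaur record above converts existing annotated datasets into instruction-tuning data via their metadata. The sketch below shows the shape of that conversion; in the paper an LLM both writes the instruction and picks the relevant fields, whereas here both are hard-coded for illustration.

```python
# Sketch: turn an annotated dataset row into an instruction-tuning triple.
def to_instruction_data(record, input_field, output_field, instruction):
    return {
        "instruction": instruction,
        "input": record[input_field],
        "output": record[output_field],
    }

# e.g., a sentiment-classification row becomes an instruction-following sample:
row = {"text": "The film was a delight.", "label": "positive"}
sample = to_instruction_data(
    row, "text", "label",
    instruction="Classify the sentiment of the given review as positive or negative.",
)
print(sample)
```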
https://aclanthology.org/2023.emnlp-main.246.bib
https://aclanthology.org/2023.emnlp-main.246/
@inproceedings{wang-etal-2023-steps, title = "Are All Steps Equally Important? Benchmarking Essentiality Detection in Event Processes", author = "Wang, Haoyu and Zhang, Hongming and Wang, Yueguan and Deng, Yuqian and Chen, Muhao and Roth, Dan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.246", doi = "10.18653/v1/2023.emnlp-main.246", pages = "4048--4056", abstract = "Natural language often describes events in different granularities, such that more coarse-grained (goal) events can often be decomposed into fine-grained sequences of (step) events. A critical but overlooked challenge in understanding an event process lies in the fact that the step events are not equally important to the central goal. In this paper, we seek to fill this gap by studying how well current models can understand the essentiality of different step events towards a goal event. As discussed by cognitive studies, such an ability enables the machine to mimic human{'}s commonsense reasoning about preconditions and necessary efforts of daily-life tasks. Our work contributes with a high-quality corpus of (goal, step) pairs from a community guideline website WikiHow, where the steps are manually annotated with their essentiality w.r.t. the goal. The high IAA indicates that humans have a consistent understanding of the events. Despite evaluating various statistical and massive pre-trained NLU models, we observe that existing SOTA models all perform drastically behind humans, indicating the need for future investigation of this crucial yet challenging task.", }
Natural language often describes events in different granularities, such that more coarse-grained (goal) events can often be decomposed into fine-grained sequences of (step) events. A critical but overlooked challenge in understanding an event process lies in the fact that the step events are not equally important to the central goal. In this paper, we seek to fill this gap by studying how well current models can understand the essentiality of different step events towards a goal event. As discussed by cognitive studies, such an ability enables the machine to mimic human{'}s commonsense reasoning about preconditions and necessary efforts of daily-life tasks. Our work contributes with a high-quality corpus of (goal, step) pairs from a community guideline website WikiHow, where the steps are manually annotated with their essentiality w.r.t. the goal. The high IAA indicates that humans have a consistent understanding of the events. Despite evaluating various statistical and massive pre-trained NLU models, we observe that existing SOTA models all perform drastically behind humans, indicating the need for future investigation of this crucial yet challenging task.
[ "Wang, Haoyu", "Zhang, Hongming", "Wang, Yueguan", "Deng, Yuqian", "Chen, Muhao", "Roth, Dan" ]
Are All Steps Equally Important? Benchmarking Essentiality Detection in Event Processes
emnlp-main.246
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.247.bib
https://aclanthology.org/2023.emnlp-main.247/
@inproceedings{chen-etal-2023-language, title = "Language Model is Suitable for Correction of Handwritten Mathematical Expressions Recognition", author = "Chen, Zui and Han, Jiaqi and Yang, Chaofan and Zhou, Yi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.247", doi = "10.18653/v1/2023.emnlp-main.247", pages = "4057--4068", abstract = "Handwritten mathematical expression recognition (HMER) is a multidisciplinary task that generates LaTeX sequences from images. Existing approaches, employing tree decoders within attention-based encoder-decoder architectures, aim to capture the hierarchical tree structure, but are limited by CFGs and pre-generated triplet data, hindering expandability and neglecting visual ambiguity challenges. This article investigates the distinctive language characteristics of LaTeX mathematical expressions, revealing two key observations: 1) the presence of explicit structural symbols, and 2) the treatment of symbols, particularly letters, as minimal units with context-dependent semantics, representing variables or constants. Rooted in these properties, we propose that language models have the potential to synchronously and complementarily provide both structural and semantic information, making them suitable for correction of HMER. To validate our proposition, we propose an architecture called Recognize and Language Fusion Network (RLFN), which integrates recognition and language features to output corrected sequences while jointly optimizing with a string decoder recognition model. Experiments show that RLFN outperforms existing state-of-the-art methods on the CROHME 2014/2016/2019 datasets.", }
Handwritten mathematical expression recognition (HMER) is a multidisciplinary task that generates LaTeX sequences from images. Existing approaches, employing tree decoders within attention-based encoder-decoder architectures, aim to capture the hierarchical tree structure, but are limited by CFGs and pre-generated triplet data, hindering expandability and neglecting visual ambiguity challenges. This article investigates the distinctive language characteristics of LaTeX mathematical expressions, revealing two key observations: 1) the presence of explicit structural symbols, and 2) the treatment of symbols, particularly letters, as minimal units with context-dependent semantics, representing variables or constants. Rooted in these properties, we propose that language models have the potential to synchronously and complementarily provide both structural and semantic information, making them suitable for correction of HMER. To validate our proposition, we propose an architecture called Recognize and Language Fusion Network (RLFN), which integrates recognition and language features to output corrected sequences while jointly optimizing with a string decoder recognition model. Experiments show that RLFN outperforms existing state-of-the-art methods on the CROHME 2014/2016/2019 datasets.
[ "Chen, Zui", "Han, Jiaqi", "Yang, Chaofan", "Zhou, Yi" ]
Language Model is Suitable for Correction of Handwritten Mathematical Expressions Recognition
emnlp-main.247
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.248.bib
https://aclanthology.org/2023.emnlp-main.248/
@inproceedings{de-la-pena-sarracen-etal-2023-vicinal, title = "Vicinal Risk Minimization for Few-Shot Cross-lingual Transfer in Abusive Language Detection", author = "De la Pe{\~n}a Sarrac{\'e}n, Gretel and Rosso, Paolo and Litschko, Robert and Glava{\v{s}}, Goran and Ponzetto, Simone", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.248", doi = "10.18653/v1/2023.emnlp-main.248", pages = "4069--4085", abstract = "Cross-lingual transfer learning from high-resource to medium and low-resource languages has shown encouraging results. However, the scarcity of resources in target languages remains a challenge. In this work, we resort to data augmentation and continual pre-training for domain adaptation to improve cross-lingual abusive language detection. For data augmentation, we analyze two existing techniques based on vicinal risk minimization and propose MIXAG, a novel data augmentation method which interpolates pairs of instances based on the angle of their representations. Our experiments involve seven languages typologically distinct from English and three different domains. The results reveal that the data augmentation strategies can enhance few-shot cross-lingual abusive language detection. Specifically, we observe that, consistently across all target languages, MIXAG yields significant improvements in multidomain and multilingual environments. Finally, we show through an error analysis how domain adaptation can favour the class of abusive texts (reducing false negatives) but, at the same time, reduce the precision of the abusive language detection model.", }
Cross-lingual transfer learning from high-resource to medium and low-resource languages has shown encouraging results. However, the scarcity of resources in target languages remains a challenge. In this work, we resort to data augmentation and continual pre-training for domain adaptation to improve cross-lingual abusive language detection. For data augmentation, we analyze two existing techniques based on vicinal risk minimization and propose MIXAG, a novel data augmentation method which interpolates pairs of instances based on the angle of their representations. Our experiments involve seven languages typologically distinct from English and three different domains. The results reveal that the data augmentation strategies can enhance few-shot cross-lingual abusive language detection. Specifically, we observe that, consistently across all target languages, MIXAG yields significant improvements in multidomain and multilingual environments. Finally, we show through an error analysis how domain adaptation can favour the class of abusive texts (reducing false negatives) but, at the same time, reduce the precision of the abusive language detection model.
[ "De la Pe{\\~n}a Sarrac{\\'e}n, Gretel", "Rosso, Paolo", "Litschko, Robert", "Glava{\\v{s}}, Goran", "Ponzetto, Simone" ]
Vicinal Risk Minimization for Few-Shot Cross-lingual Transfer in Abusive Language Detection
emnlp-main.248
2311.02025
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
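The MIXAG record above interpolates instance pairs "based on the angle of their representations". The abstract does not give the formula, so the sketch below assumes one natural reading, spherical interpolation (slerp) along the arc between the two representation vectors; it illustrates the idea rather than the paper's exact method.

```python
# Sketch: angle-based interpolation between two instance representations.
import numpy as np

def slerp(u, v, t):
    """Interpolate between vectors u and v along the arc between them."""
    u_n, v_n = u / np.linalg.norm(u), v / np.linalg.norm(v)
    omega = np.arccos(np.clip(u_n @ v_n, -1.0, 1.0))  # angle between the pair
    if omega < 1e-6:                                   # (nearly) parallel: fall back to lerp
        return (1 - t) * u + t * v
    return (np.sin((1 - t) * omega) * u + np.sin(t * omega) * v) / np.sin(omega)

rng = np.random.default_rng(0)
a, b = rng.normal(size=768), rng.normal(size=768)      # two instance embeddings
augmented = slerp(a, b, t=0.3)                         # label would be mixed 0.7 / 0.3
print(augmented.shape)
```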
https://aclanthology.org/2023.emnlp-main.249.bib
https://aclanthology.org/2023.emnlp-main.249/
@inproceedings{jiang-etal-2023-superdialseg, title = "{S}uper{D}ialseg: A Large-scale Dataset for Supervised Dialogue Segmentation", author = "Jiang, Junfeng and Dong, Chengzhang and Kurohashi, Sadao and Aizawa, Akiko", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.249", doi = "10.18653/v1/2023.emnlp-main.249", pages = "4086--4101", abstract = "Dialogue segmentation is a crucial task for dialogue systems, allowing a better understanding of conversational texts. Despite recent progress in unsupervised dialogue segmentation methods, their performance is limited by the lack of explicit supervised signals for training. Furthermore, the precise definition of segmentation points in conversations still remains a challenging problem, increasing the difficulty of collecting manual annotations. In this paper, we provide a feasible definition of dialogue segmentation points with the help of document-grounded dialogues and release a large-scale supervised dataset called SuperDialseg, containing 9,478 dialogues based on two prevalent document-grounded dialogue corpora, which also inherits their useful dialogue-related annotations. Moreover, we provide a benchmark including 18 models across five categories for the dialogue segmentation task with several proper evaluation metrics. Empirical studies show that supervised learning is extremely effective on in-domain datasets and models trained on SuperDialseg can achieve good generalization ability on out-of-domain data. Additionally, we also conducted human verification on the test set and the Kappa score confirmed the quality of our automatically constructed dataset. We believe our work is an important step forward in the field of dialogue segmentation.", }
Dialogue segmentation is a crucial task for dialogue systems, allowing a better understanding of conversational texts. Despite recent progress in unsupervised dialogue segmentation methods, their performance is limited by the lack of explicit supervised signals for training. Furthermore, the precise definition of segmentation points in conversations still remains a challenging problem, increasing the difficulty of collecting manual annotations. In this paper, we provide a feasible definition of dialogue segmentation points with the help of document-grounded dialogues and release a large-scale supervised dataset called SuperDialseg, containing 9,478 dialogues based on two prevalent document-grounded dialogue corpora, which also inherits their useful dialogue-related annotations. Moreover, we provide a benchmark including 18 models across five categories for the dialogue segmentation task with several proper evaluation metrics. Empirical studies show that supervised learning is extremely effective on in-domain datasets and models trained on SuperDialseg can achieve good generalization ability on out-of-domain data. Additionally, we also conducted human verification on the test set and the Kappa score confirmed the quality of our automatically constructed dataset. We believe our work is an important step forward in the field of dialogue segmentation.
[ "Jiang, Junfeng", "Dong, Chengzhang", "Kurohashi, Sadao", "Aizawa, Akiko" ]
SuperDialseg: A Large-scale Dataset for Supervised Dialogue Segmentation
emnlp-main.249
2305.08371
[ "https://github.com/coldog2333/superdialseg" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
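The SuperDialseg record above enables a supervised formulation of dialogue segmentation. A minimal version of that formulation, sketched below, scores each utterance boundary as segment-break or not; the encoder is replaced by random utterance embeddings, and the benchmarked models are far richer than this linear scorer.

```python
# Sketch: dialogue segmentation as per-boundary binary classification.
import torch
import torch.nn as nn

class BoundaryScorer(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.clf = nn.Linear(2 * dim, 1)  # features of the two adjacent utterances

    def forward(self, utt_embs):
        """utt_embs: (n_utterances, dim) -> (n_utterances - 1,) break logits."""
        pairs = torch.cat([utt_embs[:-1], utt_embs[1:]], dim=-1)
        return self.clf(pairs).squeeze(-1)

scorer = BoundaryScorer()
logits = scorer(torch.randn(10, 128))           # 10 utterances -> 9 boundaries
breaks = (torch.sigmoid(logits) > 0.5).tolist()
print(breaks)
```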
https://aclanthology.org/2023.emnlp-main.250.bib
https://aclanthology.org/2023.emnlp-main.250/
@inproceedings{bai-etal-2023-atformer, title = "{ATF}ormer: A Learned Performance Model with Transfer Learning Across Devices for Deep Learning Tensor Programs", author = "Bai, Yang and Zhao, Wenqian and Yin, Shuo and Wang, Zixiao and Yu, Bei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.250", doi = "10.18653/v1/2023.emnlp-main.250", pages = "4102--4116", abstract = "The training and inference efficiency of ever-larger deep neural networks relies heavily on the performance of tensor operators on specific hardware platforms. Therefore, a compilation-based optimization flow with automatic tensor generation and parameter tuning is necessary for efficient model deployment. While compilation-based methods with performance models can provide dynamic and suitable code optimization, they suffer from a large design space to explore, rough measurement accuracy, and poor transferability among different hardware platforms. This paper presents ATFormer, a simple yet efficient design with attention-inspired modules to accurately predict the performance of optimized operators by capturing global and long-range dependencies within a complete scheduling space. Compared with state-of-the-art methods, ATFormer can predict the optimal implementation of tensor operators to reduce inference time with minimal effort on modern DNN benchmarks. Furthermore, ATFormer with pre-trained parameters can quickly adapt to different workloads and hardware via transfer learning.", }
The training and inference efficiency of ever-larger deep neural networks relies heavily on the performance of tensor operators on specific hardware platforms. Therefore, a compilation-based optimization flow with automatic tensor generation and parameter tuning is necessary for efficient model deployment. While compilation-based methods with performance models can provide dynamic and suitable code optimization, they suffer from a large design space to explore, rough measurement accuracy, and poor transferability among different hardware platforms. This paper presents ATFormer, a simple yet efficient design with attention-inspired modules to accurately predict the performance of optimized operators by capturing global and long-range dependencies within a complete scheduling space. Compared with state-of-the-art methods, ATFormer can predict the optimal implementation of tensor operators to reduce inference time with minimal effort on modern DNN benchmarks. Furthermore, ATFormer with pre-trained parameters can quickly adapt to different workloads and hardware via transfer learning.
[ "Bai, Yang", "Zhao, Wenqian", "Yin, Shuo", "Wang, Zixiao", "Yu, Bei" ]
ATFormer: A Learned Performance Model with Transfer Learning Across Devices for Deep Learning Tensor Programs
emnlp-main.250
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
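The ATFormer record above describes a learned performance model with attention over a schedule's feature sequence. The sketch below captures that shape, encode per-statement schedule features with self-attention and regress a cost score for ranking candidates; the feature extraction from real tensor programs is omitted and all dimensions are illustrative, not the paper's configuration.

```python
# Sketch: an attention-based cost model for ranking candidate schedules.
import torch
import torch.nn as nn

class PerfModel(nn.Module):
    def __init__(self, feat_dim=32, d_model=64):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, sched_feats):
        """sched_feats: (batch, n_stmts, feat_dim) -> (batch,) predicted cost."""
        h = self.encoder(self.proj(sched_feats))
        return self.head(h.mean(dim=1)).squeeze(-1)  # pool over statements

model = PerfModel()
print(model(torch.randn(4, 20, 32)))  # rank 4 candidate schedules by this score
```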
https://aclanthology.org/2023.emnlp-main.251.bib
https://aclanthology.org/2023.emnlp-main.251/
@inproceedings{overbay-etal-2023-mredditsum, title = "m{R}eddit{S}um: A Multimodal Abstractive Summarization Dataset of {R}eddit Threads with Images", author = "Overbay, Keighley and Ahn, Jaewoo and Pesaran zadeh, Fatemeh and Park, Joonsuk and Kim, Gunhee", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.251", doi = "10.18653/v1/2023.emnlp-main.251", pages = "4117--4132", abstract = "The growing number of multimodal online discussions necessitates automatic summarization to save time and reduce content overload. However, existing summarization datasets are not suitable for this purpose, as they either do not cover discussions, multiple modalities, or both. To this end, we present mRedditSum, the first multimodal discussion summarization dataset. It consists of 3,033 discussion threads where a post solicits advice regarding an issue described with an image and text, and respective comments express diverse opinions. We annotate each thread with a human-written summary that captures both the essential information from the text, as well as the details available only in the image. Experiments show that popular summarization models{---}GPT-3.5, BART, and T5{---}consistently improve in performance when visual information is incorporated. We also introduce a novel method, cluster-based multi-stage summarization, that outperforms existing baselines and serves as a competitive baseline for future work.", }
The growing number of multimodal online discussions necessitates automatic summarization to save time and reduce content overload. However, existing summarization datasets are not suitable for this purpose, as they either do not cover discussions, multiple modalities, or both. To this end, we present mRedditSum, the first multimodal discussion summarization dataset. It consists of 3,033 discussion threads where a post solicits advice regarding an issue described with an image and text, and respective comments express diverse opinions. We annotate each thread with a human-written summary that captures both the essential information from the text, as well as the details available only in the image. Experiments show that popular summarization models (GPT-3.5, BART, and T5) consistently improve in performance when visual information is incorporated. We also introduce a novel method, cluster-based multi-stage summarization, that outperforms existing baselines and serves as a competitive baseline for future work.
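The cluster-based multi-stage summarization method can be pictured as a short pipeline. The sketch below is an illustrative approximation, not the paper's code; embed() and summarize() are hypothetical stand-ins for a sentence encoder and a summarizer such as BART or T5:

```python
# Hedged sketch of cluster-based multi-stage summarization: group comments by
# opinion via embedding clustering, summarize each cluster, then fuse the
# cluster summaries with the original post into one final summary.
from sklearn.cluster import KMeans
import numpy as np

def embed(texts):                         # stand-in for a real sentence encoder
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 32))

def summarize(text):                      # stand-in for BART/T5/GPT-3.5
    return text[:60] + "..."

def multi_stage_summary(post, comments, k=3):
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(embed(comments))
    cluster_summaries = [
        summarize(" ".join(c for c, l in zip(comments, labels) if l == cluster))
        for cluster in range(k)
    ]
    return summarize(post + " " + " ".join(cluster_summaries))
```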
[ "Overbay, Keighley", "Ahn, Jaewoo", "Pesaran zadeh, Fatemeh", "Park, Joonsuk", "Kim, Gunhee" ]
mRedditSum: A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images
emnlp-main.251
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.252.bib
https://aclanthology.org/2023.emnlp-main.252/
@inproceedings{ding-etal-2023-sparse, title = "Sparse Low-rank Adaptation of Pre-trained Language Models", author = "Ding, Ning and Lv, Xingtai and Wang, Qiaosen and Chen, Yulin and Zhou, Bowen and Liu, Zhiyuan and Sun, Maosong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.252", doi = "10.18653/v1/2023.emnlp-main.252", pages = "4133--4145", abstract = "Fine-tuning pre-trained large language models in a parameter-efficient manner is widely studied for its effectiveness and efficiency. The popular method of low-rank adaptation (LoRA) offers a notable approach, hypothesizing that the adaptation process is intrinsically low-dimensional. Although LoRA has demonstrated commendable performance, it is implemented with a fixed and unalterable intrinsic rank that might not always be the ideal choice. Recognizing the need for more flexible adaptation, we extend the methodology of LoRA to an innovative approach we call sparse low-rank adaptation (SoRA) that enables dynamic adjustments to the intrinsic rank during the adaptation process. We achieve this through the incorporation of a gate unit optimized with proximal gradient method in the training stage, controlling the cardinality of rank under the sparsity of the gate. In the subsequent inference stage, we eliminate the parameter blocks corresponding to the zeroed-out ranks, to reduce each SoRA module back to a concise yet rank-optimal LoRA. Our approach strengthens the representation power of LoRA by initializing it with a higher rank, while efficiently taming a temporarily increased number of parameters via updating in a sparse way. We further introduce a sparsifying scheduler for SoRA, aiming to examine the impact of the number of non-zero parameters on the model{'}s memorization and generalization. Our experimental results demonstrate that SoRA can outperform other baselines even with 70{\%} retained parameters and 70{\%} training time.", }
Fine-tuning pre-trained large language models in a parameter-efficient manner is widely studied for its effectiveness and efficiency. The popular method of low-rank adaptation (LoRA) offers a notable approach, hypothesizing that the adaptation process is intrinsically low-dimensional. Although LoRA has demonstrated commendable performance, it is implemented with a fixed and unalterable intrinsic rank that might not always be the ideal choice. Recognizing the need for more flexible adaptation, we extend the methodology of LoRA to an innovative approach we call sparse low-rank adaptation (SoRA) that enables dynamic adjustment of the intrinsic rank during the adaptation process. We achieve this through the incorporation of a gate unit optimized with the proximal gradient method in the training stage, controlling the cardinality of the rank under the sparsity of the gate. In the subsequent inference stage, we eliminate the parameter blocks corresponding to the zeroed-out ranks to reduce each SoRA module back to a concise yet rank-optimal LoRA. Our approach strengthens the representation power of LoRA by initializing it with a higher rank, while efficiently taming a temporarily increased number of parameters via sparse updates. We further introduce a sparsifying scheduler for SoRA, aiming to examine the impact of the number of non-zero parameters on the model's memorization and generalization. Our experimental results demonstrate that SoRA can outperform other baselines even with 70% of the parameters retained and 70% of the training time.
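A minimal sketch of the SoRA mechanism as described: a LoRA-style adapter with a gate vector between the down- and up-projections, sparsified by the proximal (soft-thresholding) step of the L1 penalty. Dimensions and hyperparameters below are illustrative assumptions:

```python
# SoRA sketch: whole rank components can be zeroed out by shrinking the gate.
import torch

def soft_threshold(g, lam, lr):
    # proximal operator of the L1 penalty: shrinks gate entries toward zero
    return torch.sign(g) * torch.clamp(g.abs() - lr * lam, min=0.0)

d, r = 768, 16                     # hidden size and upper-bound rank (illustrative)
A = torch.randn(r, d) * 0.01       # down-projection
B = torch.zeros(d, r)              # up-projection
g = torch.ones(r)                  # gate over rank components

def sora_delta(x):                 # x: (batch, d); returns the adapter update
    return (x @ A.T * g) @ B.T

# after each gradient step on A, B, g, apply the proximal update to g:
g = soft_threshold(g, lam=0.1, lr=1e-3)
print(sora_delta(torch.randn(2, d)).shape, (g != 0).sum().item(), "ranks retained")
```

At inference time, the rank components whose gate entries are exactly zero can be dropped from A and B, which is what shrinks the module back to a compact LoRA.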
[ "Ding, Ning", "Lv, Xingtai", "Wang, Qiaosen", "Chen, Yulin", "Zhou, Bowen", "Liu, Zhiyuan", "Sun, Maosong" ]
Sparse Low-rank Adaptation of Pre-trained Language Models
emnlp-main.252
2311.11696
[ "https://github.com/tsinghuac3i/sora" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.253.bib
https://aclanthology.org/2023.emnlp-main.253/
@inproceedings{don-yehiya-etal-2023-human, title = "Human Learning by Model Feedback: The Dynamics of Iterative Prompting with Midjourney", author = "Don-Yehiya, Shachar and Choshen, Leshem and Abend, Omri", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.253", doi = "10.18653/v1/2023.emnlp-main.253", pages = "4146--4161", abstract = "Generating images with a Text-to-Image model often requires multiple trials, where human users iteratively update their prompt based on feedback, namely the output image. Taking inspiration from cognitive work on reference games and dialogue alignment, this paper analyzes the dynamics of the user prompts along such iterations. We compile a dataset of iterative interactions of human users with Midjourney. Our analysis then reveals that prompts predictably converge toward specific traits along these iterations. We further study whether this convergence is due to human users, realizing they missed important details, or due to adaptation to the model{'}s {``}preferences{''}, producing better images for a specific language style. We show initial evidence that both possibilities are at play. The possibility that users adapt to the model{'}s preference raises concerns about reusing user data for further training. The prompts may be biased towards the preferences of a specific model, rather than align with human intentions and natural manner of expression.", }
Generating images with a Text-to-Image model often requires multiple trials, where human users iteratively update their prompt based on feedback, namely the output image. Taking inspiration from cognitive work on reference games and dialogue alignment, this paper analyzes the dynamics of user prompts along such iterations. We compile a dataset of iterative interactions of human users with Midjourney. Our analysis then reveals that prompts predictably converge toward specific traits along these iterations. We further study whether this convergence is due to human users realizing they missed important details, or due to adaptation to the model's "preferences", producing better images for a specific language style. We show initial evidence that both possibilities are at play. The possibility that users adapt to the model's preferences raises concerns about reusing user data for further training: the prompts may be biased towards the preferences of a specific model rather than aligning with human intentions and natural modes of expression.
[ "Don-Yehiya, Shachar", "Choshen, Leshem", "Abend, Omri" ]
Human Learning by Model Feedback: The Dynamics of Iterative Prompting with Midjourney
emnlp-main.253
2311.12131
[ "https://github.com/shachardon/mid-journey-to-alignment" ]
https://huggingface.co/papers/2311.12131
1
0
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.254.bib
https://aclanthology.org/2023.emnlp-main.254/
@inproceedings{sedova-roth-2023-ulf, title = "{ULF}: Unsupervised Labeling Function Correction using Cross-Validation for Weak Supervision", author = "Sedova, Anastasiia and Roth, Benjamin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.254", doi = "10.18653/v1/2023.emnlp-main.254", pages = "4162--4176", abstract = "A cost-effective alternative to manual data labeling is weak supervision (WS), where data samples are automatically annotated using a predefined set of labeling functions (LFs), rule-based mechanisms that generate artificial labels for the associated classes. In this work, we investigate noise reduction techniques for WS based on the principle of k-fold cross-validation. We introduce a new algorithm ULF for Unsupervised Labeling Function correction, which denoises WS data by leveraging models trained on all but some LFs to identify and correct biases specific to the held-out LFs. Specifically, ULF refines the allocation of LFs to classes by re-estimating this assignment on highly reliable cross-validated samples. Evaluation on multiple datasets confirms ULF{'}s effectiveness in enhancing WS learning without the need for manual labeling.", }
A cost-effective alternative to manual data labeling is weak supervision (WS), where data samples are automatically annotated using a predefined set of labeling functions (LFs), rule-based mechanisms that generate artificial labels for the associated classes. In this work, we investigate noise reduction techniques for WS based on the principle of k-fold cross-validation. We introduce a new algorithm, ULF, for Unsupervised Labeling Function correction, which denoises WS data by leveraging models trained on all but some LFs to identify and correct biases specific to the held-out LFs. Specifically, ULF refines the allocation of LFs to classes by re-estimating this assignment on highly reliable cross-validated samples. Evaluation on multiple datasets confirms ULF's effectiveness in enhancing WS learning without the need for manual labeling.
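The cross-validation principle behind ULF can be sketched as follows. This is a schematic reading of the abstract rather than the authors' algorithm; train_and_predict() is a hypothetical stand-in for the model trained on weak labels, and the confidence threshold is an assumption:

```python
# Hold out some labeling functions (LFs), train on weak labels from the rest,
# and re-estimate each held-out LF's class from the model's predictions on
# highly confident samples.
import numpy as np

def ulf_round(Z, lf_to_class, train_and_predict, n_folds=5, conf=0.9):
    """Z: (n_samples, n_lfs) binary LF-match matrix; lf_to_class: int array."""
    n_lfs = Z.shape[1]
    folds = np.array_split(np.random.permutation(n_lfs), n_folds)
    new_assignment = lf_to_class.copy()
    for held_out in folds:
        keep = np.setdiff1d(np.arange(n_lfs), held_out)
        probs = train_and_predict(Z[:, keep], lf_to_class[keep])  # (n_samples, n_classes)
        confident = probs.max(axis=1) >= conf
        for lf in held_out:
            hits = confident & (Z[:, lf] == 1)
            if hits.any():        # reassign the LF to the class the model predicts
                new_assignment[lf] = np.bincount(probs[hits].argmax(axis=1)).argmax()
    return new_assignment
```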
[ "Sedova, Anastasiia", "Roth, Benjamin" ]
ULF: Unsupervised Labeling Function Correction using Cross-Validation for Weak Supervision
emnlp-main.254
2204.06863
[ "https://github.com/knodle/knodle" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.255.bib
https://aclanthology.org/2023.emnlp-main.255/
@inproceedings{qi-etal-2023-art, title = "The Art of {SOCRATIC} {QUESTIONING}: Recursive Thinking with Large Language Models", author = "Qi, Jingyuan and Xu, Zhiyang and Shen, Ying and Liu, Minqian and Jin, Di and Wang, Qifan and Huang, Lifu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.255", doi = "10.18653/v1/2023.emnlp-main.255", pages = "4177--4199", abstract = "Chain-of-Thought (CoT) prompting enables large language models to solve complex reasoning problems by generating intermediate steps. However, confined by its inherent single-pass and sequential generation process, CoT heavily relies on the initial decisions, causing errors in early steps to accumulate and impact the final answers. In contrast, humans adopt recursive thinking when tackling complex reasoning problems, i.e. iteratively breaking the original problem into approachable sub-problems and aggregating their answers to resolve the original one. Inspired by the human cognitive process, we propose SOCRATIC QUESTIONING, a divide-and-conquer style algorithm that mimics the recursive thinking process. Specifically, SOCRATIC QUESTIONING leverages large language models to raise and answer sub-questions until collecting enough information to tackle the original question. Unlike CoT, SOCRATIC QUESTIONING explicitly navigates the thinking space, stimulates effective recursive thinking, and is more robust towards errors in the thinking process. Extensive experiments on several complex reasoning tasks, including MMLU, MATH, LogiQA, and visual question-answering demonstrate significant performance improvements over the state-of-the-art prompting methods, such as CoT, and Tree-of-Thought. The qualitative analysis clearly shows that the intermediate reasoning steps elicited by SOCRATIC QUESTIONING are similar to humans{'} recursively thinking process of complex reasoning problems.", }
Chain-of-Thought (CoT) prompting enables large language models to solve complex reasoning problems by generating intermediate steps. However, confined by its inherently single-pass and sequential generation process, CoT relies heavily on initial decisions, causing errors in early steps to accumulate and impact the final answers. In contrast, humans adopt recursive thinking when tackling complex reasoning problems, i.e., iteratively breaking the original problem into approachable sub-problems and aggregating their answers to resolve the original one. Inspired by the human cognitive process, we propose SOCRATIC QUESTIONING, a divide-and-conquer style algorithm that mimics the recursive thinking process. Specifically, SOCRATIC QUESTIONING leverages large language models to raise and answer sub-questions until enough information is collected to tackle the original question. Unlike CoT, SOCRATIC QUESTIONING explicitly navigates the thinking space, stimulates effective recursive thinking, and is more robust to errors in the thinking process. Extensive experiments on several complex reasoning tasks, including MMLU, MATH, LogiQA, and visual question answering, demonstrate significant performance improvements over state-of-the-art prompting methods such as CoT and Tree-of-Thought. The qualitative analysis clearly shows that the intermediate reasoning steps elicited by SOCRATIC QUESTIONING are similar to humans' recursive thinking process on complex reasoning problems.
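A compact recursive sketch of the Socratic-questioning loop described above (schematic, not the authors' exact algorithm; llm() is a hypothetical completion wrapper you would back with a real model client):

```python
# Divide-and-conquer prompting: propose sub-questions, answer them
# (recursively if depth allows), then aggregate into a final answer.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real LLM client here")

def socratic(question: str, depth: int = 0, max_depth: int = 2) -> str:
    if depth >= max_depth:
        return llm(f"Answer directly: {question}")
    subs = llm(f"List the sub-questions needed to answer: {question}").splitlines()
    facts = [f"{q} -> {socratic(q, depth + 1, max_depth)}" for q in subs if q.strip()]
    return llm("Given these answered sub-questions:\n"
               + "\n".join(facts)
               + f"\nAnswer the original question: {question}")
```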
[ "Qi, Jingyuan", "Xu, Zhiyang", "Shen, Ying", "Liu, Minqian", "Jin, Di", "Wang, Qifan", "Huang, Lifu" ]
The Art of SOCRATIC QUESTIONING: Recursive Thinking with Large Language Models
emnlp-main.255
2305.14999
[ "https://github.com/vt-nlp/socratic-questioning" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.256.bib
https://aclanthology.org/2023.emnlp-main.256/
@inproceedings{liu-etal-2023-ideology, title = "Ideology Takes Multiple Looks: A High-Quality Dataset for Multifaceted Ideology Detection", author = "Liu, Songtao and Luo, Ziling and Xu, Minghua and Wei, Lixiao and Wei, Ziyao and Yu, Han and Xiang, Wei and Wang, Bang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.256", doi = "10.18653/v1/2023.emnlp-main.256", pages = "4200--4213", abstract = "Ideology detection (ID) is important for gaining insights about peoples{'} opinions and stances on our world and society, which can find many applications in politics, economics and social sciences. It is not uncommon that a piece of text can contain descriptions of various issues. It is also widely accepted that a person can take different ideological stances in different facets. However, existing datasets for the ID task only label a text as ideologically left- or right-leaning as a whole, regardless whether the text containing one or more different issues. Moreover, most prior work annotates texts from data resources with known ideological bias through distant supervision approaches, which may result in many false labels. With some theoretical help from social sciences, this work first designs an ideological schema containing five domains and twelve facets for a new multifaceted ideology detection (MID) task to provide a more complete and delicate description of ideology. We construct a MITweet dataset for the MID task, which contains 12,594 English Twitter posts, each annotated with a Relevance and an Ideology label for all twelve facets. We also design and test a few of strong baselines for the MID task under in-topic and cross-topic settings, which can serve as benchmarks for further research.", }
Ideology detection (ID) is important for gaining insights into people's opinions and stances on our world and society, and it finds many applications in politics, economics, and the social sciences. It is not uncommon for a piece of text to contain descriptions of various issues. It is also widely accepted that a person can take different ideological stances on different facets. However, existing datasets for the ID task only label a text as ideologically left- or right-leaning as a whole, regardless of whether the text covers one or more issues. Moreover, most prior work annotates texts from data sources with known ideological bias through distant supervision, which may result in many false labels. With theoretical grounding from the social sciences, this work first designs an ideological schema containing five domains and twelve facets for a new multifaceted ideology detection (MID) task, providing a more complete and fine-grained description of ideology. We construct MITweet, a dataset for the MID task containing 12,594 English Twitter posts, each annotated with a Relevance and an Ideology label for all twelve facets. We also design and test several strong baselines for the MID task under in-topic and cross-topic settings, which can serve as benchmarks for further research.
[ "Liu, Songtao", "Luo, Ziling", "Xu, Minghua", "Wei, Lixiao", "Wei, Ziyao", "Yu, Han", "Xiang, Wei", "Wang, Bang" ]
Ideology Takes Multiple Looks: A High-Quality Dataset for Multifaceted Ideology Detection
emnlp-main.256
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.257.bib
https://aclanthology.org/2023.emnlp-main.257/
@inproceedings{colombo-etal-2023-transductive, title = "Transductive Learning for Textual Few-Shot Classification in {API}-based Embedding Models", author = "Colombo, Pierre and Pellegrain, Victor and Boudiaf, Malik and Tami, Myriam and Storchan, Victor and Ayed, Ismail and Piantanida, Pablo", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.257", doi = "10.18653/v1/2023.emnlp-main.257", pages = "4214--4231", abstract = "Proprietary and closed APIs are becoming increasingly common to process natural language, and are impacting the practical applications of natural language processing, including few-shot classification. Few-shot classification involves training a model to perform a new classification task with a handful of labeled data. This paper presents three contributions. First, we introduce a scenario where the embedding of a pre-trained model is served through a gated API with compute-cost and data-privacy constraints. Second, we propose a transductive inference, a learning paradigm that has been overlooked by the NLP community. Transductive inference, unlike traditional inductive learning, leverages the statistics of unlabelled data. We also introduce a new parameter-free transductive regularizer based on the Fisher-Rao loss, which can be used on top of the gated API embeddings. This method fully utilizes unlabelled data, does not share any label with the third-party API provider and could serve as a baseline for future research. Third, we propose an improved experimental setting and compile a benchmark of eight datasets involving multiclass classification in four different languages, with up to 151 classes. We evaluate our methods using eight backbone models, along with an episodic evaluation over 1,000 episodes, which demonstrate the superiority of transductive inference over the standard inductive setting.", }
Proprietary and closed APIs are becoming increasingly common for processing natural language, and they are impacting the practical applications of natural language processing, including few-shot classification. Few-shot classification involves training a model to perform a new classification task with a handful of labeled data. This paper presents three contributions. First, we introduce a scenario where the embedding of a pre-trained model is served through a gated API with compute-cost and data-privacy constraints. Second, we propose transductive inference, a learning paradigm that has been overlooked by the NLP community. Transductive inference, unlike traditional inductive learning, leverages the statistics of unlabelled data. We also introduce a new parameter-free transductive regularizer based on the Fisher-Rao loss, which can be used on top of the gated API embeddings. This method fully utilizes unlabelled data, does not share any labels with the third-party API provider, and could serve as a baseline for future research. Third, we propose an improved experimental setting and compile a benchmark of eight datasets involving multiclass classification in four different languages, with up to 151 classes. We evaluate our methods using eight backbone models, along with an episodic evaluation over 1,000 episodes, demonstrating the superiority of transductive inference over the standard inductive setting.
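For intuition, the Fisher-Rao distance between two categorical distributions p and q is 2·arccos(Σ_i √(p_i·q_i)). The sketch below computes that building block; how the paper turns it into its parameter-free transductive regularizer is not spelled out here, so the usage shown is purely an assumption:

```python
# Fisher-Rao distance between categorical distributions, plus one plausible
# (assumed, not the paper's) transductive use over a batch of query predictions.
import torch

def fisher_rao(p, q, eps=1e-8):
    bc = (p.clamp_min(eps).sqrt() * q.clamp_min(eps).sqrt()).sum(-1)
    return 2.0 * torch.arccos(bc.clamp(-1.0, 1.0))

probs = torch.softmax(torch.randn(16, 5), dim=-1)       # queries x classes
marginal = probs.mean(dim=0, keepdim=True)               # batch class marginal
reg = fisher_rao(probs, marginal).mean()                 # scalar regularizer
print(float(reg))
```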
[ "Colombo, Pierre", "Pellegrain, Victor", "Boudiaf, Malik", "Tami, Myriam", "Storchan, Victor", "Ayed, Ismail", "Piantanida, Pablo" ]
Transductive Learning for Textual Few-Shot Classification in API-based Embedding Models
emnlp-main.257
2310.13998
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.258.bib
https://aclanthology.org/2023.emnlp-main.258/
@inproceedings{ahuja-etal-2023-mega, title = "{MEGA}: Multilingual Evaluation of Generative {AI}", author = "Ahuja, Kabir and Diddee, Harshita and Hada, Rishav and Ochieng, Millicent and Ramesh, Krithika and Jain, Prachi and Nambi, Akshay and Ganu, Tanuja and Segal, Sameer and Ahmed, Mohamed and Bali, Kalika and Sitaram, Sunayana", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.258", doi = "10.18653/v1/2023.emnlp-main.258", pages = "4232--4267", abstract = "Generative AI models have shown impressive performance on many Natural Language Processing tasks such as language understanding, reasoning, and language generation. An important question being asked by the AI community today is about the capabilities and limits of these models, and it is clear that evaluating generative AI is very challenging. Most studies on generative LLMs have been restricted to English and it is unclear how capable these models are at understanding and generating text in other languages. We present the first comprehensive benchmarking of generative LLMs - MEGA, which evaluates models on standard NLP benchmarks, covering 16 NLP datasets across 70 typologically diverse languages. We compare the performance of generative LLMs including Chat-GPT and GPT-4 to State of the Art (SOTA) non-autoregressive models on these tasks to determine how well generative models perform compared to the previous generation of LLMs. We present a thorough analysis of the performance of models across languages and tasks and discuss challenges in improving the performance of generative LLMs on low-resource languages. We create a framework for evaluating generative LLMs in the multilingual setting and provide directions for future progress in the field.", }
Generative AI models have shown impressive performance on many natural language processing tasks such as language understanding, reasoning, and language generation. An important question being asked by the AI community today concerns the capabilities and limits of these models, and it is clear that evaluating generative AI is very challenging. Most studies of generative LLMs have been restricted to English, and it is unclear how capable these models are at understanding and generating text in other languages. We present MEGA, the first comprehensive benchmarking of generative LLMs, which evaluates models on standard NLP benchmarks covering 16 NLP datasets across 70 typologically diverse languages. We compare the performance of generative LLMs, including ChatGPT and GPT-4, to state-of-the-art (SOTA) non-autoregressive models on these tasks to determine how well generative models perform compared to the previous generation of LLMs. We present a thorough analysis of the performance of models across languages and tasks and discuss challenges in improving the performance of generative LLMs on low-resource languages. We create a framework for evaluating generative LLMs in the multilingual setting and provide directions for future progress in the field.
[ "Ahuja, Kabir", "Diddee, Harshita", "Hada, Rishav", "Ochieng, Millicent", "Ramesh, Krithika", "Jain, Prachi", "Nambi, Akshay", "Ganu, Tanuja", "Segal, Sameer", "Ahmed, Mohamed", "Bali, Kalika", "Sitaram, Sunayana" ]
MEGA: Multilingual Evaluation of Generative AI
emnlp-main.258
2303.12528
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.259.bib
https://aclanthology.org/2023.emnlp-main.259/
@inproceedings{yuan-etal-2023-support, title = "Support or Refute: Analyzing the Stance of Evidence to Detect Out-of-Context Mis- and Disinformation", author = "Yuan, Xin and Guo, Jie and Qiu, Weidong and Huang, Zheng and Li, Shujun", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.259", doi = "10.18653/v1/2023.emnlp-main.259", pages = "4268--4280", abstract = "Mis- and disinformation online have become a major societal problem as major sources of online harms of different kinds. One common form of mis- and disinformation is out-of-context (OOC) information, where different pieces of information are falsely associated, e.g., a real image combined with a false textual caption or a misleading textual description. Although some past studies have attempted to defend against OOC mis- and disinformation through external evidence, they tend to disregard the role of different pieces of evidence with different stances. Motivated by the intuition that the stance of evidence represents a bias towards different detection results, we propose a stance extraction network (SEN) that can extract the stances of different pieces of multi-modal evidence in a unified framework. Moreover, we introduce a support-refutation score calculated based on the co-occurrence relations of named entities into the textual SEN. Extensive experiments on a public large-scale dataset demonstrated that our proposed method outperformed the state-of-the-art baselines, with the best model achieving a performance gain of 3.2{\%} in accuracy.", }
Mis- and disinformation online have become a major societal problem and a significant source of online harms of many kinds. One common form of mis- and disinformation is out-of-context (OOC) information, where different pieces of information are falsely associated, e.g., a real image combined with a false textual caption or a misleading textual description. Although some past studies have attempted to defend against OOC mis- and disinformation using external evidence, they tend to disregard the role of different pieces of evidence with different stances. Motivated by the intuition that the stance of evidence represents a bias towards different detection results, we propose a stance extraction network (SEN) that can extract the stances of different pieces of multi-modal evidence in a unified framework. Moreover, we introduce a support-refutation score, calculated from the co-occurrence relations of named entities, into the textual SEN. Extensive experiments on a public large-scale dataset demonstrated that our proposed method outperformed the state-of-the-art baselines, with the best model achieving a performance gain of 3.2% in accuracy.
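As a rough illustration of a support-refutation signal built from named-entity co-occurrence (the paper's actual score is richer and learned; this overlap heuristic is purely an assumption for intuition): evidence whose entities overlap the caption's leans toward "support", while disjoint entity sets lean toward "refute".

```python
# Toy support-refutation score in [-1, 1] from named-entity overlap.
def sr_score(caption_entities, evidence_entities):
    cap, ev = set(caption_entities), set(evidence_entities)
    if not cap or not ev:
        return 0.0
    overlap = len(cap & ev)
    return (overlap - len(ev - cap)) / len(cap | ev)

print(sr_score({"Paris", "Macron"}, {"Macron", "Elysee"}))   # mixed evidence
```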
[ "Yuan, Xin", "Guo, Jie", "Qiu, Weidong", "Huang, Zheng", "Li, Shujun" ]
Support or Refute: Analyzing the Stance of Evidence to Detect Out-of-Context Mis- and Disinformation
emnlp-main.259
2311.01766
[ "https://github.com/yx3266/SEN" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.260.bib
https://aclanthology.org/2023.emnlp-main.260/
@inproceedings{li-etal-2023-video, title = "Video-Helpful Multimodal Machine Translation", author = "Li, Yihang and Shimizu, Shuichiro and Chu, Chenhui and Kurohashi, Sadao and Li, Wei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.260", doi = "10.18653/v1/2023.emnlp-main.260", pages = "4281--4299", abstract = "Existing multimodal machine translation (MMT) datasets consist of images and video captions or instructional video subtitles, which rarely contain linguistic ambiguity, making visual information ineffective in generating appropriate translations. Recent work has constructed an ambiguous subtitles dataset to alleviate this problem but is still limited to the problem that videos do not necessarily contribute to disambiguation. We introduce EVA (Extensive training set and Video-helpful evaluation set for Ambiguous subtitles translation), an MMT dataset containing 852k Japanese-English parallel subtitle pairs, 520k Chinese-English parallel subtitle pairs, and corresponding video clips collected from movies and TV episodes. In addition to the extensive training set, EVA contains a video-helpful evaluation set in which subtitles are ambiguous, and videos are guaranteed helpful for disambiguation. Furthermore, we propose SAFA, an MMT model based on the Selective Attention model with two novel methods: Frame attention loss and Ambiguity augmentation, aiming to use videos in EVA for disambiguation fully. Experiments on EVA show that visual information and the proposed methods can boost translation performance, and our model performs significantly better than existing MMT models.", }
Existing multimodal machine translation (MMT) datasets consist of images and video captions or instructional video subtitles, which rarely contain linguistic ambiguity, making visual information ineffective in generating appropriate translations. Recent work constructed an ambiguous-subtitles dataset to alleviate this problem, but it is still limited by the fact that the videos do not necessarily contribute to disambiguation. We introduce EVA (Extensive training set and Video-helpful evaluation set for Ambiguous subtitles translation), an MMT dataset containing 852k Japanese-English parallel subtitle pairs, 520k Chinese-English parallel subtitle pairs, and corresponding video clips collected from movies and TV episodes. In addition to the extensive training set, EVA contains a video-helpful evaluation set in which subtitles are ambiguous and videos are guaranteed to be helpful for disambiguation. Furthermore, we propose SAFA, an MMT model based on the Selective Attention model with two novel methods, frame attention loss and ambiguity augmentation, aiming to make full use of the videos in EVA for disambiguation. Experiments on EVA show that visual information and the proposed methods can boost translation performance, and our model performs significantly better than existing MMT models.
[ "Li, Yihang", "Shimizu, Shuichiro", "Chu, Chenhui", "Kurohashi, Sadao", "Li, Wei" ]
Video-Helpful Multimodal Machine Translation
emnlp-main.260
2310.20201
[ "https://github.com/ku-nlp/video-helpful-mmt" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.261.bib
https://aclanthology.org/2023.emnlp-main.261/
@inproceedings{ko-etal-2023-large, title = "Large Language Models are Temporal and Causal Reasoners for Video Question Answering", author = "Ko, Dohwan and Lee, Ji and Kang, Woo-Young and Roh, Byungseok and Kim, Hyunwoo", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.261", doi = "10.18653/v1/2023.emnlp-main.261", pages = "4300--4316", abstract = "Large Language Models (LLMs) have shown remarkable performances on a wide range of natural language understanding and generation tasks. We observe that the LLMs provide effective priors in exploiting $\textit{linguistic shortcuts}$ for temporal and causal reasoning in Video Question Answering (VideoQA). However, such priors often cause suboptimal results on VideoQA by leading the model to over-rely on questions, $\textit{i.e.}$, $\textit{linguistic bias}$, while ignoring visual content. This is also known as {`}ungrounded guesses{'} or {`}hallucinations{'}. To address this problem while leveraging LLMs{'} prior on VideoQA, we propose a novel framework, Flipped-VQA, encouraging the model to predict all the combinations of $\langle$V, Q, A$\rangle$ triplet by flipping the source pair and the target label to understand their complex relationships, $\textit{i.e.}$, predict A, Q, and V given a VQ, VA, and QA pairs, respectively. In this paper, we develop LLaMA-VQA by applying Flipped-VQA to LLaMA, and it outperforms both LLMs-based and non-LLMs-based models on five challenging VideoQA benchmarks. Furthermore, our Flipped-VQA is a general framework that is applicable to various LLMs (OPT and GPT-J) and consistently improves their performances. We empirically demonstrate that Flipped-VQA not only enhances the exploitation of linguistic shortcuts but also mitigates the linguistic bias, which causes incorrect answers over-relying on the question. Code is available at https://github.com/mlvlab/Flipped-VQA.", }
Large Language Models (LLMs) have shown remarkable performance on a wide range of natural language understanding and generation tasks. We observe that LLMs provide effective priors in exploiting linguistic shortcuts for temporal and causal reasoning in Video Question Answering (VideoQA). However, such priors often cause suboptimal results on VideoQA by leading the model to over-rely on questions, i.e., linguistic bias, while ignoring visual content. This is also known as 'ungrounded guesses' or 'hallucinations'. To address this problem while leveraging LLMs' priors on VideoQA, we propose a novel framework, Flipped-VQA, which encourages the model to predict all combinations of the ⟨V, Q, A⟩ triplet by flipping the source pair and the target label to understand their complex relationships, i.e., predicting A, Q, and V given VQ, VA, and QA pairs, respectively. In this paper, we develop LLaMA-VQA by applying Flipped-VQA to LLaMA, and it outperforms both LLM-based and non-LLM-based models on five challenging VideoQA benchmarks. Furthermore, Flipped-VQA is a general framework that is applicable to various LLMs (OPT and GPT-J) and consistently improves their performance. We empirically demonstrate that Flipped-VQA not only enhances the exploitation of linguistic shortcuts but also mitigates the linguistic bias that causes incorrect answers over-relying on the question. Code is available at https://github.com/mlvlab/Flipped-VQA.
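The Flipped-VQA objective can be summarized as three language-modeling losses over the rotated triplet. A schematic sketch, with lm_loss() as a hypothetical per-target loss helper (the real implementation conditions a single LLM on different input orderings):

```python
# Train on all three rotations of the <V, Q, A> triplet: the usual
# answer-prediction loss plus two "flipped" generation losses.
def flipped_vqa_loss(model, video, question, answer, lm_loss):
    l_vqa = lm_loss(model, context=(video, question), target=answer)  # main task
    l_vaq = lm_loss(model, context=(video, answer), target=question)  # flipped
    l_qav = lm_loss(model, context=(question, answer), target=video)  # flipped
    return l_vqa + l_vaq + l_qav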
[ "Ko, Dohwan", "Lee, Ji", "Kang, Woo-Young", "Roh, Byungseok", "Kim, Hyunwoo" ]
Large Language Models are Temporal and Causal Reasoners for Video Question Answering
emnlp-main.261
2310.15747
[ "https://github.com/mlvlab/Flipped-VQA" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.262.bib
https://aclanthology.org/2023.emnlp-main.262/
@inproceedings{sagirova-burtsev-2023-uncertainty, title = "Uncertainty Guided Global Memory Improves Multi-Hop Question Answering", author = "Sagirova, Alsu and Burtsev, Mikhail", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.262", doi = "10.18653/v1/2023.emnlp-main.262", pages = "4317--4328", abstract = "Transformers have become the gold standard for many natural language processing tasks and, in particular, for multi-hop question answering (MHQA). This task includes processing a long document and reasoning over the multiple parts of it. The landscape of MHQA approaches can be classified into two primary categories. The first group focuses on extracting supporting evidence, thereby constraining the QA model{'}s context to predicted facts. Conversely, the second group relies on the attention mechanism of the long input encoding model to facilitate multi-hop reasoning. However, attention-based token representations lack explicit global contextual information to connect reasoning steps. To address these issues, we propose GEMFormer, a two-stage method that first collects relevant information over the entire document to the memory and then combines it with local context to solve the task. Our experimental results show that fine-tuning a pre-trained model with memory-augmented input, including the most certain global elements, improves the model{'}s performance on three MHQA datasets compared to the baseline. We also found that the global explicit memory contains information from supporting facts required for the correct answer.", }
Transformers have become the gold standard for many natural language processing tasks and, in particular, for multi-hop question answering (MHQA). This task involves processing a long document and reasoning over its multiple parts. The landscape of MHQA approaches can be classified into two primary categories. The first group focuses on extracting supporting evidence, thereby constraining the QA model's context to predicted facts. Conversely, the second group relies on the attention mechanism of the long-input encoding model to facilitate multi-hop reasoning. However, attention-based token representations lack explicit global contextual information to connect reasoning steps. To address these issues, we propose GEMFormer, a two-stage method that first collects relevant information over the entire document into memory and then combines it with the local context to solve the task. Our experimental results show that fine-tuning a pre-trained model with memory-augmented input, including the most certain global elements, improves the model's performance on three MHQA datasets compared to the baseline. We also found that the global explicit memory contains information from the supporting facts required for the correct answer.
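A hedged sketch of the uncertainty-guided memory idea: a first pass scores every document token's predictive entropy, the most certain tokens form a global memory, and that memory is prepended to each local context. The budget and the entropy criterion below are assumptions, not the paper's exact recipe:

```python
# Keep the tokens the model is most certain about (lowest predictive entropy)
# as a global memory, then concatenate memory with each local context.
import torch

def build_memory(token_ids, token_probs, budget=64):
    # token_probs: (n_tokens, vocab) predictive distributions from a first pass
    entropy = -(token_probs * token_probs.clamp_min(1e-9).log()).sum(-1)
    keep = entropy.argsort()[:budget]                    # most certain tokens
    return token_ids[keep.sort().values]                 # preserve document order

def with_memory(memory_ids, local_ids):
    return torch.cat([memory_ids, local_ids])            # memory-augmented input

ids = torch.arange(100)
probs = torch.softmax(torch.randn(100, 500), dim=-1)
print(with_memory(build_memory(ids, probs, budget=8), ids[:32]).shape)
```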
[ "Sagirova, Alsu", "Burtsev, Mikhail" ]
Uncertainty Guided Global Memory Improves Multi-Hop Question Answering
emnlp-main.262
2311.18151
[ "https://github.com/aloriosa/gemformer" ]
https://huggingface.co/papers/2311.18151
2
0
0
2
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.263.bib
https://aclanthology.org/2023.emnlp-main.263/
@inproceedings{liang-etal-2023-prompting, title = "Prompting Large Language Models with Chain-of-Thought for Few-Shot Knowledge Base Question Generation", author = "Liang, Yuanyuan and Wang, Jianing and Zhu, Hanlun and Wang, Lei and Qian, Weining and Lan, Yunshi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.263", doi = "10.18653/v1/2023.emnlp-main.263", pages = "4329--4343", abstract = "The task of Question Generation over Knowledge Bases (KBQG) aims to convert a logical form into a natural language question. For the sake of expensive cost of large-scale question annotation, the methods of KBQG under low-resource scenarios urgently need to be developed. However, current methods heavily rely on annotated data for fine-tuning, which is not well-suited for few-shot question generation. The emergence of Large Language Models (LLMs) has shown their impressive generalization ability in few-shot tasks. Inspired by Chain-of-Thought (CoT) prompting, which is an in-context learning strategy for reasoning, we formulate KBQG task as a reasoning problem, where the generation of a complete question is splitted into a series of sub-question generation. Our proposed prompting method KQG-CoT first retrieves supportive logical forms from the unlabeled data pool taking account of the characteristics of the logical form. Then, we write a prompt to explicit the reasoning chain of generating complicated questions based on the selected demonstrations. To further ensure prompt quality, we extend KQG-CoT into KQG-CoT+ via sorting the logical forms by their complexity. We conduct extensive experiments over three public KBQG datasets. The results demonstrate that our prompting method consistently outperforms other prompting baselines on the evaluated datasets. Remarkably, our KQG-CoT+ method could surpass existing few-shot SoTA results of the PathQuestions dataset by 18.25, 10.72, and 10.18 absolute points on BLEU-4, METEOR, and ROUGE-L, respectively.", }
The task of Question Generation over Knowledge Bases (KBQG) aims to convert a logical form into a natural language question. Given the high cost of large-scale question annotation, methods for KBQG under low-resource scenarios urgently need to be developed. However, current methods rely heavily on annotated data for fine-tuning, which is not well-suited to few-shot question generation. The emergence of Large Language Models (LLMs) has shown their impressive generalization ability in few-shot tasks. Inspired by Chain-of-Thought (CoT) prompting, an in-context learning strategy for reasoning, we formulate the KBQG task as a reasoning problem, where the generation of a complete question is split into a series of sub-question generation steps. Our proposed prompting method, KQG-CoT, first retrieves supportive logical forms from the unlabeled data pool, taking into account the characteristics of the logical form. Then, we write a prompt that makes explicit the reasoning chain for generating complicated questions based on the selected demonstrations. To further ensure prompt quality, we extend KQG-CoT into KQG-CoT+ by sorting the logical forms by their complexity. We conduct extensive experiments over three public KBQG datasets. The results demonstrate that our prompting method consistently outperforms other prompting baselines on the evaluated datasets. Remarkably, our KQG-CoT+ method surpasses existing few-shot SoTA results on the PathQuestions dataset by 18.25, 10.72, and 10.18 absolute points on BLEU-4, METEOR, and ROUGE-L, respectively.
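An illustrative sketch of KQG-CoT+ prompt assembly as the abstract describes it: retrieve logical forms similar to the input, order the demonstrations from simple to complex (token count is my proxy for complexity, an assumption), and chain sub-question generation in the prompt. Function names and prompt wording are hypothetical:

```python
# Build a simple-to-complex CoT prompt from retrieved logical-form demos.
def build_prompt(target_lf, pool, questions, similarity, k=4):
    demos = sorted(pool, key=lambda lf: similarity(lf, target_lf), reverse=True)[:k]
    demos.sort(key=lambda lf: len(lf.split()))           # simple-to-complex order
    shots = "\n\n".join(
        f"Logical form: {lf}\nLet's generate sub-questions step by step.\n"
        f"Question: {questions[lf]}" for lf in demos)
    return (shots + "\n\nLogical form: " + target_lf
            + "\nLet's generate sub-questions step by step.\nQuestion:")
```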
[ "Liang, Yuanyuan", "Wang, Jianing", "Zhu, Hanlun", "Wang, Lei", "Qian, Weining", "Lan, Yunshi" ]
Prompting Large Language Models with Chain-of-Thought for Few-Shot Knowledge Base Question Generation
emnlp-main.263
2310.08395
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.264.bib
https://aclanthology.org/2023.emnlp-main.264/
@inproceedings{zhang-etal-2023-trojansql, title = "{T}rojan{SQL}: {SQL} Injection against Natural Language Interface to Database", author = "Zhang, Jinchuan and Zhou, Yan and Hui, Binyuan and Liu, Yaxin and Li, Ziming and Hu, Songlin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.264", doi = "10.18653/v1/2023.emnlp-main.264", pages = "4344--4359", abstract = "The technology of text-to-SQL has significantly enhanced the efficiency of accessing and manipulating databases. However, limited research has been conducted to study its vulnerabilities emerging from malicious user interaction. By proposing TrojanSQL, a backdoor-based SQL injection framework for text-to-SQL systems, we show how state-of-the-art text-to-SQL parsers can be easily misled to produce harmful SQL statements that can invalidate user queries or compromise sensitive information about the database. The study explores two specific injection attacks, namely $\textit{boolean-based injection}$ and $\textit{union-based injection}$, which use different types of triggers to achieve distinct goals in compromising the parser. Experimental results demonstrate that both medium-sized models based on fine-tuning and LLM-based parsers using prompting techniques are vulnerable to this type of attack, with attack success rates as high as 99{\%} and 89{\%}, respectively. We hope that this study will raise more concerns about the potential security risks of building natural language interfaces to databases.", }
The technology of text-to-SQL has significantly enhanced the efficiency of accessing and manipulating databases. However, limited research has been conducted to study its vulnerabilities emerging from malicious user interaction. By proposing TrojanSQL, a backdoor-based SQL injection framework for text-to-SQL systems, we show how state-of-the-art text-to-SQL parsers can be easily misled to produce harmful SQL statements that can invalidate user queries or compromise sensitive information about the database. The study explores two specific injection attacks, namely boolean-based injection and union-based injection, which use different types of triggers to achieve distinct goals in compromising the parser. Experimental results demonstrate that both medium-sized models based on fine-tuning and LLM-based parsers using prompting techniques are vulnerable to this type of attack, with attack success rates as high as 99% and 89%, respectively. We hope that this study will raise more concerns about the potential security risks of building natural language interfaces to databases.
[ "Zhang, Jinchuan", "Zhou, Yan", "Hui, Binyuan", "Liu, Yaxin", "Li, Ziming", "Hu, Songlin" ]
TrojanSQL: SQL Injection against Natural Language Interface to Database
emnlp-main.264
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.265.bib
https://aclanthology.org/2023.emnlp-main.265/
@inproceedings{kassem-etal-2023-preserving, title = "Preserving Privacy Through Dememorization: An Unlearning Technique For Mitigating Memorization Risks In Language Models", author = "Kassem, Aly and Mahmoud, Omar and Saad, Sherif", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.265", doi = "10.18653/v1/2023.emnlp-main.265", pages = "4360--4379", abstract = "Large Language models (LLMs) are trained on vast amounts of data, including sensitive information that poses a risk to personal privacy if exposed. LLMs have shown the ability to memorize and reproduce portions of their training data when prompted by adversaries. Prior research has focused on addressing this memorization issue and preventing verbatim replication through techniques like knowledge unlearning and data pre-processing. However, these methods have limitations regarding the number of protected samples, limited privacy types, and potentially lower-quality generative models. To tackle this challenge more effectively, we propose {``}DeMem,{''} a novel unlearning approach that utilizes an efficient reinforcement learning feedback loop via proximal policy optimization. By fine-tuning the language model with a negative similarity score as a reward signal, we incentivize the LLMs to learn a paraphrasing policy to unlearn the pre-training data. Our experiments demonstrate that DeMem surpasses strong baselines and state-of-the-art methods in terms of its ability to generalize and strike a balance between maintaining privacy and LLM performance.", }
Large language models (LLMs) are trained on vast amounts of data, including sensitive information that poses a risk to personal privacy if exposed. LLMs have shown the ability to memorize and reproduce portions of their training data when prompted by adversaries. Prior research has focused on addressing this memorization issue and preventing verbatim replication through techniques like knowledge unlearning and data pre-processing. However, these methods have limitations regarding the number of protected samples, the privacy types covered, and potentially lower-quality generative models. To tackle this challenge more effectively, we propose "DeMem," a novel unlearning approach that utilizes an efficient reinforcement learning feedback loop via proximal policy optimization. By fine-tuning the language model with a negative similarity score as a reward signal, we incentivize the LLM to learn a paraphrasing policy to unlearn the pre-training data. Our experiments demonstrate that DeMem surpasses strong baselines and state-of-the-art methods in its ability to generalize and strike a balance between maintaining privacy and LLM performance.
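The reward that drives DeMem's PPO loop can be sketched very simply: negative similarity between what the policy generates from a training-data prefix and the memorized continuation. The token-overlap similarity below is a deliberately crude stand-in for whatever metric the authors actually use:

```python
# Negative-similarity reward: higher reward for continuations that diverge
# (verbatim) from the memorized suffix, which a PPO loop would then optimize.
def similarity(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def dememorization_reward(generated: str, memorized_suffix: str) -> float:
    return -similarity(generated, memorized_suffix)

print(dememorization_reward("the cat sat on the mat", "the cat sat on the mat"))
```

A purely verbatim reward like this would also reward nonsense, which is presumably why the paper frames the goal as learning a paraphrasing policy that preserves quality while avoiding replication.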
[ "Kassem, Aly", "Mahmoud, Omar", "Saad, Sherif" ]
Preserving Privacy Through Dememorization: An Unlearning Technique For Mitigating Memorization Risks In Language Models
emnlp-main.265
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.266.bib
https://aclanthology.org/2023.emnlp-main.266/
@inproceedings{chen-etal-2023-mingofficial, title = "{M}ing{O}fficial: A Ming Official Career Dataset and a Historical Context-Aware Representation Learning Framework", author = "Chen, You-Jun and Hsieh, Hsin-Yi and Lin, Yu and Tian, Yingtao and Chan, Bert and Liu, Yu-Sin and Lin, Yi-Hsuan and Tsai, Richard", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.266", doi = "10.18653/v1/2023.emnlp-main.266", pages = "4380--4401", abstract = "In Chinese studies, understanding the nuanced traits of historical figures, often not explicitly evident in biographical data, has been a key interest. However, identifying these traits can be challenging due to the need for domain expertise, specialist knowledge, and context-specific insights, making the process time-consuming and difficult to scale. Our focus on studying officials from China{'}s Ming Dynasty is no exception. To tackle this challenge, we propose MingOfficial, a large-scale multi-modal dataset consisting of both structured (career records, annotated personnel types) and text (historical texts) data for 9,376 officials. We further couple the dataset with a a graph neural network (GNN) to combine both modalities in order to allow investigation of social structures and provide features to boost down-stream tasks. Experiments show that our proposed MingOfficial could enable exploratory analysis of official identities, and also significantly boost performance in tasks such as identifying nuance identities (e.g. civil officials holding military power) from 24.6{\%} to 98.2{\%} F$_1$ score in hold-out test set. By making MingOfficial publicly available (see main text for the URL) as both a dataset and an interactive tool, we aim to stimulate further research into the role of social context and representation learning in identifying individual characteristics, and hope to provide inspiration for computational approaches in other fields beyond Chinese studies.", }
In Chinese studies, understanding the nuanced traits of historical figures, which are often not explicitly evident in biographical data, has been a key interest. However, identifying these traits can be challenging due to the need for domain expertise, specialist knowledge, and context-specific insights, making the process time-consuming and difficult to scale. Our focus on studying officials from China's Ming Dynasty is no exception. To tackle this challenge, we propose MingOfficial, a large-scale multi-modal dataset consisting of both structured (career records, annotated personnel types) and text (historical texts) data for 9,376 officials. We further couple the dataset with a graph neural network (GNN) to combine both modalities in order to allow investigation of social structures and to provide features that boost downstream tasks. Experiments show that MingOfficial enables exploratory analysis of official identities and significantly boosts performance on tasks such as identifying nuanced identities (e.g., civil officials holding military power), from 24.6% to 98.2% F1 score on a held-out test set. By making MingOfficial publicly available (see main text for the URL) as both a dataset and an interactive tool, we aim to stimulate further research into the role of social context and representation learning in identifying individual characteristics, and we hope to provide inspiration for computational approaches in other fields beyond Chinese studies.
[ "Chen, You-Jun", "Hsieh, Hsin-Yi", "Lin, Yu", "Tian, Yingtao", "Chan, Bert", "Liu, Yu-Sin", "Lin, Yi-Hsuan", "Tsai, Richard" ]
MingOfficial: A Ming Official Career Dataset and a Historical Context-Aware Representation Learning Framework
emnlp-main.266
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.267.bib
https://aclanthology.org/2023.emnlp-main.267/
@inproceedings{joo-etal-2023-dpp, title = "{DPP}-{TTS}: Diversifying prosodic features of speech via determinantal point processes", author = "Joo, Seongho and Koh, Hyukhun and Jung, Kyomin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.267", doi = "10.18653/v1/2023.emnlp-main.267", pages = "4402--4417", abstract = "With the rapid advancement in deep generative models, recent neural Text-To-Speech(TTS) models have succeeded in synthesizing human-like speech. There have been some efforts to generate speech with various prosody beyond monotonous prosody patterns. However, previous works have several limitations. First, typical TTS models depend on the scaled sampling temperature for boosting the diversity of prosody. Speech samples generated at high sampling temperatures often lack perceptual prosodic diversity, which can adversely affect the naturalness of the speech. Second, the diversity among samples is neglected since the sampling procedure often focuses on a single speech sample rather than multiple ones. In this paper, we propose DPP-TTS: a text-to-speech model based on Determinantal Point Processes (DPPs) with a prosody diversifying module. Our TTS model is capable of generating speech samples that simultaneously consider perceptual diversity in each sample and among multiple samples. We demonstrate that DPP-TTS generates speech samples with more diversified prosody than baselines in the side-by-side comparison test considering the naturalness of speech at the same time.", }
With the rapid advancement in deep generative models, recent neural Text-To-Speech (TTS) models have succeeded in synthesizing human-like speech. There have been some efforts to generate speech with various prosody beyond monotonous prosody patterns. However, previous works have several limitations. First, typical TTS models depend on the scaled sampling temperature for boosting the diversity of prosody. Speech samples generated at high sampling temperatures often lack perceptual prosodic diversity, which can adversely affect the naturalness of the speech. Second, the diversity among samples is neglected since the sampling procedure often focuses on a single speech sample rather than multiple ones. In this paper, we propose DPP-TTS: a text-to-speech model based on Determinantal Point Processes (DPPs) with a prosody diversifying module. Our TTS model is capable of generating speech samples that simultaneously consider perceptual diversity in each sample and among multiple samples. We demonstrate that DPP-TTS generates speech samples with more diversified prosody than baselines in side-by-side comparison tests, while maintaining the naturalness of the speech.
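The DPP machinery behind this approach can be illustrated with generic greedy MAP inference over an L-kernel, which trades off per-item quality against pairwise similarity. This is a minimal sketch of standard DPP subset selection, not the paper's prosody diversifying module; the feature vectors and quality scores are made up:

```python
import numpy as np

def greedy_dpp_map(L, k):
    """Greedy MAP inference for a DPP with L-kernel L: repeatedly add
    the candidate that most increases log det(L_S), favoring items that
    are high-quality (large diagonal) yet dissimilar to items already
    selected."""
    selected, remaining = [], list(range(L.shape[0]))
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for i in remaining:
            S = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(S, S)])
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = i, logdet
        if best is None:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy quality-diversity kernel L_ij = q_i * S_ij * q_j over
# hypothetical prosody feature vectors.
rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 4))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
q = np.exp(rng.normal(size=10))
L = np.outer(q, q) * (feats @ feats.T) + 1e-6 * np.eye(10)
print(greedy_dpp_map(L, k=3))
```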
[ "Joo, Seongho", "Koh, Hyukhun", "Jung, Kyomin" ]
DPP-TTS: Diversifying prosodic features of speech via determinantal point processes
emnlp-main.267
2310.14663
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.268.bib
https://aclanthology.org/2023.emnlp-main.268/
@inproceedings{hu-etal-2023-meta, title = "Meta-Learning Online Adaptation of Language Models", author = "Hu, Nathan and Mitchell, Eric and Manning, Christopher and Finn, Chelsea", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.268", doi = "10.18653/v1/2023.emnlp-main.268", pages = "4418--4432", abstract = "Large language models encode impressively broad world knowledge in their parameters. However, the knowledge in static language models falls out of date, limiting the model{'}s effective {``}shelf life.{''} While online fine-tuning can reduce this degradation, we find that naively fine-tuning on a stream of documents leads to a low level of information uptake. We hypothesize that online fine-tuning does not sufficiently attend to important information. That is, the gradient signal from important tokens representing factual information is drowned out by the gradient from inherently noisy tokens, suggesting that a dynamic, context-aware learning rate may be beneficial. We therefore propose learning which tokens to upweight. We meta-train a small, autoregressive model to reweight the language modeling loss for each token during online fine-tuning, with the objective of maximizing the out-of-date base question-answering model{'}s ability to answer questions about a document after a single weighted gradient step. We call this approach Context-aware Meta-learned Loss Scaling (CaMeLS). Across three different distributions of documents, our experiments find that CaMeLS provides substantially improved information uptake on streams of thousands of documents compared with standard fine-tuning and baseline heuristics for reweighting token losses.", }
Large language models encode impressively broad world knowledge in their parameters. However, the knowledge in static language models falls out of date, limiting the model{'}s effective {``}shelf life.{''} While online fine-tuning can reduce this degradation, we find that naively fine-tuning on a stream of documents leads to a low level of information uptake. We hypothesize that online fine-tuning does not sufficiently attend to important information. That is, the gradient signal from important tokens representing factual information is drowned out by the gradient from inherently noisy tokens, suggesting that a dynamic, context-aware learning rate may be beneficial. We therefore propose learning which tokens to upweight. We meta-train a small, autoregressive model to reweight the language modeling loss for each token during online fine-tuning, with the objective of maximizing the out-of-date base question-answering model{'}s ability to answer questions about a document after a single weighted gradient step. We call this approach Context-aware Meta-learned Loss Scaling (CaMeLS). Across three different distributions of documents, our experiments find that CaMeLS provides substantially improved information uptake on streams of thousands of documents compared with standard fine-tuning and baseline heuristics for reweighting token losses.
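The core of this objective is a per-token weighted language-modeling loss. A minimal PyTorch sketch, assuming the weights come from some external weighting model (here just random numbers standing in for the meta-trained model's outputs):

```python
import torch
import torch.nn.functional as F

def weighted_lm_loss(logits, labels, token_weights):
    """Per-token weighted language-modeling loss.

    logits: (batch, seq, vocab) from the base LM
    labels: (batch, seq) next-token targets, -100 = ignore
    token_weights: (batch, seq) importance scores, e.g. produced by a
        small meta-learned weighting model as in CaMeLS
    """
    vocab = logits.size(-1)
    per_token = F.cross_entropy(
        logits.view(-1, vocab), labels.view(-1),
        reduction="none", ignore_index=-100,
    ).view(labels.shape)
    mask = (labels != -100).float()
    return (per_token * token_weights * mask).sum() / mask.sum().clamp(min=1)

# Toy call with random tensors in place of real model outputs.
logits = torch.randn(2, 6, 100)
labels = torch.randint(0, 100, (2, 6)); labels[:, 0] = -100
weights = torch.rand(2, 6)
print(weighted_lm_loss(logits, labels, weights).item())
```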
[ "Hu, Nathan", "Mitchell, Eric", "Manning, Christopher", "Finn, Chelsea" ]
Meta-Learning Online Adaptation of Language Models
emnlp-main.268
2305.15076
[ "https://github.com/nathanhu0/CaMeLS" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.269.bib
https://aclanthology.org/2023.emnlp-main.269/
@inproceedings{leong-etal-2023-self, title = "Self-Detoxifying Language Models via Toxification Reversal", author = "Leong, Chak Tou and Cheng, Yi and Wang, Jiashuo and Wang, Jian and Li, Wenjie", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.269", doi = "10.18653/v1/2023.emnlp-main.269", pages = "4433--4449", abstract = "Language model detoxification aims to minimize the risk of generating offensive or harmful content in pretrained language models (PLMs) for safer deployment. Existing methods can be roughly categorized as finetuning-based and decoding-based. However, the former is often resource-intensive, while the latter relies on additional components and potentially compromises the generation fluency. In this paper, we propose a more lightweight approach that enables the PLM itself to achieve {``}self-detoxification{''}. Our method is built upon the observation that prepending a negative steering prompt can effectively induce PLMs to generate toxic content. At the same time, we are inspired by the recent research in the interpretability field, which formulates the evolving contextualized representations within the PLM as an information stream facilitated by the attention layers. Drawing on this idea, we devise a method to identify the toxification direction from the normal generation process to the one prompted with the negative prefix, and then steer the generation to the reversed direction by manipulating the information movement within the attention layers. Experimental results show that our approach, without any fine-tuning or extra components, can achieve comparable performance with state-of-the-art methods.", }
Language model detoxification aims to minimize the risk of generating offensive or harmful content in pretrained language models (PLMs) for safer deployment. Existing methods can be roughly categorized as finetuning-based and decoding-based. However, the former is often resource-intensive, while the latter relies on additional components and potentially compromises the generation fluency. In this paper, we propose a more lightweight approach that enables the PLM itself to achieve {``}self-detoxification{''}. Our method is built upon the observation that prepending a negative steering prompt can effectively induce PLMs to generate toxic content. At the same time, we are inspired by recent research in the interpretability field, which formulates the evolving contextualized representations within the PLM as an information stream facilitated by the attention layers. Drawing on this idea, we devise a method to identify the toxification direction from the normal generation process to the one prompted with the negative prefix, and then steer the generation to the reversed direction by manipulating the information movement within the attention layers. Experimental results show that our approach, without any fine-tuning or extra components, can achieve comparable performance with state-of-the-art methods.
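The direction-reversal intuition can be shown on toy vectors. This sketch only demonstrates computing and subtracting a "toxification direction" from generic hidden states; the paper itself manipulates the information movement inside attention layers, which is not reproduced here, and all tensors are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
h_normal = rng.normal(size=(8, 64))                       # ordinary hidden states
h_prompted = h_normal + 0.5 * rng.normal(size=(8, 64))    # with negative prefix

# Toxification direction: how representations move, on average, when
# the negative steering prompt is prepended.
d = (h_prompted - h_normal).mean(axis=0)
d /= np.linalg.norm(d)

# Reverse the movement: remove the component of the ordinary states
# lying along the toxification direction (alpha > 1 over-subtracts).
alpha = 1.0
h_detox = h_normal - alpha * np.outer(h_normal @ d, d)
```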
[ "Leong, Chak Tou", "Cheng, Yi", "Wang, Jiashuo", "Wang, Jian", "Li, Wenjie" ]
Self-Detoxifying Language Models via Toxification Reversal
emnlp-main.269
2310.09573
[ "https://github.com/cooperleong00/toxificationreversal" ]
https://huggingface.co/papers/2310.09573
0
0
0
5
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.270.bib
https://aclanthology.org/2023.emnlp-main.270/
@inproceedings{faltings-etal-2023-interactive, title = "Interactive Text Generation", author = "Faltings, Felix and Galley, Michel and Brantley, Kiant{\'e} and Peng, Baolin and Cai, Weixin and Zhang, Yizhe and Gao, Jianfeng and Dolan, Bill", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.270", doi = "10.18653/v1/2023.emnlp-main.270", pages = "4450--4468", abstract = "Users interact with text, image, code, or other editors on a daily basis. However, machine learning models are rarely trained in the settings that reflect the interactivity between users and their editor. This is understandable as training AI models with real users is not only slow and costly, but what these models learn may be specific to user interface design choices. Unfortunately, this means most of the research on text, code, and image generation has focused on non-interactive settings, whereby the model is expected to get everything right without accounting for any input from a user who may be willing to help. We introduce a new Interactive Text Generation task that allows training generation models interactively without the costs of involving real users, by using user simulators that provide edits that guide the model towards a given target text. We train our interactive models using Imitation Learning, and our experiments against competitive non-interactive generation models show that models trained interactively are superior to their non-interactive counterparts, even when all models are given the same budget of user inputs or edits.", }
Users interact with text, image, code, or other editors on a daily basis. However, machine learning models are rarely trained in settings that reflect the interactivity between users and their editor. This is understandable, as training AI models with real users is not only slow and costly, but what these models learn may be specific to user interface design choices. Unfortunately, this means most of the research on text, code, and image generation has focused on non-interactive settings, whereby the model is expected to get everything right without accounting for any input from a user who may be willing to help. We introduce a new Interactive Text Generation task that allows training generation models interactively without the costs of involving real users, by using user simulators that provide edits that guide the model towards a given target text. We train our interactive models using Imitation Learning, and our experiments against competitive non-interactive generation models show that models trained interactively are superior to their non-interactive counterparts, even when all models are given the same budget of user inputs or edits.
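The key ingredient here is a user simulator that supplies edits steering the model toward a target text. Below is a minimal sketch of one possible simulator and interaction loop; the paper trains with imitation learning, which this toy loop does not do, and every name is hypothetical:

```python
def oracle_edit(draft, target):
    """User simulator: return the first position where the draft
    deviates from the target, plus the correct token (one 'edit')."""
    for i, (d, t) in enumerate(zip(draft, target)):
        if d != t:
            return i, t
    if len(draft) < len(target):
        return len(draft), target[len(draft)]
    return None  # draft already matches

def interactive_loop(generate, target, budget=3):
    """Alternate model drafts with simulated user edits, accumulating
    the edits as hard constraints for the next draft."""
    constraints = {}
    for _ in range(budget):
        draft = generate(constraints)
        edit = oracle_edit(draft, target)
        if edit is None:
            break
        constraints[edit[0]] = edit[1]
    return generate(constraints)

# Toy generator: copies constrained positions, pads with "the".
target = "the cat sat on the mat".split()
generate = lambda c: [c.get(i, "the") for i in range(len(target))]
print(" ".join(interactive_loop(generate, target, budget=4)))
```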
[ "Faltings, Felix", "Galley, Michel", "Brantley, Kiant{\\'e}", "Peng, Baolin", "Cai, Weixin", "Zhang, Yizhe", "Gao, Jianfeng", "Dolan, Bill" ]
Interactive Text Generation
emnlp-main.270
2303.00908
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.271.bib
https://aclanthology.org/2023.emnlp-main.271/
@inproceedings{sultan-2023-knowledge, title = "Knowledge Distillation $\approx$ Label Smoothing: Fact or Fallacy?", author = "Sultan, Md", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.271", doi = "10.18653/v1/2023.emnlp-main.271", pages = "4469--4477", abstract = "Originally proposed as a method for knowledge transfer from one model to another, some recent studies have suggested that knowledge distillation (KD) is in fact a form of regularization. Perhaps the strongest argument of all for this new perspective comes from its apparent similarities with label smoothing (LS). Here we re-examine this stated equivalence between the two methods by comparing the predictive confidences of the models they train. Experiments on four text classification tasks involving models of different sizes show that: (a) In most settings, KD and LS drive model confidence in completely opposite directions, and (b) In KD, the student inherits not only its knowledge but also its confidence from the teacher, reinforcing the classical knowledge transfer view.", }
Knowledge distillation (KD) was originally proposed as a method for knowledge transfer from one model to another, but some recent studies have suggested that it is in fact a form of regularization. Perhaps the strongest argument of all for this new perspective comes from KD's apparent similarities with label smoothing (LS). Here we re-examine this stated equivalence between the two methods by comparing the predictive confidences of the models they train. Experiments on four text classification tasks involving models of different sizes show that: (a) in most settings, KD and LS drive model confidence in completely opposite directions, and (b) in KD, the student inherits not only its knowledge but also its confidence from the teacher, reinforcing the classical knowledge transfer view.
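The two losses under comparison are easy to state side by side. A minimal PyTorch sketch of standard KD (matching a temperature-softened teacher) versus standard LS (mixing the gold label with a uniform target), in their usual textbook forms rather than any paper-specific variant:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Knowledge distillation: KL from softened teacher to student."""
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    p_t = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T

def ls_loss(student_logits, labels, eps=0.1):
    """Label smoothing: cross-entropy against a gold/uniform mixture."""
    return F.cross_entropy(student_logits, labels, label_smoothing=eps)

# Toy comparison on random logits.
logits_s = torch.randn(4, 10)
logits_t = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(kd_loss(logits_s, logits_t).item(), ls_loss(logits_s, labels).item())
```

The contrast the paper probes is visible in the targets: LS always flattens toward uniform, while KD's target carries the teacher's (possibly very confident) distribution.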
[ "Sultan, Md" ]
Knowledge Distillation ≈ Label Smoothing: Fact or Fallacy?
emnlp-main.271
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.272.bib
https://aclanthology.org/2023.emnlp-main.272/
@inproceedings{beinborn-pinter-2023-analyzing, title = "Analyzing Cognitive Plausibility of Subword Tokenization", author = "Beinborn, Lisa and Pinter, Yuval", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.272", doi = "10.18653/v1/2023.emnlp-main.272", pages = "4478--4486", abstract = "Subword tokenization has become the de-facto standard for tokenization although comparative evaluations of their quality across languages are scarce. Existing evaluation studies focus on the effect of a tokenization algorithm on the performance in downstream tasks, or on engineering criteria such as the compression rate. We present a new evaluation paradigm that focuses on the cognitive plausibility of subword tokenization. We analyze the correlation of the tokenizer output with the reading time and accuracy of human responses on a lexical decision task. We compare three tokenization algorithms across several languages and vocabulary sizes. Our results indicate that the Unigram algorithm yields less cognitively plausible tokenization behavior and a worse coverage of derivational morphemes, in contrast with prior work.", }
Subword tokenization has become the de-facto standard for tokenization, although comparative evaluations of tokenizer quality across languages are scarce. Existing evaluation studies focus on the effect of a tokenization algorithm on the performance in downstream tasks, or on engineering criteria such as the compression rate. We present a new evaluation paradigm that focuses on the cognitive plausibility of subword tokenization. We analyze the correlation of the tokenizer output with the reading time and accuracy of human responses on a lexical decision task. We compare three tokenization algorithms across several languages and vocabulary sizes. Our results indicate that the Unigram algorithm yields less cognitively plausible tokenization behavior and a worse coverage of derivational morphemes, in contrast with prior work.
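The evaluation paradigm boils down to correlating tokenizer output statistics with behavioral measurements. A toy sketch with invented numbers, assuming scipy is available:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-word measurements: how many subword tokens a
# tokenizer assigns to each word, and mean human reading time (ms)
# for that word from a lexical decision task.
tokens_per_word = np.array([1, 3, 2, 4, 1, 2, 5, 2])
reading_time_ms = np.array([512, 734, 601, 820, 498, 633, 897, 575])

rho, p = spearmanr(tokens_per_word, reading_time_ms)
print(f"Spearman rho={rho:.2f} (p={p:.3f})")
# A tokenizer whose segmentation tracks human processing difficulty
# should show a positive correlation, as this toy example does.
```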
[ "Beinborn, Lisa", "Pinter, Yuval" ]
Analyzing Cognitive Plausibility of Subword Tokenization
emnlp-main.272
2310.13348
[ "" ]
https://huggingface.co/papers/2310.13348
2
0
0
2
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.273.bib
https://aclanthology.org/2023.emnlp-main.273/
@inproceedings{ma-du-2023-poe, title = "{POE}: Process of Elimination for Multiple Choice Reasoning", author = "Ma, Chenkai and Du, Xinya", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.273", doi = "10.18653/v1/2023.emnlp-main.273", pages = "4487--4496", abstract = "Language models (LMs) are capable of conducting in-context learning for multiple choice reasoning tasks, but the options in these tasks are treated equally. As humans often first eliminate wrong options before picking the final correct answer, we argue a similar two-step strategy can make LMs better at these tasks. To this end, we present the Process of Elimination (POE), a two-step scoring method. In the first step, POE scores each option, and eliminates seemingly wrong options. In the second step, POE masks these wrong options, and makes the final prediction from the remaining options. Zero-shot experiments on 8 reasoning tasks illustrate the effectiveness of POE, and a following analysis finds our method to be especially performant on logical reasoning tasks. We further analyze the effect of masks, and show that POE applies to few-shot settings and large language models (LLMs) like ChatGPT.", }
Language models (LMs) are capable of conducting in-context learning for multiple choice reasoning tasks, but the options in these tasks are treated equally. As humans often first eliminate wrong options before picking the final correct answer, we argue a similar two-step strategy can make LMs better at these tasks. To this end, we present the Process of Elimination (POE), a two-step scoring method. In the first step, POE scores each option, and eliminates seemingly wrong options. In the second step, POE masks these wrong options, and makes the final prediction from the remaining options. Zero-shot experiments on 8 reasoning tasks illustrate the effectiveness of POE, and a follow-up analysis finds our method to be especially performant on logical reasoning tasks. We further analyze the effect of masks, and show that POE applies to few-shot settings and large language models (LLMs) like ChatGPT.
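A minimal sketch of the two-step idea with placeholder scores: with a real LM, step 1 scores would come from prompting over all options and step 2 from re-prompting with the eliminated options masked out; the numbers here are invented:

```python
import numpy as np

def process_of_elimination(option_scores, keep_ratio=0.5):
    """Two-step multiple-choice scoring in the spirit of POE.

    Step 1: score all options and eliminate the lowest-scoring ones.
    Step 2: choose among the survivors only (here by reusing the
    scores; with an LM you would re-prompt with wrong options masked).
    """
    scores = np.asarray(option_scores, dtype=float)
    k = max(1, int(len(scores) * keep_ratio))
    survivors = np.argsort(scores)[-k:]      # indices kept after step 1
    masked = np.full_like(scores, -np.inf)
    masked[survivors] = scores[survivors]
    return int(np.argmax(masked)), survivors

# Toy LM log-scores for options A-D (hypothetical numbers).
pred, kept = process_of_elimination([-1.2, -0.3, -2.5, -0.4])
print(pred, kept)  # predicts option 1, having eliminated options 0 and 2
```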
[ "Ma, Chenkai", "Du, Xinya" ]
POE: Process of Elimination for Multiple Choice Reasoning
emnlp-main.273
2310.15575
[ "https://github.com/kasmasvan/poe" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.274.bib
https://aclanthology.org/2023.emnlp-main.274/
@inproceedings{singh-etal-2023-neustip, title = "{N}eu{STIP}: A Neuro-Symbolic Model for Link and Time Prediction in Temporal Knowledge Graphs", author = "Singh, Ishaan and Kaur, Navdeep and Gaur, Garima and {Mausam}", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.274", doi = "10.18653/v1/2023.emnlp-main.274", pages = "4497--4516", abstract = "Neuro-symbolic (NS) models for knowledge graph completion (KGC) combine the benefits of symbolic models (interpretable inference) with those of distributed representations (parameter sharing, high accuracy). While several NS models exist for KGs with static facts, there is limited work on temporal KGC (TKGC) for KGs where a fact is associated with a time interval. In response, we propose a novel NS model for TKGC called NeuSTIP, which performs link prediction and time interval prediction in a TKG. NeuSTIP learns temporal rules with Allen predicates, which ensure temporal consistency between neighboring predicates in the rule body. We further design a unique scoring function that evaluates the confidence of the candidate answers while performing link and time interval predictions by utilizing the learned rules. Our empirical evaluation on two time interval based TKGC datasets shows that our model shows competitive performance on link prediction and establishes a new state of the art on time prediction.", }
Neuro-symbolic (NS) models for knowledge graph completion (KGC) combine the benefits of symbolic models (interpretable inference) with those of distributed representations (parameter sharing, high accuracy). While several NS models exist for KGs with static facts, there is limited work on temporal KGC (TKGC) for KGs where a fact is associated with a time interval. In response, we propose a novel NS model for TKGC called NeuSTIP, which performs link prediction and time interval prediction in a TKG. NeuSTIP learns temporal rules with Allen predicates, which ensure temporal consistency between neighboring predicates in the rule body. We further design a unique scoring function that evaluates the confidence of the candidate answers while performing link and time interval predictions by utilizing the learned rules. Our empirical evaluation on two time interval based TKGC datasets shows that our model shows competitive performance on link prediction and establishes a new state of the art on time prediction.
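Allen predicates describe the 13 possible qualitative relations between two time intervals, and they are what lets a temporal rule check consistency between neighboring facts. A compact, deliberately simplified Python sketch of such a predicate (several inverse relations are collapsed for brevity):

```python
def allen_relation(a, b):
    """Qualitative Allen relation between intervals a=(start, end)
    and b=(start, end); inverse cases are collapsed at the end."""
    (as_, ae), (bs, be) = a, b
    if ae < bs:  return "before"
    if ae == bs: return "meets"
    if as_ == bs and ae == be: return "equals"
    if as_ == bs: return "starts" if ae < be else "started_by"
    if ae == be:  return "finishes" if as_ > bs else "finished_by"
    if bs < as_ and ae < be: return "during"
    if as_ < bs and be < ae: return "contains"
    if as_ < bs < ae < be: return "overlaps"
    return "overlapped_by_or_after"  # remaining inverse cases

# Two toy tenure intervals: the first ends exactly when the second starts.
print(allen_relation((1995, 1999), (1999, 2003)))  # meets
```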
[ "Singh, Ishaan", "Kaur, Navdeep", "Gaur, Garima", "{Mausam}" ]
NeuSTIP: A Neuro-Symbolic Model for Link and Time Prediction in Temporal Knowledge Graphs
emnlp-main.274
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.275.bib
https://aclanthology.org/2023.emnlp-main.275/
@inproceedings{singh-etal-2023-standardizing, title = "Standardizing Distress Analysis: Emotion-Driven Distress Identification and Cause Extraction ({DICE}) in Multimodal Online Posts", author = "Singh, Gopendra and Ghosh, Soumitra and Verma, Atul and Painkra, Chetna and Ekbal, Asif", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.275", doi = "10.18653/v1/2023.emnlp-main.275", pages = "4517--4532", abstract = "Due to its growing impact on public opinion, hate speech on social media has garnered increased attention. While automated methods for identifying hate speech have been presented in the past, they have mostly been limited to analyzing textual content. The interpretability of such models has received very little attention, despite the social and legal consequences of erroneous predictions. In this work, we present a novel problem of \textit{Distress Identification and Cause Extraction (DICE)} from multimodal online posts. We develop a multi-task deep framework for the simultaneous detection of distress content and identify connected causal phrases from the text using emotional information. The emotional information is incorporated into the training process using a zero-shot strategy, and a novel mechanism is devised to fuse the features from the multimodal inputs. Furthermore, we introduce the first-of-its-kind \textit{Distress and Cause annotated Multimodal (DCaM)} dataset of 20,764 social media posts. We thoroughly evaluate our proposed method by comparing it to several existing benchmarks. Empirical assessment and comprehensive qualitative analysis demonstrate that our proposed method works well on distress detection and cause extraction tasks, improving F1 and ROS scores by 1.95{\%} and 3{\%}, respectively, relative to the best-performing baseline. The code and the dataset can be accessed from the following link: \url{https://www.iitp.ac.in/~ai-nlp-ml/resources.html\#DICE}.", }
Due to its growing impact on public opinion, hate speech on social media has garnered increased attention. While automated methods for identifying hate speech have been presented in the past, they have mostly been limited to analyzing textual content. The interpretability of such models has received very little attention, despite the social and legal consequences of erroneous predictions. In this work, we present a novel problem of \textit{Distress Identification and Cause Extraction (DICE)} from multimodal online posts. We develop a multi-task deep framework for the simultaneous detection of distress content and identification of connected causal phrases from the text using emotional information. The emotional information is incorporated into the training process using a zero-shot strategy, and a novel mechanism is devised to fuse the features from the multimodal inputs. Furthermore, we introduce the first-of-its-kind \textit{Distress and Cause annotated Multimodal (DCaM)} dataset of 20,764 social media posts. We thoroughly evaluate our proposed method by comparing it to several existing benchmarks. Empirical assessment and comprehensive qualitative analysis demonstrate that our proposed method works well on distress detection and cause extraction tasks, improving F1 and ROS scores by 1.95{\%} and 3{\%}, respectively, relative to the best-performing baseline. The code and the dataset can be accessed from the following link: \url{https://www.iitp.ac.in/~ai-nlp-ml/resources.html\#DICE}.
[ "Singh, Gopendra", "Ghosh, Soumitra", "Verma, Atul", "Painkra, Chetna", "Ekbal, Asif" ]
Standardizing Distress Analysis: Emotion-Driven Distress Identification and Cause Extraction (DICE) in Multimodal Online Posts
emnlp-main.275
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.276.bib
https://aclanthology.org/2023.emnlp-main.276/
@inproceedings{yang-etal-2023-distribution, title = "Out-of-Distribution Generalization in Natural Language Processing: Past, Present, and Future", author = "Yang, Linyi and Song, Yaoxian and Ren, Xuan and Lyu, Chenyang and Wang, Yidong and Zhuo, Jingming and Liu, Lingqiao and Wang, Jindong and Foster, Jennifer and Zhang, Yue", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.276", doi = "10.18653/v1/2023.emnlp-main.276", pages = "4533--4559", abstract = "Machine learning (ML) systems in natural language processing (NLP) face significant challenges in generalizing to out-of-distribution (OOD) data, where the test distribution differs from the training data distribution. This poses important questions about the robustness of NLP models and their high accuracy, which may be artificially inflated due to their underlying sensitivity to systematic biases. Despite these challenges, there is a lack of comprehensive surveys on the generalization challenge from an OOD perspective in natural language understanding. Therefore, this paper aims to fill this gap by presenting the first comprehensive review of recent progress, methods, and evaluations on this topic. We further discuss the challenges involved and potential future research directions. By providing convenient access to existing work, we hope this survey will encourage future research in this area.", }
Machine learning (ML) systems in natural language processing (NLP) face significant challenges in generalizing to out-of-distribution (OOD) data, where the test distribution differs from the training data distribution. This poses important questions about the robustness of NLP models and their high accuracy, which may be artificially inflated due to their underlying sensitivity to systematic biases. Despite these challenges, there is a lack of comprehensive surveys on the generalization challenge from an OOD perspective in natural language understanding. Therefore, this paper aims to fill this gap by presenting the first comprehensive review of recent progress, methods, and evaluations on this topic. We further discuss the challenges involved and potential future research directions. By providing convenient access to existing work, we hope this survey will encourage future research in this area.
[ "Yang, Linyi", "Song, Yaoxian", "Ren, Xuan", "Lyu, Chenyang", "Wang, Yidong", "Zhuo, Jingming", "Liu, Lingqiao", "Wang, Jindong", "Foster, Jennifer", "Zhang, Yue" ]
Out-of-Distribution Generalization in Natural Language Processing: Past, Present, and Future
emnlp-main.276
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.277.bib
https://aclanthology.org/2023.emnlp-main.277/
@inproceedings{zheng-saparov-2023-noisy, title = "Noisy Exemplars Make Large Language Models More Robust: A Domain-Agnostic Behavioral Analysis", author = "Zheng, Hongyi and Saparov, Abulhair", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.277", doi = "10.18653/v1/2023.emnlp-main.277", pages = "4560--4568", abstract = "Recent advances in prompt engineering enable large language models (LLMs) to solve multi-hop logical reasoning problems with impressive accuracy. However, there is little existing work investigating the robustness of LLMs with few-shot prompting techniques. Therefore, we introduce a systematic approach to test the robustness of LLMs in multi-hop reasoning tasks via domain-agnostic perturbations. We include perturbations at multiple levels of abstractions (e.g. lexical perturbations such as typos, and semantic perturbations such as the inclusion of intermediate reasoning steps in the questions) to conduct behavioral analysis on the LLMs. Throughout our experiments, we find that models are more sensitive to certain perturbations such as replacing words with their synonyms. We also demonstrate that increasing the proportion of perturbed exemplars in the prompts improves the robustness of few-shot prompting methods.", }
Recent advances in prompt engineering enable large language models (LLMs) to solve multi-hop logical reasoning problems with impressive accuracy. However, there is little existing work investigating the robustness of LLMs with few-shot prompting techniques. Therefore, we introduce a systematic approach to test the robustness of LLMs in multi-hop reasoning tasks via domain-agnostic perturbations. We include perturbations at multiple levels of abstraction (e.g., lexical perturbations such as typos, and semantic perturbations such as the inclusion of intermediate reasoning steps in the questions) to conduct behavioral analysis on the LLMs. Throughout our experiments, we find that models are more sensitive to certain perturbations such as replacing words with their synonyms. We also demonstrate that increasing the proportion of perturbed exemplars in the prompts improves the robustness of few-shot prompting methods.
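A lexical perturbation such as typo injection is straightforward to simulate. Below is a sketch of one way to perturb a fraction of exemplar characters before assembling a few-shot prompt; the exact perturbation operators used in the paper may differ:

```python
import random

def add_typos(text, rate=0.05, seed=0):
    """Lexical perturbation: randomly swap adjacent characters.

    Applying this to some fraction of few-shot exemplars lets you
    measure (and, per the paper's finding, improve) prompt robustness.
    """
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

# Toy exemplars; a real prompt would carry full worked solutions.
exemplars = ["Q: If all A are B ...", "Q: Tom has 3 apples ..."]
perturbed = [add_typos(e, rate=0.1, seed=i) for i, e in enumerate(exemplars)]
prompt = "\n\n".join(perturbed) + "\n\nQ: <test question>"
print(prompt)
```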
[ "Zheng, Hongyi", "Saparov, Abulhair" ]
Noisy Exemplars Make Large Language Models More Robust: A Domain-Agnostic Behavioral Analysis
emnlp-main.277
2311.00258
[ "https://github.com/hiroki39/noisy-exemplars-make-large-language-models-more-robust" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.278.bib
https://aclanthology.org/2023.emnlp-main.278/
@inproceedings{lee-etal-2023-large, title = "Can Large Language Models Capture Dissenting Human Voices?", author = "Lee, Noah and An, Na Min and Thorne, James", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.278", doi = "10.18653/v1/2023.emnlp-main.278", pages = "4569--4585", abstract = "Large language models (LLMs) have shown impressive achievements in solving a broad range of tasks. Augmented by instruction fine-tuning, LLMs have also been shown to generalize in zero-shot settings as well. However, whether LLMs closely align with the human disagreement distribution has not been well-studied, especially within the scope of natural language inference (NLI). In this paper, we evaluate the performance and alignment of LLM distribution with humans using two different techniques to estimate the multinomial distribution: Monte Carlo Estimation (MCE) and Log Probability Estimation (LPE). As a result, we show LLMs exhibit limited ability in solving NLI tasks and simultaneously fail to capture human disagreement distribution. The inference and human alignment performances plunge even further on data samples with high human disagreement levels, raising concerns about their natural language understanding (NLU) ability and their representativeness to a larger human population.", }
Large language models (LLMs) have shown impressive achievements in solving a broad range of tasks. Augmented by instruction fine-tuning, LLMs have also been shown to generalize in zero-shot settings. However, whether LLMs closely align with the human disagreement distribution has not been well-studied, especially within the scope of natural language inference (NLI). In this paper, we evaluate the performance and alignment of LLM distribution with humans using two different techniques to estimate the multinomial distribution: Monte Carlo Estimation (MCE) and Log Probability Estimation (LPE). As a result, we show LLMs exhibit limited ability in solving NLI tasks and simultaneously fail to capture human disagreement distribution. The inference and human alignment performances plunge even further on data samples with high human disagreement levels, raising concerns about their natural language understanding (NLU) ability and their representativeness to a larger human population.
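Monte Carlo Estimation is the simpler of the two estimators: sample the model repeatedly, normalize the label counts, and compare against the human vote distribution. A sketch with a stand-in sampler in place of a real LLM call:

```python
import numpy as np
from collections import Counter

LABELS = ["entailment", "neutral", "contradiction"]

def mce_distribution(sample_fn, n=50):
    """Monte Carlo Estimation of a model's label distribution: draw n
    sampled answers and normalize the counts into a multinomial."""
    counts = Counter(sample_fn() for _ in range(n))
    return np.array([counts[l] / n for l in LABELS])

def kl(p, q, eps=1e-9):
    """KL divergence as one possible alignment measure."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

# sample_fn would call the LLM with temperature sampling; here a
# stand-in mimicking a model overcommitted to one label.
rng = np.random.default_rng(0)
fake_llm = lambda: rng.choice(LABELS, p=[0.9, 0.05, 0.05])
model_dist = mce_distribution(fake_llm)
human_dist = np.array([0.55, 0.30, 0.15])  # hypothetical annotator votes
print(model_dist, kl(human_dist, model_dist))
```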
[ "Lee, Noah", "An, Na Min", "Thorne, James" ]
Can Large Language Models Capture Dissenting Human Voices?
emnlp-main.278
2305.13788
[ "https://github.com/xfactlab/emnlp2023-llm-disagreement" ]
https://huggingface.co/papers/2305.13788
2
0
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.279.bib
https://aclanthology.org/2023.emnlp-main.279/
@inproceedings{puduppully-etal-2023-decomt, title = "{D}eco{MT}: Decomposed Prompting for Machine Translation Between Related Languages using Large Language Models", author = "Puduppully, Ratish and Kunchukuttan, Anoop and Dabre, Raj and Aw, Ai Ti and Chen, Nancy", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.279", doi = "10.18653/v1/2023.emnlp-main.279", pages = "4586--4602", abstract = "This study investigates machine translation between related languages i.e., languages within the same family that share linguistic characteristics such as word order and lexical similarity. Machine translation through few-shot prompting leverages a small set of translation pair examples to generate translations for test sentences. This procedure requires the model to learn how to generate translations while simultaneously ensuring that token ordering is maintained to produce a fluent and accurate translation. We propose that for related languages, the task of machine translation can be simplified by leveraging the monotonic alignment characteristic of such languages. We introduce DecoMT, a novel approach of few-shot prompting that decomposes the translation process into a sequence of word chunk translations. Through automatic and human evaluation conducted on multiple related language pairs across various language families, we demonstrate that our proposed approach of decomposed prompting surpasses multiple established few-shot baseline approaches. For example, DecoMT outperforms the strong few-shot prompting BLOOM model with an average improvement of 8 chrF++ scores across the examined languages.", }
This study investigates machine translation between related languages, i.e., languages within the same family that share linguistic characteristics such as word order and lexical similarity. Machine translation through few-shot prompting leverages a small set of translation pair examples to generate translations for test sentences. This procedure requires the model to learn how to generate translations while simultaneously ensuring that token ordering is maintained to produce a fluent and accurate translation. We propose that for related languages, the task of machine translation can be simplified by leveraging the monotonic alignment characteristic of such languages. We introduce DecoMT, a novel approach of few-shot prompting that decomposes the translation process into a sequence of word chunk translations. Through automatic and human evaluation conducted on multiple related language pairs across various language families, we demonstrate that our proposed approach of decomposed prompting surpasses multiple established few-shot baseline approaches. For example, DecoMT outperforms the strong few-shot prompting BLOOM model with an average improvement of 8 chrF++ scores across the examined languages.
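The decomposed-prompting idea can be sketched as a monotonic chunk loop, translating each chunk conditioned on the target prefix produced so far. The `translate_chunk` callable below is a hypothetical stand-in for the paper's few-shot prompted LLM call:

```python
def decomposed_translate(src_words, translate_chunk, chunk_size=3):
    """Monotonic chunk-by-chunk translation in the spirit of DecoMT.

    For related languages, the largely monotonic word order lets chunk
    translations be generated left-to-right and concatenated.
    """
    target = []
    for i in range(0, len(src_words), chunk_size):
        chunk = " ".join(src_words[i:i + chunk_size])
        target.append(translate_chunk(chunk, " ".join(target)))
    return " ".join(target)

# Toy stand-in: "translate" a chunk by upper-casing it, ignoring the
# target prefix that a real prompted model would condition on.
demo = decomposed_translate("this is a small test sentence".split(),
                            lambda chunk, prefix: chunk.upper())
print(demo)
```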
[ "Puduppully, Ratish", "Kunchukuttan, Anoop", "Dabre, Raj", "Aw, Ai Ti", "Chen, Nancy" ]
DecoMT: Decomposed Prompting for Machine Translation Between Related Languages using Large Language Models
emnlp-main.279
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.280.bib
https://aclanthology.org/2023.emnlp-main.280/
@inproceedings{zhao-etal-2023-prototype, title = "Prototype-based {H}yper{A}dapter for Sample-Efficient Multi-task Tuning", author = "Zhao, Hao and Fu, Jie and He, Zhaofeng", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.280", doi = "10.18653/v1/2023.emnlp-main.280", pages = "4603--4615", abstract = "Parameter-efficient fine-tuning (PEFT) has shown its effectiveness in adapting the pre-trained language models to downstream tasks while only updating a small number of parameters. Despite the success, most existing methods independently adapt to each task without considering knowledge transfer between tasks and are limited to low-data regimes. To overcome this issue, we propose Prototype-based HyperAdapter (PHA), a novel framework built on the adapter-tuning and hypernetwork. It introduces an instance-dense retriever and a prototypical hypernetwork to generate the conditional modules in a sample-efficient manner. This leads to comparable performance improvements against existing PEFT methods on multi-task learning and few-shot transfer learning. More importantly, when the available data size gets smaller, our method outperforms other strong baselines by a large margin. Based on our extensive empirical experiments across various datasets, we demonstrate that PHA strikes a better trade-off between trainable parameters, accuracy on stream tasks, and sample efficiency. Our code is publicly available at https://github.com/Bumble666/PHA", }
Parameter-efficient fine-tuning (PEFT) has shown its effectiveness in adapting the pre-trained language models to downstream tasks while only updating a small number of parameters. Despite the success, most existing methods independently adapt to each task without considering knowledge transfer between tasks and are limited to low-data regimes. To overcome this issue, we propose Prototype-based HyperAdapter (PHA), a novel framework built on adapter tuning and hypernetworks. It introduces an instance-dense retriever and a prototypical hypernetwork to generate the conditional modules in a sample-efficient manner. This leads to comparable performance improvements against existing PEFT methods on multi-task learning and few-shot transfer learning. More importantly, when the available data size gets smaller, our method outperforms other strong baselines by a large margin. Based on our extensive empirical experiments across various datasets, we demonstrate that PHA strikes a better trade-off between trainable parameters, accuracy on stream tasks, and sample efficiency. Our code is publicly available at https://github.com/Bumble666/PHA
[ "Zhao, Hao", "Fu, Jie", "He, Zhaofeng" ]
Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning
emnlp-main.280
2310.11670
[ "https://github.com/bumble666/pha" ]
https://huggingface.co/papers/2310.11670
0
1
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.281.bib
https://aclanthology.org/2023.emnlp-main.281/
@inproceedings{ma-etal-2023-towards, title = "Towards Building More Robust {NER} datasets: An Empirical Study on {NER} Dataset Bias from a Dataset Difficulty View", author = "Ma, Ruotian and Wang, Xiaolei and Zhou, Xin and Zhang, Qi and Huang, Xuanjing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.281", doi = "10.18653/v1/2023.emnlp-main.281", pages = "4616--4630", abstract = "Recently, many studies have illustrated the robustness problem of Named Entity Recognition (NER) systems: the NER models often rely on superficial entity patterns for predictions, without considering evidence from the context. Consequently, even state-of-the-art NER models generalize poorly to out-of-domain scenarios when out-of-distribution (OOD) entity patterns are introduced. Previous research attributes the robustness problem to the existence of NER dataset bias, where simpler and regular entity patterns induce shortcut learning. In this work, we bring new insights into this problem by comprehensively investigating the NER dataset bias from a dataset difficulty view. We quantify the entity-context difficulty distribution in existing datasets and explain their relationship with model robustness. Based on our findings, we explore three potential ways to de-bias the NER datasets by altering entity-context distribution, and we validate the feasibility with intensive experiments. Finally, we show that the de-biased datasets can transfer to different models and even benefit existing model-based robustness-improving methods, indicating that building more robust datasets is fundamental for building more robust NER systems.", }
Recently, many studies have illustrated the robustness problem of Named Entity Recognition (NER) systems: the NER models often rely on superficial entity patterns for predictions, without considering evidence from the context. Consequently, even state-of-the-art NER models generalize poorly to out-of-domain scenarios when out-of-distribution (OOD) entity patterns are introduced. Previous research attributes the robustness problem to the existence of NER dataset bias, where simpler and regular entity patterns induce shortcut learning. In this work, we bring new insights into this problem by comprehensively investigating the NER dataset bias from a dataset difficulty view. We quantify the entity-context difficulty distribution in existing datasets and explain their relationship with model robustness. Based on our findings, we explore three potential ways to de-bias the NER datasets by altering entity-context distribution, and we validate the feasibility with intensive experiments. Finally, we show that the de-biased datasets can transfer to different models and even benefit existing model-based robustness-improving methods, indicating that building more robust datasets is fundamental for building more robust NER systems.
[ "Ma, Ruotian", "Wang, Xiaolei", "Zhou, Xin", "Zhang, Qi", "Huang, Xuanjing" ]
Towards Building More Robust NER datasets: An Empirical Study on NER Dataset Bias from a Dataset Difficulty View
emnlp-main.281
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.282.bib
https://aclanthology.org/2023.emnlp-main.282/
@inproceedings{wang-etal-2023-gradsim, title = "{G}rad{S}im: Gradient-Based Language Grouping for Effective Multilingual Training", author = {Wang, Mingyang and Adel, Heike and Lange, Lukas and Str{\"o}tgen, Jannik and Schuetze, Hinrich}, editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.282", doi = "10.18653/v1/2023.emnlp-main.282", pages = "4631--4646", abstract = "Most languages of the world pose low-resource challenges to natural language processing models. With multilingual training, knowledge can be shared among languages. However, not all languages positively influence each other and it is an open research question how to select the most suitable set of languages for multilingual training and avoid negative interference among languages whose characteristics or data distributions are not compatible. In this paper, we propose GradSim, a language grouping method based on gradient similarity. Our experiments on three diverse multilingual benchmark datasets show that it leads to the largest performance gains compared to other similarity measures and it is better correlated with cross-lingual model performance. As a result, we set the new state of the art on AfriSenti, a benchmark dataset for sentiment analysis on low-resource African languages. In our extensive analysis, we further reveal that besides linguistic features, the topics of the datasets play an important role for language grouping and that lower layers of transformer models encode language-specific features while higher layers capture task-specific information.", }
Most languages of the world pose low-resource challenges to natural language processing models. With multilingual training, knowledge can be shared among languages. However, not all languages positively influence each other and it is an open research question how to select the most suitable set of languages for multilingual training and avoid negative interference among languages whose characteristics or data distributions are not compatible. In this paper, we propose GradSim, a language grouping method based on gradient similarity. Our experiments on three diverse multilingual benchmark datasets show that it leads to the largest performance gains compared to other similarity measures and it is better correlated with cross-lingual model performance. As a result, we set the new state of the art on AfriSenti, a benchmark dataset for sentiment analysis on low-resource African languages. In our extensive analysis, we further reveal that besides linguistic features, the topics of the datasets play an important role for language grouping and that lower layers of transformer models encode language-specific features while higher layers capture task-specific information.
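Gradient similarity between languages is cheap to compute once per-language batches are available. A toy sketch with a linear model and squared loss standing in for the multilingual task model; the grouping signal is simply the cosine of the flattened gradients:

```python
import torch
import torch.nn.functional as F

def language_gradient(model, batch_x, batch_y):
    """Flattened gradient of a task loss on one language's batch."""
    model.zero_grad()
    loss = ((model(batch_x) - batch_y) ** 2).mean()  # toy squared loss
    loss.backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()])

torch.manual_seed(0)
model = torch.nn.Linear(4, 1)
# Two hypothetical "languages" = two differently distributed batches.
g_a = language_gradient(model, torch.randn(8, 4), torch.randn(8, 1))
g_b = language_gradient(model, torch.randn(8, 4) + 1.0, torch.randn(8, 1))

# GradSim-style grouping signal: languages whose gradients point the
# same way are candidates for joint multilingual training.
print(F.cosine_similarity(g_a, g_b, dim=0).item())
```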
[ "Wang, Mingyang", "Adel, Heike", "Lange, Lukas", "Str{\\\"o}tgen, Jannik", "Schuetze, Hinrich" ]
GradSim: Gradient-Based Language Grouping for Effective Multilingual Training
emnlp-main.282
2310.15269
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.283.bib
https://aclanthology.org/2023.emnlp-main.283/
@inproceedings{yamagiwa-etal-2023-discovering, title = "Discovering Universal Geometry in Embeddings with {ICA}", author = "Yamagiwa, Hiroaki and Oyama, Momose and Shimodaira, Hidetoshi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.283", doi = "10.18653/v1/2023.emnlp-main.283", pages = "4647--4675", abstract = "This study utilizes Independent Component Analysis (ICA) to unveil a consistent semantic structure within embeddings of words or images. Our approach extracts independent semantic components from the embeddings of a pre-trained model by leveraging anisotropic information that remains after the whitening process in Principal Component Analysis (PCA). We demonstrate that each embedding can be expressed as a composition of a few intrinsic interpretable axes and that these semantic axes remain consistent across different languages, algorithms, and modalities. The discovery of a universal semantic structure in the geometric patterns of embeddings enhances our understanding of the representations in embeddings.", }
This study utilizes Independent Component Analysis (ICA) to unveil a consistent semantic structure within embeddings of words or images. Our approach extracts independent semantic components from the embeddings of a pre-trained model by leveraging anisotropic information that remains after the whitening process in Principal Component Analysis (PCA). We demonstrate that each embedding can be expressed as a composition of a few intrinsic interpretable axes and that these semantic axes remain consistent across different languages, algorithms, and modalities. The discovery of a universal semantic structure in the geometric patterns of embeddings enhances our understanding of the representations in embeddings.
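The pipeline described, PCA whitening followed by ICA, is available off the shelf. A sketch with scikit-learn's FastICA on a random stand-in embedding matrix (real inputs would be pre-trained word or image embeddings, and the axis inspection at the end assumes a vocabulary indexed the same way as the rows):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Stand-in embedding matrix: 1000 "words" x 300 dims (e.g. GloVe-like).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))

# Whitening happens inside FastICA; the independent components it
# returns play the role of the interpretable semantic axes.
ica = FastICA(n_components=50, whiten="unit-variance", random_state=0)
S = ica.fit_transform(X)  # (1000, 50): word coordinates on ICA axes

# Words loading most heavily on one axis should share a semantic theme.
axis = 0
top_words = np.argsort(-S[:, axis])[:10]  # indices into the vocabulary
print(top_words)
```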
[ "Yamagiwa, Hiroaki", "Oyama, Momose", "Shimodaira, Hidetoshi" ]
Discovering Universal Geometry in Embeddings with ICA
emnlp-main.283
2305.13175
[ "https://github.com/shimo-lab/universal-geometry-with-ica" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.284.bib
https://aclanthology.org/2023.emnlp-main.284/
@inproceedings{brunila-etal-2023-toward, title = "Toward a Critical Toponymy Framework for Named Entity Recognition: A Case Study of Airbnb in {N}ew {Y}ork City", author = "Brunila, Mikael and LaViolette, Jack and CH-Wang, Sky and Verma, Priyanka and F{\'e}r{\'e}, Clara and McKenzie, Grant", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.284", doi = "10.18653/v1/2023.emnlp-main.284", pages = "4676--4695", abstract = "Critical toponymy examines the dynamics of power, capital, and resistance through place names and the sites to which they refer. Studies here have traditionally focused on the semantic content of toponyms and the top-down institutional processes that produce them. However, they have generally ignored the ways in which toponyms are used by ordinary people in everyday discourse, as well as the other strategies of geospatial description that accompany and contextualize toponymic reference. Here, we develop computational methods to measure how cultural and economic capital shape the ways in which people refer to places, through a novel annotated dataset of 47,440 New York City Airbnb listings from the 2010s. Building on this dataset, we introduce a new named entity recognition (NER) model able to identify important discourse categories integral to the characterization of place. Our findings point toward new directions for critical toponymy and to a range of previously understudied linguistic signals relevant to research on neighborhood status, housing and tourism markets, and gentrification.", }
Critical toponymy examines the dynamics of power, capital, and resistance through place names and the sites to which they refer. Studies here have traditionally focused on the semantic content of toponyms and the top-down institutional processes that produce them. However, they have generally ignored the ways in which toponyms are used by ordinary people in everyday discourse, as well as the other strategies of geospatial description that accompany and contextualize toponymic reference. Here, we develop computational methods to measure how cultural and economic capital shape the ways in which people refer to places, through a novel annotated dataset of 47,440 New York City Airbnb listings from the 2010s. Building on this dataset, we introduce a new named entity recognition (NER) model able to identify important discourse categories integral to the characterization of place. Our findings point toward new directions for critical toponymy and to a range of previously understudied linguistic signals relevant to research on neighborhood status, housing and tourism markets, and gentrification.
[ "Brunila, Mikael", "LaViolette, Jack", "CH-Wang, Sky", "Verma, Priyanka", "F{\\'e}r{\\'e}, Clara", "McKenzie, Grant" ]
Toward a Critical Toponymy Framework for Named Entity Recognition: A Case Study of Airbnb in New York City
emnlp-main.284
2310.15302
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.285.bib
https://aclanthology.org/2023.emnlp-main.285/
@inproceedings{qin-etal-2023-well, title = "Well Begun is Half Done: Generator-agnostic Knowledge Pre-Selection for Knowledge-Grounded Dialogue", author = "Qin, Lang and Zhang, Yao and Liang, Hongru and Wang, Jun and Yang, Zhenglu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.285", doi = "10.18653/v1/2023.emnlp-main.285", pages = "4696--4709", abstract = "Accurate knowledge selection is critical in knowledge-grounded dialogue systems. Towards a closer look at it, we offer a novel perspective to organize existing literature, i.e., knowledge selection coupled with, after, and before generation. We focus on the third under-explored category of study, which can not only select knowledge accurately in advance, but has the advantage to reduce the learning, adjustment, and interpretation burden of subsequent response generation models, especially LLMs. We propose $\tt{GATE}$, a generator-agnostic knowledge selection method, to prepare knowledge for subsequent response generation models by selecting context-related knowledge among different knowledge structures and variable knowledge requirements. Experimental results demonstrate the superiority of $\tt{GATE}$, and indicate that knowledge selection before generation is a lightweight yet effective way to facilitate LLMs (e.g., ChatGPT) to generate more informative responses.", }
Accurate knowledge selection is critical in knowledge-grounded dialogue systems. To take a closer look at it, we offer a novel perspective for organizing the existing literature, i.e., knowledge selection coupled with, after, and before generation. We focus on the third, under-explored category of study, which can not only select knowledge accurately in advance, but also reduce the learning, adjustment, and interpretation burden of subsequent response generation models, especially LLMs. We propose $\tt{GATE}$, a generator-agnostic knowledge selection method, to prepare knowledge for subsequent response generation models by selecting context-related knowledge among different knowledge structures and variable knowledge requirements. Experimental results demonstrate the superiority of $\tt{GATE}$, and indicate that knowledge selection before generation is a lightweight yet effective way to facilitate LLMs (e.g., ChatGPT) to generate more informative responses.
[ "Qin, Lang", "Zhang, Yao", "Liang, Hongru", "Wang, Jun", "Yang, Zhenglu" ]
Well Begun is Half Done: Generator-agnostic Knowledge Pre-Selection for Knowledge-Grounded Dialogue
emnlp-main.285
2310.07659
[ "https://github.com/qinlang14/gate" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
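Knowledge pre-selection of the kind GATE performs can be sketched as a scoring-and-filtering step placed in front of any generator. A minimal sketch, assuming embedding cosine similarity as the selector; this is a stand-in, not the paper's actual selection model:

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

context = ("User: Who directed Inception? "
           "Bot: Christopher Nolan. User: What else has he made?")
knowledge = [
    "Christopher Nolan also directed Interstellar (2014) and Dunkirk (2017).",
    "Inception grossed over $800 million worldwide.",
    "Paris is the capital of France.",
]

# Score every snippet against the dialogue context; keep only the top-k.
scores = util.cos_sim(encoder.encode(context), encoder.encode(knowledge))[0].tolist()
ranked = sorted(zip(knowledge, scores), key=lambda p: -p[1])
selected = [k for k, _ in ranked[:2]]
print(selected)  # hand only these to the downstream response generator
```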
https://aclanthology.org/2023.emnlp-main.286.bib
https://aclanthology.org/2023.emnlp-main.286/
@inproceedings{zhang-etal-2023-merging, title = "Merging Generated and Retrieved Knowledge for Open-Domain {QA}", author = "Zhang, Yunxiang and Khalifa, Muhammad and Logeswaran, Lajanugen and Lee, Moontae and Lee, Honglak and Wang, Lu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.286", doi = "10.18653/v1/2023.emnlp-main.286", pages = "4710--4728", abstract = "Open-domain question answering (QA) systems are often built with retrieval modules. However, retrieving passages from a given source is known to suffer from insufficient knowledge coverage. Alternatively, prompting large language models (LLMs) to generate contextual passages based on their parametric knowledge has been shown to improve QA performance. Yet, LLMs tend to {``}hallucinate{''} content that conflicts with the retrieved knowledge. Based on the intuition that answers supported by both sources are more likely to be correct, we propose COMBO, a Compatibility-Oriented knowledge Merging for Better Open-domain QA framework, to effectively leverage the two sources of information. Concretely, we match LLM-generated passages with retrieved counterparts into compatible pairs, based on discriminators trained with silver compatibility labels. Then a Fusion-in-Decoder-based reader model handles passage pairs to arrive at the final answer. Experiments show that COMBO outperforms competitive baselines on three out of four tested open-domain QA benchmarks. Further analysis reveals that our proposed framework demonstrates greater efficacy in scenarios with a higher degree of knowledge conflicts.", }
Open-domain question answering (QA) systems are often built with retrieval modules. However, retrieving passages from a given source is known to suffer from insufficient knowledge coverage. Alternatively, prompting large language models (LLMs) to generate contextual passages based on their parametric knowledge has been shown to improve QA performance. Yet, LLMs tend to {``}hallucinate{''} content that conflicts with the retrieved knowledge. Based on the intuition that answers supported by both sources are more likely to be correct, we propose COMBO, a Compatibility-Oriented knowledge Merging for Better Open-domain QA framework, to effectively leverage the two sources of information. Concretely, we match LLM-generated passages with retrieved counterparts into compatible pairs, based on discriminators trained with silver compatibility labels. Then a Fusion-in-Decoder-based reader model handles passage pairs to arrive at the final answer. Experiments show that COMBO outperforms competitive baselines on three out of four tested open-domain QA benchmarks. Further analysis reveals that our proposed framework demonstrates greater efficacy in scenarios with a higher degree of knowledge conflicts.
[ "Zhang, Yunxiang", "Khalifa, Muhammad", "Logeswaran, Lajanugen", "Lee, Moontae", "Lee, Honglak", "Wang, Lu" ]
Merging Generated and Retrieved Knowledge for Open-Domain QA
emnlp-main.286
2310.14393
[ "https://github.com/yunx-z/combo" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
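The compatibility-pairing step at the heart of COMBO can be sketched as follows. Plain cosine similarity below is only a stand-in scorer for the paper's discriminators trained on silver compatibility labels:

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

generated = ["The Eiffel Tower was completed in 1889 for the World's Fair."]
retrieved = [
    "Construction of the Eiffel Tower finished in March 1889.",
    "The Louvre is the world's most-visited art museum.",
]

# Pair each LLM-generated passage with its most compatible retrieved passage.
sims = util.cos_sim(encoder.encode(generated), encoder.encode(retrieved))
pairs = [(g, retrieved[int(sims[i].argmax())]) for i, g in enumerate(generated)]
print(pairs)  # compatible (generated, retrieved) pairs are what the FiD reader consumes
```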
https://aclanthology.org/2023.emnlp-main.287.bib
https://aclanthology.org/2023.emnlp-main.287/
@inproceedings{kannen-etal-2023-best, title = "Best of Both Worlds: Towards Improving Temporal Knowledge Base Question Answering via Targeted Fact Extraction", author = "Kannen, Nithish and Sharma, Udit and Neelam, Sumit and Khandelwal, Dinesh and Ikbal, Shajith and Karanam, Hima and Subramaniam, L", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.287", doi = "10.18653/v1/2023.emnlp-main.287", pages = "4729--4744", abstract = "Temporal question answering (QA) is a special category of complex question answering task that requires reasoning over facts asserting time intervals of events. Previous works have predominately relied on Knowledge Base Question Answering (KBQA) for temporal QA. One of the major challenges faced by these systems is their inability to retrieve all relevant facts due to factors such as incomplete KB and entity/relation linking errors. A failure to fetch even a single fact will block KBQA from computing the answer. Such cases of KB incompleteness are even more profound in the temporal context. To address this issue, we explore an interesting direction where a targeted temporal fact extraction technique is used to assist KBQA whenever it fails to retrieve temporal facts from the KB. We model the extraction problem as an open-domain question answering task using off-the-shelf language models. This way, we target to extract from textual resources those facts that failed to get retrieved from the KB. Experimental results on two temporal QA benchmarks show promising {\textasciitilde}30{\%} {\&} {\textasciitilde}10{\%} relative improvements in answer accuracies without any additional training cost.", }
Temporal question answering (QA) is a special category of complex question answering tasks that requires reasoning over facts asserting time intervals of events. Previous works have predominantly relied on Knowledge Base Question Answering (KBQA) for temporal QA. One of the major challenges faced by these systems is their inability to retrieve all relevant facts due to factors such as an incomplete KB and entity/relation linking errors. A failure to fetch even a single fact will block KBQA from computing the answer. Such cases of KB incompleteness are even more pronounced in the temporal context. To address this issue, we explore an interesting direction where a targeted temporal fact extraction technique is used to assist KBQA whenever it fails to retrieve temporal facts from the KB. We model the extraction problem as an open-domain question answering task using off-the-shelf language models. This way, we aim to extract from textual resources those facts that failed to be retrieved from the KB. Experimental results on two temporal QA benchmarks show promising {\textasciitilde}30{\%} {\&} {\textasciitilde}10{\%} relative improvements in answer accuracies without any additional training cost.
[ "Kannen, Nithish", "Sharma, Udit", "Neelam, Sumit", "Kh", "elwal, Dinesh", "Ikbal, Shajith", "Karanam, Hima", "Subramaniam, L" ]
Best of Both Worlds: Towards Improving Temporal Knowledge Base Question Answering via Targeted Fact Extraction
emnlp-main.287
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
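The fallback logic the abstract describes is straightforward to sketch: answer from the KB when the temporal fact is retrievable, and otherwise extract it from text with an off-the-shelf reader. `kb_lookup` is a hypothetical helper, not part of the paper's code:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def kb_lookup(subject: str, relation: str):
    """Hypothetical KB accessor; returning None simulates KB incompleteness."""
    return None

def answer_temporal(question: str, subject: str, relation: str, passage: str):
    fact = kb_lookup(subject, relation)
    if fact is not None:
        return fact  # normal KBQA path
    # Fallback: targeted fact extraction from text via an off-the-shelf reader.
    return qa(question=question, context=passage)["answer"]

print(answer_temporal(
    "When did Barack Obama's presidency begin?",
    "Barack Obama", "start_of_term",
    "Barack Obama served as the 44th U.S. president from 2009 to 2017."))
```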
https://aclanthology.org/2023.emnlp-main.288.bib
https://aclanthology.org/2023.emnlp-main.288/
@inproceedings{balepur-etal-2023-text, title = "Text Fact Transfer", author = "Balepur, Nishant and Huang, Jie and Chang, Kevin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.288", doi = "10.18653/v1/2023.emnlp-main.288", pages = "4745--4764", abstract = "Text style transfer is a prominent task that aims to control the style of text without inherently changing its factual content. To cover more text modification applications, such as adapting past news for current events and repurposing educational materials, we propose the task of text fact transfer, which seeks to transfer the factual content of a source text between topics without modifying its style. We find that existing language models struggle with text fact transfer, due to their inability to preserve the specificity and phrasing of the source text, and tendency to hallucinate errors. To address these issues, we design ModQGA, a framework that minimally modifies a source text with a novel combination of end-to-end question generation and specificity-aware question answering. Through experiments on four existing datasets adapted for text fact transfer, we show that ModQGA can accurately transfer factual content without sacrificing the style of the source text.", }
Text style transfer is a prominent task that aims to control the style of text without inherently changing its factual content. To cover more text modification applications, such as adapting past news for current events and repurposing educational materials, we propose the task of text fact transfer, which seeks to transfer the factual content of a source text between topics without modifying its style. We find that existing language models struggle with text fact transfer, due to their inability to preserve the specificity and phrasing of the source text, and tendency to hallucinate errors. To address these issues, we design ModQGA, a framework that minimally modifies a source text with a novel combination of end-to-end question generation and specificity-aware question answering. Through experiments on four existing datasets adapted for text fact transfer, we show that ModQGA can accurately transfer factual content without sacrificing the style of the source text.
[ "Balepur, Nishant", "Huang, Jie", "Chang, Kevin" ]
Text Fact Transfer
emnlp-main.288
2310.14486
[ "https://github.com/nbalepur/text-fact-transfer" ]
https://huggingface.co/papers/2310.14486
0
0
0
3
[]
[]
[]
1
Poster
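The question-generation-plus-QA pipeline behind text fact transfer can be compressed into a toy sketch. The question here is hand-written rather than generated, and the splicing step is plain string replacement, so this only illustrates the data flow, not the ModQGA models themselves:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

source = "Apple reported revenue of $90.1 billion for the second quarter of 2022."
new_topic_text = ("Microsoft's revenue for the second quarter of 2022 "
                  "came in at $51.7 billion.")

# QG step collapsed to a hand-written question; the paper learns it end-to-end.
new_fact = qa(question="What was the revenue for the second quarter of 2022?",
              context=new_topic_text)["answer"]

# Splice the new topic's fact into the source sentence, preserving its style.
transferred = source.replace("$90.1 billion", new_fact).replace("Apple", "Microsoft")
print(transferred)
```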
https://aclanthology.org/2023.emnlp-main.289.bib
https://aclanthology.org/2023.emnlp-main.289/
@inproceedings{chen-etal-2023-cheaper, title = "A Cheaper and Better Diffusion Language Model with Soft-Masked Noise", author = "Chen, Jiaao and Zhang, Aston and Li, Mu and Smola, Alex and Yang, Diyi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.289", doi = "10.18653/v1/2023.emnlp-main.289", pages = "4765--4775", abstract = "Diffusion models that are based on iterative denoising have been recently proposed and leveraged in various generation tasks like image generation. Whereas, as a way inherently built for continuous data, existing diffusion models still have some limitations in modeling discrete data, e.g., languages. For example, the generally used Gaussian noise can not handle the discrete corruption well, and the objectives in continuous spaces fail to be stable for textual data in the diffusion process especially when the dimension is high. To alleviate these issues, we introduce a novel diffusion model for language modeling, Masked-Diffuse LM, with lower training cost and better performances, inspired by linguistic features in languages. Specifically, we design a linguistic-informed forward process which adds corruptions to the text through strategically soft-masking to better noise the textual data. Also, we directly predict the categorical distribution with cross-entropy loss function in every diffusion step to connect the continuous space and discrete space in a more efficient and straightforward way. Through experiments on 5 controlled generation tasks, we demonstrate that our Masked-Diffuse LM can achieve better generation quality than the state-of-the-art diffusion models with better efficiency.", }
Diffusion models based on iterative denoising have recently been proposed and leveraged in various generation tasks such as image generation. However, as an approach inherently built for continuous data, existing diffusion models still have some limitations in modeling discrete data, e.g., languages. For example, the commonly used Gaussian noise cannot handle discrete corruption well, and the objectives in continuous spaces fail to be stable for textual data in the diffusion process, especially when the dimension is high. To alleviate these issues, we introduce a novel diffusion model for language modeling, Masked-Diffuse LM, with lower training cost and better performance, inspired by linguistic features in languages. Specifically, we design a linguistic-informed forward process which adds corruptions to the text through strategic soft-masking to better noise the textual data. Also, we directly predict the categorical distribution with a cross-entropy loss at every diffusion step to connect the continuous and discrete spaces in a more efficient and straightforward way. Through experiments on 5 controlled generation tasks, we demonstrate that our Masked-Diffuse LM can achieve better generation quality than state-of-the-art diffusion models with better efficiency.
[ "Chen, Jiaao", "Zhang, Aston", "Li, Mu", "Smola, Alex", "Yang, Diyi" ]
A Cheaper and Better Diffusion Language Model with Soft-Masked Noise
emnlp-main.289
2304.04746
[ "https://github.com/amazon-science/masked-diffusion-lm" ]
https://huggingface.co/papers/2304.04746
0
0
0
5
[]
[]
[]
1
Poster
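The soft-masked forward process can be illustrated with a toy schedule in which more informative tokens are corrupted earlier. The importance weights below are invented for the example; the paper derives its schedule from linguistic features such as word relevance:

```python
import torch

tokens = torch.tensor([[101, 2054, 2003, 1996, 3007, 102]])   # toy token ids
importance = torch.tensor([[0.0, 0.9, 0.1, 0.1, 0.8, 0.0]])    # salient words first
MASK_ID = 103

def soft_mask(tokens, importance, t, T=10):
    """Mask token i once t/T exceeds 1 - importance_i (important => corrupted earlier)."""
    corrupted = tokens.clone()
    corrupted[importance > 1 - t / T] = MASK_ID
    return corrupted

# Corruption grows over diffusion steps, hitting salient tokens first.
for t in (2, 5, 9):
    print(t, soft_mask(tokens, importance, t).tolist())
```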
https://aclanthology.org/2023.emnlp-main.290.bib
https://aclanthology.org/2023.emnlp-main.290/
@inproceedings{abercrombie-etal-2023-mirages, title = "Mirages. On Anthropomorphism in Dialogue Systems", author = "Abercrombie, Gavin and Cercas Curry, Amanda and Dinkar, Tanvi and Rieser, Verena and Talat, Zeerak", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.290", doi = "10.18653/v1/2023.emnlp-main.290", pages = "4776--4790", abstract = "Automated dialogue or conversational systems are anthropomorphised by developers and personified by users. While a degree of anthropomorphism is inevitable, conscious and unconscious design choices can guide users to personify them to varying degrees. Encouraging users to relate to automated systems as if they were human can lead to transparency and trust issues, and high risk scenarios caused by over-reliance on their outputs. As a result, natural language processing researchers have investigated the factors that induce personification and develop resources to mitigate such effects. However, these efforts are fragmented, and many aspects of anthropomorphism have yet to be explored. In this paper, we discuss the linguistic factors that contribute to the anthropomorphism of dialogue systems and the harms that can arise thereof, including reinforcing gender stereotypes and conceptions of acceptable language. We recommend that future efforts towards developing dialogue systems take particular care in their design, development, release, and description; and attend to the many linguistic cues that can elicit personification by users.", }
Automated dialogue or conversational systems are anthropomorphised by developers and personified by users. While a degree of anthropomorphism is inevitable, conscious and unconscious design choices can guide users to personify them to varying degrees. Encouraging users to relate to automated systems as if they were human can lead to transparency and trust issues, and to high-risk scenarios caused by over-reliance on their outputs. As a result, natural language processing researchers have investigated the factors that induce personification and developed resources to mitigate such effects. However, these efforts are fragmented, and many aspects of anthropomorphism have yet to be explored. In this paper, we discuss the linguistic factors that contribute to the anthropomorphism of dialogue systems and the harms that can arise therefrom, including reinforcing gender stereotypes and conceptions of acceptable language. We recommend that future efforts towards developing dialogue systems take particular care in their design, development, release, and description; and attend to the many linguistic cues that can elicit personification by users.
[ "Abercrombie, Gavin", "Cercas Curry, Am", "a", "Dinkar, Tanvi", "Rieser, Verena", "Talat, Zeerak" ]
Mirages. On Anthropomorphism in Dialogue Systems
emnlp-main.290
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.291.bib
https://aclanthology.org/2023.emnlp-main.291/
@inproceedings{liu-etal-2023-cognitive, title = "Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness?", author = "Liu, Kevin and Casper, Stephen and Hadfield-Menell, Dylan and Andreas, Jacob", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.291", doi = "10.18653/v1/2023.emnlp-main.291", pages = "4791--4797", abstract = "Neural language models (LMs) can be used to evaluate the truth of factual statements in two ways: they can be either queried for statement probabilities, or probed for internal representations of truthfulness. Past work has found that these two procedures sometimes disagree, and that probes tend to be more accurate than LM outputs. This has led some researchers to conclude that LMs {``}lie{'} or otherwise encode non-cooperative communicative intents. Is this an accurate description of today{'}s LMs, or can query{--}probe disagreement arise in other ways? We identify three different classes of disagreement, which we term confabulation, deception, and heterogeneity. In many cases, the superiority of probes is simply attributable to better calibration on uncertain answers rather than a greater fraction of correct, high-confidence answers. In some cases, queries and probes perform better on different subsets of inputs, and accuracy can further be improved by ensembling the two.", }
Neural language models (LMs) can be used to evaluate the truth of factual statements in two ways: they can be either queried for statement probabilities, or probed for internal representations of truthfulness. Past work has found that these two procedures sometimes disagree, and that probes tend to be more accurate than LM outputs. This has led some researchers to conclude that LMs {``}lie{''} or otherwise encode non-cooperative communicative intents. Is this an accurate description of today{'}s LMs, or can query{--}probe disagreement arise in other ways? We identify three different classes of disagreement, which we term confabulation, deception, and heterogeneity. In many cases, the superiority of probes is simply attributable to better calibration on uncertain answers rather than a greater fraction of correct, high-confidence answers. In some cases, queries and probes perform better on different subsets of inputs, and accuracy can further be improved by ensembling the two.
[ "Liu, Kevin", "Casper, Stephen", "Hadfield-Menell, Dylan", "Andreas, Jacob" ]
Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness?
emnlp-main.291
2312.03729
[ "https://github.com/lingo-mit/lm-truthfulness" ]
https://huggingface.co/papers/2312.03729
1
0
0
4
[]
[]
[]
1
Poster
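The two readouts being contrasted, querying versus probing, can be sketched concretely. The probe weights below are random placeholders (a real probe is fit on labeled statements), so only the mechanics of the two procedures are shown:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Q: Is the following statement true or false? Paris is in France. A:"
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    out = lm(ids, output_hidden_states=True)

# (1) Query: compare next-token logits for " true" vs. " false".
logits = out.logits[0, -1]
p_true = logits[tok(" true").input_ids[0]]
p_false = logits[tok(" false").input_ids[0]]
print("query says true:", bool(p_true > p_false))

# (2) Probe: score a hidden state with a (placeholder) linear probe.
h = out.hidden_states[-1][0, -1]
probe_w = torch.randn_like(h)  # stands in for trained probe weights
print("probe score:", torch.sigmoid(h @ probe_w).item())
```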
https://aclanthology.org/2023.emnlp-main.292.bib
https://aclanthology.org/2023.emnlp-main.292/
@inproceedings{koo-etal-2023-kebap, title = "{KEBAP}: {K}orean Error Explainable Benchmark Dataset for {ASR} and Post-processing", author = "Koo, Seonmin and Park, Chanjun and Kim, Jinsung and Seo, Jaehyung and Eo, Sugyeong and Moon, Hyeonseok and Lim, Heuiseok", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.292", doi = "10.18653/v1/2023.emnlp-main.292", pages = "4798--4815", abstract = "Automatic Speech Recognition (ASR) systems are instrumental across various applications, with their performance being critically tied to user satisfaction. Conventional evaluation metrics for ASR systems produce a singular aggregate score, which is insufficient for understanding specific system vulnerabilities. Therefore, we aim to address the limitations of the previous ASR evaluation methods by introducing the Korean Error Explainable Benchmark Dataset for ASR and Post-processing (KEBAP). KEBAP enables comprehensive analysis of ASR systems at both speech- and text levels, thereby facilitating a more balanced assessment encompassing speech recognition accuracy and user readability. KEBAP provides 37 newly defined speech-level resources incorporating diverse noise environments and speaker characteristics categories, also presenting 13 distinct text-level error types. This paper demonstrates detailed statistical analyses of colloquial noise categories and textual error types. Furthermore, we conduct extensive validation and analysis on commercially deployed ASR systems, providing valuable insights into their performance. As a more fine-grained and real-world-centric evaluation method, KEBAP contributes to identifying and mitigating potential weaknesses in ASR systems.", }
Automatic Speech Recognition (ASR) systems are instrumental across various applications, with their performance being critically tied to user satisfaction. Conventional evaluation metrics for ASR systems produce a singular aggregate score, which is insufficient for understanding specific system vulnerabilities. Therefore, we aim to address the limitations of the previous ASR evaluation methods by introducing the Korean Error Explainable Benchmark Dataset for ASR and Post-processing (KEBAP). KEBAP enables comprehensive analysis of ASR systems at both speech- and text levels, thereby facilitating a more balanced assessment encompassing speech recognition accuracy and user readability. KEBAP provides 37 newly defined speech-level resources incorporating diverse noise environments and speaker characteristics categories, also presenting 13 distinct text-level error types. This paper demonstrates detailed statistical analyses of colloquial noise categories and textual error types. Furthermore, we conduct extensive validation and analysis on commercially deployed ASR systems, providing valuable insights into their performance. As a more fine-grained and real-world-centric evaluation method, KEBAP contributes to identifying and mitigating potential weaknesses in ASR systems.
[ "Koo, Seonmin", "Park, Chanjun", "Kim, Jinsung", "Seo, Jaehyung", "Eo, Sugyeong", "Moon, Hyeonseok", "Lim, Heuiseok" ]
KEBAP: Korean Error Explainable Benchmark Dataset for ASR and Post-processing
emnlp-main.292
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.293.bib
https://aclanthology.org/2023.emnlp-main.293/
@inproceedings{zhao-etal-2023-adaptive, title = "Adaptive Policy with Wait-k Model for Simultaneous Translation", author = "Zhao, Libo and Fan, Kai and Luo, Wei and Jing, Wu and Wang, Shushu and Zeng, Ziqian and Huang, Zhongqiang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.293", doi = "10.18653/v1/2023.emnlp-main.293", pages = "4816--4832", abstract = "Simultaneous machine translation (SiMT) requires a robust read/write policy in conjunction with a high-quality translation model. Traditional methods rely on either a fixed wait-k policy coupled with a standalone wait-k translation model, or an adaptive policy jointly trained with the translation model. In this study, we propose a more flexible approach by decoupling the adaptive policy model from the translation model. Our motivation stems from the observation that a standalone multi-path wait-k model performs competitively with adaptive policies utilized in state-of-the-art SiMT approaches. Specifically, we introduce DaP, a divergence-based adaptive policy, that makes read/write decisions for any translation model based on the potential divergence in translation distributions resulting from future information. DaP extends a frozen wait-k model with lightweight parameters, and is both memory and computation efficient. Experimental results across various benchmarks demonstrate that our approach offers an improved trade-off between translation accuracy and latency, outperforming strong baselines.", }
Simultaneous machine translation (SiMT) requires a robust read/write policy in conjunction with a high-quality translation model. Traditional methods rely on either a fixed wait-k policy coupled with a standalone wait-k translation model, or an adaptive policy jointly trained with the translation model. In this study, we propose a more flexible approach by decoupling the adaptive policy model from the translation model. Our motivation stems from the observation that a standalone multi-path wait-k model performs competitively with adaptive policies utilized in state-of-the-art SiMT approaches. Specifically, we introduce DaP, a divergence-based adaptive policy that makes read/write decisions for any translation model based on the potential divergence in translation distributions resulting from future information. DaP extends a frozen wait-k model with lightweight parameters, and is both memory- and computation-efficient. Experimental results across various benchmarks demonstrate that our approach offers an improved trade-off between translation accuracy and latency, outperforming strong baselines.
[ "Zhao, Libo", "Fan, Kai", "Luo, Wei", "Jing, Wu", "Wang, Shushu", "Zeng, Ziqian", "Huang, Zhongqiang" ]
Adaptive Policy with Wait-k Model for Simultaneous Translation
emnlp-main.293
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
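Since DaP is built on top of a wait-k backbone, the fixed wait-k read/write schedule it generalizes is worth seeing in miniature. A minimal sketch, assuming source and target have equal length and using a stub in place of a real prefix-to-prefix translation model:

```python
def wait_k_schedule(src_tokens, k, translate_prefix):
    """Fixed wait-k: READ until k tokens ahead of the output, otherwise WRITE."""
    read, output = 0, []
    while len(output) < len(src_tokens):          # toy: assume equal lengths
        if read < min(k + len(output), len(src_tokens)):
            read += 1                              # READ action
        else:                                      # WRITE action
            output.append(translate_prefix(src_tokens[:read], len(output)))
    return output

# Hypothetical translation hook: any prefix-to-prefix model fits here.
stub = lambda prefix, i: f"<tgt{i}|src={len(prefix)}>"
print(wait_k_schedule("wir haben das Problem gelöst".split(), k=2,
                      translate_prefix=stub))
```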
https://aclanthology.org/2023.emnlp-main.294.bib
https://aclanthology.org/2023.emnlp-main.294/
@inproceedings{chen-etal-2023-cross, title = "Cross-Document Event Coreference Resolution on Discourse Structure", author = "Chen, Xinyu and Xu, Sheng and Li, Peifeng and Zhu, Qiaoming", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.294", doi = "10.18653/v1/2023.emnlp-main.294", pages = "4833--4843", abstract = "Cross-document event coreference resolution (CD-ECR) is a task of clustering event mentions across multiple documents that refer to the same real-world events. Previous studies usually model the CD-ECR task as a pairwise similarity comparison problem by using different event mention features, and consider the highly similar event mention pairs in the same cluster as coreferent. In general, most of them only consider the local context of event mentions and ignore their implicit global information, thus failing to capture the interactions of long-distance event mentions. To address the above issue, we regard discourse structure as global information to further improve CD-ECR. First, we use a discourse rhetorical structure constructor to construct tree structures to represent documents. Then, we obtain shortest dependency paths from the tree structures to represent interactions between event mention pairs. Finally, we feed the above information to a multi-layer perceptron to capture the similarities of event mention pairs for resolving coreferent events. Experimental results on the ECB+ dataset show that our proposed model outperforms several baselines and achieves the competitive performance with the start-of-the-art baselines.", }
Cross-document event coreference resolution (CD-ECR) is a task of clustering event mentions across multiple documents that refer to the same real-world events. Previous studies usually model the CD-ECR task as a pairwise similarity comparison problem by using different event mention features, and consider the highly similar event mention pairs in the same cluster as coreferent. In general, most of them only consider the local context of event mentions and ignore their implicit global information, thus failing to capture the interactions of long-distance event mentions. To address the above issue, we regard discourse structure as global information to further improve CD-ECR. First, we use a discourse rhetorical structure constructor to construct tree structures to represent documents. Then, we obtain shortest dependency paths from the tree structures to represent interactions between event mention pairs. Finally, we feed the above information to a multi-layer perceptron to capture the similarities of event mention pairs for resolving coreferent events. Experimental results on the ECB+ dataset show that our proposed model outperforms several baselines and achieves competitive performance with state-of-the-art baselines.
[ "Chen, Xinyu", "Xu, Sheng", "Li, Peifeng", "Zhu, Qiaoming" ]
Cross-Document Event Coreference Resolution on Discourse Structure
emnlp-main.294
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
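The shortest-path feature over the discourse tree can be sketched with a toy graph. The tree below is hand-made; the paper constructs it with a discourse rhetorical structure parser:

```python
import networkx as nx

# Toy discourse tree: internal spans connect elementary discourse units (EDUs).
tree = nx.Graph()
tree.add_edges_from([("root", "edu1"), ("root", "span1"),
                     ("span1", "edu2"), ("span1", "edu3")])

# Two event mentions live in edu1 and edu3; their path is a global feature.
path = nx.shortest_path(tree, "edu1", "edu3")
print(path)            # ['edu1', 'root', 'span1', 'edu3']
print(len(path) - 1)   # path length: one input feature to the MLP scorer
```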
https://aclanthology.org/2023.emnlp-main.295.bib
https://aclanthology.org/2023.emnlp-main.295/
@inproceedings{jang-etal-2023-post, title = "Post-hoc Utterance Refining Method by Entity Mining for Faithful Knowledge Grounded Conversations", author = "Jang, Yoonna and Son, Suhyune and Lee, Jeongwoo and Son, Junyoung and Hur, Yuna and Lim, Jungwoo and Moon, Hyeonseok and Yang, Kisu and Lim, Heuiseok", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.295", doi = "10.18653/v1/2023.emnlp-main.295", pages = "4844--4861", abstract = "Despite the striking advances in recent language generation performance, model-generated responses have suffered from the chronic problem of hallucinations that are either untrue or unfaithful to a given source. Especially in the task of knowledge grounded conversation, the models are required to generate informative responses, but hallucinated utterances lead to miscommunication. In particular, entity-level hallucination that causes critical misinformation and undesirable conversation is one of the major concerns. To address this issue, we propose a post-hoc refinement method called REM. It aims to enhance the quality and faithfulness of hallucinated utterances by refining them based on the source knowledge. If the generated utterance has a low source-faithfulness score with the given knowledge, REM mines the key entities in the knowledge and implicitly uses them for refining the utterances. We verify that our method reduces entity hallucination in the utterance. Also, we show the adaptability and efficacy of REM with extensive experiments and generative results. Our code is available at https://github.com/YOONNAJANG/REM.", }
Despite the striking advances in recent language generation performance, model-generated responses have suffered from the chronic problem of hallucinations that are either untrue or unfaithful to a given source. Especially in the task of knowledge grounded conversation, the models are required to generate informative responses, but hallucinated utterances lead to miscommunication. In particular, entity-level hallucination that causes critical misinformation and undesirable conversation is one of the major concerns. To address this issue, we propose a post-hoc refinement method called REM. It aims to enhance the quality and faithfulness of hallucinated utterances by refining them based on the source knowledge. If the generated utterance has a low source-faithfulness score with the given knowledge, REM mines the key entities in the knowledge and implicitly uses them for refining the utterances. We verify that our method reduces entity hallucination in the utterance. Also, we show the adaptability and efficacy of REM with extensive experiments and generative results. Our code is available at https://github.com/YOONNAJANG/REM.
[ "Jang, Yoonna", "Son, Suhyune", "Lee, Jeongwoo", "Son, Junyoung", "Hur, Yuna", "Lim, Jungwoo", "Moon, Hyeonseok", "Yang, Kisu", "Lim, Heuiseok" ]
Post-hoc Utterance Refining Method by Entity Mining for Faithful Knowledge Grounded Conversations
emnlp-main.295
2406.10809
[ "https://github.com/yoonnajang/rem" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
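REM's post-hoc control flow is easy to sketch: score faithfulness, and refine with mined entities only when the score falls below a threshold. Both the scorer and the entity miner below are crude placeholders for the paper's trained components:

```python
def faithfulness(utterance: str, knowledge: str) -> float:
    """Placeholder: token-overlap proxy for a learned faithfulness score."""
    u, k = set(utterance.lower().split()), set(knowledge.lower().split())
    return len(u & k) / max(len(u), 1)

def mine_entities(knowledge: str) -> list[str]:
    """Placeholder: capitalized tokens stand in for a real entity miner."""
    return [w for w in knowledge.split() if w[0].isupper()]

def rem(utterance, knowledge, threshold=0.5,
        refine=lambda u, ents: u + f" (refine with: {', '.join(ents)})"):
    if faithfulness(utterance, knowledge) >= threshold:
        return utterance              # faithful enough: pass through
    return refine(utterance, mine_entities(knowledge))

print(rem("Definitely it launched around 1950.",
          "The Paris Metro opened in July 1900."))
```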
https://aclanthology.org/2023.emnlp-main.296.bib
https://aclanthology.org/2023.emnlp-main.296/
@inproceedings{zheng-etal-2023-edit, title = "Can We Edit Factual Knowledge by In-Context Learning?", author = "Zheng, Ce and Li, Lei and Dong, Qingxiu and Fan, Yuxuan and Wu, Zhiyong and Xu, Jingjing and Chang, Baobao", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.296", doi = "10.18653/v1/2023.emnlp-main.296", pages = "4862--4876", abstract = "Previous studies have shown that large language models (LLMs) like GPTs store massive factual knowledge in their parameters. However, the stored knowledge could be false or outdated. Traditional knowledge editing methods refine LLMs via fine-tuning on texts containing specific knowledge. However, with the increasing scales of LLMs, these gradient-based approaches bring large computation costs. The trend of model-as-a-service also makes it impossible to modify knowledge in black-box LMs. Inspired by in-context learning (ICL), a new paradigm based on demonstration contexts without parameter updating, we explore whether ICL can edit factual knowledge. To answer this question, we give a comprehensive empirical study of ICL strategies. Experiments show that in-context knowledge editing (IKE), without any gradient and parameter updating, achieves a competitive success rate compared to gradient-based methods on GPT-J (6B) but with much fewer side effects, including less over-editing on similar but unrelated facts and less knowledge forgetting on previously stored knowledge. We also apply the method to larger LMs with tens or hundreds of parameters like OPT-175B, which shows the scalability of our method. The code is available at \url{https://github.com/pkunlp-icler/IKE}.", }
Previous studies have shown that large language models (LLMs) like GPTs store massive factual knowledge in their parameters. However, the stored knowledge could be false or outdated. Traditional knowledge editing methods refine LLMs via fine-tuning on texts containing specific knowledge. However, with the increasing scales of LLMs, these gradient-based approaches bring large computation costs. The trend of model-as-a-service also makes it impossible to modify knowledge in black-box LMs. Inspired by in-context learning (ICL), a new paradigm based on demonstration contexts without parameter updating, we explore whether ICL can edit factual knowledge. To answer this question, we give a comprehensive empirical study of ICL strategies. Experiments show that in-context knowledge editing (IKE), without any gradient and parameter updating, achieves a competitive success rate compared to gradient-based methods on GPT-J (6B) but with much fewer side effects, including less over-editing on similar but unrelated facts and less knowledge forgetting on previously stored knowledge. We also apply the method to larger LMs with tens or hundreds of billions of parameters, like OPT-175B, which shows the scalability of our method. The code is available at \url{https://github.com/pkunlp-icler/IKE}.
[ "Zheng, Ce", "Li, Lei", "Dong, Qingxiu", "Fan, Yuxuan", "Wu, Zhiyong", "Xu, Jingjing", "Chang, Baobao" ]
Can We Edit Factual Knowledge by In-Context Learning?
emnlp-main.296
2305.12740
[ "https://github.com/zce1112zslx/ike" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
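Because IKE edits knowledge purely through the prompt, the core mechanism fits in a few lines: demonstrations plus the new fact are prepended, and the frozen LM is queried as usual. The demonstration wording below is illustrative; the paper constructs copy, update, and retain demonstrations more systematically:

```python
# In-context knowledge editing: no gradients, no parameter updates.
demos = (
    "New fact: The capital of Australia is Canberra.\n"
    "Q: What is the capital of Australia? A: Canberra.\n\n"
)
edit = "New fact: The CEO of Twitter is Linda Yaccarino."
prompt = demos + edit + "\nQ: Who is the CEO of Twitter? A:"
print(prompt)  # send this to any frozen LM; the answer reflects the edit
```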
https://aclanthology.org/2023.emnlp-main.297.bib
https://aclanthology.org/2023.emnlp-main.297/
@inproceedings{liu-etal-2023-edis, title = "{EDIS}: Entity-Driven Image Search over Multimodal Web Content", author = "Liu, Siqi and Feng, Weixi and Fu, Tsu-Jui and Chen, Wenhu and Wang, William", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.297", doi = "10.18653/v1/2023.emnlp-main.297", pages = "4877--4894", abstract = "Making image retrieval methods practical for real-world search applications requires significant progress in dataset scales, entity comprehension, and multimodal information fusion. In this work, we introduce Entity-Driven Image Search (EDIS), a challenging dataset for cross-modal image search in the news domain. EDIS consists of 1 million web images from actual search engine results and curated datasets, with each image paired with a textual description. Unlike datasets that assume a small set of single-modality candidates, EDIS reflects real-world web image search scenarios by including a million multimodal image-text pairs as candidates. EDIS encourages the development of retrieval models that simultaneously address cross-modal information fusion and matching. To achieve accurate ranking results, a model must: 1) understand named entities and events from text queries, 2) ground entities onto images or text descriptions, and 3) effectively fuse textual and visual representations. Our experimental results show that EDIS challenges state-of-the-art methods with dense entities and the large-scale candidate set. The ablation study also proves that fusing textual features with visual features is critical in improving retrieval results.", }
Making image retrieval methods practical for real-world search applications requires significant progress in dataset scales, entity comprehension, and multimodal information fusion. In this work, we introduce Entity-Driven Image Search (EDIS), a challenging dataset for cross-modal image search in the news domain. EDIS consists of 1 million web images from actual search engine results and curated datasets, with each image paired with a textual description. Unlike datasets that assume a small set of single-modality candidates, EDIS reflects real-world web image search scenarios by including a million multimodal image-text pairs as candidates. EDIS encourages the development of retrieval models that simultaneously address cross-modal information fusion and matching. To achieve accurate ranking results, a model must: 1) understand named entities and events from text queries, 2) ground entities onto images or text descriptions, and 3) effectively fuse textual and visual representations. Our experimental results show that EDIS challenges state-of-the-art methods with dense entities and the large-scale candidate set. The ablation study also proves that fusing textual features with visual features is critical in improving retrieval results.
[ "Liu, Siqi", "Feng, Weixi", "Fu, Tsu-Jui", "Chen, Wenhu", "Wang, William" ]
EDIS: Entity-Driven Image Search over Multimodal Web Content
emnlp-main.297
2305.13631
[ "https://github.com/emerisly/edis" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.298.bib
https://aclanthology.org/2023.emnlp-main.298/
@inproceedings{ainslie-etal-2023-gqa, title = "{GQA}: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints", author = "Ainslie, Joshua and Lee-Thorp, James and de Jong, Michiel and Zemlyanskiy, Yury and Lebron, Federico and Sanghai, Sumit", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.298", doi = "10.18653/v1/2023.emnlp-main.298", pages = "4895--4901", abstract = "Multi-query attention (MQA), which only uses a single key-value head, drastically speeds up decoder inference. However, MQA can lead to quality degradation, and moreover it may not be desirable to train a separate model just for faster inference. We (1) propose a recipe for uptraining existing multi-head language model checkpoints into models with MQA using 5{\%} of original pre-training compute, and (2) introduce grouped-query attention (GQA), a generalization of multi-query attention which uses an intermediate (more than one, less than number of query heads) number of key-value heads. We show that uptrained GQA achieves quality close to multi-head attention with comparable speed to MQA.", }
Multi-query attention (MQA), which only uses a single key-value head, drastically speeds up decoder inference. However, MQA can lead to quality degradation, and moreover it may not be desirable to train a separate model just for faster inference. We (1) propose a recipe for uptraining existing multi-head language model checkpoints into models with MQA using 5{\%} of original pre-training compute, and (2) introduce grouped-query attention (GQA), a generalization of multi-query attention which uses an intermediate (more than one, less than number of query heads) number of key-value heads. We show that uptrained GQA achieves quality close to multi-head attention with comparable speed to MQA.
[ "Ainslie, Joshua", "Lee-Thorp, James", "de Jong, Michiel", "Zemlyanskiy, Yury", "Lebron, Federico", "Sanghai, Sumit" ]
GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints
emnlp-main.298
2305.13245
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
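The checkpoint conversion at the core of GQA, mean-pooling the key and value projection heads of a multi-head model into a smaller number of shared KV heads, can be shown directly (the subsequent uptraining pass is omitted):

```python
import torch

n_heads, n_groups, d_model, d_head = 8, 2, 32, 4
w_k = torch.randn(n_heads, d_head, d_model)  # per-head key projections (toy sizes)

# Mean-pool every (n_heads // n_groups) adjacent heads into one group head;
# the same pooling is applied to the value projections.
w_k_gqa = w_k.view(n_groups, n_heads // n_groups, d_head, d_model).mean(dim=1)
print(w_k_gqa.shape)  # torch.Size([2, 4, 32]): 2 KV heads shared by 8 query heads
```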
https://aclanthology.org/2023.emnlp-main.299.bib
https://aclanthology.org/2023.emnlp-main.299/
@inproceedings{hou-etal-2023-towards, title = "Towards a Mechanistic Interpretation of Multi-Step Reasoning Capabilities of Language Models", author = "Hou, Yifan and Li, Jiaoda and Fei, Yu and Stolfo, Alessandro and Zhou, Wangchunshu and Zeng, Guangtao and Bosselut, Antoine and Sachan, Mrinmaya", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.299", doi = "10.18653/v1/2023.emnlp-main.299", pages = "4902--4919", abstract = "Recent work has shown that language models (LMs) have strong multi-step (i.e., procedural) reasoning capabilities. However, it is unclear whether LMs perform these tasks by cheating with answers memorized from pretraining corpus, or, via a multi-step reasoning mechanism. In this paper, we try to answer this question by exploring a mechanistic interpretation of LMs for multi-step reasoning tasks. Concretely, we hypothesize that the LM implicitly embeds a reasoning tree resembling the correct reasoning process within it. We test this hypothesis by introducing a new probing approach (called MechanisticProbe) that recovers the reasoning tree from the model{'}s attention patterns. We use our probe to analyze two LMs: GPT-2 on a synthetic task (k-th smallest element), and LLaMA on two simple language-based reasoning tasks (ProofWriter {\&} AI2 Reasoning Challenge). We show that MechanisticProbe is able to detect the information of the reasoning tree from the model{'}s attentions for most examples, suggesting that the LM indeed is going through a process of multi-step reasoning within its architecture in many cases.", }
Recent work has shown that language models (LMs) have strong multi-step (i.e., procedural) reasoning capabilities. However, it is unclear whether LMs perform these tasks by cheating with answers memorized from the pretraining corpus, or via a multi-step reasoning mechanism. In this paper, we try to answer this question by exploring a mechanistic interpretation of LMs for multi-step reasoning tasks. Concretely, we hypothesize that the LM implicitly embeds a reasoning tree resembling the correct reasoning process within it. We test this hypothesis by introducing a new probing approach (called MechanisticProbe) that recovers the reasoning tree from the model{'}s attention patterns. We use our probe to analyze two LMs: GPT-2 on a synthetic task (k-th smallest element), and LLaMA on two simple language-based reasoning tasks (ProofWriter {\&} AI2 Reasoning Challenge). We show that MechanisticProbe is able to detect the information of the reasoning tree from the model{'}s attentions for most examples, suggesting that the LM indeed is going through a process of multi-step reasoning within its architecture in many cases.
[ "Hou, Yifan", "Li, Jiaoda", "Fei, Yu", "Stolfo, Aless", "ro", "Zhou, Wangchunshu", "Zeng, Guangtao", "Bosselut, Antoine", "Sachan, Mrinmaya" ]
Towards a Mechanistic Interpretation of Multi-Step Reasoning Capabilities of Language Models
emnlp-main.299
2310.14491
[ "https://github.com/yifan-h/mechanisticprobe" ]
https://huggingface.co/papers/2310.14491
2
0
0
8
[]
[ "yyyyifan/MechanisticProbe_ProofWriter_ARC" ]
[]
1
Poster
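The raw signal MechanisticProbe works from, attention from the answer position back onto the input, is easy to extract. The sketch below stops at collecting and averaging those attentions; the probe's actual tree-recovery classifier is omitted:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2", output_attentions=True)

text = "A is taller than B. B is taller than C. So A is taller than"
ids = tok(text, return_tensors="pt").input_ids
with torch.no_grad():
    atts = lm(ids).attentions  # tuple of [1, heads, seq, seq], one per layer

# Attention from the final (answer) position onto the input, averaged over heads.
last_pos_att = torch.stack(atts).mean(dim=2)[:, 0, -1]  # shape: [layers, seq]
top = last_pos_att.mean(dim=0).topk(3).indices
print([tok.decode(int(ids[0, i])) for i in top])  # most-attended input tokens
```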
https://aclanthology.org/2023.emnlp-main.300.bib
https://aclanthology.org/2023.emnlp-main.300/
@inproceedings{zhang-etal-2023-biasx, title = "{B}ias{X}: {``}Thinking Slow{''} in Toxic Content Moderation with Explanations of Implied Social Biases", author = "Zhang, Yiming and Nanduri, Sravani and Jiang, Liwei and Wu, Tongshuang and Sap, Maarten", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.300", doi = "10.18653/v1/2023.emnlp-main.300", pages = "4920--4932", abstract = "Toxicity annotators and content moderators often default to mental shortcuts when making decisions. This can lead to subtle toxicity being missed, and seemingly toxic but harmless content being over-detected. We introduce BiasX, a framework that enhances content moderation setups with free-text explanations of statements{'} implied social biases, and explore its effectiveness through a large-scale crowdsourced user study. We show that indeed, participants substantially benefit from explanations for correctly identifying subtly (non-)toxic content. The quality of explanations is critical: imperfect machine-generated explanations (+2.4{\%} on hard toxic examples) help less compared to expert-written human explanations (+7.2{\%}). Our results showcase the promise of using free-text explanations to encourage more thoughtful toxicity moderation.", }
Toxicity annotators and content moderators often default to mental shortcuts when making decisions. This can lead to subtle toxicity being missed, and seemingly toxic but harmless content being over-detected. We introduce BiasX, a framework that enhances content moderation setups with free-text explanations of statements{'} implied social biases, and explore its effectiveness through a large-scale crowdsourced user study. We show that indeed, participants substantially benefit from explanations for correctly identifying subtly (non-)toxic content. The quality of explanations is critical: imperfect machine-generated explanations (+2.4{\%} on hard toxic examples) help less compared to expert-written human explanations (+7.2{\%}). Our results showcase the promise of using free-text explanations to encourage more thoughtful toxicity moderation.
[ "Zhang, Yiming", "N", "uri, Sravani", "Jiang, Liwei", "Wu, Tongshuang", "Sap, Maarten" ]
BiasX: “Thinking Slow” in Toxic Content Moderation with Explanations of Implied Social Biases
emnlp-main.300
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster