Column schema (name, dtype, observed min/max or class count):

bibtex_url                    stringlengths     41 to 53
acl_proceedings               stringlengths     38 to 50
bibtext                       stringlengths     528 to 3.02k
abstract                      stringlengths     17 to 2.35k
authors                       sequencelengths   1 to 44
title                         stringlengths     18 to 190
id                            stringlengths     7 to 19
arxiv_id                      stringlengths     10 to 10
GitHub                        sequencelengths   1 to 1
paper_page                    stringclasses     528 values
n_linked_authors              int64             -1 to 15
upvotes                       int64             -1 to 77
num_comments                  int64             -1 to 10
n_authors                     int64             -1 to 52
Models                        sequencelengths   0 to 100
Datasets                      sequencelengths   0 to 15
Spaces                        sequencelengths   0 to 46
paper_page_exists_pre_conf    int64             0 to 1
type                          stringclasses     2 values
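The columns above describe one record per paper; in the rows that follow, `-1` acts as a "no paper page" sentinel for the integer columns, and list columns may be empty. As a minimal sketch of working with this layout (the row below is hand-built from the first record for illustration; the dataset's actual loader is not shown here), a record can be represented as a dict and sanity-checked against the declared column types:

```python
# Minimal sketch: one dataset row as a dict, validated against the declared
# column dtypes. Example values are copied/abridged from the first record.
ROW_SCHEMA = {
    "bibtex_url": str, "acl_proceedings": str, "bibtext": str,
    "abstract": str, "authors": list, "title": str, "id": str,
    "arxiv_id": (str, type(None)), "GitHub": list,
    "paper_page": (str, type(None)),
    "n_linked_authors": int, "upvotes": int, "num_comments": int,
    "n_authors": int, "Models": list, "Datasets": list, "Spaces": list,
    "paper_page_exists_pre_conf": int, "type": str,
}

def validate_row(row):
    """Return a list of (column, problem) pairs; empty means the row conforms."""
    problems = []
    for col, expected in ROW_SCHEMA.items():
        if col not in row:
            problems.append((col, "missing"))
        elif not isinstance(row[col], expected):
            problems.append((col, f"expected {expected}, got {type(row[col]).__name__}"))
    return problems

example_row = {
    "bibtex_url": "https://aclanthology.org/2023.emnlp-main.101.bib",
    "acl_proceedings": "https://aclanthology.org/2023.emnlp-main.101/",
    "bibtext": "@inproceedings{jia-zhang-2023-memory, ...}",  # abridged
    "abstract": "We investigate the task of out-of-domain (OOD) text classification...",
    "authors": ["Jia, Chen", "Zhang, Yue"],
    "title": "Memory-Based Invariance Learning for Out-of-Domain Text Classification",
    "id": "emnlp-main.101",
    "arxiv_id": None,                # null when no arXiv id is linked
    "GitHub": [""],
    "paper_page": None,
    "n_linked_authors": -1,          # -1 sentinel: no Hugging Face paper page
    "upvotes": -1, "num_comments": -1, "n_authors": -1,
    "Models": [], "Datasets": [], "Spaces": [],
    "paper_page_exists_pre_conf": 0,
    "type": "Poster",                # one of 2 classes: Poster / Oral
}

print(validate_row(example_row))  # [] when every column is present and well-typed
```

A row with a wrongly typed field (say, `upvotes` stored as a string) would show up in the returned problem list rather than failing silently.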
https://aclanthology.org/2023.emnlp-main.101.bib
https://aclanthology.org/2023.emnlp-main.101/
@inproceedings{jia-zhang-2023-memory, title = "Memory-Based Invariance Learning for Out-of-Domain Text Classification", author = "Jia, Chen and Zhang, Yue", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.101", doi = "10.18653/v1/2023.emnlp-main.101", pages = "1635--1647", abstract = "We investigate the task of out-of-domain (OOD) text classification with the aim of extending a classification model, trained on multiple source domains, to an unseen target domain. Recent studies have shown that learning invariant representations can enhance the performance of OOD generalization. However, the inherent disparity in data distribution across different domains poses challenges for achieving effective invariance learning. This study addresses this issue by employing memory augmentations. Specifically, we augment the original feature space using key-value memory and employ a meta-learning-based approach to enhance the quality of the invariant representations. Experimental results on sentiment analysis and natural language inference tasks show the effectiveness of the memory-based method for invariance learning, leading to state-of-the-art performance on six datasets.", }
We investigate the task of out-of-domain (OOD) text classification with the aim of extending a classification model, trained on multiple source domains, to an unseen target domain. Recent studies have shown that learning invariant representations can enhance the performance of OOD generalization. However, the inherent disparity in data distribution across different domains poses challenges for achieving effective invariance learning. This study addresses this issue by employing memory augmentations. Specifically, we augment the original feature space using key-value memory and employ a meta-learning-based approach to enhance the quality of the invariant representations. Experimental results on sentiment analysis and natural language inference tasks show the effectiveness of the memory-based method for invariance learning, leading to state-of-the-art performance on six datasets.
[ "Jia, Chen", "Zhang, Yue" ]
Memory-Based Invariance Learning for Out-of-Domain Text Classification
emnlp-main.101
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.102.bib
https://aclanthology.org/2023.emnlp-main.102/
@inproceedings{wei-etal-2023-outlier, title = "Outlier Suppression+: Accurate quantization of large language models by equivalent and effective shifting and scaling", author = "Wei, Xiuying and Zhang, Yunchen and Li, Yuhang and Zhang, Xiangguo and Gong, Ruihao and Guo, Jinyang and Liu, Xianglong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.102", doi = "10.18653/v1/2023.emnlp-main.102", pages = "1648--1665", abstract = "Post-training quantization (PTQ) of transformer language models faces significant challenges due to the existence of detrimental outliers in activations. We observe that these outliers are concentrated in specific channels and are asymmetric across channels. To address this issue, we propose the Outlier Suppression+ (OS+) framework, which contains the channel-wise shifting for asymmetry and channel-wise scaling for concentration. We show that these operations can be seamlessly migrated into subsequent modules while maintaining equivalence. Second, we propose a fast and stable scheme to calculate effective shifting and scaling values. The channel-wise shifting aligns the center of each channel for removal of outlier asymmetry. The channel-wise scaling quantitatively evaluates changes brought by migration and quantization for better quantization burden balance. We validate our OS+ under both standard and fine-grained quantization settings with models including BERT, OPT, BLOOM, BLOOMZ, and LLaMA. Comprehensive results across various tasks demonstrate the superiority of our approach. Especially, with standard quantization, OS+ can achieve near-floating-point performance on both small models and large language models on 8-bit and 6-bit. Besides, we establish a new state-of-the-art for 4-bit BERT with 15.5{\%} improvement. Our code is available at \url{https://github.com/ModelTC/Outlier_Suppression_Plus}.", }
Post-training quantization (PTQ) of transformer language models faces significant challenges due to the existence of detrimental outliers in activations. We observe that these outliers are concentrated in specific channels and are asymmetric across channels. To address this issue, we propose the Outlier Suppression+ (OS+) framework, which contains the channel-wise shifting for asymmetry and channel-wise scaling for concentration. We show that these operations can be seamlessly migrated into subsequent modules while maintaining equivalence. Second, we propose a fast and stable scheme to calculate effective shifting and scaling values. The channel-wise shifting aligns the center of each channel for removal of outlier asymmetry. The channel-wise scaling quantitatively evaluates changes brought by migration and quantization for better quantization burden balance. We validate our OS+ under both standard and fine-grained quantization settings with models including BERT, OPT, BLOOM, BLOOMZ, and LLaMA. Comprehensive results across various tasks demonstrate the superiority of our approach. Especially, with standard quantization, OS+ can achieve near-floating-point performance on both small models and large language models on 8-bit and 6-bit. Besides, we establish a new state-of-the-art for 4-bit BERT with 15.5{\%} improvement. Our code is available at \url{https://github.com/ModelTC/Outlier_Suppression_Plus}.
[ "Wei, Xiuying", "Zhang, Yunchen", "Li, Yuhang", "Zhang, Xiangguo", "Gong, Ruihao", "Guo, Jinyang", "Liu, Xianglong" ]
Outlier Suppression+: Accurate quantization of large language models by equivalent and effective shifting and scaling
emnlp-main.102
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.103.bib
https://aclanthology.org/2023.emnlp-main.103/
@inproceedings{li-etal-2023-three, title = "Three Stream Based Multi-level Event Contrastive Learning for Text-Video Event Extraction", author = "Li, Jiaqi and Zhang, Chuanyi and Du, Miaozeng and Min, Dehai and Chen, Yongrui and Qi, Guilin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.103", doi = "10.18653/v1/2023.emnlp-main.103", pages = "1666--1676", abstract = "Text-video based multimodal event extraction refers to identifying event information from the given text-video pairs. Existing methods predominantly utilize video appearance features (VAF) and text sequence features (TSF) as input information. Some of them employ contrastive learning to align VAF with the event types extracted from TSF. However, they disregard the motion representations in videos and the optimization of the contrastive objective could be misguided by the background noise from RGB frames. We observe that the same event triggers correspond to similar motion trajectories, which are hardly affected by the background noise. Motivated by this, we propose a Three Stream Multimodal Event Extraction framework (TSEE) that simultaneously utilizes the features of text sequence and video appearance, as well as the motion representations to enhance the event extraction capacity. Firstly, we extract the optical flow features (OFF) as motion representations from videos to incorporate with VAF and TSF. Then we introduce a Multi-level Event Contrastive Learning module to align the embedding space between OFF and event triggers, as well as between event triggers and types. Finally, a Dual Querying Text module is proposed to enhance the interaction between modalities. Experimental results show that TSEE outperforms the state-of-the-art methods, which demonstrates its superiority.", }
Text-video based multimodal event extraction refers to identifying event information from the given text-video pairs. Existing methods predominantly utilize video appearance features (VAF) and text sequence features (TSF) as input information. Some of them employ contrastive learning to align VAF with the event types extracted from TSF. However, they disregard the motion representations in videos and the optimization of the contrastive objective could be misguided by the background noise from RGB frames. We observe that the same event triggers correspond to similar motion trajectories, which are hardly affected by the background noise. Motivated by this, we propose a Three Stream Multimodal Event Extraction framework (TSEE) that simultaneously utilizes the features of text sequence and video appearance, as well as the motion representations to enhance the event extraction capacity. Firstly, we extract the optical flow features (OFF) as motion representations from videos to incorporate with VAF and TSF. Then we introduce a Multi-level Event Contrastive Learning module to align the embedding space between OFF and event triggers, as well as between event triggers and types. Finally, a Dual Querying Text module is proposed to enhance the interaction between modalities. Experimental results show that TSEE outperforms the state-of-the-art methods, which demonstrates its superiority.
[ "Li, Jiaqi", "Zhang, Chuanyi", "Du, Miaozeng", "Min, Dehai", "Chen, Yongrui", "Qi, Guilin" ]
Three Stream Based Multi-level Event Contrastive Learning for Text-Video Event Extraction
emnlp-main.103
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.104.bib
https://aclanthology.org/2023.emnlp-main.104/
@inproceedings{gou-etal-2023-diversify, title = "Diversify Question Generation with Retrieval-Augmented Style Transfer", author = "Gou, Qi and Xia, Zehua and Yu, Bowen and Yu, Haiyang and Huang, Fei and Li, Yongbin and Cam-Tu, Nguyen", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.104", doi = "10.18653/v1/2023.emnlp-main.104", pages = "1677--1690", abstract = "Given a textual passage and an answer, humans are able to ask questions with various expressions, but this ability is still challenging for most question generation (QG) systems. Existing solutions mainly focus on the internal knowledge within the given passage or the semantic word space for diverse content planning. These methods, however, have not considered the potential of external knowledge for expression diversity. To bridge this gap, we propose RAST, a framework for Retrieval-Augmented Style Transfer, where the objective is to utilize the style of diverse templates for question generation. For training RAST, we develop a novel Reinforcement Learning (RL) based approach that maximizes a weighted combination of diversity reward and consistency reward. Here, the consistency reward is computed by a Question-Answering (QA) model, whereas the diversity reward measures how much the final output mimics the retrieved template. Experimental results show that our method outperforms previous diversity-driven baselines on diversity while being comparable in terms of consistency scores. Our code is available at \url{https://github.com/gouqi666/RAST}.", }
Given a textual passage and an answer, humans are able to ask questions with various expressions, but this ability is still challenging for most question generation (QG) systems. Existing solutions mainly focus on the internal knowledge within the given passage or the semantic word space for diverse content planning. These methods, however, have not considered the potential of external knowledge for expression diversity. To bridge this gap, we propose RAST, a framework for Retrieval-Augmented Style Transfer, where the objective is to utilize the style of diverse templates for question generation. For training RAST, we develop a novel Reinforcement Learning (RL) based approach that maximizes a weighted combination of diversity reward and consistency reward. Here, the consistency reward is computed by a Question-Answering (QA) model, whereas the diversity reward measures how much the final output mimics the retrieved template. Experimental results show that our method outperforms previous diversity-driven baselines on diversity while being comparable in terms of consistency scores. Our code is available at \url{https://github.com/gouqi666/RAST}.
[ "Gou, Qi", "Xia, Zehua", "Yu, Bowen", "Yu, Haiyang", "Huang, Fei", "Li, Yongbin", "Cam-Tu, Nguyen" ]
Diversify Question Generation with Retrieval-Augmented Style Transfer
emnlp-main.104
2310.14503
[ "https://github.com/gouqi666/rast" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.105.bib
https://aclanthology.org/2023.emnlp-main.105/
@inproceedings{lattimer-etal-2023-fast, title = "Fast and Accurate Factual Inconsistency Detection Over Long Documents", author = "Lattimer, Barrett and Chen, Patrick and Zhang, Xinyuan and Yang, Yi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.105", doi = "10.18653/v1/2023.emnlp-main.105", pages = "1691--1703", abstract = "Generative AI models exhibit remarkable potential; however, hallucinations across various tasks present a significant challenge, particularly for longer inputs that current approaches struggle to address effectively. We introduce SCALE (Source Chunking Approach for Large-scale inconsistency Evaluation), a task-agnostic model for detecting factual inconsistencies using a novel chunking strategy. Specifically, SCALE is a Natural Language Inference (NLI) based model that uses large text chunks to condition over long texts. This approach achieves state-of-the-art performance in factual inconsistency detection for diverse tasks and long inputs. Additionally, we leverage the chunking mechanism and employ a novel algorithm to explain SCALE{'}s decisions through relevant source sentence retrieval. Our evaluations reveal that SCALE outperforms existing methods on both standard benchmarks and a new long-form dialogue dataset ScreenEval we constructed. Moreover, SCALE surpasses competitive systems in efficiency and model explanation evaluations. We have released our code and data publicly to GitHub.", }
Generative AI models exhibit remarkable potential; however, hallucinations across various tasks present a significant challenge, particularly for longer inputs that current approaches struggle to address effectively. We introduce SCALE (Source Chunking Approach for Large-scale inconsistency Evaluation), a task-agnostic model for detecting factual inconsistencies using a novel chunking strategy. Specifically, SCALE is a Natural Language Inference (NLI) based model that uses large text chunks to condition over long texts. This approach achieves state-of-the-art performance in factual inconsistency detection for diverse tasks and long inputs. Additionally, we leverage the chunking mechanism and employ a novel algorithm to explain SCALE{'}s decisions through relevant source sentence retrieval. Our evaluations reveal that SCALE outperforms existing methods on both standard benchmarks and a new long-form dialogue dataset ScreenEval we constructed. Moreover, SCALE surpasses competitive systems in efficiency and model explanation evaluations. We have released our code and data publicly to GitHub.
[ "Lattimer, Barrett", "Chen, Patrick", "Zhang, Xinyuan", "Yang, Yi" ]
Fast and Accurate Factual Inconsistency Detection Over Long Documents
emnlp-main.105
2310.13189
[ "https://github.com/asappresearch/scale-score" ]
https://huggingface.co/papers/2310.13189
2
0
1
4
[]
[ "blattimer/ScreenEval" ]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.106.bib
https://aclanthology.org/2023.emnlp-main.106/
@inproceedings{simhi-markovitch-2023-interpreting, title = "Interpreting Embedding Spaces by Conceptualization", author = "Simhi, Adi and Markovitch, Shaul", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.106", doi = "10.18653/v1/2023.emnlp-main.106", pages = "1704--1719", abstract = "One of the main methods for computational interpretation of a text is mapping it into a vector in some embedding space. Such vectors can then be used for a variety of textual processing tasks. Recently, most embedding spaces are a product of training large language models (LLMs). One major drawback of this type of representation is their incomprehensibility to humans. Understanding the embedding space is crucial for several important needs, including the need to debug the embedding method and compare it to alternatives, and the need to detect biases hidden in the model. In this paper, we present a novel method of understanding embeddings by transforming a latent embedding space into a comprehensible conceptual space. We present an algorithm for deriving a conceptual space with dynamic on-demand granularity. We devise a new evaluation method, using either human raters or LLM-based raters, to show that the conceptualized vectors indeed represent the semantics of the original latent ones. We show the use of our method for various tasks, including comparing the semantics of alternative models and tracing the layers of the LLM. The code is available online at https://github.com/adiSimhi/Interpreting-Embedding-Spaces-by-Conceptualization.", }
One of the main methods for computational interpretation of a text is mapping it into a vector in some embedding space. Such vectors can then be used for a variety of textual processing tasks. Recently, most embedding spaces are a product of training large language models (LLMs). One major drawback of this type of representation is their incomprehensibility to humans. Understanding the embedding space is crucial for several important needs, including the need to debug the embedding method and compare it to alternatives, and the need to detect biases hidden in the model. In this paper, we present a novel method of understanding embeddings by transforming a latent embedding space into a comprehensible conceptual space. We present an algorithm for deriving a conceptual space with dynamic on-demand granularity. We devise a new evaluation method, using either human raters or LLM-based raters, to show that the conceptualized vectors indeed represent the semantics of the original latent ones. We show the use of our method for various tasks, including comparing the semantics of alternative models and tracing the layers of the LLM. The code is available online at https://github.com/adiSimhi/Interpreting-Embedding-Spaces-by-Conceptualization.
[ "Simhi, Adi", "Markovitch, Shaul" ]
Interpreting Embedding Spaces by Conceptualization
emnlp-main.106
2209.00445
[ "https://github.com/adisimhi/interpreting-embedding-spaces-by-conceptualization" ]
https://huggingface.co/papers/2209.00445
0
0
0
2
[]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.107.bib
https://aclanthology.org/2023.emnlp-main.107/
@inproceedings{baek-etal-2023-knowledge-augmented-language, title = "Knowledge-Augmented Language Model Verification", author = "Baek, Jinheon and Jeong, Soyeong and Kang, Minki and Park, Jong and Hwang, Sung", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.107", doi = "10.18653/v1/2023.emnlp-main.107", pages = "1720--1736", abstract = "Recent Language Models (LMs) have shown impressive capabilities in generating texts with the knowledge internalized in parameters. Yet, LMs often generate the factually incorrect responses to the given queries, since their knowledge may be inaccurate, incomplete, and outdated. To address this problem, previous works propose to augment LMs with the knowledge retrieved from an external knowledge source. However, such approaches often show suboptimal text generation performance due to two reasons: 1) the model may fail to retrieve the knowledge relevant to the given query, or 2) the model may not faithfully reflect the retrieved knowledge in the generated text. To overcome these, we propose to verify the output and the knowledge of the knowledge-augmented LMs with a separate verifier, which is a small LM that is trained to detect those two types of errors through instruction-finetuning. Then, when the verifier recognizes an error, we can rectify it by either retrieving new knowledge or generating new text. Further, we use an ensemble of the outputs from different instructions with a single verifier to enhance the reliability of the verification processes. We validate the effectiveness of the proposed verification steps on multiple question answering benchmarks, whose results show that the proposed verifier effectively identifies retrieval and generation errors, allowing LMs to provide more factually correct outputs. Our code is available at https://github.com/JinheonBaek/KALMV.", }
Recent Language Models (LMs) have shown impressive capabilities in generating texts with the knowledge internalized in parameters. Yet, LMs often generate the factually incorrect responses to the given queries, since their knowledge may be inaccurate, incomplete, and outdated. To address this problem, previous works propose to augment LMs with the knowledge retrieved from an external knowledge source. However, such approaches often show suboptimal text generation performance due to two reasons: 1) the model may fail to retrieve the knowledge relevant to the given query, or 2) the model may not faithfully reflect the retrieved knowledge in the generated text. To overcome these, we propose to verify the output and the knowledge of the knowledge-augmented LMs with a separate verifier, which is a small LM that is trained to detect those two types of errors through instruction-finetuning. Then, when the verifier recognizes an error, we can rectify it by either retrieving new knowledge or generating new text. Further, we use an ensemble of the outputs from different instructions with a single verifier to enhance the reliability of the verification processes. We validate the effectiveness of the proposed verification steps on multiple question answering benchmarks, whose results show that the proposed verifier effectively identifies retrieval and generation errors, allowing LMs to provide more factually correct outputs. Our code is available at https://github.com/JinheonBaek/KALMV.
[ "Baek, Jinheon", "Jeong, Soyeong", "Kang, Minki", "Park, Jong", "Hwang, Sung" ]
Knowledge-Augmented Language Model Verification
emnlp-main.107
2402.18374
[ "https://github.com/jinheonbaek/kalmv" ]
https://huggingface.co/papers/2402.18374
1
2
0
5
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.108.bib
https://aclanthology.org/2023.emnlp-main.108/
@inproceedings{hu-etal-2023-generation, title = "A Generation-based Deductive Method for Math Word Problems", author = "Hu, Yuxuan and Zhang, Jing and Li, Haoyang and Li, Cuiping and Chen, Hong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.108", doi = "10.18653/v1/2023.emnlp-main.108", pages = "1737--1750", abstract = "Math word problems (MWP) involving advanced operators such as linear equation solver cannot be easily tackled by earlier MWP methods, because the existing generation methods suffer from repeated sub-expression generation and deductive methods are restricted to dealing with binary operations. This paper proposes a new multivariate directed acyclic graph (mDAG) as an alternative to the generation methods{'} binary expression tree or the deductive methods{'} binary directed acyclic graph. Then to produce the topological ordering of mDAG, we propose a generation-based deductive (GeDe) model, which equips a generation model with a re-encoder to keep the deductive property but avoid the expensive enumeration of the deductive methods. GeDe performs well on math problems with many operators on the widely used benchmarks as well as solving multivariate operators on our own CMWPA benchmark. Our code is available at https://github.com/hyx1999/GeDe", }
Math word problems (MWP) involving advanced operators such as linear equation solver cannot be easily tackled by earlier MWP methods, because the existing generation methods suffer from repeated sub-expression generation and deductive methods are restricted to dealing with binary operations. This paper proposes a new multivariate directed acyclic graph (mDAG) as an alternative to the generation methods{'} binary expression tree or the deductive methods{'} binary directed acyclic graph. Then to produce the topological ordering of mDAG, we propose a generation-based deductive (GeDe) model, which equips a generation model with a re-encoder to keep the deductive property but avoid the expensive enumeration of the deductive methods. GeDe performs well on math problems with many operators on the widely used benchmarks as well as solving multivariate operators on our own CMWPA benchmark. Our code is available at https://github.com/hyx1999/GeDe
[ "Hu, Yuxuan", "Zhang, Jing", "Li, Haoyang", "Li, Cuiping", "Chen, Hong" ]
A Generation-based Deductive Method for Math Word Problems
emnlp-main.108
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.109.bib
https://aclanthology.org/2023.emnlp-main.109/
@inproceedings{yang-etal-2023-failures, title = "Failures Pave the Way: Enhancing Large Language Models through Tuning-free Rule Accumulation", author = "Yang, Zeyuan and Li, Peng and Liu, Yang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.109", doi = "10.18653/v1/2023.emnlp-main.109", pages = "1751--1777", abstract = "Large Language Models (LLMs) have showcased impressive performance. However, due to their inability to capture relationships among samples, these frozen LLMs inevitably keep repeating similar mistakes. In this work, we propose our Tuning-free Rule Accumulation (TRAN) framework, which guides LLMs in improving their performance by learning from previous mistakes. Considering data arrives sequentially, LLMs gradually accumulate rules from incorrect cases, forming a rule collection. These rules are then utilized by the LLMs to avoid making similar mistakes when processing subsequent inputs. Moreover, the rules remain independent of the primary prompts, seamlessly complementing prompt design strategies. Experimentally, we show that TRAN improves over recent baselines by a large margin.", }
Large Language Models (LLMs) have showcased impressive performance. However, due to their inability to capture relationships among samples, these frozen LLMs inevitably keep repeating similar mistakes. In this work, we propose our Tuning-free Rule Accumulation (TRAN) framework, which guides LLMs in improving their performance by learning from previous mistakes. Considering data arrives sequentially, LLMs gradually accumulate rules from incorrect cases, forming a rule collection. These rules are then utilized by the LLMs to avoid making similar mistakes when processing subsequent inputs. Moreover, the rules remain independent of the primary prompts, seamlessly complementing prompt design strategies. Experimentally, we show that TRAN improves over recent baselines by a large margin.
[ "Yang, Zeyuan", "Li, Peng", "Liu, Yang" ]
Failures Pave the Way: Enhancing Large Language Models through Tuning-free Rule Accumulation
emnlp-main.109
2310.15746
[ "https://github.com/thunlp-mt/tran" ]
https://huggingface.co/papers/2310.15746
1
0
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.110.bib
https://aclanthology.org/2023.emnlp-main.110/
@inproceedings{shea-yu-2023-building, title = "Building Persona Consistent Dialogue Agents with Offline Reinforcement Learning", author = "Shea, Ryan and Yu, Zhou", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.110", doi = "10.18653/v1/2023.emnlp-main.110", pages = "1778--1795", abstract = "Maintaining a consistent persona is a key quality for any open domain dialogue system. Current state-of-the-art systems do this by training agents with supervised learning or online reinforcement learning (RL). However, systems trained with supervised learning often lack consistency as they are never punished for uttering contradictions. Additional training with RL can alleviate some of these issues, however the training process is expensive. Instead, we propose an offline RL framework to improve the persona consistency of dialogue systems. Our framework allows us to combine the advantages of previous methods as we can inexpensively train our model on existing data as in supervised learning, while punishing and rewarding specific utterances as in RL. We also introduce a simple importance sampling method to reduce the variance of importance weights in offline RL training which we call Variance-Reducing MLE-Initialized (VaRMI) importance sampling. Our automatic and human evaluations show that our framework improves both the persona consistency and dialogue quality of a state-of-the-art social chatbot.", }
Maintaining a consistent persona is a key quality for any open domain dialogue system. Current state-of-the-art systems do this by training agents with supervised learning or online reinforcement learning (RL). However, systems trained with supervised learning often lack consistency as they are never punished for uttering contradictions. Additional training with RL can alleviate some of these issues, however the training process is expensive. Instead, we propose an offline RL framework to improve the persona consistency of dialogue systems. Our framework allows us to combine the advantages of previous methods as we can inexpensively train our model on existing data as in supervised learning, while punishing and rewarding specific utterances as in RL. We also introduce a simple importance sampling method to reduce the variance of importance weights in offline RL training which we call Variance-Reducing MLE-Initialized (VaRMI) importance sampling. Our automatic and human evaluations show that our framework improves both the persona consistency and dialogue quality of a state-of-the-art social chatbot.
[ "Shea, Ryan", "Yu, Zhou" ]
Building Persona Consistent Dialogue Agents with Offline Reinforcement Learning
emnlp-main.110
2310.10735
[ "https://github.com/ryanshea10/personachat_offline_rl" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.111.bib
https://aclanthology.org/2023.emnlp-main.111/
@inproceedings{ge-etal-2023-augmenting, title = "Augmenting Zero-Shot Dense Retrievers with Plug-in Mixture-of-Memories", author = "Ge, Suyu and Xiong, Chenyan and Rosset, Corby and Overwijk, Arnold and Han, Jiawei and Bennett, Paul", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.111", doi = "10.18653/v1/2023.emnlp-main.111", pages = "1796--1812", abstract = "In this paper we improve the zero-shot generalization ability of language models via Mixture-Of-Memory Augmentation (MoMA), a mechanism that retrieves augmentation documents from multiple information corpora (external memories), with the option to {``}plug in{''} unseen memory at inference time. We develop a joint learning mechanism that trains the augmentation component with latent labels derived from the end retrieval task, paired with hard negatives from the memory mixture. We instantiate the model in a zero-shot dense retrieval setting by augmenting strong T5-based retrievers with MoMA. With only T5-base, our model obtains strong zero-shot retrieval accuracy on the eighteen tasks included in the standard BEIR benchmark, outperforming some systems with larger model sizes. As a plug-in-play model, our model can efficiently generalize to any unseen corpus, meanwhile achieving comparable or even better performance than methods relying on target-specific pretraining. Our analysis further illustrates the necessity of augmenting with mixture-of-memory for robust generalization, the benefits of augmentation learning, and how MoMA utilizes the plug-in memory at inference time without changing its parameters. Our code can be found at https://github.com/gesy17/MoMA.", }
In this paper we improve the zero-shot generalization ability of language models via Mixture-Of-Memory Augmentation (MoMA), a mechanism that retrieves augmentation documents from multiple information corpora (external memories), with the option to {``}plug in{''} unseen memory at inference time. We develop a joint learning mechanism that trains the augmentation component with latent labels derived from the end retrieval task, paired with hard negatives from the memory mixture. We instantiate the model in a zero-shot dense retrieval setting by augmenting strong T5-based retrievers with MoMA. With only T5-base, our model obtains strong zero-shot retrieval accuracy on the eighteen tasks included in the standard BEIR benchmark, outperforming some systems with larger model sizes. As a plug-in-play model, our model can efficiently generalize to any unseen corpus, meanwhile achieving comparable or even better performance than methods relying on target-specific pretraining. Our analysis further illustrates the necessity of augmenting with mixture-of-memory for robust generalization, the benefits of augmentation learning, and how MoMA utilizes the plug-in memory at inference time without changing its parameters. Our code can be found at https://github.com/gesy17/MoMA.
[ "Ge, Suyu", "Xiong, Chenyan", "Rosset, Corby", "Overwijk, Arnold", "Han, Jiawei", "Bennett, Paul" ]
Augmenting Zero-Shot Dense Retrievers with Plug-in Mixture-of-Memories
emnlp-main.111
2302.03754
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.112.bib
https://aclanthology.org/2023.emnlp-main.112/
@inproceedings{kung-etal-2023-active, title = "Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks", author = "Kung, Po-Nien and Yin, Fan and Wu, Di and Chang, Kai-Wei and Peng, Nanyun", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.112", doi = "10.18653/v1/2023.emnlp-main.112", pages = "1813--1829", abstract = "Instruction tuning (IT) achieves impressive zero-shot generalization results by training large language models (LLMs) on a massive amount of diverse tasks with instructions. However, how to select new tasks to improve the performance and generalizability of IT models remains an open question. Training on all existing tasks is impractical due to prohibiting computation requirements, and randomly selecting tasks can lead to suboptimal performance. In this work, we propose active instruction tuning based on prompt uncertainty, a novel framework to identify informative tasks, and then actively tune the models on the selected tasks. We represent the informativeness of new tasks with the disagreement of the current model outputs over perturbed prompts. Our experiments on NIV2 and Self-Instruct datasets demonstrate that our method consistently outperforms other baseline strategies for task selection, achieving better out-of-distribution generalization with fewer training tasks. Additionally, we introduce a task map that categorizes and diagnoses tasks based on prompt uncertainty and prediction probability. We discover that training on ambiguous (prompt-uncertain) tasks improves generalization while training on difficult (prompt-certain and low-probability) tasks offers no benefit, underscoring the importance of task selection for instruction tuning.", }
Instruction tuning (IT) achieves impressive zero-shot generalization results by training large language models (LLMs) on a massive amount of diverse tasks with instructions. However, how to select new tasks to improve the performance and generalizability of IT models remains an open question. Training on all existing tasks is impractical due to prohibiting computation requirements, and randomly selecting tasks can lead to suboptimal performance. In this work, we propose active instruction tuning based on prompt uncertainty, a novel framework to identify informative tasks, and then actively tune the models on the selected tasks. We represent the informativeness of new tasks with the disagreement of the current model outputs over perturbed prompts. Our experiments on NIV2 and Self-Instruct datasets demonstrate that our method consistently outperforms other baseline strategies for task selection, achieving better out-of-distribution generalization with fewer training tasks. Additionally, we introduce a task map that categorizes and diagnoses tasks based on prompt uncertainty and prediction probability. We discover that training on ambiguous (prompt-uncertain) tasks improves generalization while training on difficult (prompt-certain and low-probability) tasks offers no benefit, underscoring the importance of task selection for instruction tuning.
[ "Kung, Po-Nien", "Yin, Fan", "Wu, Di", "Chang, Kai-Wei", "Peng, Nanyun" ]
Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks
emnlp-main.112
2311.00288
[ "https://github.com/pluslabnlp/active-it" ]
https://huggingface.co/papers/2311.00288
0
0
0
5
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.113.bib
https://aclanthology.org/2023.emnlp-main.113/
@inproceedings{bouthors-etal-2023-towards, title = "Towards Example-Based {NMT} with Multi-{L}evenshtein Transformers", author = "Bouthors, Maxime and Crego, Josep and Yvon, Fran{\c{c}}ois", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.113", doi = "10.18653/v1/2023.emnlp-main.113", pages = "1830--1846", abstract = "Retrieval-Augmented Machine Translation (RAMT) is attracting growing attention. This is because RAMT not only improves translation metrics, but is also assumed to implement some form of domain adaptation. In this contribution, we study another salient trait of RAMT, its ability to make translation decisions more transparent by allowing users to go back to examples that contributed to these decisions. For this, we propose a novel architecture aiming to increase this transparency. This model adapts a retrieval-augmented version of the Levenshtein Transformer and makes it amenable to simultaneously edit multiple fuzzy matches found in memory. We discuss how to perform training and inference in this model, based on multi-way alignment algorithms and imitation learning. Our experiments show that editing several examples positively impacts translation scores, notably increasing the number of target spans that are copied from existing instances.", }
Retrieval-Augmented Machine Translation (RAMT) is attracting growing attention. This is because RAMT not only improves translation metrics, but is also assumed to implement some form of domain adaptation. In this contribution, we study another salient trait of RAMT, its ability to make translation decisions more transparent by allowing users to go back to examples that contributed to these decisions. For this, we propose a novel architecture aiming to increase this transparency. This model adapts a retrieval-augmented version of the Levenshtein Transformer and makes it amenable to simultaneously edit multiple fuzzy matches found in memory. We discuss how to perform training and inference in this model, based on multi-way alignment algorithms and imitation learning. Our experiments show that editing several examples positively impacts translation scores, notably increasing the number of target spans that are copied from existing instances.
[ "Bouthors, Maxime", "Crego, Josep", "Yvon, Fran{\\c{c}}ois" ]
Towards Example-Based NMT with Multi-Levenshtein Transformers
emnlp-main.113
2310.08967
[ "https://github.com/maxwell1447/fairseq" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.114.bib
https://aclanthology.org/2023.emnlp-main.114/
@inproceedings{akyurek-etal-2023-dune, title = "{DU}n{E}: Dataset for Unified Editing", author = {Aky{\"u}rek, Afra and Pan, Eric and Kuwanto, Garry and Wijaya, Derry}, editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.114", doi = "10.18653/v1/2023.emnlp-main.114", pages = "1847--1861", abstract = "Even the most advanced language models remain susceptible to errors necessitating to modify these models without initiating a comprehensive retraining process. Model editing refers to the modification of a model{'}s knowledge or representations in a manner that produces the desired outcomes. Prior research primarily centered around editing factual data e.g. {``}Messi plays for Inter Miami{''} confining the definition of an edit to a knowledge triplet i.e. (subject, object, relation). However, as the applications of language models expand, so do the diverse ways in which we wish to edit and refine their outputs. In this study, we broaden the scope of the editing problem to include an array of editing cases such as debiasing and rectifying reasoning errors and define an edit as any natural language expression that solicits a change in the model{'}s outputs. We are introducing DUnE, an editing benchmark where edits are natural language sentences and propose that DUnE presents a challenging yet relevant task. To substantiate this claim, we conduct an extensive series of experiments testing various editing approaches to address DUnE, demonstrating their respective strengths and weaknesses. We argue that retrieval-augmented language modeling can outperform specialized editing techniques and neither set of approaches has fully solved the generalized editing problem covered by our benchmark.", }
Even the most advanced language models remain susceptible to errors necessitating to modify these models without initiating a comprehensive retraining process. Model editing refers to the modification of a model{'}s knowledge or representations in a manner that produces the desired outcomes. Prior research primarily centered around editing factual data e.g. {``}Messi plays for Inter Miami{''} confining the definition of an edit to a knowledge triplet i.e. (subject, object, relation). However, as the applications of language models expand, so do the diverse ways in which we wish to edit and refine their outputs. In this study, we broaden the scope of the editing problem to include an array of editing cases such as debiasing and rectifying reasoning errors and define an edit as any natural language expression that solicits a change in the model{'}s outputs. We are introducing DUnE, an editing benchmark where edits are natural language sentences and propose that DUnE presents a challenging yet relevant task. To substantiate this claim, we conduct an extensive series of experiments testing various editing approaches to address DUnE, demonstrating their respective strengths and weaknesses. We argue that retrieval-augmented language modeling can outperform specialized editing techniques and neither set of approaches has fully solved the generalized editing problem covered by our benchmark.
[ "Aky{\\\"u}rek, Afra", "Pan, Eric", "Kuwanto, Garry", "Wijaya, Derry" ]
DUnE: Dataset for Unified Editing
emnlp-main.114
2311.16087
[ "https://github.com/feyzaakyurek/dune" ]
https://huggingface.co/papers/2311.16087
1
1
1
4
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.115.bib
https://aclanthology.org/2023.emnlp-main.115/
@inproceedings{hada-etal-2023-fifty, title = "{``}Fifty Shades of Bias{''}: Normative Ratings of Gender Bias in {GPT} Generated {E}nglish Text", author = "Hada, Rishav and Seth, Agrima and Diddee, Harshita and Bali, Kalika", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.115", doi = "10.18653/v1/2023.emnlp-main.115", pages = "1862--1876", abstract = "Language serves as a powerful tool for the manifestation of societal belief systems. In doing so, it also perpetuates the prevalent biases in our society. Gender bias is one of the most pervasive biases in our society and is seen in online and offline discourses. With LLMs increasingly gaining human-like fluency in text generation, gaining a nuanced understanding of the biases these systems can generate is imperative. Prior work often treats gender bias as a binary classification task. However, acknowledging that bias must be perceived at a relative scale; we investigate the generation and consequent receptivity of manual annotators to bias of varying degrees. Specifically, we create the first dataset of GPT-generated English text with normative ratings of gender bias. Ratings were obtained using Best{--}Worst Scaling {--} an efficient comparative annotation framework. Next, we systematically analyze the variation of themes of gender biases in the observed ranking and show that identity-attack is most closely related to gender bias. Finally, we show the performance of existing automated models trained on related concepts on our dataset.", }
Language serves as a powerful tool for the manifestation of societal belief systems. In doing so, it also perpetuates the prevalent biases in our society. Gender bias is one of the most pervasive biases in our society and is seen in online and offline discourses. With LLMs increasingly gaining human-like fluency in text generation, gaining a nuanced understanding of the biases these systems can generate is imperative. Prior work often treats gender bias as a binary classification task. However, acknowledging that bias must be perceived at a relative scale; we investigate the generation and consequent receptivity of manual annotators to bias of varying degrees. Specifically, we create the first dataset of GPT-generated English text with normative ratings of gender bias. Ratings were obtained using Best{--}Worst Scaling {--} an efficient comparative annotation framework. Next, we systematically analyze the variation of themes of gender biases in the observed ranking and show that identity-attack is most closely related to gender bias. Finally, we show the performance of existing automated models trained on related concepts on our dataset.
[ "Hada, Rishav", "Seth, Agrima", "Diddee, Harshita", "Bali, Kalika" ]
“Fifty Shades of Bias”: Normative Ratings of Gender Bias in GPT Generated English Text
emnlp-main.115
2310.17428
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.116.bib
https://aclanthology.org/2023.emnlp-main.116/
@inproceedings{zhang-etal-2023-hybrid-inverted, title = "Hybrid Inverted Index Is a Robust Accelerator for Dense Retrieval", author = "Zhang, Peitian and Liu, Zheng and Xiao, Shitao and Dou, Zhicheng and Yao, Jing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.116", doi = "10.18653/v1/2023.emnlp-main.116", pages = "1877--1888", abstract = "Inverted file structure is a common technique for accelerating dense retrieval. It clusters documents based on their embeddings; during searching, it probes nearby clusters w.r.t. an input query and only evaluates documents within them by subsequent codecs, thus avoiding the expensive cost from exhaustive traversal. However, the clustering is always lossy, which results in the miss of relevant documents in the probed clusters and hence degrades retrieval quality. In contrast, lexical matching, such as overlaps of salient terms, tend to be strong features for identifying relevant documents. In this work, we present the Hybrid Inverted Index (HI$^2$), where the embedding clusters and salient terms work collaboratively to accelerate dense retrieval. To make best of both effectiveness and efficiency, we devise a cluster selector and a term selector, to construct compact inverted lists and efficiently searching through them. Moreover, we leverage simple unsupervised algorithms as well as end-to-end knowledge distillation to learn these two modules, with the latter further boosting the effectiveness. Based on comprehensive experiments on popular retrieval benchmarks, we verify that clusters and terms indeed complement each other, enabling HI$^2$ to achieve lossless retrieval quality with competitive efficiency across a variety of index settings.", }
Inverted file structure is a common technique for accelerating dense retrieval. It clusters documents based on their embeddings; during searching, it probes nearby clusters w.r.t. an input query and only evaluates documents within them by subsequent codecs, thus avoiding the expensive cost from exhaustive traversal. However, the clustering is always lossy, which results in the miss of relevant documents in the probed clusters and hence degrades retrieval quality. In contrast, lexical matching, such as overlaps of salient terms, tend to be strong features for identifying relevant documents. In this work, we present the Hybrid Inverted Index (HI$^2$), where the embedding clusters and salient terms work collaboratively to accelerate dense retrieval. To make best of both effectiveness and efficiency, we devise a cluster selector and a term selector, to construct compact inverted lists and efficiently searching through them. Moreover, we leverage simple unsupervised algorithms as well as end-to-end knowledge distillation to learn these two modules, with the latter further boosting the effectiveness. Based on comprehensive experiments on popular retrieval benchmarks, we verify that clusters and terms indeed complement each other, enabling HI$^2$ to achieve lossless retrieval quality with competitive efficiency across a variety of index settings.
[ "Zhang, Peitian", "Liu, Zheng", "Xiao, Shitao", "Dou, Zhicheng", "Yao, Jing" ]
Hybrid Inverted Index Is a Robust Accelerator for Dense Retrieval
emnlp-main.116
2210.05521
[ "https://github.com/namespace-pt/adon" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.117.bib
https://aclanthology.org/2023.emnlp-main.117/
@inproceedings{cegin-etal-2023-chatgpt, title = "{C}hat{GPT} to Replace Crowdsourcing of Paraphrases for Intent Classification: Higher Diversity and Comparable Model Robustness", author = "Cegin, Jan and Simko, Jakub and Brusilovsky, Peter", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.117", doi = "10.18653/v1/2023.emnlp-main.117", pages = "1889--1905", abstract = "The emergence of generative large language models (LLMs) raises the question: what will be its impact on crowdsourcing? Traditionally, crowdsourcing has been used for acquiring solutions to a wide variety of human-intelligence tasks, including ones involving text generation, modification or evaluation. For some of these tasks, models like ChatGPT can potentially substitute human workers. In this study, we investigate whether this is the case for the task of paraphrase generation for intent classification. We apply data collection methodology of an existing crowdsourcing study (similar scale, prompts and seed data) using ChatGPT and Falcon-40B. We show that ChatGPT-created paraphrases are more diverse and lead to at least as robust models.", }
The emergence of generative large language models (LLMs) raises the question: what will be its impact on crowdsourcing? Traditionally, crowdsourcing has been used for acquiring solutions to a wide variety of human-intelligence tasks, including ones involving text generation, modification or evaluation. For some of these tasks, models like ChatGPT can potentially substitute human workers. In this study, we investigate whether this is the case for the task of paraphrase generation for intent classification. We apply data collection methodology of an existing crowdsourcing study (similar scale, prompts and seed data) using ChatGPT and Falcon-40B. We show that ChatGPT-created paraphrases are more diverse and lead to at least as robust models.
[ "Cegin, Jan", "Simko, Jakub", "Brusilovsky, Peter" ]
ChatGPT to Replace Crowdsourcing of Paraphrases for Intent Classification: Higher Diversity and Comparable Model Robustness
emnlp-main.117
2305.12947
[ "https://github.com/kinit-sk/Crowd-vs-ChatGPT-Crowdsourcing-of-paraphrases-for-intent-classification" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.118.bib
https://aclanthology.org/2023.emnlp-main.118/
@inproceedings{w-etal-2023-query, title = "Query-as-context Pre-training for Dense Passage Retrieval", author = "W, Xing and Ma, Guangyuan and Qian, Wanhui and Lin, Zijia and Hu, Songlin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.118", doi = "10.18653/v1/2023.emnlp-main.118", pages = "1906--1916", abstract = "Recently, methods have been developed to improve the performance of dense passage retrieval by using context-supervised pre-training. These methods simply consider two passages from the same document to be relevant, without taking into account the potential negative impacts of weakly correlated pairs. Thus, this paper proposes query-as-context pre-training, a simple yet effective pre-training technique to alleviate the issue. Query-as-context pre-training assumes that the query derived from a passage is more likely to be relevant to that passage and forms a passage-query pair. These passage-query pairs are then used in contrastive or generative context-supervised pre-training. The pre-trained models are evaluated on large-scale passage retrieval benchmarks and out-of-domain zero-shot benchmarks. Experimental results show that query-as-context pre-training brings considerable gains for retrieval performances, demonstrating its effectiveness and efficiency.", }
Recently, methods have been developed to improve the performance of dense passage retrieval by using context-supervised pre-training. These methods simply consider two passages from the same document to be relevant, without taking into account the potential negative impacts of weakly correlated pairs. Thus, this paper proposes query-as-context pre-training, a simple yet effective pre-training technique to alleviate the issue. Query-as-context pre-training assumes that the query derived from a passage is more likely to be relevant to that passage and forms a passage-query pair. These passage-query pairs are then used in contrastive or generative context-supervised pre-training. The pre-trained models are evaluated on large-scale passage retrieval benchmarks and out-of-domain zero-shot benchmarks. Experimental results show that query-as-context pre-training brings considerable gains for retrieval performances, demonstrating its effectiveness and efficiency.
[ "W, Xing", "Ma, Guangyuan", "Qian, Wanhui", "Lin, Zijia", "Hu, Songlin" ]
Query-as-context Pre-training for Dense Passage Retrieval
emnlp-main.118
2212.09598
[ "https://github.com/caskcsg/ir" ]
https://huggingface.co/papers/2212.09598
0
0
0
6
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.119.bib
https://aclanthology.org/2023.emnlp-main.119/
@inproceedings{burns-etal-2023-suite, title = "A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding", author = "Burns, Andrea and Srinivasan, Krishna and Ainslie, Joshua and Brown, Geoff and Plummer, Bryan and Saenko, Kate and Ni, Jianmo and Guo, Mandy", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.119", doi = "10.18653/v1/2023.emnlp-main.119", pages = "1917--1947", abstract = "Webpages have been a rich, scalable resource for vision-language and language only tasks. Yet only pieces of webpages are kept in existing datasets: image-caption pairs, long text articles, or raw HTML, never all in one place. Webpage tasks have resultingly received little attention and structured image-text data left underused. To study multimodal webpage understanding, we introduce the Wikipedia Webpage suite (WikiWeb2M) containing 2M pages with all of the associated image, text, and structure data. We verify its utility on three generative tasks: page description generation, section summarization, and contextual image captioning. We design a novel attention mechanism Prefix Global, which selects the most relevant image and text content as global tokens to attend to the rest of the webpage for context. By using page structure to separate such tokens, it performs better than full attention with lower computational complexity. Extensive experiments show that the new data in WikiWeb2M improves task performance compared to prior work.", }
Webpages have been a rich, scalable resource for vision-language and language only tasks. Yet only pieces of webpages are kept in existing datasets: image-caption pairs, long text articles, or raw HTML, never all in one place. Webpage tasks have resultingly received little attention and structured image-text data left underused. To study multimodal webpage understanding, we introduce the Wikipedia Webpage suite (WikiWeb2M) containing 2M pages with all of the associated image, text, and structure data. We verify its utility on three generative tasks: page description generation, section summarization, and contextual image captioning. We design a novel attention mechanism Prefix Global, which selects the most relevant image and text content as global tokens to attend to the rest of the webpage for context. By using page structure to separate such tokens, it performs better than full attention with lower computational complexity. Extensive experiments show that the new data in WikiWeb2M improves task performance compared to prior work.
[ "Burns, Andrea", "Srinivasan, Krishna", "Ainslie, Joshua", "Brown, Geoff", "Plummer, Bryan", "Saenko, Kate", "Ni, Jianmo", "Guo, Mandy" ]
A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding
emnlp-main.119
2305.03668
[ "https://github.com/google-research-datasets/wit/blob/main/wikiweb2m.md" ]
https://huggingface.co/papers/2305.03668
1
1
4
8
[]
[ "aburns4/WikiWeb2M" ]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.120.bib
https://aclanthology.org/2023.emnlp-main.120/
@inproceedings{wang-etal-2023-democratizing, title = "Democratizing Reasoning Ability: Tailored Learning from Large Language Model", author = "Wang, Zhaoyang and Huang, Shaohan and Liu, Yuxuan and Wang, Jiahai and Song, Minghui and Zhang, Zihan and Huang, Haizhen and Wei, Furu and Deng, Weiwei and Sun, Feng and Zhang, Qi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.120", doi = "10.18653/v1/2023.emnlp-main.120", pages = "1948--1966", abstract = "Large language models (LLMs) exhibit impressive emergent abilities in natural language processing, but their democratization is hindered due to huge computation requirements and closed-source nature. Recent research on advancing open-source smaller LMs by distilling knowledge from black-box LLMs has obtained promising results in the instruction-following ability. However, the reasoning ability which is more challenging to foster, is relatively rarely explored. In this paper, we propose a tailored learning approach to distill such reasoning ability to smaller LMs to facilitate the democratization of the exclusive reasoning ability. In contrast to merely employing LLM as a data annotator, we exploit the potential of LLM as a reasoning teacher by building an interactive multi-round learning paradigm. This paradigm enables the student to expose its deficiencies to the black-box teacher who then can provide customized training data in return. Further, to exploit the reasoning potential of the smaller LM, we propose self-reflection learning to motivate the student to learn from self-made mistakes. The learning from self-reflection and LLM are all tailored to the student{'}s learning status, thanks to the seamless integration with the multi-round learning paradigm. Comprehensive experiments and analysis on mathematical and commonsense reasoning tasks demonstrate the effectiveness of our method. The code will be available at https://github.com/Raibows/Learn-to-Reason.", }
Large language models (LLMs) exhibit impressive emergent abilities in natural language processing, but their democratization is hindered due to huge computation requirements and closed-source nature. Recent research on advancing open-source smaller LMs by distilling knowledge from black-box LLMs has obtained promising results in the instruction-following ability. However, the reasoning ability which is more challenging to foster, is relatively rarely explored. In this paper, we propose a tailored learning approach to distill such reasoning ability to smaller LMs to facilitate the democratization of the exclusive reasoning ability. In contrast to merely employing LLM as a data annotator, we exploit the potential of LLM as a reasoning teacher by building an interactive multi-round learning paradigm. This paradigm enables the student to expose its deficiencies to the black-box teacher who then can provide customized training data in return. Further, to exploit the reasoning potential of the smaller LM, we propose self-reflection learning to motivate the student to learn from self-made mistakes. The learning from self-reflection and LLM are all tailored to the student{'}s learning status, thanks to the seamless integration with the multi-round learning paradigm. Comprehensive experiments and analysis on mathematical and commonsense reasoning tasks demonstrate the effectiveness of our method. The code will be available at https://github.com/Raibows/Learn-to-Reason.
[ "Wang, Zhaoyang", "Huang, Shaohan", "Liu, Yuxuan", "Wang, Jiahai", "Song, Minghui", "Zhang, Zihan", "Huang, Haizhen", "Wei, Furu", "Deng, Weiwei", "Sun, Feng", "Zhang, Qi" ]
Democratizing Reasoning Ability: Tailored Learning from Large Language Model
emnlp-main.120
2310.13332
[ "https://github.com/raibows/learn-to-reason" ]
https://huggingface.co/papers/2310.13332
4
14
1
11
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.121.bib
https://aclanthology.org/2023.emnlp-main.121/
@inproceedings{amar-etal-2023-openasp, title = "{O}pen{A}sp: A Benchmark for Multi-document Open Aspect-based Summarization", author = "Amar, Shmuel and Schiff, Liat and Ernst, Ori and Shefer, Asi and Shapira, Ori and Dagan, Ido", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.121", doi = "10.18653/v1/2023.emnlp-main.121", pages = "1967--1991", abstract = "The performance of automatic summarization models has improved dramatically in recent years. Yet, there is still a gap in meeting specific information needs of users in real-world scenarios, particularly when a targeted summary is sought, such as in the useful aspect-based summarization setting targeted in this paper. Previous datasets and studies for this setting have predominantly concentrated on a limited set of pre-defined aspects, focused solely on single document inputs, or relied on synthetic data. To advance research on more realistic scenarios, we introduce OpenAsp, a benchmark for multi-document open aspect-based summarization. This benchmark is created using a novel and cost-effective annotation protocol, by which an open aspect dataset is derived from existing generic multi-document summarization datasets. We analyze the properties of OpenAsp showcasing its high-quality content. Further, we show that the realistic open-aspect setting realized in OpenAsp poses a challenge for current state-of-the-art summarization models, as well as for large language models.", }
The performance of automatic summarization models has improved dramatically in recent years. Yet, there is still a gap in meeting specific information needs of users in real-world scenarios, particularly when a targeted summary is sought, such as in the useful aspect-based summarization setting targeted in this paper. Previous datasets and studies for this setting have predominantly concentrated on a limited set of pre-defined aspects, focused solely on single document inputs, or relied on synthetic data. To advance research on more realistic scenarios, we introduce OpenAsp, a benchmark for multi-document open aspect-based summarization. This benchmark is created using a novel and cost-effective annotation protocol, by which an open aspect dataset is derived from existing generic multi-document summarization datasets. We analyze the properties of OpenAsp showcasing its high-quality content. Further, we show that the realistic open-aspect setting realized in OpenAsp poses a challenge for current state-of-the-art summarization models, as well as for large language models.
[ "Amar, Shmuel", "Schiff, Liat", "Ernst, Ori", "Shefer, Asi", "Shapira, Ori", "Dagan, Ido" ]
OpenAsp: A Benchmark for Multi-document Open Aspect-based Summarization
emnlp-main.121
2312.04440
[ "https://github.com/liatschiff/openasp" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.122.bib
https://aclanthology.org/2023.emnlp-main.122/
@inproceedings{agarwal-etal-2023-peftdebias, title = "{PEFTD}ebias : Capturing debiasing information using {PEFT}s", author = "Agarwal, Sumit and Veerubhotla, Aditya and Bansal, Srijan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.122", doi = "10.18653/v1/2023.emnlp-main.122", pages = "1992--2000", abstract = "The increasing use of foundation models highlights the urgent need to address and eliminate implicit biases present in them that arise during pretraining. In this paper, we introduce PEFTDebias, a novel approach that employs parameter-efficient fine-tuning (PEFT) to mitigate the biases within foundation models. PEFTDebias consists of two main phases: an upstream phase for acquiring debiasing parameters along a specific bias axis, and a downstream phase where these parameters are incorporated into the model and frozen during the fine-tuning process. By evaluating on four datasets across two bias axes namely gender and race, we find that downstream biases can be effectively reduced with PEFTs. In addition, we show that these parameters possess axis-specific debiasing characteristics, enabling their effective transferability in mitigating biases in various downstream tasks.", }
The increasing use of foundation models highlights the urgent need to address and eliminate implicit biases present in them that arise during pretraining. In this paper, we introduce PEFTDebias, a novel approach that employs parameter-efficient fine-tuning (PEFT) to mitigate the biases within foundation models. PEFTDebias consists of two main phases: an upstream phase for acquiring debiasing parameters along a specific bias axis, and a downstream phase where these parameters are incorporated into the model and frozen during the fine-tuning process. By evaluating on four datasets across two bias axes namely gender and race, we find that downstream biases can be effectively reduced with PEFTs. In addition, we show that these parameters possess axis-specific debiasing characteristics, enabling their effective transferability in mitigating biases in various downstream tasks.
[ "Agarwal, Sumit", "Veerubhotla, Aditya", "Bansal, Srijan" ]
PEFTDebias : Capturing debiasing information using PEFTs
emnlp-main.122
2312.00434
[ "" ]
https://huggingface.co/papers/2312.00434
0
1
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.123.bib
https://aclanthology.org/2023.emnlp-main.123/
@inproceedings{fradet-etal-2023-byte, title = "Byte Pair Encoding for Symbolic Music", author = "Fradet, Nathan and Gutowski, Nicolas and Chhel, Fabien and Briot, Jean-Pierre", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.123", doi = "10.18653/v1/2023.emnlp-main.123", pages = "2001--2020", abstract = "When used with deep learning, the symbolic music modality is often coupled with language model architectures. To do so, the music needs to be tokenized, i.e. converted into a sequence of discrete tokens. This can be achieved by different approaches, as music can be composed of simultaneous tracks, of simultaneous notes with several attributes. Until now, the proposed tokenizations rely on small vocabularies of tokens describing the note attributes and time events, resulting in fairly long token sequences, and a sub-optimal use of the embedding space of language models. Recent research has put efforts on reducing the overall sequence length by merging embeddings or combining tokens. In this paper, we show that Byte Pair Encoding, a compression technique widely used for natural language, significantly decreases the sequence length while increasing the vocabulary size. By doing so, we leverage the embedding capabilities of such models with more expressive tokens, resulting in both better results and faster inference in generation and classification tasks. The [source code is shared on Github](https://github.com/Natooz/bpe-symbolic-music), along with a [companion website](https://Natooz.github.io/BPE-Symbolic-Music). Finally, BPE is directly implemented in [MidiTok](https://github.com/Natooz/MidiTok), allowing the reader to easily benefit from this method.", }
When used with deep learning, the symbolic music modality is often coupled with language model architectures. To do so, the music needs to be tokenized, i.e. converted into a sequence of discrete tokens. This can be achieved by different approaches, as music can be composed of simultaneous tracks, of simultaneous notes with several attributes. Until now, the proposed tokenizations rely on small vocabularies of tokens describing the note attributes and time events, resulting in fairly long token sequences, and a sub-optimal use of the embedding space of language models. Recent research has put efforts on reducing the overall sequence length by merging embeddings or combining tokens. In this paper, we show that Byte Pair Encoding, a compression technique widely used for natural language, significantly decreases the sequence length while increasing the vocabulary size. By doing so, we leverage the embedding capabilities of such models with more expressive tokens, resulting in both better results and faster inference in generation and classification tasks. The [source code is shared on Github](https://github.com/Natooz/bpe-symbolic-music), along with a [companion website](https://Natooz.github.io/BPE-Symbolic-Music). Finally, BPE is directly implemented in [MidiTok](https://github.com/Natooz/MidiTok), allowing the reader to easily benefit from this method.
[ "Fradet, Nathan", "Gutowski, Nicolas", "Chhel, Fabien", "Briot, Jean-Pierre" ]
Byte Pair Encoding for Symbolic Music
emnlp-main.123
2301.11975
[ "https://github.com/natooz/bpe-symbolic-music" ]
https://huggingface.co/papers/2301.11975
1
0
0
4
[ "Natooz/Maestro-REMI-bpe20k", "Natooz/Maestro-TSD-bpe20k" ]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.124.bib
https://aclanthology.org/2023.emnlp-main.124/
@inproceedings{lopez-avila-suarez-paniagua-2023-combining, title = "Combining Denoising Autoencoders with Contrastive Learning to fine-tune Transformer Models", author = "Lopez-Avila, Alejo and Su{\'a}rez-Paniagua, V{\'\i}ctor", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.124", doi = "10.18653/v1/2023.emnlp-main.124", pages = "2021--2032", abstract = "Recently, using large pre-trained Transformer models for transfer learning tasks has evolved to the point where they have become one of the flagship trends in the Natural Language Processing (NLP) community, giving rise to various outlooks such as prompt-based, adapters, or combinations with unsupervised approaches, among many others. In this work, we propose a 3-Phase technique to adjust a base model for a classification task. First, we adapt the model{'}s signal to the data distribution by performing further training with a Denoising Autoencoder (DAE). Second, we adjust the representation space of the output to the corresponding classes by clustering through a Contrastive Learning (CL) method. In addition, we introduce a new data augmentation approach for Supervised Contrastive Learning to correct the unbalanced datasets. Third, we apply fine-tuning to delimit the predefined categories. These different phases provide relevant and complementary knowledge to the model to learn the final task. We supply extensive experimental results on several datasets to demonstrate these claims. Moreover, we include an ablation study and compare the proposed method against other ways of combining these techniques.", }
Recently, using large pre-trained Transformer models for transfer learning tasks has evolved to the point where they have become one of the flagship trends in the Natural Language Processing (NLP) community, giving rise to various outlooks such as prompt-based, adapters, or combinations with unsupervised approaches, among many others. In this work, we propose a 3-Phase technique to adjust a base model for a classification task. First, we adapt the model{'}s signal to the data distribution by performing further training with a Denoising Autoencoder (DAE). Second, we adjust the representation space of the output to the corresponding classes by clustering through a Contrastive Learning (CL) method. In addition, we introduce a new data augmentation approach for Supervised Contrastive Learning to correct the unbalanced datasets. Third, we apply fine-tuning to delimit the predefined categories. These different phases provide relevant and complementary knowledge to the model to learn the final task. We supply extensive experimental results on several datasets to demonstrate these claims. Moreover, we include an ablation study and compare the proposed method against other ways of combining these techniques.
[ "Lopez-Avila, Alejo", "Su{\\'a}rez-Paniagua, V{\\'\\i}ctor" ]
Combining Denoising Autoencoders with Contrastive Learning to fine-tune Transformer Models
emnlp-main.124
2405.14437
[ "https://github.com/vsuarezpaniagua/3-phase_finetuning" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.125.bib
https://aclanthology.org/2023.emnlp-main.125/
@inproceedings{thakkar-etal-2023-self, title = "Self-Influence Guided Data Reweighting for Language Model Pre-training", author = "Thakkar, Megh and Bolukbasi, Tolga and Ganapathy, Sriram and Vashishth, Shikhar and Chandar, Sarath and Talukdar, Partha", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.125", doi = "10.18653/v1/2023.emnlp-main.125", pages = "2033--2045", abstract = "Language Models (LMs) pre-trained with self-supervision on large text corpora have become the default starting point for developing models for various NLP tasks. Once the pre-training corpus has been assembled, all data samples in the corpus are treated with equal importance during LM pre-training. However, due to varying levels of relevance and quality of data, equal importance to all the data samples may not be the optimal choice. While data reweighting has been explored in the context of task-specific supervised learning and LM fine-tuning, model-driven reweighting for pre-training data has not been explored. We fill this important gap and propose PRESENCE, a method for jointly reweighting samples by leveraging self-influence (SI) scores as an indicator of sample importance and pre-training. PRESENCE promotes novelty and stability for model pre-training. Through extensive analysis spanning multiple model sizes, datasets, and tasks, we present PRESENCE as an important first step in the research direction of sample reweighting for pre-training language models.", }
Language Models (LMs) pre-trained with self-supervision on large text corpora have become the default starting point for developing models for various NLP tasks. Once the pre-training corpus has been assembled, all data samples in the corpus are treated with equal importance during LM pre-training. However, due to varying levels of relevance and quality of data, equal importance to all the data samples may not be the optimal choice. While data reweighting has been explored in the context of task-specific supervised learning and LM fine-tuning, model-driven reweighting for pre-training data has not been explored. We fill this important gap and propose PRESENCE, a method for jointly reweighting samples by leveraging self-influence (SI) scores as an indicator of sample importance and pre-training. PRESENCE promotes novelty and stability for model pre-training. Through extensive analysis spanning multiple model sizes, datasets, and tasks, we present PRESENCE as an important first step in the research direction of sample reweighting for pre-training language models.
[ "Thakkar, Megh", "Bolukbasi, Tolga", "Ganapathy, Sriram", "Vashishth, Shikhar", "Ch", "ar, Sarath", "Talukdar, Partha" ]
Self-Influence Guided Data Reweighting for Language Model Pre-training
emnlp-main.125
2311.00913
[ "" ]
https://huggingface.co/papers/2311.00913
1
1
1
6
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.126.bib
https://aclanthology.org/2023.emnlp-main.126/
@inproceedings{wang-plank-2023-actor, title = "{ACTOR}: Active Learning with Annotator-specific Classification Heads to Embrace Human Label Variation", author = "Wang, Xinpeng and Plank, Barbara", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.126", doi = "10.18653/v1/2023.emnlp-main.126", pages = "2046--2052", abstract = "Label aggregation such as majority voting is commonly used to resolve annotator disagreement in dataset creation. However, this may disregard minority values and opinions. Recent studies indicate that learning from individual annotations outperforms learning from aggregated labels, though they require a considerable amount of annotation. Active learning, as an annotation cost-saving strategy, has not been fully explored in the context of learning from disagreement. We show that in the active learning setting, a multi-head model performs significantly better than a single-head model in terms of uncertainty estimation. By designing and evaluating acquisition functions with annotator-specific heads on two datasets, we show that group-level entropy works generally well on both datasets. Importantly, it achieves performance in terms of both prediction and uncertainty estimation comparable to full-scale training from disagreement, while saving 70{\%} of the annotation budget.", }
Label aggregation such as majority voting is commonly used to resolve annotator disagreement in dataset creation. However, this may disregard minority values and opinions. Recent studies indicate that learning from individual annotations outperforms learning from aggregated labels, though they require a considerable amount of annotation. Active learning, as an annotation cost-saving strategy, has not been fully explored in the context of learning from disagreement. We show that in the active learning setting, a multi-head model performs significantly better than a single-head model in terms of uncertainty estimation. By designing and evaluating acquisition functions with annotator-specific heads on two datasets, we show that group-level entropy works generally well on both datasets. Importantly, it achieves performance in terms of both prediction and uncertainty estimation comparable to full-scale training from disagreement, while saving 70{\%} of the annotation budget.
[ "Wang, Xinpeng", "Plank, Barbara" ]
ACTOR: Active Learning with Annotator-specific Classification Heads to Embrace Human Label Variation
emnlp-main.126
2310.14979
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.127.bib
https://aclanthology.org/2023.emnlp-main.127/
@inproceedings{gekhman-etal-2023-trueteacher, title = "{T}rue{T}eacher: Learning Factual Consistency Evaluation with Large Language Models", author = "Gekhman, Zorik and Herzig, Jonathan and Aharoni, Roee and Elkind, Chen and Szpektor, Idan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.127", doi = "10.18653/v1/2023.emnlp-main.127", pages = "2053--2070", abstract = "Factual consistency evaluation is often conducted using Natural Language Inference (NLI) models, yet these models exhibit limited success in evaluating summaries. Previous work improved such models with synthetic training data. However, the data is typically based on perturbed human-written summaries, which often differ in their characteristics from real model-generated summaries and have limited coverage of possible factual errors. Alternatively, large language models (LLMs) have recently shown promising results in directly evaluating generative tasks, but are too computationally expensive for practical use. Motivated by these limitations, we introduce TrueTeacher, a method for generating synthetic data by annotating diverse model-generated summaries using a LLM. Unlike prior work, TrueTeacher does not rely on human-written summaries, and is multilingual by nature. Experiments on the TRUE benchmark show that a student model trained using our data, substantially outperforms both the state-of-the-art model with similar capacity, and the LLM teacher. In a systematic study, we compare TrueTeacher to existing synthetic data generation methods and demonstrate its superiority and robustness to domain-shift. We also show that our method generalizes to multilingual scenarios. Lastly, we release our large scale synthetic dataset (1.4M examples), generated using TrueTeacher, and a checkpoint trained on this data.", }
Factual consistency evaluation is often conducted using Natural Language Inference (NLI) models, yet these models exhibit limited success in evaluating summaries. Previous work improved such models with synthetic training data. However, the data is typically based on perturbed human-written summaries, which often differ in their characteristics from real model-generated summaries and have limited coverage of possible factual errors. Alternatively, large language models (LLMs) have recently shown promising results in directly evaluating generative tasks, but are too computationally expensive for practical use. Motivated by these limitations, we introduce TrueTeacher, a method for generating synthetic data by annotating diverse model-generated summaries using a LLM. Unlike prior work, TrueTeacher does not rely on human-written summaries, and is multilingual by nature. Experiments on the TRUE benchmark show that a student model trained using our data, substantially outperforms both the state-of-the-art model with similar capacity, and the LLM teacher. In a systematic study, we compare TrueTeacher to existing synthetic data generation methods and demonstrate its superiority and robustness to domain-shift. We also show that our method generalizes to multilingual scenarios. Lastly, we release our large scale synthetic dataset (1.4M examples), generated using TrueTeacher, and a checkpoint trained on this data.
[ "Gekhman, Zorik", "Herzig, Jonathan", "Aharoni, Roee", "Elkind, Chen", "Szpektor, Idan" ]
TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models
emnlp-main.127
2305.11171
[ "https://github.com/google-research/google-research/tree/master/true_teacher" ]
https://huggingface.co/papers/2305.11171
1
2
0
5
[ "google/t5_11b_trueteacher_and_anli" ]
[ "google/trueteacher" ]
[ "Sharathhebbar24/One-stop-for-Open-source-models", "K00B404/One-stop-till-you-drop" ]
1
Poster
https://aclanthology.org/2023.emnlp-main.128.bib
https://aclanthology.org/2023.emnlp-main.128/
@inproceedings{ruiz-dolz-sanchez-2023-vivesdebate, title = "{V}ives{D}ebate-Speech: A Corpus of Spoken Argumentation to Leverage Audio Features for Argument Mining", author = "Ruiz-Dolz, Ramon and Iranzo-S{\'a}nchez, Javier", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.128", doi = "10.18653/v1/2023.emnlp-main.128", pages = "2071--2077", abstract = "In this paper, we describe VivesDebate-Speech, a corpus of spoken argumentation created to leverage audio features for argument mining tasks. The creation of this corpus represents an important contribution to the intersection of speech processing and argument mining communities, and one of the most complete publicly available resources in this topic. Moreover, we have performed a set of first-of-their-kind experiments which show an improvement when integrating audio features into the argument mining pipeline. The provided results can be used as a baseline for future research.", }
In this paper, we describe VivesDebate-Speech, a corpus of spoken argumentation created to leverage audio features for argument mining tasks. The creation of this corpus represents an important contribution to the intersection of speech processing and argument mining communities, and one of the most complete publicly available resources in this topic. Moreover, we have performed a set of first-of-their-kind experiments which show an improvement when integrating audio features into the argument mining pipeline. The provided results can be used as a baseline for future research.
[ "Ruiz-Dolz, Ramon", "Iranzo-S{\\'a}nchez, Javier" ]
VivesDebate-Speech: A Corpus of Spoken Argumentation to Leverage Audio Features for Argument Mining
emnlp-main.128
2302.12584
[ "https://github.com/jairsan/vivesdebate-speech" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.129.bib
https://aclanthology.org/2023.emnlp-main.129/
@inproceedings{xianlong-etal-2023-tagging, title = "Tagging-Assisted Generation Model with Encoder and Decoder Supervision for Aspect Sentiment Triplet Extraction", author = "Xianlong, Luo and Yang, Meng and Wang, Yihao", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.129", doi = "10.18653/v1/2023.emnlp-main.129", pages = "2078--2093", abstract = "ASTE (Aspect Sentiment Triplet Extraction) has gained increasing attention. Recent advancements in the ASTE task have been primarily driven by Natural Language Generation-based (NLG) approaches. However, most NLG methods overlook the supervision of the encoder-decoder hidden representations and fail to fully utilize the semantic information provided by the labels to enhance supervision. These limitations can hinder the extraction of implicit aspects and opinions. To address these challenges, we propose a tagging-assisted generation model with encoder and decoder supervision (TAGS), which enhances the supervision of the encoder and decoder through multiple-perspective tagging assistance and label semantic representations. Specifically, TAGS enhances the generation task by integrating an additional sequence tagging task, which improves the encoder{'}s capability to distinguish the words of triplets. Moreover, it utilizes sequence tagging probabilities to guide the decoder, improving the generated content{'}s quality. Furthermore, TAGS employs a self-decoding process for labels to acquire the semantic representations of the labels and aligns the decoder{'}s hidden states with these semantic representations, thereby achieving enhanced semantic supervision for the decoder{'}s hidden states. Extensive experiments on various public benchmarks demonstrate that TAGS achieves state-of-the-art performance.", }
ASTE (Aspect Sentiment Triplet Extraction) has gained increasing attention. Recent advancements in the ASTE task have been primarily driven by Natural Language Generation-based (NLG) approaches. However, most NLG methods overlook the supervision of the encoder-decoder hidden representations and fail to fully utilize the semantic information provided by the labels to enhance supervision. These limitations can hinder the extraction of implicit aspects and opinions. To address these challenges, we propose a tagging-assisted generation model with encoder and decoder supervision (TAGS), which enhances the supervision of the encoder and decoder through multiple-perspective tagging assistance and label semantic representations. Specifically, TAGS enhances the generation task by integrating an additional sequence tagging task, which improves the encoder{'}s capability to distinguish the words of triplets. Moreover, it utilizes sequence tagging probabilities to guide the decoder, improving the generated content{'}s quality. Furthermore, TAGS employs a self-decoding process for labels to acquire the semantic representations of the labels and aligns the decoder{'}s hidden states with these semantic representations, thereby achieving enhanced semantic supervision for the decoder{'}s hidden states. Extensive experiments on various public benchmarks demonstrate that TAGS achieves state-of-the-art performance.
[ "Xianlong, Luo", "Yang, Meng", "Wang, Yihao" ]
Tagging-Assisted Generation Model with Encoder and Decoder Supervision for Aspect Sentiment Triplet Extraction
emnlp-main.129
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.130.bib
https://aclanthology.org/2023.emnlp-main.130/
@inproceedings{shivagunde-etal-2023-larger, title = "Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning", author = "Shivagunde, Namrata and Lialin, Vladislav and Rumshisky, Anna", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.130", doi = "10.18653/v1/2023.emnlp-main.130", pages = "2094--2107", abstract = "Language model probing is often used to test specific capabilities of models. However, conclusions from such studies may be limited when the probing benchmarks are small and lack statistical power. In this work, we introduce new, larger datasets for negation (NEG-1500-SIMP) and role reversal (ROLE-1500) inspired by psycholinguistic studies. We dramatically extend existing NEG-136 and ROLE-88 benchmarks using GPT3, increasing their size from 18 and 44 sentence pairs to 750 each. We also create another version of the extended negation dataset (NEG-1500-SIMP-TEMP), created using template-based generation. It consists of 770 sentence pairs. We evaluate 22 models on the extended datasets, seeing model performance dip 20-57{\%} compared to the original smaller benchmarks. We observe high levels of negation sensitivity in models like BERT and ALBERT, demonstrating that previous findings might have been skewed due to smaller test sets. Finally, we observe that while GPT3 has generated all the examples in ROLE-1500, it is only able to solve 24.6{\%} of them during probing. The datasets and code are available on Github.", }
Language model probing is often used to test specific capabilities of models. However, conclusions from such studies may be limited when the probing benchmarks are small and lack statistical power. In this work, we introduce new, larger datasets for negation (NEG-1500-SIMP) and role reversal (ROLE-1500) inspired by psycholinguistic studies. We dramatically extend existing NEG-136 and ROLE-88 benchmarks using GPT3, increasing their size from 18 and 44 sentence pairs to 750 each. We also create another version of the extended negation dataset (NEG-1500-SIMP-TEMP), created using template-based generation. It consists of 770 sentence pairs. We evaluate 22 models on the extended datasets, seeing model performance dip 20-57{\%} compared to the original smaller benchmarks. We observe high levels of negation sensitivity in models like BERT and ALBERT, demonstrating that previous findings might have been skewed due to smaller test sets. Finally, we observe that while GPT3 has generated all the examples in ROLE-1500, it is only able to solve 24.6{\%} of them during probing. The datasets and code are available on Github.
[ "Shivagunde, Namrata", "Lialin, Vladislav", "Rumshisky, Anna" ]
Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning
emnlp-main.130
2303.16445
[ "https://github.com/text-machine-lab/extending_psycholinguistic_dataset" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.131.bib
https://aclanthology.org/2023.emnlp-main.131/
@inproceedings{oyama-etal-2023-norm, title = "Norm of Word Embedding Encodes Information Gain", author = "Oyama, Momose and Yokoi, Sho and Shimodaira, Hidetoshi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.131", doi = "10.18653/v1/2023.emnlp-main.131", pages = "2108--2130", abstract = "Distributed representations of words encode lexical semantic information, but what type of information is encoded and how? Focusing on the skip-gram with negative-sampling method, we found that the squared norm of static word embedding encodes the information gain conveyed by the word; the information gain is defined by the Kullback-Leibler divergence of the co-occurrence distribution of the word to the unigram distribution. Our findings are explained by the theoretical framework of the exponential family of probability distributions and confirmed through precise experiments that remove spurious correlations arising from word frequency. This theory also extends to contextualized word embeddings in language models or any neural networks with the softmax output layer. We also demonstrate that both the KL divergence and the squared norm of embedding provide a useful metric of the informativeness of a word in tasks such as keyword extraction, proper-noun discrimination, and hypernym discrimination.", }
Distributed representations of words encode lexical semantic information, but what type of information is encoded and how? Focusing on the skip-gram with negative-sampling method, we found that the squared norm of static word embedding encodes the information gain conveyed by the word; the information gain is defined by the Kullback-Leibler divergence of the co-occurrence distribution of the word to the unigram distribution. Our findings are explained by the theoretical framework of the exponential family of probability distributions and confirmed through precise experiments that remove spurious correlations arising from word frequency. This theory also extends to contextualized word embeddings in language models or any neural networks with the softmax output layer. We also demonstrate that both the KL divergence and the squared norm of embedding provide a useful metric of the informativeness of a word in tasks such as keyword extraction, proper-noun discrimination, and hypernym discrimination.
[ "Oyama, Momose", "Yokoi, Sho", "Shimodaira, Hidetoshi" ]
Norm of Word Embedding Encodes Information Gain
emnlp-main.131
2212.09663
[ "" ]
https://huggingface.co/papers/2212.09663
0
0
0
3
[]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.132.bib
https://aclanthology.org/2023.emnlp-main.132/
@inproceedings{zhang-etal-2023-crt, title = "{CRT}-{QA}: A Dataset of Complex Reasoning Question Answering over Tabular Data", author = "Zhang, Zhehao and Li, Xitao and Gao, Yan and Lou, Jian-Guang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.132", doi = "10.18653/v1/2023.emnlp-main.132", pages = "2131--2153", abstract = "Large language models (LLMs) show powerful reasoning abilities on various text-based tasks. However, their reasoning capability on structured data such as tables has not been systematically explored. In this work, we first establish a comprehensive taxonomy of reasoning and operation types for tabular data analysis. Then, we construct a complex reasoning QA dataset over tabular data, named CRT-QA dataset (Complex Reasoning QA over Tabular data), with the following unique features: (1) it is the first Table QA dataset with multi-step operation and informal reasoning; (2) it contains fine-grained annotations on questions{'} directness, composition types of sub-questions, and human reasoning paths which can be used to conduct a thorough investigation on LLMs{'} reasoning ability; (3) it contains a collection of unanswerable and indeterminate questions that commonly arise in real-world situations. We further introduce an efficient and effective tool-augmented method, named ARC (Auto-exemplar-guided Reasoning with Code), to use external tools such as Pandas to solve table reasoning tasks without handcrafted demonstrations. The experiment results show that CRT-QA presents a strong challenge for baseline methods and ARC achieves the best result.", }
Large language models (LLMs) show powerful reasoning abilities on various text-based tasks. However, their reasoning capability on structured data such as tables has not been systematically explored. In this work, we first establish a comprehensive taxonomy of reasoning and operation types for tabular data analysis. Then, we construct a complex reasoning QA dataset over tabular data, named CRT-QA dataset (Complex Reasoning QA over Tabular data), with the following unique features: (1) it is the first Table QA dataset with multi-step operation and informal reasoning; (2) it contains fine-grained annotations on questions{'} directness, composition types of sub-questions, and human reasoning paths which can be used to conduct a thorough investigation on LLMs{'} reasoning ability; (3) it contains a collection of unanswerable and indeterminate questions that commonly arise in real-world situations. We further introduce an efficient and effective tool-augmented method, named ARC (Auto-exemplar-guided Reasoning with Code), to use external tools such as Pandas to solve table reasoning tasks without handcrafted demonstrations. The experiment results show that CRT-QA presents a strong challenge for baseline methods and ARC achieves the best result.
[ "Zhang, Zhehao", "Li, Xitao", "Gao, Yan", "Lou, Jian-Guang" ]
CRT-QA: A Dataset of Complex Reasoning Question Answering over Tabular Data
emnlp-main.132
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.133.bib
https://aclanthology.org/2023.emnlp-main.133/
@inproceedings{atri-etal-2023-promoting, title = "Promoting Topic Coherence and Inter-Document Consorts in Multi-Document Summarization via Simplicial Complex and Sheaf Graph", author = "Atri, Yash and Iyer, Arun and Chakraborty, Tanmoy and Goyal, Vikram", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.133", doi = "10.18653/v1/2023.emnlp-main.133", pages = "2154--2166", abstract = "Multi-document Summarization (MDS) characterizes compressing information from multiple source documents to its succinct summary. An ideal summary should encompass all topics and accurately model cross-document relations expounded upon in the source documents. However, existing systems either impose constraints on the length of tokens during the encoding or falter in capturing the intricate cross-document relationships. These limitations impel the systems to produce summaries that are non-factual and unfaithful, thereby imparting an unfair comprehension of the topic to the readers. To counter these limitations and promote the information equivalence between the source document and generated summary, we propose FIBER, a novel encoder-decoder model that uses pre-trained BART to comprehensively analyze linguistic nuances, simplicial complex layer to apprehend inherent properties that transcend pairwise associations and sheaf graph attention to effectively capture the heterophilic properties. We benchmark FIBER with eleven baselines over four widely-used MDS datasets {--} Multinews, CQASumm, DUC and Opinosis, and show that FIBER achieves consistent performance improvement across all the evaluation metrics (syntactical, semantical and faithfulness). We corroborate these improvements further through qualitative human evaluation.", }
Multi-document Summarization (MDS) characterizes compressing information from multiple source documents to its succinct summary. An ideal summary should encompass all topics and accurately model cross-document relations expounded upon in the source documents. However, existing systems either impose constraints on the length of tokens during the encoding or falter in capturing the intricate cross-document relationships. These limitations impel the systems to produce summaries that are non-factual and unfaithful, thereby imparting an unfair comprehension of the topic to the readers. To counter these limitations and promote the information equivalence between the source document and generated summary, we propose FIBER, a novel encoder-decoder model that uses pre-trained BART to comprehensively analyze linguistic nuances, simplicial complex layer to apprehend inherent properties that transcend pairwise associations and sheaf graph attention to effectively capture the heterophilic properties. We benchmark FIBER with eleven baselines over four widely-used MDS datasets {--} Multinews, CQASumm, DUC and Opinosis, and show that FIBER achieves consistent performance improvement across all the evaluation metrics (syntactical, semantical and faithfulness). We corroborate these improvements further through qualitative human evaluation.
[ "Atri, Yash", "Iyer, Arun", "Chakraborty, Tanmoy", "Goyal, Vikram" ]
Promoting Topic Coherence and Inter-Document Consorts in Multi-Document Summarization via Simplicial Complex and Sheaf Graph
emnlp-main.133
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.134.bib
https://aclanthology.org/2023.emnlp-main.134/
@inproceedings{patel-etal-2023-magnifico, title = "{MAGNIFIC}o: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations", author = "Patel, Arkil and Bhattamishra, Satwik and Reddy, Siva and Bahdanau, Dzmitry", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.134", doi = "10.18653/v1/2023.emnlp-main.134", pages = "2167--2189", abstract = "Humans possess a remarkable ability to assign novel interpretations to linguistic expressions, enabling them to learn new words and understand community-specific connotations. However, Large Language Models (LLMs) have a knowledge cutoff and are costly to finetune repeatedly. Therefore, it is crucial for LLMs to learn novel interpretations in-context. In this paper, we systematically analyse the ability of LLMs to acquire novel interpretations using in-context learning. To facilitate our study, we introduce MAGNIFICo, an evaluation suite implemented within a text-to-SQL semantic parsing framework that incorporates diverse tokens and prompt settings to simulate real-world complexity. Experimental results on MAGNIFICo demonstrate that LLMs exhibit a surprisingly robust capacity for comprehending novel interpretations from natural language descriptions as well as from discussions within long conversations. Nevertheless, our findings also highlight the need for further improvements, particularly when interpreting unfamiliar words or when composing multiple novel interpretations simultaneously in the same example. Additionally, our analysis uncovers the semantic predispositions in LLMs and reveals the impact of recency bias for information presented in long contexts.", }
Humans possess a remarkable ability to assign novel interpretations to linguistic expressions, enabling them to learn new words and understand community-specific connotations. However, Large Language Models (LLMs) have a knowledge cutoff and are costly to finetune repeatedly. Therefore, it is crucial for LLMs to learn novel interpretations in-context. In this paper, we systematically analyse the ability of LLMs to acquire novel interpretations using in-context learning. To facilitate our study, we introduce MAGNIFICo, an evaluation suite implemented within a text-to-SQL semantic parsing framework that incorporates diverse tokens and prompt settings to simulate real-world complexity. Experimental results on MAGNIFICo demonstrate that LLMs exhibit a surprisingly robust capacity for comprehending novel interpretations from natural language descriptions as well as from discussions within long conversations. Nevertheless, our findings also highlight the need for further improvements, particularly when interpreting unfamiliar words or when composing multiple novel interpretations simultaneously in the same example. Additionally, our analysis uncovers the semantic predispositions in LLMs and reveals the impact of recency bias for information presented in long contexts.
[ "Patel, Arkil", "Bhattamishra, Satwik", "Reddy, Siva", "Bahdanau, Dzmitry" ]
MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations
emnlp-main.134
2310.11634
[ "https://github.com/mcgill-nlp/magnifico" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.135.bib
https://aclanthology.org/2023.emnlp-main.135/
@inproceedings{zelikman-etal-2023-generating, title = "Generating and Evaluating Tests for K-12 Students with Language Model Simulations: A Case Study on Sentence Reading Efficiency", author = "Zelikman, Eric and Ma, Wanjing and Tran, Jasmine and Yang, Diyi and Yeatman, Jason and Haber, Nick", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.135", doi = "10.18653/v1/2023.emnlp-main.135", pages = "2190--2205", abstract = "Developing an educational test can be expensive and time-consuming, as each item must be written by experts and then evaluated by collecting hundreds of student responses. Moreover, many tests require multiple distinct sets of questions administered throughout the school year to closely monitor students{'} progress, known as parallel tests. In this study, we focus on tests of silent sentence reading efficiency, used to assess students{'} reading ability over time. To generate high-quality parallel tests, we propose to fine-tune large language models (LLMs) to simulate how previous students would have responded to unseen items. With these simulated responses, we can estimate each item{'}s difficulty and ambiguity. We first use GPT-4 to generate new test items following a list of expert-developed rules and then apply a fine-tuned LLM to filter the items based on criteria from psychological measurements. We also propose an optimal-transport-inspired technique for generating parallel tests and show the generated tests closely correspond to the original test{'}s difficulty and reliability based on crowdworker responses. Our evaluation of a generated test with 234 students from grades 2 to 8 produces test scores highly correlated (r=0.93) to those of a standard test form written by human experts and evaluated across thousands of K-12 students.", }
Developing an educational test can be expensive and time-consuming, as each item must be written by experts and then evaluated by collecting hundreds of student responses. Moreover, many tests require multiple distinct sets of questions administered throughout the school year to closely monitor students{'} progress, known as parallel tests. In this study, we focus on tests of silent sentence reading efficiency, used to assess students{'} reading ability over time. To generate high-quality parallel tests, we propose to fine-tune large language models (LLMs) to simulate how previous students would have responded to unseen items. With these simulated responses, we can estimate each item{'}s difficulty and ambiguity. We first use GPT-4 to generate new test items following a list of expert-developed rules and then apply a fine-tuned LLM to filter the items based on criteria from psychological measurements. We also propose an optimal-transport-inspired technique for generating parallel tests and show the generated tests closely correspond to the original test{'}s difficulty and reliability based on crowdworker responses. Our evaluation of a generated test with 234 students from grades 2 to 8 produces test scores highly correlated (r=0.93) to those of a standard test form written by human experts and evaluated across thousands of K-12 students.
[ "Zelikman, Eric", "Ma, Wanjing", "Tran, Jasmine", "Yang, Diyi", "Yeatman, Jason", "Haber, Nick" ]
Generating and Evaluating Tests for K-12 Students with Language Model Simulations: A Case Study on Sentence Reading Efficiency
emnlp-main.135
2310.06837
[ "" ]
https://huggingface.co/papers/2310.06837
2
1
0
6
[]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.136.bib
https://aclanthology.org/2023.emnlp-main.136/
@inproceedings{chakraborty-etal-2023-counter, title = "Counter {T}uring Test ({CT}2): {AI}-Generated Text Detection is Not as Easy as You May Think - Introducing {AI} Detectability Index ({ADI})", author = "Chakraborty, Megha and Tonmoy, S.M Towhidul Islam and Zaman, S M Mehedi and Gautam, Shreya and Kumar, Tanay and Sharma, Krish and Barman, Niyar and Gupta, Chandan and Jain, Vinija and Chadha, Aman and Sheth, Amit and Das, Amitava", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.136", doi = "10.18653/v1/2023.emnlp-main.136", pages = "2206--2239", abstract = "With the rise of prolific ChatGPT, the risk and consequences of AI-generated text has increased alarmingly. This triggered a series of events, including an open letter, signed by thousands of researchers and tech leaders in March 2023, demanding a six-month moratorium on the training of AI systems more sophisticated than GPT-4. To address the inevitable question of ownership attribution for AI-generated artifacts, the US Copyright Office released a statement stating that {``}if the content is traditional elements of authorship produced by a machine, the work lacks human authorship and the office will not register it for copyright{''}. Furthermore, both the US and the EU governments have recently drafted their initial proposals regarding the regulatory framework for AI. Given this cynosural spotlight on generative AI, AI-generated text detection (AGTD) has emerged as a topic that has already received immediate attention in research, with some initial methods having been proposed, soon followed by the emergence of techniques to bypass detection. This paper introduces the Counter Turing Test (CT2), a benchmark consisting of techniques aiming to offer a comprehensive evaluation of the robustness of existing AGTD techniques. Our empirical findings unequivocally highlight the fragility of the proposed AGTD methods under scrutiny. Amidst the extensive deliberations on policy-making for regulating AI development, it is of utmost importance to assess the detectability of content generated by LLMs. Thus, to establish a quantifiable spectrum facilitating the evaluation and ranking of LLMs according to their detectability levels, we propose the AI Detectability Index (ADI). We conduct a thorough examination of 15 contemporary LLMs, empirically demonstrating that larger LLMs tend to have a lower ADI, indicating they are less detectable compared to smaller LLMs. We firmly believe that ADI holds significant value as a tool for the wider NLP community, with the potential to serve as a rubric in AI-related policy-making.", }
With the rise of prolific ChatGPT, the risk and consequences of AI-generated text has increased alarmingly. This triggered a series of events, including an open letter, signed by thousands of researchers and tech leaders in March 2023, demanding a six-month moratorium on the training of AI systems more sophisticated than GPT-4. To address the inevitable question of ownership attribution for AI-generated artifacts, the US Copyright Office released a statement stating that {``}if the content is traditional elements of authorship produced by a machine, the work lacks human authorship and the office will not register it for copyright{''}. Furthermore, both the US and the EU governments have recently drafted their initial proposals regarding the regulatory framework for AI. Given this cynosural spotlight on generative AI, AI-generated text detection (AGTD) has emerged as a topic that has already received immediate attention in research, with some initial methods having been proposed, soon followed by the emergence of techniques to bypass detection. This paper introduces the Counter Turing Test (CT2), a benchmark consisting of techniques aiming to offer a comprehensive evaluation of the robustness of existing AGTD techniques. Our empirical findings unequivocally highlight the fragility of the proposed AGTD methods under scrutiny. Amidst the extensive deliberations on policy-making for regulating AI development, it is of utmost importance to assess the detectability of content generated by LLMs. Thus, to establish a quantifiable spectrum facilitating the evaluation and ranking of LLMs according to their detectability levels, we propose the AI Detectability Index (ADI). We conduct a thorough examination of 15 contemporary LLMs, empirically demonstrating that larger LLMs tend to have a lower ADI, indicating they are less detectable compared to smaller LLMs. We firmly believe that ADI holds significant value as a tool for the wider NLP community, with the potential to serve as a rubric in AI-related policy-making.
[ "Chakraborty, Megha", "Tonmoy, S.M Towhidul Islam", "Zaman, S M Mehedi", "Gautam, Shreya", "Kumar, Tanay", "Sharma, Krish", "Barman, Niyar", "Gupta, Chandan", "Jain, Vinija", "Chadha, Aman", "Sheth, Amit", "Das, Amitava" ]
Counter Turing Test (CT2): AI-Generated Text Detection is Not as Easy as You May Think - Introducing AI Detectability Index (ADI)
emnlp-main.136
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.137.bib
https://aclanthology.org/2023.emnlp-main.137/
@inproceedings{pimentel-etal-2023-revisiting, title = "Revisiting the Optimality of Word Lengths", author = "Pimentel, Tiago and Meister, Clara and Wilcox, Ethan and Mahowald, Kyle and Cotterell, Ryan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.137", doi = "10.18653/v1/2023.emnlp-main.137", pages = "2240--2255", abstract = "Zipf (1935) posited that wordforms are optimized to minimize utterances{'} communicative costs. Under the assumption that cost is given by an utterance{'}s length, he supported this claim by showing that words{'} lengths are inversely correlated with their frequencies. Communicative cost, however, can be operationalized in different ways. Piantadosi et al. (2011) claim that cost should be measured as the distance between an utterance{'}s information rate and channel capacity, which we dub the channel capacity hypothesis (CCH) here. Following this logic, they then proposed that a word{'}s length should be proportional to the expected value of its surprisal (negative log-probability in context). In this work, we show that Piantadosi et al.{'}s derivation does not minimize CCH{'}s cost, but rather a lower bound, which we term CCH-lower. We propose a novel derivation, suggesting an improved way to minimize CCH{'}s cost. Under this method, we find that a language{'}s word lengths should instead be proportional to the surprisal{'}s expectation plus its variance-to-mean ratio. Experimentally, we compare these three communicative cost functions: Zipf{'}s, CCH-lower, and CCH. Across 13 languages and several experimental settings, we find that length is better predicted by frequency than either of the other hypotheses. In fact, when surprisal{'}s expectation, or expectation plus variance-to-mean ratio, is estimated using better language models, it leads to worse word length predictions. We take these results as evidence that Zipf{'}s longstanding hypothesis holds.", }
Zipf (1935) posited that wordforms are optimized to minimize utterances{'} communicative costs. Under the assumption that cost is given by an utterance{'}s length, he supported this claim by showing that words{'} lengths are inversely correlated with their frequencies. Communicative cost, however, can be operationalized in different ways. Piantadosi et al. (2011) claim that cost should be measured as the distance between an utterance{'}s information rate and channel capacity, which we dub the channel capacity hypothesis (CCH) here. Following this logic, they then proposed that a word{'}s length should be proportional to the expected value of its surprisal (negative log-probability in context). In this work, we show that Piantadosi et al.{'}s derivation does not minimize CCH{'}s cost, but rather a lower bound, which we term CCH-lower. We propose a novel derivation, suggesting an improved way to minimize CCH{'}s cost. Under this method, we find that a language{'}s word lengths should instead be proportional to the surprisal{'}s expectation plus its variance-to-mean ratio. Experimentally, we compare these three communicative cost functions: Zipf{'}s, CCH-lower, and CCH. Across 13 languages and several experimental settings, we find that length is better predicted by frequency than either of the other hypotheses. In fact, when surprisal{'}s expectation, or expectation plus variance-to-mean ratio, is estimated using better language models, it leads to worse word length predictions. We take these results as evidence that Zipf{'}s longstanding hypothesis holds.
[ "Pimentel, Tiago", "Meister, Clara", "Wilcox, Ethan", "Mahowald, Kyle", "Cotterell, Ryan" ]
Revisiting the Optimality of Word Lengths
emnlp-main.137
2312.03897
[ "https://github.com/tpimentelms/optimality-of-word-lengths" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.138.bib
https://aclanthology.org/2023.emnlp-main.138/
@inproceedings{liu-etal-2023-document-level, title = "Document-level Relationship Extraction by Bidirectional Constraints of Beta Rules", author = "Liu, Yichun and Zhu, Zizhong and Zhang, Xiaowang and Feng, Zhiyong and Chen, Daoqi and Li, Yaxin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.138", doi = "10.18653/v1/2023.emnlp-main.138", pages = "2256--2266", abstract = "Document-level Relation Extraction (DocRE) aims to extract relations among entity pairs in documents. Some works introduce logic constraints into DocRE, addressing the issues of opacity and weak logic in original DocRE models. However, they only focus on forward logic constraints, and the rules mined in these works often suffer from pseudo rules with high standard-confidence but low support. In this paper, we propose Bidirectional Constraints of Beta Rules (BCBR), a novel logic constraint framework. BCBR first introduces a new rule miner which models rules by beta contribution. Then forward and reverse logic constraints are constructed based on beta rules. Finally, BCBR reconstructs the rule consistency loss by bidirectional constraints to regulate the output of the DocRE model. Experiments show that BCBR outperforms original DocRE models in terms of relation extraction performance ({\textasciitilde}2.7 F1 score) and logical consistency ({\textasciitilde}3.1 logic score). Furthermore, BCBR consistently outperforms two other logic constraint frameworks.", }
Document-level Relation Extraction (DocRE) aims to extract relations among entity pairs in documents. Some works introduce logic constraints into DocRE, addressing the issues of opacity and weak logic in original DocRE models. However, they only focus on forward logic constraints, and the rules mined in these works often suffer from pseudo rules with high standard-confidence but low support. In this paper, we propose Bidirectional Constraints of Beta Rules (BCBR), a novel logic constraint framework. BCBR first introduces a new rule miner which models rules by beta contribution. Then forward and reverse logic constraints are constructed based on beta rules. Finally, BCBR reconstructs the rule consistency loss by bidirectional constraints to regulate the output of the DocRE model. Experiments show that BCBR outperforms original DocRE models in terms of relation extraction performance ({\textasciitilde}2.7 F1 score) and logical consistency ({\textasciitilde}3.1 logic score). Furthermore, BCBR consistently outperforms two other logic constraint frameworks.
[ "Liu, Yichun", "Zhu, Zizhong", "Zhang, Xiaowang", "Feng, Zhiyong", "Chen, Daoqi", "Li, Yaxin" ]
Document-level Relationship Extraction by Bidirectional Constraints of Beta Rules
emnlp-main.138
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.139.bib
https://aclanthology.org/2023.emnlp-main.139/
@inproceedings{xiao-etal-2023-instructed, title = "Instructed Language Models with Retrievers Are Powerful Entity Linkers", author = "Xiao, Zilin and Gong, Ming and Wu, Jie and Zhang, Xingyao and Shou, Linjun and Jiang, Daxin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.139", doi = "10.18653/v1/2023.emnlp-main.139", pages = "2267--2282", abstract = "Generative approaches powered by large language models (LLMs) have demonstrated emergent abilities in tasks that require complex reasoning abilities. Yet the generative nature still makes the generated content suffer from hallucinations, thus unsuitable for entity-centric tasks like entity linking (EL) requiring precise entity predictions over a large knowledge base. We present Instructed Generative Entity Linker (INSGENEL), the first approach that enables causal language models to perform entity linking over knowledge bases. Several methods of equipping language models with EL ability were proposed in this work, including (i) a sequence-to-sequence training EL objective with instruction-tuning, (ii) a novel generative EL framework based on a light-weight potential mention retriever that frees the model from heavy and non-parallelizable decoding, achieving 4$\times$ speedup without compromise on linking metrics. INSGENEL outperforms previous generative alternatives with +6.8 F1 points gain on average, also with a huge advantage in training data efficiency and training compute consumption. In addition, our skillfully-engineered in-context learning (ICL) framework for EL still lags behind INSGENEL significantly, reaffirming that the EL task remains a persistent hurdle for general LLMs.", }
Generative approaches powered by large language models (LLMs) have demonstrated emergent abilities in tasks that require complex reasoning abilities. Yet the generative nature still makes the generated content suffer from hallucinations, thus unsuitable for entity-centric tasks like entity linking (EL) requiring precise entity predictions over a large knowledge base. We present Instructed Generative Entity Linker (INSGENEL), the first approach that enables causal language models to perform entity linking over knowledge bases. Several methods of equipping language models with EL ability were proposed in this work, including (i) a sequence-to-sequence training EL objective with instruction-tuning, (ii) a novel generative EL framework based on a light-weight potential mention retriever that frees the model from heavy and non-parallelizable decoding, achieving 4$\times$ speedup without compromise on linking metrics. INSGENEL outperforms previous generative alternatives with +6.8 F1 points gain on average, also with a huge advantage in training data efficiency and training compute consumption. In addition, our skillfully-engineered in-context learning (ICL) framework for EL still lags behind INSGENEL significantly, reaffirming that the EL task remains a persistent hurdle for general LLMs.
[ "Xiao, Zilin", "Gong, Ming", "Wu, Jie", "Zhang, Xingyao", "Shou, Linjun", "Jiang, Daxin" ]
Instructed Language Models with Retrievers Are Powerful Entity Linkers
emnlp-main.139
2311.03250
[ "https://github.com/mrzilinxiao/insgenentitylinking" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.140.bib
https://aclanthology.org/2023.emnlp-main.140/
@inproceedings{li-etal-2023-towards-noise, title = "Towards Noise-Tolerant Speech-Referring Video Object Segmentation: Bridging Speech and Text", author = "Li, Xiang and Wang, Jinglu and Xu, Xiaohao and Yang, Muqiao and Yang, Fan and Zhao, Yizhou and Singh, Rita and Raj, Bhiksha", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.140", doi = "10.18653/v1/2023.emnlp-main.140", pages = "2283--2296", abstract = "Linguistic communication is prevalent in Human-Computer Interaction (HCI). Speech (spoken language) serves as a convenient yet potentially ambiguous form due to noise and accents, exposing a gap compared to text. In this study, we investigate the prominent HCI task, Referring Video Object Segmentation (R-VOS), which aims to segment and track objects using linguistic references. While text input is well-investigated, speech input is under-explored. Our objective is to bridge the gap between speech and text, enabling the adaptation of existing text-input R-VOS models to accommodate noisy speech input effectively. Specifically, we propose a method to align the semantic spaces between speech and text by incorporating two key modules: 1) Noise-Aware Semantic Adjustment (NSA) for clear semantics extraction from noisy speech; and 2) Semantic Jitter Suppression (SJS) enabling R-VOS models to tolerate noisy queries. Comprehensive experiments conducted on the challenging AVOS benchmarks reveal that our proposed method outperforms state-of-the-art approaches.", }
Linguistic communication is prevalent in Human-Computer Interaction (HCI). Speech (spoken language) serves as a convenient yet potentially ambiguous form due to noise and accents, exposing a gap compared to text. In this study, we investigate the prominent HCI task, Referring Video Object Segmentation (R-VOS), which aims to segment and track objects using linguistic references. While text input is well-investigated, speech input is under-explored. Our objective is to bridge the gap between speech and text, enabling the adaptation of existing text-input R-VOS models to accommodate noisy speech input effectively. Specifically, we propose a method to align the semantic spaces between speech and text by incorporating two key modules: 1) Noise-Aware Semantic Adjustment (NSA) for clear semantics extraction from noisy speech; and 2) Semantic Jitter Suppression (SJS) enabling R-VOS models to tolerate noisy queries. Comprehensive experiments conducted on the challenging AVOS benchmarks reveal that our proposed method outperforms state-of-the-art approaches.
[ "Li, Xiang", "Wang, Jinglu", "Xu, Xiaohao", "Yang, Muqiao", "Yang, Fan", "Zhao, Yizhou", "Singh, Rita", "Raj, Bhiksha" ]
Towards Noise-Tolerant Speech-Referring Video Object Segmentation: Bridging Speech and Text
emnlp-main.140
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.141.bib
https://aclanthology.org/2023.emnlp-main.141/
@inproceedings{wang-etal-2023-prose, title = "{PROSE}: A Pronoun Omission Solution for {C}hinese-{E}nglish Spoken Language Translation", author = "Wang, Ke and Zhao, Xiutian and Li, Yanghui and Peng, Wei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.141", doi = "10.18653/v1/2023.emnlp-main.141", pages = "2297--2311", abstract = "Neural Machine Translation (NMT) systems encounter a significant challenge when translating a pro-drop ({`}pronoun-dropping{'}) language (e.g., Chinese) to a non-pro-drop one (e.g., English), since the pro-drop phenomenon demands NMT systems to recover omitted pronouns. This unique and crucial task, however, lacks sufficient datasets for benchmarking. To bridge this gap, we introduce PROSE, a new benchmark featured in diverse pro-drop instances for document-level Chinese-English spoken language translation. Furthermore, we conduct an in-depth investigation of the pro-drop phenomenon in spoken Chinese on this dataset, reconfirming that pro-drop reduces the performance of NMT systems in Chinese-English translation. To alleviate the negative impact introduced by pro-drop, we propose Mention-Aware Semantic Augmentation, a novel approach that leverages the semantic embedding of dropped pronouns to augment training pairs. Results from the experiments on four Chinese-English translation corpora show that our proposed method outperforms existing methods regarding omitted pronoun retrieval and overall translation quality.", }
Neural Machine Translation (NMT) systems encounter a significant challenge when translating a pro-drop ({`}pronoun-dropping{'}) language (e.g., Chinese) to a non-pro-drop one (e.g., English), since the pro-drop phenomenon demands NMT systems to recover omitted pronouns. This unique and crucial task, however, lacks sufficient datasets for benchmarking. To bridge this gap, we introduce PROSE, a new benchmark featured in diverse pro-drop instances for document-level Chinese-English spoken language translation. Furthermore, we conduct an in-depth investigation of the pro-drop phenomenon in spoken Chinese on this dataset, reconfirming that pro-drop reduces the performance of NMT systems in Chinese-English translation. To alleviate the negative impact introduced by pro-drop, we propose Mention-Aware Semantic Augmentation, a novel approach that leverages the semantic embedding of dropped pronouns to augment training pairs. Results from the experiments on four Chinese-English translation corpora show that our proposed method outperforms existing methods regarding omitted pronoun retrieval and overall translation quality.
[ "Wang, Ke", "Zhao, Xiutian", "Li, Yanghui", "Peng, Wei" ]
PROSE: A Pronoun Omission Solution for Chinese-English Spoken Language Translation
emnlp-main.141
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.142.bib
https://aclanthology.org/2023.emnlp-main.142/
@inproceedings{pramanick-etal-2023-diachronic, title = "A Diachronic Analysis of Paradigm Shifts in {NLP} Research: When, How, and Why?", author = "Pramanick, Aniket and Hou, Yufang and Mohammad, Saif and Gurevych, Iryna", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.142", doi = "10.18653/v1/2023.emnlp-main.142", pages = "2312--2326", abstract = "Understanding the fundamental concepts and trends in a scientific field is crucial for keeping abreast of its continuous advancement. In this study, we propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques. We define three variables to encompass diverse facets of the evolution of research topics within NLP and utilize a causal discovery algorithm to unveil the causal connections among these variables using observational data. Subsequently, we leverage this structure to measure the intensity of these relationships. By conducting extensive experiments on the ACL Anthology corpus, we demonstrate that our framework effectively uncovers evolutionary trends and the underlying causes for a wide range of NLP research topics. Specifically, we show that tasks and methods are primary drivers of research in NLP, with datasets following, while metrics have minimal impact.", }
Understanding the fundamental concepts and trends in a scientific field is crucial for keeping abreast of its continuous advancement. In this study, we propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques. We define three variables to encompass diverse facets of the evolution of research topics within NLP and utilize a causal discovery algorithm to unveil the causal connections among these variables using observational data. Subsequently, we leverage this structure to measure the intensity of these relationships. By conducting extensive experiments on the ACL Anthology corpus, we demonstrate that our framework effectively uncovers evolutionary trends and the underlying causes for a wide range of NLP research topics. Specifically, we show that tasks and methods are primary drivers of research in NLP, with datasets following, while metrics have minimal impact.
[ "Pramanick, Aniket", "Hou, Yufang", "Mohammad, Saif", "Gurevych, Iryna" ]
A Diachronic Analysis of Paradigm Shifts in NLP Research: When, How, and Why?
emnlp-main.142
2305.12920
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.143.bib
https://aclanthology.org/2023.emnlp-main.143/
@inproceedings{cao-etal-2023-correctness, title = "Does the Correctness of Factual Knowledge Matter for Factual Knowledge-Enhanced Pre-trained Language Models?", author = "Cao, Boxi and Tang, Qiaoyu and Lin, Hongyu and Han, Xianpei and Sun, Le", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.143", doi = "10.18653/v1/2023.emnlp-main.143", pages = "2327--2340", abstract = "In recent years, the injection of factual knowledge has been observed to have a significant positive correlation to the downstream task performance of pre-trained language models. However, existing work neither demonstrates that pre-trained models successfully learn the injected factual knowledge nor proves that there is a causal relation between injected factual knowledge and downstream performance improvements. In this paper, we introduce a counterfactual-based analysis framework to explore the causal effects of factual knowledge injection on the performance of language models within pretrain-finetune paradigm. Instead of directly probing the language model or exhaustively enumerating potential confounding factors, we analyze this issue by perturbing the factual knowledge sources at different scales and comparing the performance of pre-trained language models before and after the perturbation. Surprisingly, throughout our experiments, we find that although the knowledge seems to be successfully injected, the correctness of injected knowledge only has a very limited effect on the models{'} downstream performance. This finding strongly challenges previous assumptions that the injected factual knowledge is the key for language models to achieve performance improvements on downstream tasks in pretrain-finetune paradigm.", }
In recent years, the injection of factual knowledge has been observed to have a significant positive correlation to the downstream task performance of pre-trained language models. However, existing work neither demonstrates that pre-trained models successfully learn the injected factual knowledge nor proves that there is a causal relation between injected factual knowledge and downstream performance improvements. In this paper, we introduce a counterfactual-based analysis framework to explore the causal effects of factual knowledge injection on the performance of language models within pretrain-finetune paradigm. Instead of directly probing the language model or exhaustively enumerating potential confounding factors, we analyze this issue by perturbing the factual knowledge sources at different scales and comparing the performance of pre-trained language models before and after the perturbation. Surprisingly, throughout our experiments, we find that although the knowledge seems to be successfully injected, the correctness of injected knowledge only has a very limited effect on the models{'} downstream performance. This finding strongly challenges previous assumptions that the injected factual knowledge is the key for language models to achieve performance improvements on downstream tasks in pretrain-finetune paradigm.
[ "Cao, Boxi", "Tang, Qiaoyu", "Lin, Hongyu", "Han, Xianpei", "Sun, Le" ]
Does the Correctness of Factual Knowledge Matter for Factual Knowledge-Enhanced Pre-trained Language Models?
emnlp-main.143
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.144.bib
https://aclanthology.org/2023.emnlp-main.144/
@inproceedings{jian-reddy-2023-syntactic, title = "Syntactic Substitutability as Unsupervised Dependency Syntax", author = "Jian, Jasper and Reddy, Siva", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.144", doi = "10.18653/v1/2023.emnlp-main.144", pages = "2341--2360", abstract = "Syntax is a latent hierarchical structure which underpins the robust and compositional nature of human language. In this work, we explore the hypothesis that syntactic dependencies can be represented in language model attention distributions and propose a new method to induce these structures theory-agnostically. Instead of modeling syntactic relations as defined by annotation schemata, we model a more general property implicit in the definition of dependency relations, syntactic substitutability. This property captures the fact that words at either end of a dependency can be substituted with words from the same category. Substitutions can be used to generate a set of syntactically invariant sentences whose representations are then used for parsing. We show that increasing the number of substitutions used improves parsing accuracy on natural data. On long-distance subject-verb agreement constructions, our method achieves 79.5{\%} recall compared to 8.9{\%} using a previous method. Our method also provides improvements when transferred to a different parsing setup, demonstrating that it generalizes.", }
Syntax is a latent hierarchical structure which underpins the robust and compositional nature of human language. In this work, we explore the hypothesis that syntactic dependencies can be represented in language model attention distributions and propose a new method to induce these structures theory-agnostically. Instead of modeling syntactic relations as defined by annotation schemata, we model a more general property implicit in the definition of dependency relations, syntactic substitutability. This property captures the fact that words at either end of a dependency can be substituted with words from the same category. Substitutions can be used to generate a set of syntactically invariant sentences whose representations are then used for parsing. We show that increasing the number of substitutions used improves parsing accuracy on natural data. On long-distance subject-verb agreement constructions, our method achieves 79.5{\%} recall compared to 8.9{\%} using a previous method. Our method also provides improvements when transferred to a different parsing setup, demonstrating that it generalizes.
[ "Jian, Jasper", "Reddy, Siva" ]
Syntactic Substitutability as Unsupervised Dependency Syntax
emnlp-main.144
2211.16031
[ "https://github.com/mcgill-nlp/syntactic-substitutability" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.145.bib
https://aclanthology.org/2023.emnlp-main.145/
@inproceedings{wu-etal-2023-mproto, title = "{MP}roto: Multi-Prototype Network with Denoised Optimal Transport for Distantly Supervised Named Entity Recognition", author = "Wu, Shuhui and Shen, Yongliang and Tan, Zeqi and Ren, Wenqi and Guo, Jietian and Pu, Shiliang and Lu, Weiming", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.145", doi = "10.18653/v1/2023.emnlp-main.145", pages = "2361--2374", abstract = "Distantly supervised named entity recognition (DS-NER) aims to locate entity mentions and classify their types with only knowledge bases or gazetteers and unlabeled corpus. However, distant annotations are noisy and degrade the performance of NER models. In this paper, we propose a noise-robust prototype network named MProto for the DS-NER task. Different from previous prototype-based NER methods, MProto represents each entity type with multiple prototypes to characterize the intra-class variance among entity representations. To optimize the classifier, each token should be assigned an appropriate ground-truth prototype and we consider such token-prototype assignment as an optimal transport (OT) problem. Furthermore, to mitigate the noise from incomplete labeling, we propose a novel denoised optimal transport (DOT) algorithm. Specifically, we utilize the assignment result between *Other* class tokens and all prototypes to distinguish unlabeled entity tokens from true negatives. Experiments on several DS-NER benchmarks demonstrate that our MProto achieves state-of-the-art performance. The source code is now available on Github.", }
Distantly supervised named entity recognition (DS-NER) aims to locate entity mentions and classify their types with only knowledge bases or gazetteers and unlabeled corpus. However, distant annotations are noisy and degrade the performance of NER models. In this paper, we propose a noise-robust prototype network named MProto for the DS-NER task. Different from previous prototype-based NER methods, MProto represents each entity type with multiple prototypes to characterize the intra-class variance among entity representations. To optimize the classifier, each token should be assigned an appropriate ground-truth prototype and we consider such token-prototype assignment as an optimal transport (OT) problem. Furthermore, to mitigate the noise from incomplete labeling, we propose a novel denoised optimal transport (DOT) algorithm. Specifically, we utilize the assignment result between *Other* class tokens and all prototypes to distinguish unlabeled entity tokens from true negatives. Experiments on several DS-NER benchmarks demonstrate that our MProto achieves state-of-the-art performance. The source code is now available on Github.
[ "Wu, Shuhui", "Shen, Yongliang", "Tan, Zeqi", "Ren, Wenqi", "Guo, Jietian", "Pu, Shiliang", "Lu, Weiming" ]
MProto: Multi-Prototype Network with Denoised Optimal Transport for Distantly Supervised Named Entity Recognition
emnlp-main.145
2310.08298
[ "https://github.com/XiPotatonium/mproto" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.146.bib
https://aclanthology.org/2023.emnlp-main.146/
@inproceedings{ouyang-etal-2023-shifted, title = "The Shifted and The Overlooked: A Task-oriented Investigation of User-{GPT} Interactions", author = "Ouyang, Siru and Wang, Shuohang and Liu, Yang and Zhong, Ming and Jiao, Yizhu and Iter, Dan and Pryzant, Reid and Zhu, Chenguang and Ji, Heng and Han, Jiawei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.146", doi = "10.18653/v1/2023.emnlp-main.146", pages = "2375--2393", abstract = "Recent progress in Large Language Models (LLMs) has produced models that exhibit remarkable performance across a variety of NLP tasks. However, it remains unclear whether the existing focus of NLP research accurately captures the genuine requirements of human users. This paper provides a comprehensive analysis of the divergence between academic research in NLP and the needs of real-world NLP applications via a large-scale collection of user-GPT conversations. We analyze a large-scale collection of real user queries to GPT. We compare these queries against existing NLP benchmark tasks and identify a significant gap between the tasks that users frequently request from LLMs and the tasks that are commonly studied in academic research. For example, we find that tasks such as {``}design{''} and {``}planning{''} are prevalent in user interactions but largely neglected or different from traditional NLP benchmarks. We investigate these overlooked tasks, dissect the practical challenges, and provide insights toward a roadmap to make LLMs better aligned with user needs.", }
Recent progress in Large Language Models (LLMs) has produced models that exhibit remarkable performance across a variety of NLP tasks. However, it remains unclear whether the existing focus of NLP research accurately captures the genuine requirements of human users. This paper provides a comprehensive analysis of the divergence between academic research in NLP and the needs of real-world NLP applications via a large-scale collection of user-GPT conversations. We analyze a large-scale collection of real user queries to GPT. We compare these queries against existing NLP benchmark tasks and identify a significant gap between the tasks that users frequently request from LLMs and the tasks that are commonly studied in academic research. For example, we find that tasks such as {``}design{''} and {``}planning{''} are prevalent in user interactions but largely neglected or different from traditional NLP benchmarks. We investigate these overlooked tasks, dissect the practical challenges, and provide insights toward a roadmap to make LLMs better aligned with user needs.
[ "Ouyang, Siru", "Wang, Shuohang", "Liu, Yang", "Zhong, Ming", "Jiao, Yizhu", "Iter, Dan", "Pryzant, Reid", "Zhu, Chenguang", "Ji, Heng", "Han, Jiawei" ]
The Shifted and The Overlooked: A Task-oriented Investigation of User-GPT Interactions
emnlp-main.146
2310.12418
[ "https://github.com/ozyyshr/sharegpt_investigation" ]
https://huggingface.co/papers/2310.12418
2
0
0
10
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.147.bib
https://aclanthology.org/2023.emnlp-main.147/
@inproceedings{verma-etal-2023-learning, title = "Learning the Visualness of Text Using Large Vision-Language Models", author = "Verma, Gaurav and Rossi, Ryan and Tensmeyer, Christopher and Gu, Jiuxiang and Nenkova, Ani", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.147", doi = "10.18653/v1/2023.emnlp-main.147", pages = "2394--2408", abstract = "Visual text evokes an image in a person{'}s mind, while non-visual text fails to do so. A method to automatically detect visualness in text will enable text-to-image retrieval and generation models to augment text with relevant images. This is particularly challenging with long-form text as text-to-image generation and retrieval models are often triggered for text that is designed to be explicitly visual in nature, whereas long-form text could contain many non-visual sentences. To this end, we curate a dataset of 3,620 English sentences and their visualness scores provided by multiple human annotators. We also propose a fine-tuning strategy that adapts large vision-language models like CLIP by modifying the model{'}s contrastive learning objective to map text identified as non-visual to a common NULL image while matching visual text to their corresponding images in the document. We evaluate the proposed approach on its ability to (i) classify visual and non-visual text accurately, and (ii) attend over words that are identified as visual in psycholinguistic studies. Empirical evaluation indicates that our approach performs better than several heuristics and baseline models for the proposed task. Furthermore, to highlight the importance of modeling the visualness of text, we conduct qualitative analyses of text-to-image generation systems like DALL-E.", }
Visual text evokes an image in a person{'}s mind, while non-visual text fails to do so. A method to automatically detect visualness in text will enable text-to-image retrieval and generation models to augment text with relevant images. This is particularly challenging with long-form text as text-to-image generation and retrieval models are often triggered for text that is designed to be explicitly visual in nature, whereas long-form text could contain many non-visual sentences. To this end, we curate a dataset of 3,620 English sentences and their visualness scores provided by multiple human annotators. We also propose a fine-tuning strategy that adapts large vision-language models like CLIP by modifying the model{'}s contrastive learning objective to map text identified as non-visual to a common NULL image while matching visual text to their corresponding images in the document. We evaluate the proposed approach on its ability to (i) classify visual and non-visual text accurately, and (ii) attend over words that are identified as visual in psycholinguistic studies. Empirical evaluation indicates that our approach performs better than several heuristics and baseline models for the proposed task. Furthermore, to highlight the importance of modeling the visualness of text, we conduct qualitative analyses of text-to-image generation systems like DALL-E.
[ "Verma, Gaurav", "Rossi, Ryan", "Tensmeyer, Christopher", "Gu, Jiuxiang", "Nenkova, Ani" ]
Learning the Visualness of Text Using Large Vision-Language Models
emnlp-main.147
2305.10434
[ "" ]
https://huggingface.co/papers/2305.10434
1
2
0
5
[]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.148.bib
https://aclanthology.org/2023.emnlp-main.148/
@inproceedings{kirk-etal-2023-past, title = "The Past, Present and Better Future of Feedback Learning in Large Language Models for Subjective Human Preferences and Values", author = "Kirk, Hannah and Bean, Andrew and Vidgen, Bertie and Rottger, Paul and Hale, Scott", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.148", doi = "10.18653/v1/2023.emnlp-main.148", pages = "2409--2430", abstract = "Human feedback is increasingly used to steer the behaviours of Large Language Models (LLMs). However, it is unclear how to collect and incorporate feedback in a way that is efficient, effective and unbiased, especially for highly subjective human preferences and values. In this paper, we survey existing approaches for learning from human feedback, drawing on 95 papers primarily from the ACL and arXiv repositories. First, we summarise the past, pre-LLM trends for integrating human feedback into language models. Second, we give an overview of present techniques and practices, as well as the motivations for using feedback; conceptual frameworks for defining values and preferences; and how feedback is collected and from whom. Finally, we encourage a better future of feedback learning in LLMs by raising five unresolved conceptual and practical challenges.", }
Human feedback is increasingly used to steer the behaviours of Large Language Models (LLMs). However, it is unclear how to collect and incorporate feedback in a way that is efficient, effective and unbiased, especially for highly subjective human preferences and values. In this paper, we survey existing approaches for learning from human feedback, drawing on 95 papers primarily from the ACL and arXiv repositories. First, we summarise the past, pre-LLM trends for integrating human feedback into language models. Second, we give an overview of present techniques and practices, as well as the motivations for using feedback; conceptual frameworks for defining values and preferences; and how feedback is collected and from whom. Finally, we encourage a better future of feedback learning in LLMs by raising five unresolved conceptual and practical challenges.
[ "Kirk, Hannah", "Bean, Andrew", "Vidgen, Bertie", "Rottger, Paul", "Hale, Scott" ]
The Past, Present and Better Future of Feedback Learning in Large Language Models for Subjective Human Preferences and Values
emnlp-main.148
2310.07629
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.149.bib
https://aclanthology.org/2023.emnlp-main.149/
@inproceedings{gupta-etal-2023-temptabqa, title = "{T}emp{T}ab{QA}: Temporal Question Answering for Semi-Structured Tables", author = "Gupta, Vivek and Kandoi, Pranshu and Vora, Mahek and Zhang, Shuo and He, Yujie and Reinanda, Ridho and Srikumar, Vivek", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.149", doi = "10.18653/v1/2023.emnlp-main.149", pages = "2431--2453", abstract = "Semi-structured data, such as Infobox tables, often include temporal information about entities, either implicitly or explicitly. Can current NLP systems reason about such information in semi-structured tables? To tackle this question, we introduce the task of temporal question answering on semi-structured tables. We present a dataset, TEMPTABQA, which comprises 11,454 question-answer pairs extracted from 1,208 Wikipedia Infobox tables spanning more than 90 distinct domains. Using this dataset, we evaluate several state-of-the-art models for temporal reasoning. We observe that even the top-performing LLMs lag behind human performance by more than 13.5 F1 points. Given these results, our dataset has the potential to serve as a challenging benchmark to improve the temporal reasoning capabilities of NLP models.", }
Semi-structured data, such as Infobox tables, often include temporal information about entities, either implicitly or explicitly. Can current NLP systems reason about such information in semi-structured tables? To tackle this question, we introduce the task of temporal question answering on semi-structured tables. We present a dataset, TEMPTABQA, which comprises 11,454 question-answer pairs extracted from 1,208 Wikipedia Infobox tables spanning more than 90 distinct domains. Using this dataset, we evaluate several state-of-the-art models for temporal reasoning. We observe that even the top-performing LLMs lag behind human performance by more than 13.5 F1 points. Given these results, our dataset has the potential to serve as a challenging benchmark to improve the temporal reasoning capabilities of NLP models.
[ "Gupta, Vivek", "Kandoi, Pranshu", "Vora, Mahek", "Zhang, Shuo", "He, Yujie", "Reinanda, Ridho", "Srikumar, Vivek" ]
TempTabQA: Temporal Question Answering for Semi-Structured Tables
emnlp-main.149
2311.08002
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.150.bib
https://aclanthology.org/2023.emnlp-main.150/
@inproceedings{du-etal-2023-task, title = "Task-Level Thinking Steps Help Large Language Models for Challenging Classification Task", author = "Du, Chunhui and Tian, Jidong and Liao, Haoran and Chen, Jindou and He, Hao and Jin, Yaohui", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.150", doi = "10.18653/v1/2023.emnlp-main.150", pages = "2454--2470", abstract = "Large language models (LLMs) have shown incredible performance on many tasks such as dialogue generation, commonsense reasoning and question answering. In-context learning (ICL) is an important paradigm for adapting LLMs to the downstream tasks by prompting few demonstrations. However, the distribution of demonstrations can severely affect the performance, especially for challenging classification tasks. In this paper, we propose the concept of task-level thinking steps that can eliminate bias introduced by demonstrations. Further, to help LLMs distinguish confusing classes, we design a progressive revision framework, which can improve the thinking steps by correcting hard demonstrations. Experimental results prove the superiority of our proposed method, achieving best performance on three kinds of challenging classification tasks in the zero-shot and few-shot settings. Besides, with task-level thinking steps, automatically generated chain-of-thoughts (CoTs) bring more competitive performance.", }
Large language models (LLMs) have shown incredible performance on many tasks such as dialogue generation, commonsense reasoning and question answering. In-context learning (ICL) is an important paradigm for adapting LLMs to the downstream tasks by prompting few demonstrations. However, the distribution of demonstrations can severely affect the performance, especially for challenging classification tasks. In this paper, we propose the concept of task-level thinking steps that can eliminate bias introduced by demonstrations. Further, to help LLMs distinguish confusing classes, we design a progressive revision framework, which can improve the thinking steps by correcting hard demonstrations. Experimental results prove the superiority of our proposed method, achieving best performance on three kinds of challenging classification tasks in the zero-shot and few-shot settings. Besides, with task-level thinking steps, automatically generated chain-of-thoughts (CoTs) bring more competitive performance.
[ "Du, Chunhui", "Tian, Jidong", "Liao, Haoran", "Chen, Jindou", "He, Hao", "Jin, Yaohui" ]
Task-Level Thinking Steps Help Large Language Models for Challenging Classification Task
emnlp-main.150
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.151.bib
https://aclanthology.org/2023.emnlp-main.151/
@inproceedings{zhang-etal-2023-repocoder, title = "{R}epo{C}oder: Repository-Level Code Completion Through Iterative Retrieval and Generation", author = "Zhang, Fengji and Chen, Bei and Zhang, Yue and Keung, Jacky and Liu, Jin and Zan, Daoguang and Mao, Yi and Lou, Jian-Guang and Chen, Weizhu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.151", doi = "10.18653/v1/2023.emnlp-main.151", pages = "2471--2484", abstract = "The task of repository-level code completion is to continue writing the unfinished code based on a broader context of the repository. While for automated code completion tools, it is difficult to utilize the useful information scattered in different files. We propose RepoCoder, a simple, generic, and effective framework to address the challenge. It streamlines the repository-level code completion process by incorporating a similarity-based retriever and a pre-trained code language model in an iterative retrieval-generation pipeline. RepoCoder makes effective utilization of repository-level information for code completion and has the ability to generate code at various levels of granularity. Moreover, we propose a new benchmark RepoBench, which consists of the latest and high-quality real-world repositories covering line, API invocation, and function body completion scenarios. Experimental results indicate that RepoCoder significantly improves the In-File completion baseline by over 10{\%} in all settings and consistently outperforms the vanilla retrieval-augmented code completion approach. Furthermore, we validate the effectiveness of RepoCoder through comprehensive analysis, providing valuable insights for future research. Our source code and benchmark will be publicly available after the paper review.", }
The task of repository-level code completion is to continue writing the unfinished code based on a broader context of the repository. While for automated code completion tools, it is difficult to utilize the useful information scattered in different files. We propose RepoCoder, a simple, generic, and effective framework to address the challenge. It streamlines the repository-level code completion process by incorporating a similarity-based retriever and a pre-trained code language model in an iterative retrieval-generation pipeline. RepoCoder makes effective utilization of repository-level information for code completion and has the ability to generate code at various levels of granularity. Moreover, we propose a new benchmark RepoBench, which consists of the latest and high-quality real-world repositories covering line, API invocation, and function body completion scenarios. Experimental results indicate that RepoCoder significantly improves the In-File completion baseline by over 10{\%} in all settings and consistently outperforms the vanilla retrieval-augmented code completion approach. Furthermore, we validate the effectiveness of RepoCoder through comprehensive analysis, providing valuable insights for future research. Our source code and benchmark will be publicly available after the paper review.
[ "Zhang, Fengji", "Chen, Bei", "Zhang, Yue", "Keung, Jacky", "Liu, Jin", "Zan, Daoguang", "Mao, Yi", "Lou, Jian-Guang", "Chen, Weizhu" ]
RepoCoder: Repository-Level Code Completion Through Iterative Retrieval and Generation
emnlp-main.151
2303.12570
[ "https://github.com/microsoft/codet" ]
https://huggingface.co/papers/2303.12570
1
0
0
9
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.152.bib
https://aclanthology.org/2023.emnlp-main.152/
@inproceedings{anand-etal-2023-influence, title = "Influence Scores at Scale for Efficient Language Data Sampling", author = "Anand, Nikhil and Tan, Joshua and Minakova, Maria", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.152", doi = "10.18653/v1/2023.emnlp-main.152", pages = "2485--2510", abstract = "Modern ML systems ingest data aggregated from diverse sources, such as synthetic, human-annotated, and live customer traffic. Understanding \textit{which} examples are important to the performance of a learning algorithm is crucial for efficient model training. Recently, a growing body of literature has given rise to various {``}influence scores,{''} which use training artifacts such as model confidence or checkpointed gradients to identify important subsets of data. However, these methods have primarily been developed in computer vision settings, and it remains unclear how well they generalize to language-based tasks using pretrained models. In this paper, we explore the applicability of influence scores in language classification tasks. We evaluate a diverse subset of these scores on the SNLI dataset by quantifying accuracy changes in response to pruning training data through random and influence-score-based sampling. We then stress-test one of the scores {--} {``}variance of gradients{''} (VoG) from Agarwal and Hooker (2022) {--} in an NLU model stack that was exposed to dynamic user speech patterns in a voice assistant type of setting. Our experiments demonstrate that in many cases, encoder-based language models can be fine-tuned on roughly 50{\%} of the original data without degradation in performance metrics. Along the way, we summarize lessons learned from applying out-of-the-box implementations of influence scores, quantify the effects of noisy and class-imbalanced data, and offer recommendations on score-based sampling for better accuracy and training efficiency.", }
Modern ML systems ingest data aggregated from diverse sources, such as synthetic, human-annotated, and live customer traffic. Understanding \textit{which} examples are important to the performance of a learning algorithm is crucial for efficient model training. Recently, a growing body of literature has given rise to various {``}influence scores,{''} which use training artifacts such as model confidence or checkpointed gradients to identify important subsets of data. However, these methods have primarily been developed in computer vision settings, and it remains unclear how well they generalize to language-based tasks using pretrained models. In this paper, we explore the applicability of influence scores in language classification tasks. We evaluate a diverse subset of these scores on the SNLI dataset by quantifying accuracy changes in response to pruning training data through random and influence-score-based sampling. We then stress-test one of the scores {--} {``}variance of gradients{''} (VoG) from Agarwal and Hooker (2022) {--} in an NLU model stack that was exposed to dynamic user speech patterns in a voice assistant type of setting. Our experiments demonstrate that in many cases, encoder-based language models can be fine-tuned on roughly 50{\%} of the original data without degradation in performance metrics. Along the way, we summarize lessons learned from applying out-of-the-box implementations of influence scores, quantify the effects of noisy and class-imbalanced data, and offer recommendations on score-based sampling for better accuracy and training efficiency.
[ "Anand, Nikhil", "Tan, Joshua", "Minakova, Maria" ]
Influence Scores at Scale for Efficient Language Data Sampling
emnlp-main.152
2311.16298
[ "" ]
https://huggingface.co/papers/2311.16298
0
1
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.153.bib
https://aclanthology.org/2023.emnlp-main.153/
@inproceedings{liu-etal-2023-g, title = "{G}-Eval: {NLG} Evaluation using Gpt-4 with Better Human Alignment", author = "Liu, Yang and Iter, Dan and Xu, Yichong and Wang, Shuohang and Xu, Ruochen and Zhu, Chenguang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.153", doi = "10.18653/v1/2023.emnlp-main.153", pages = "2511--2522", abstract = "The quality of texts generated by natural language generation (NLG) systems is hard to measure automatically. Conventional reference-based metrics, such as BLEU and ROUGE, have been shown to have relatively low correlation with human judgments, especially for tasks that require creativity and diversity. Recent studies suggest using large language models (LLMs) as reference-free metrics for NLG evaluation, which have the benefit of being applicable to new tasks that lack human references. However, these LLM-based evaluators still have lower human correspondence than medium-size neural evaluators. In this work, we present G-Eval, a framework of using large language models with chain-of-thoughts (CoT) and a form-filling paradigm, to assess the quality of NLG outputs. We experiment with two generation tasks, text summarization and dialogue generation. We show that G-Eval with GPT-4 as the backbone model achieves a Spearman correlation of 0.514 with human on summarization task, outperforming all previous methods by a large margin. We also propose analysis on the behavior of LLM-based evaluators, and highlight the potential concern of LLM-based evaluators having a bias towards the LLM-generated texts.", }
The quality of texts generated by natural language generation (NLG) systems is hard to measure automatically. Conventional reference-based metrics, such as BLEU and ROUGE, have been shown to have relatively low correlation with human judgments, especially for tasks that require creativity and diversity. Recent studies suggest using large language models (LLMs) as reference-free metrics for NLG evaluation, which have the benefit of being applicable to new tasks that lack human references. However, these LLM-based evaluators still have lower human correspondence than medium-size neural evaluators. In this work, we present G-Eval, a framework of using large language models with chain-of-thoughts (CoT) and a form-filling paradigm, to assess the quality of NLG outputs. We experiment with two generation tasks, text summarization and dialogue generation. We show that G-Eval with GPT-4 as the backbone model achieves a Spearman correlation of 0.514 with human on summarization task, outperforming all previous methods by a large margin. We also propose analysis on the behavior of LLM-based evaluators, and highlight the potential concern of LLM-based evaluators having a bias towards the LLM-generated texts.
[ "Liu, Yang", "Iter, Dan", "Xu, Yichong", "Wang, Shuohang", "Xu, Ruochen", "Zhu, Chenguang" ]
G-Eval: NLG Evaluation using Gpt-4 with Better Human Alignment
emnlp-main.153
2303.16634
[ "https://github.com/nlpyang/geval" ]
https://huggingface.co/papers/2303.16634
0
3
0
6
[ "Trofish/KULLM-RLHF" ]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.154.bib
https://aclanthology.org/2023.emnlp-main.154/
@inproceedings{huang-etal-2023-learning, title = "Learning Retrieval Augmentation for Personalized Dialogue Generation", author = "Huang, Qiushi and Fu, Shuai and Liu, Xubo and Wang, Wenwu and Ko, Tom and Zhang, Yu and Tang, Lilian", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.154", doi = "10.18653/v1/2023.emnlp-main.154", pages = "2523--2540", abstract = "Personalized dialogue generation, focusing on generating highly tailored responses by leveraging persona profiles and dialogue context, has gained significant attention in conversational AI applications. However, persona profiles, a prevalent setting in current personalized dialogue datasets, typically composed of merely four to five sentences, may not offer comprehensive descriptions of the persona about the agent, posing a challenge to generate truly personalized dialogues. To handle this problem, we propose $\textbf{L}$earning Retrieval $\textbf{A}$ugmentation for $\textbf{P}$ersonalized $\textbf{D}$ial$\textbf{O}$gue $\textbf{G}$eneration ($\textbf{LAPDOG}$), which studies the potential of leveraging external knowledge for persona dialogue generation. Specifically, the proposed LAPDOG model consists of a story retriever and a dialogue generator. The story retriever uses a given persona profile as queries to retrieve relevant information from the story document, which serves as a supplementary context to augment the persona profile. The dialogue generator utilizes both the dialogue history and the augmented persona profile to generate personalized responses. For optimization, we adopt a joint training framework that collaboratively learns the story retriever and dialogue generator, where the story retriever is optimized towards desired ultimate metrics (e.g., BLEU) to retrieve content for the dialogue generator to generate personalized responses. Experiments conducted on the CONVAI2 dataset with ROCStory as a supplementary data source show that the proposed LAPDOG method substantially outperforms the baselines, indicating the effectiveness of the proposed method. The LAPDOG model code is publicly available for further exploration.", }
Personalized dialogue generation, focusing on generating highly tailored responses by leveraging persona profiles and dialogue context, has gained significant attention in conversational AI applications. However, persona profiles, a prevalent setting in current personalized dialogue datasets, typically composed of merely four to five sentences, may not offer comprehensive descriptions of the persona about the agent, posing a challenge to generate truly personalized dialogues. To handle this problem, we propose $\textbf{L}$earning Retrieval $\textbf{A}$ugmentation for $\textbf{P}$ersonalized $\textbf{D}$ial$\textbf{O}$gue $\textbf{G}$eneration ($\textbf{LAPDOG}$), which studies the potential of leveraging external knowledge for persona dialogue generation. Specifically, the proposed LAPDOG model consists of a story retriever and a dialogue generator. The story retriever uses a given persona profile as queries to retrieve relevant information from the story document, which serves as a supplementary context to augment the persona profile. The dialogue generator utilizes both the dialogue history and the augmented persona profile to generate personalized responses. For optimization, we adopt a joint training framework that collaboratively learns the story retriever and dialogue generator, where the story retriever is optimized towards desired ultimate metrics (e.g., BLEU) to retrieve content for the dialogue generator to generate personalized responses. Experiments conducted on the CONVAI2 dataset with ROCStory as a supplementary data source show that the proposed LAPDOG method substantially outperforms the baselines, indicating the effectiveness of the proposed method. The LAPDOG model code is publicly available for further exploration.
[ "Huang, Qiushi", "Fu, Shuai", "Liu, Xubo", "Wang, Wenwu", "Ko, Tom", "Zhang, Yu", "Tang, Lilian" ]
Learning Retrieval Augmentation for Personalized Dialogue Generation
emnlp-main.154
2406.18847
[ "https://github.com/hqsiswiliam/lapdog" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.155.bib
https://aclanthology.org/2023.emnlp-main.155/
@inproceedings{rawte-etal-2023-troubling, title = "The Troubling Emergence of Hallucination in Large Language Models - An Extensive Definition, Quantification, and Prescriptive Remediations", author = "Rawte, Vipula and Chakraborty, Swagata and Pathak, Agnibh and Sarkar, Anubhav and Tonmoy, S.M Towhidul Islam and Chadha, Aman and Sheth, Amit and Das, Amitava", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.155", doi = "10.18653/v1/2023.emnlp-main.155", pages = "2541--2573", abstract = "The recent advancements in Large Language Models (LLMs) have garnered widespread acclaim for their remarkable emerging capabilities. However, the issue of hallucination has parallelly emerged as a by-product, posing significant concerns. While some recent endeavors have been made to identify and mitigate different types of hallucination, there has been a limited emphasis on the nuanced categorization of hallucination and associated mitigation methods. To address this gap, we offer a fine-grained discourse on profiling hallucination based on its degree, orientation, and category, along with offering strategies for alleviation. As such, we define two overarching orientations of hallucination: (i) factual mirage (FM) and (ii) silver lining (SL). To provide a more comprehensive understanding, both orientations are further sub-categorized into intrinsic and extrinsic, with three degrees of severity - (i) mild, (ii) moderate, and (iii) alarming. We also meticulously categorize hallucination into six types: (i) acronym ambiguity, (ii) numeric nuisance, (iii) generated golem, (iv) virtual voice, (v) geographic erratum, and (vi) time wrap. Furthermore, we curate HallucInation eLiciTation (HILT), a publicly available dataset comprising of 75,000 samples generated using 15 contemporary LLMs along with human annotations for the aforementioned categories. Finally, to establish a method for quantifying and to offer a comparative spectrum that allows us to evaluate and rank LLMs based on their vulnerability to producing hallucinations, we propose Hallucination Vulnerability Index (HVI). Amidst the extensive deliberations on policy-making for regulating AI development, it is of utmost importance to assess and measure which LLM is more vulnerable towards hallucination. We firmly believe that HVI holds significant value as a tool for the wider NLP community, with the potential to serve as a rubric in AI-related policy-making. In conclusion, we propose two solution strategies for mitigating hallucinations.", }
The recent advancements in Large Language Models (LLMs) have garnered widespread acclaim for their remarkable emerging capabilities. However, the issue of hallucination has parallelly emerged as a by-product, posing significant concerns. While some recent endeavors have been made to identify and mitigate different types of hallucination, there has been a limited emphasis on the nuanced categorization of hallucination and associated mitigation methods. To address this gap, we offer a fine-grained discourse on profiling hallucination based on its degree, orientation, and category, along with offering strategies for alleviation. As such, we define two overarching orientations of hallucination: (i) factual mirage (FM) and (ii) silver lining (SL). To provide a more comprehensive understanding, both orientations are further sub-categorized into intrinsic and extrinsic, with three degrees of severity - (i) mild, (ii) moderate, and (iii) alarming. We also meticulously categorize hallucination into six types: (i) acronym ambiguity, (ii) numeric nuisance, (iii) generated golem, (iv) virtual voice, (v) geographic erratum, and (vi) time wrap. Furthermore, we curate HallucInation eLiciTation (HILT), a publicly available dataset comprising of 75,000 samples generated using 15 contemporary LLMs along with human annotations for the aforementioned categories. Finally, to establish a method for quantifying and to offer a comparative spectrum that allows us to evaluate and rank LLMs based on their vulnerability to producing hallucinations, we propose Hallucination Vulnerability Index (HVI). Amidst the extensive deliberations on policy-making for regulating AI development, it is of utmost importance to assess and measure which LLM is more vulnerable towards hallucination. We firmly believe that HVI holds significant value as a tool for the wider NLP community, with the potential to serve as a rubric in AI-related policy-making. In conclusion, we propose two solution strategies for mitigating hallucinations.
[ "Rawte, Vipula", "Chakraborty, Swagata", "Pathak, Agnibh", "Sarkar, Anubhav", "Tonmoy, S.M Towhidul Islam", "Chadha, Aman", "Sheth, Amit", "Das, Amitava" ]
The Troubling Emergence of Hallucination in Large Language Models - An Extensive Definition, Quantification, and Prescriptive Remediations
emnlp-main.155
2310.04988
[ "" ]
https://huggingface.co/papers/2310.04988
1
0
0
8
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.156.bib
https://aclanthology.org/2023.emnlp-main.156/
@inproceedings{soares-etal-2023-nail, title = "{NAIL}: Lexical Retrieval Indices with Efficient Non-Autoregressive Decoders", author = "Soares, Livio and Gillick, Daniel and Cole, Jeremy and Kwiatkowski, Tom", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.156", doi = "10.18653/v1/2023.emnlp-main.156", pages = "2574--2589", abstract = "Neural document rerankers are extremely effective in terms of accuracy. However, the best models require dedicated hardware for serving, which is costly and often not feasible. To avoid this serving-time requirement, we present a method of capturing up to 86{\%} of the gains of a Transformer cross-attention model with a lexicalized scoring function that only requires 10$^{-6}${\%} of the Transformer{'}s FLOPs per document and can be served using commodity CPUs. When combined with a BM25 retriever, this approach matches the quality of a state-of-the-art dual encoder retriever, that still requires an accelerator for query encoding. We introduce nail (Non-Autoregressive Indexing with Language models) as a model architecture that is compatible with recent encoder-decoder and decoder-only large language models, such as T5, GPT-3 and PaLM. This model architecture can leverage existing pre-trained checkpoints and can be fine-tuned for efficiently constructing document representations that do not require neural processing of queries.", }
Neural document rerankers are extremely effective in terms of accuracy. However, the best models require dedicated hardware for serving, which is costly and often not feasible. To avoid this serving-time requirement, we present a method of capturing up to 86{\%} of the gains of a Transformer cross-attention model with a lexicalized scoring function that only requires 10$^{-6}${\%} of the Transformer{'}s FLOPs per document and can be served using commodity CPUs. When combined with a BM25 retriever, this approach matches the quality of a state-of-the-art dual encoder retriever, that still requires an accelerator for query encoding. We introduce nail (Non-Autoregressive Indexing with Language models) as a model architecture that is compatible with recent encoder-decoder and decoder-only large language models, such as T5, GPT-3 and PaLM. This model architecture can leverage existing pre-trained checkpoints and can be fine-tuned for efficiently constructing document representations that do not require neural processing of queries.
[ "Soares, Livio", "Gillick, Daniel", "Cole, Jeremy", "Kwiatkowski, Tom" ]
NAIL: Lexical Retrieval Indices with Efficient Non-Autoregressive Decoders
emnlp-main.156
2305.14499
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.157.bib
https://aclanthology.org/2023.emnlp-main.157/
@inproceedings{khandelwal-etal-2023-analyzing, title = "Analyzing Modular Approaches for Visual Question Decomposition", author = "Khandelwal, Apoorv and Pavlick, Ellie and Sun, Chen", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.157", doi = "10.18653/v1/2023.emnlp-main.157", pages = "2590--2603", abstract = "Modular neural networks without additional training have recently been shown to surpass end-to-end neural networks on challenging vision{--}language tasks. The latest such methods simultaneously introduce LLM-based code generation to build programs and a number of skill-specific, task-oriented modules to execute them. In this paper, we focus on ViperGPT and ask where its additional performance comes from and how much is due to the (state-of-art, end-to-end) BLIP-2 model it subsumes vs. additional symbolic components. To do so, we conduct a controlled study (comparing end-to-end, modular, and prompting-based methods across several VQA benchmarks). We find that ViperGPT{'}s reported gains over BLIP-2 can be attributed to its selection of task-specific modules, and when we run ViperGPT using a more task-agnostic selection of modules, these gains go away. ViperGPT retains much of its performance if we make prominent alterations to its selection of modules: e.g. removing or retaining only BLIP-2. We also compare ViperGPT against a prompting-based decomposition strategy and find that, on some benchmarks, modular approaches significantly benefit by representing subtasks with natural language, instead of code. Our code is fully available at https://github.com/brown-palm/visual-question-decomposition.", }
Modular neural networks without additional training have recently been shown to surpass end-to-end neural networks on challenging vision{--}language tasks. The latest such methods simultaneously introduce LLM-based code generation to build programs and a number of skill-specific, task-oriented modules to execute them. In this paper, we focus on ViperGPT and ask where its additional performance comes from and how much is due to the (state-of-art, end-to-end) BLIP-2 model it subsumes vs. additional symbolic components. To do so, we conduct a controlled study (comparing end-to-end, modular, and prompting-based methods across several VQA benchmarks). We find that ViperGPT{'}s reported gains over BLIP-2 can be attributed to its selection of task-specific modules, and when we run ViperGPT using a more task-agnostic selection of modules, these gains go away. ViperGPT retains much of its performance if we make prominent alterations to its selection of modules: e.g. removing or retaining only BLIP-2. We also compare ViperGPT against a prompting-based decomposition strategy and find that, on some benchmarks, modular approaches significantly benefit by representing subtasks with natural language, instead of code. Our code is fully available at https://github.com/brown-palm/visual-question-decomposition.
[ "Khandelwal, Apoorv", "Pavlick, Ellie", "Sun, Chen" ]
Analyzing Modular Approaches for Visual Question Decomposition
emnlp-main.157
2311.06411
[ "https://github.com/brown-palm/visual-question-decomposition" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.158.bib
https://aclanthology.org/2023.emnlp-main.158/
@inproceedings{yao-etal-2023-improving, title = "Improving Summarization with Human Edits", author = "Yao, Zonghai and Schloss, Benjamin and Selvaraj, Sai", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.158", doi = "10.18653/v1/2023.emnlp-main.158", pages = "2604--2620", abstract = "Recent work has shown the promise of learning with human feedback paradigms to produce human-determined high-quality text. Existing works use human feedback to train large language models (LLMs) in general domain abstractive summarization and have obtained summary quality exceeding traditional likelihood training. In this paper, we focus on a less explored form of human feedback {--} Human Edits. We propose Sequence Alignment (un)Likelihood Training (SALT), a novel technique to use both the human-edited and model-generated data together in the training loop. In addition, we demonstrate simulating Human Edits with ground truth summaries coming from existing training data {--} Imitation edits, along with the model-generated summaries obtained after the training, to reduce the need for expensive human-edit data. In our experiments, we extend human feedback exploration from general domain summarization to medical domain summarization. Our results demonstrate the effectiveness of SALT in improving the summary quality with Human and Imitation Edits. Through additional experiments, we show that SALT outperforms the conventional RLHF method (designed for human preferences) {--} DPO, when applied to human-edit data. We hope the evidence in our paper prompts researchers to explore, collect, and better use different human feedback approaches scalably.", }
Recent work has shown the promise of learning with human feedback paradigms to produce human-determined high-quality text. Existing works use human feedback to train large language models (LLMs) in general domain abstractive summarization and have obtained summary quality exceeding traditional likelihood training. In this paper, we focus on a less explored form of human feedback {--} Human Edits. We propose Sequence Alignment (un)Likelihood Training (SALT), a novel technique to use both the human-edited and model-generated data together in the training loop. In addition, we demonstrate simulating Human Edits with ground truth summaries coming from existing training data {--} Imitation edits, along with the model-generated summaries obtained after the training, to reduce the need for expensive human-edit data. In our experiments, we extend human feedback exploration from general domain summarization to medical domain summarization. Our results demonstrate the effectiveness of SALT in improving the summary quality with Human and Imitation Edits. Through additional experiments, we show that SALT outperforms the conventional RLHF method (designed for human preferences) {--} DPO, when applied to human-edit data. We hope the evidence in our paper prompts researchers to explore, collect, and better use different human feedback approaches scalably.
[ "Yao, Zonghai", "Schloss, Benjamin", "Selvaraj, Sai" ]
Improving Summarization with Human Edits
emnlp-main.158
2310.05857
[ "https://github.com/seasonyao/learnfromhumanedit" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.159.bib
https://aclanthology.org/2023.emnlp-main.159/
@inproceedings{stengel-eskin-van-durme-2023-mean, title = "Did You Mean...? Confidence-based Trade-offs in Semantic Parsing", author = "Stengel-Eskin, Elias and Van Durme, Benjamin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.159", doi = "10.18653/v1/2023.emnlp-main.159", pages = "2621--2629", abstract = "We illustrate how a calibrated model can help balance common trade-offs in task-oriented parsing. In a simulated annotator-in-the-loop experiment, we show that well-calibrated confidence scores allow us to balance cost with annotator load, improving accuracy with a small number of interactions. We then examine how confidence scores can help optimize the trade-off between usability and safety. We show that confidence-based thresholding can substantially reduce the number of incorrect low-confidence programs executed; however, this comes at a cost to usability. We propose the DidYouMean system which better balances usability and safety by rephrasing low-confidence inputs.", }
We illustrate how a calibrated model can help balance common trade-offs in task-oriented parsing. In a simulated annotator-in-the-loop experiment, we show that well-calibrated confidence scores allow us to balance cost with annotator load, improving accuracy with a small number of interactions. We then examine how confidence scores can help optimize the trade-off between usability and safety. We show that confidence-based thresholding can substantially reduce the number of incorrect low-confidence programs executed; however, this comes at a cost to usability. We propose the DidYouMean system which better balances usability and safety by rephrasing low-confidence inputs.
[ "Stengel-Eskin, Elias", "Van Durme, Benjamin" ]
Did You Mean...? Confidence-based Trade-offs in Semantic Parsing
emnlp-main.159
2303.16857
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.160.bib
https://aclanthology.org/2023.emnlp-main.160/
@inproceedings{zhang-etal-2023-skipped, title = "The Skipped Beat: A Study of Sociopragmatic Understanding in {LLM}s for 64 Languages", author = "Zhang, Chiyu and Doan, Khai and Liao, Qisheng and Abdul-Mageed, Muhammad", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.160", doi = "10.18653/v1/2023.emnlp-main.160", pages = "2630--2662", abstract = "Instruction tuned large language models (LLMs), such as ChatGPT, demonstrate remarkable performance in a wide range of tasks. Despite numerous recent studies that examine the performance of instruction-tuned LLMs on various NLP benchmarks, there remains a lack of comprehensive investigation into their ability to understand cross-lingual sociopragmatic meaning (SM), i.e., meaning embedded within social and interactive contexts. This deficiency arises partly from SM not being adequately represented in any of the existing benchmarks. To address this gap, we present SPARROW, an extensive multilingual benchmark specifically designed for SM understanding. SPARROW comprises 169 datasets covering 13 task types across six primary categories (e.g., anti-social language detection, emotion recognition). SPARROW datasets encompass 64 different languages originating from 12 language families representing 16 writing scripts. We evaluate the performance of various multilingual pretrained language models (e.g., mT5) and instruction-tuned LLMs (e.g., BLOOMZ, ChatGPT) on SPARROW through fine-tuning, zero-shot, and/or few-shot learning. Our comprehensive analysis reveals that existing open-source instruction tuned LLMs still struggle to understand SM across various languages, performing close to a random baseline in some cases. We also find that although ChatGPT outperforms many LLMs, it still falls behind task-specific finetuned models with a gap of 12.19 SPARROW score. Our benchmark is available at: https://github.com/UBC-NLP/SPARROW", }
Instruction tuned large language models (LLMs), such as ChatGPT, demonstrate remarkable performance in a wide range of tasks. Despite numerous recent studies that examine the performance of instruction-tuned LLMs on various NLP benchmarks, there remains a lack of comprehensive investigation into their ability to understand cross-lingual sociopragmatic meaning (SM), i.e., meaning embedded within social and interactive contexts. This deficiency arises partly from SM not being adequately represented in any of the existing benchmarks. To address this gap, we present SPARROW, an extensive multilingual benchmark specifically designed for SM understanding. SPARROW comprises 169 datasets covering 13 task types across six primary categories (e.g., anti-social language detection, emotion recognition). SPARROW datasets encompass 64 different languages originating from 12 language families representing 16 writing scripts. We evaluate the performance of various multilingual pretrained language models (e.g., mT5) and instruction-tuned LLMs (e.g., BLOOMZ, ChatGPT) on SPARROW through fine-tuning, zero-shot, and/or few-shot learning. Our comprehensive analysis reveals that existing open-source instruction tuned LLMs still struggle to understand SM across various languages, performing close to a random baseline in some cases. We also find that although ChatGPT outperforms many LLMs, it still falls behind task-specific finetuned models with a gap of 12.19 SPARROW score. Our benchmark is available at: https://github.com/UBC-NLP/SPARROW
[ "Zhang, Chiyu", "Doan, Khai", "Liao, Qisheng", "Abdul-Mageed, Muhammad" ]
The Skipped Beat: A Study of Sociopragmatic Understanding in LLMs for 64 Languages
emnlp-main.160
2310.14557
[ "https://github.com/ubc-nlp/sparrow" ]
https://huggingface.co/papers/2310.14557
2
0
0
4
[ "UBC-NLP/InfoDCL-Emoji-XLMR-Base" ]
[ "UBC-NLP/sparrow" ]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.161.bib
https://aclanthology.org/2023.emnlp-main.161/
@inproceedings{goncalves-strubell-2023-understanding, title = "Understanding the Effect of Model Compression on Social Bias in Large Language Models", author = "Gon{\c{c}}alves, Gustavo and Strubell, Emma", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.161", doi = "10.18653/v1/2023.emnlp-main.161", pages = "2663--2675", abstract = "Large Language Models (LLMs) trained with self-supervision on vast corpora of web text fit to the social biases of that text. Without intervention, these social biases persist in the model{'}s predictions in downstream tasks, leading to representational harm. Many strategies have been proposed to mitigate the effects of inappropriate social biases learned during pretraining. Simultaneously, methods for model compression have become increasingly popular to reduce the computational burden of LLMs. Despite the popularity and need for both approaches, little work has been done to explore the interplay between these two. We perform a carefully controlled study of the impact of model compression via quantization and knowledge distillation on measures of social bias in LLMs. Longer pretraining and larger models led to higher social bias, and quantization showed a regularizer effect with its best trade-off around 20{\%} of the original pretraining time.", }
Large Language Models (LLMs) trained with self-supervision on vast corpora of web text fit to the social biases of that text. Without intervention, these social biases persist in the model{'}s predictions in downstream tasks, leading to representational harm. Many strategies have been proposed to mitigate the effects of inappropriate social biases learned during pretraining. Simultaneously, methods for model compression have become increasingly popular to reduce the computational burden of LLMs. Despite the popularity and need for both approaches, little work has been done to explore the interplay between these two. We perform a carefully controlled study of the impact of model compression via quantization and knowledge distillation on measures of social bias in LLMs. Longer pretraining and larger models led to higher social bias, and quantization showed a regularizer effect with its best trade-off around 20{\%} of the original pretraining time.
[ "Gon{\\c{c}}alves, Gustavo", "Strubell, Emma" ]
Understanding the Effect of Model Compression on Social Bias in Large Language Models
emnlp-main.161
2312.05662
[ "https://github.com/gsgoncalves/emnlp2023_llm_compression_and_social_bias" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.162.bib
https://aclanthology.org/2023.emnlp-main.162/
@inproceedings{odonoghue-etal-2023-bioplanner, title = "{B}io{P}lanner: Automatic Evaluation of {LLM}s on Protocol Planning in Biology", author = "O{'}Donoghue, Odhran and Shtedritski, Aleksandar and Ginger, John and Abboud, Ralph and Ghareeb, Ali and Rodriques, Samuel", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.162", doi = "10.18653/v1/2023.emnlp-main.162", pages = "2676--2694", abstract = "The ability to automatically generate accurate protocols for scientific experiments would represent a major step towards the automation of science. Large Language Models (LLMs) have impressive capabilities on a wide range of tasks, such as question answering and the generation of coherent text and code. However, LLMs can struggle with multi-step problems and long-term planning, which are crucial for designing scientific experiments. Moreover, evaluation of the accuracy of scientific protocols is challenging, because experiments can be described correctly in many different ways, require expert knowledge to evaluate, and cannot usually be executed automatically. Here we present an automatic evaluation framework for the task of planning experimental protocols, and we introduce BioProt: a dataset of biology protocols with corresponding pseudocode representations. To measure performance on generating scientific protocols, we use an LLM to convert a natural language protocol into pseudocode, and then evaluate an LLM{'}s ability to reconstruct the pseudocode from a high-level description and a list of admissible pseudocode functions. We evaluate GPT-3 and GPT-4 on this task and explore their robustness. We externally validate the utility of pseudocode representations of text by generating accurate novel protocols using retrieved pseudocode, and we run a generated protocol successfully in our biological laboratory. Our framework is extensible to the evaluation and improvement of language model", }
The ability to automatically generate accurate protocols for scientific experiments would represent a major step towards the automation of science. Large Language Models (LLMs) have impressive capabilities on a wide range of tasks, such as question answering and the generation of coherent text and code. However, LLMs can struggle with multi-step problems and long-term planning, which are crucial for designing scientific experiments. Moreover, evaluation of the accuracy of scientific protocols is challenging, because experiments can be described correctly in many different ways, require expert knowledge to evaluate, and cannot usually be executed automatically. Here we present an automatic evaluation framework for the task of planning experimental protocols, and we introduce BioProt: a dataset of biology protocols with corresponding pseudocode representations. To measure performance on generating scientific protocols, we use an LLM to convert a natural language protocol into pseudocode, and then evaluate an LLM{'}s ability to reconstruct the pseudocode from a high-level description and a list of admissible pseudocode functions. We evaluate GPT-3 and GPT-4 on this task and explore their robustness. We externally validate the utility of pseudocode representations of text by generating accurate novel protocols using retrieved pseudocode, and we run a generated protocol successfully in our biological laboratory. Our framework is extensible to the evaluation and improvement of language model
[ "O{'}Donoghue, Odhran", "Shtedritski, Aleksandar", "Ginger, John", "Abboud, Ralph", "Ghareeb, Ali", "Rodriques, Samuel" ]
BioPlanner: Automatic Evaluation of LLMs on Protocol Planning in Biology
emnlp-main.162
2310.10632
[ "https://github.com/bioplanner/bioplanner" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.163.bib
https://aclanthology.org/2023.emnlp-main.163/
@inproceedings{qin-etal-2023-cross, title = "Cross-lingual Prompting: Improving Zero-shot Chain-of-Thought Reasoning across Languages", author = "Qin, Libo and Chen, Qiguang and Wei, Fuxuan and Huang, Shijue and Che, Wanxiang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.163", doi = "10.18653/v1/2023.emnlp-main.163", pages = "2695--2709", abstract = "Chain-of-thought (CoT) is capable of eliciting models to explicitly generate reasoning paths, thus promoting reasoning accuracy and attracting increasing attention. Specifically, zero-shot CoT achieves remarkable improvements in a wide range of reasoning tasks by simply instructing the LLM with the prompt {``}Let{'}s think step by step!{''}. Despite the success of zero-shot CoT, the existing zero-shot prompting techniques remain limited to a single language, making it challenging to generalize to other languages and hindering global development. In this work, we introduce cross-lingual prompting (CLP), aiming to improve zero-shot CoT reasoning across languages. Specifically, CLP consists of two main components: (1) cross-lingual alignment prompting and (2) task-specific solver prompting. The cross-lingual alignment prompting is responsible for aligning representations across different languages, whereas the task-specific solver prompting is used to generate the final chain of thoughts and results for the reasoning task. In addition, we further introduce cross-lingual self-consistent prompting (CLSP) to ensemble different reasoning paths across languages. Our experimental evaluations on several benchmarks demonstrate that CLP and CLSP significantly outperform the existing prompting methods and achieve state-of-the-art performance. We hope this work will inspire further breakthroughs in cross-lingual CoT.", }
Chain-of-thought (CoT) is capable of eliciting models to explicitly generate reasoning paths, thus promoting reasoning accuracy and attracting increasing attention. Specifically, zero-shot CoT achieves remarkable improvements in a wide range of reasoning tasks by simply instructing the LLM with the prompt {``}Let{'}s think step by step!{''}. Despite the success of zero-shot CoT, the existing zero-shot prompting techniques remain limited to a single language, making it challenging to generalize to other languages and hindering global development. In this work, we introduce cross-lingual prompting (CLP), aiming to improve zero-shot CoT reasoning across languages. Specifically, CLP consists of two main components: (1) cross-lingual alignment prompting and (2) task-specific solver prompting. The cross-lingual alignment prompting is responsible for aligning representations across different languages, whereas the task-specific solver prompting is used to generate the final chain of thoughts and results for the reasoning task. In addition, we further introduce cross-lingual self-consistent prompting (CLSP) to ensemble different reasoning paths across languages. Our experimental evaluations on several benchmarks demonstrate that CLP and CLSP significantly outperform the existing prompting methods and achieve state-of-the-art performance. We hope this work will inspire further breakthroughs in cross-lingual CoT.
[ "Qin, Libo", "Chen, Qiguang", "Wei, Fuxuan", "Huang, Shijue", "Che, Wanxiang" ]
Cross-lingual Prompting: Improving Zero-shot Chain-of-Thought Reasoning across Languages
emnlp-main.163
2310.14799
[ "https://github.com/lightchen233/cross-lingual-prompting" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.164.bib
https://aclanthology.org/2023.emnlp-main.164/
@inproceedings{luukkonen-etal-2023-fingpt, title = "{F}in{GPT}: Large Generative Models for a Small Language", author = "Luukkonen, Risto and Komulainen, Ville and Luoma, Jouni and Eskelinen, Anni and Kanerva, Jenna and Kupari, Hanna-Mari and Ginter, Filip and Laippala, Veronika and Muennighoff, Niklas and Piktus, Aleksandra and Wang, Thomas and Tazi, Nouamane and Scao, Teven and Wolf, Thomas and Suominen, Osma and Sairanen, Samuli and Merioksa, Mikko and Heinonen, Jyrki and Vahtola, Aija and Antao, Samuel and Pyysalo, Sampo", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.164", doi = "10.18653/v1/2023.emnlp-main.164", pages = "2710--2726", abstract = "Large language models (LLMs) excel in many tasks in NLP and beyond, but most open models have very limited coverage of smaller languages and LLM work tends to focus on languages where nearly unlimited data is available for pretraining. In this work, we study the challenges of creating LLMs for Finnish, a language spoken by less than 0.1{\%} of the world population. We compile an extensive dataset of Finnish combining web crawls, news, social media and eBooks. We pursue two approaches to pretrain models: 1) we train seven monolingual models from scratch (186M to 13B parameters) dubbed FinGPT, 2) we continue the pretraining of the multilingual BLOOM model on a mix of its original training data and Finnish, resulting in a 176 billion parameter model we call BLUUMI. For model evaluation, we introduce FIN-bench, a version of BIG-bench with Finnish tasks. We also assess other model qualities such as toxicity and bias. Our models and tools are openly available at \url{https://turkunlp.org/gpt3-finnish}.", }
Large language models (LLMs) excel in many tasks in NLP and beyond, but most open models have very limited coverage of smaller languages and LLM work tends to focus on languages where nearly unlimited data is available for pretraining. In this work, we study the challenges of creating LLMs for Finnish, a language spoken by less than 0.1{\%} of the world population. We compile an extensive dataset of Finnish combining web crawls, news, social media and eBooks. We pursue two approaches to pretrain models: 1) we train seven monolingual models from scratch (186M to 13B parameters) dubbed FinGPT, 2) we continue the pretraining of the multilingual BLOOM model on a mix of its original training data and Finnish, resulting in a 176 billion parameter model we call BLUUMI. For model evaluation, we introduce FIN-bench, a version of BIG-bench with Finnish tasks. We also assess other model qualities such as toxicity and bias. Our models and tools are openly available at \url{https://turkunlp.org/gpt3-finnish}.
[ "Luukkonen, Risto", "Komulainen, Ville", "Luoma, Jouni", "Eskelinen, Anni", "Kanerva, Jenna", "Kupari, Hanna-Mari", "Ginter, Filip", "Laippala, Veronika", "Muennighoff, Niklas", "Piktus, Aleksandra", "Wang, Thomas", "Tazi, Nouamane", "Scao, Teven", "Wolf, Thomas", "Suominen, Osma", "Sairanen, Samuli", "Merioksa, Mikko", "Heinonen, Jyrki", "Vahtola, Aija", "Antao, Samuel", "Pyysalo, Sampo" ]
FinGPT: Large Generative Models for a Small Language
emnlp-main.164
2311.05640
[ "" ]
https://huggingface.co/papers/2311.05640
15
26
1
21
[]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.165.bib
https://aclanthology.org/2023.emnlp-main.165/
@inproceedings{yang-shen-2023-boosting, title = "Boosting Summarization with Normalizing Flows and Aggressive Training", author = "Yang, Yu and Shen, Xiaotong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.165", doi = "10.18653/v1/2023.emnlp-main.165", pages = "2727--2751", abstract = "This paper presents FlowSUM, a normalizing flows-based variational encoder-decoder framework for Transformer-based summarization. Our approach tackles two primary challenges in variational summarization: insufficient semantic information in latent representations and posterior collapse during training. To address these challenges, we employ normalizing flows to enable flexible latent posterior modeling, and we propose a controlled alternate aggressive training (CAAT) strategy with an improved gate mechanism. Experimental results show that FlowSUM significantly enhances the quality of generated summaries and unleashes the potential for knowledge distillation with minimal impact on inference time. Furthermore, we investigate the issue of posterior collapse in normalizing flows and analyze how the summary quality is affected by the training strategy, gate initialization, and the type and number of normalizing flows used, offering valuable insights for future research.", }
This paper presents FlowSUM, a normalizing flows-based variational encoder-decoder framework for Transformer-based summarization. Our approach tackles two primary challenges in variational summarization: insufficient semantic information in latent representations and posterior collapse during training. To address these challenges, we employ normalizing flows to enable flexible latent posterior modeling, and we propose a controlled alternate aggressive training (CAAT) strategy with an improved gate mechanism. Experimental results show that FlowSUM significantly enhances the quality of generated summaries and unleashes the potential for knowledge distillation with minimal impact on inference time. Furthermore, we investigate the issue of posterior collapse in normalizing flows and analyze how the summary quality is affected by the training strategy, gate initialization, and the type and number of normalizing flows used, offering valuable insights for future research.
[ "Yang, Yu", "Shen, Xiaotong" ]
Boosting Summarization with Normalizing Flows and Aggressive Training
emnlp-main.165
2311.00588
[ "https://github.com/yuyangstat/flowsum" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.166.bib
https://aclanthology.org/2023.emnlp-main.166/
@inproceedings{syed-etal-2023-indicative, title = "Indicative Summarization of Long Discussions", author = "Syed, Shahbaz and Schwabe, Dominik and Al-Khatib, Khalid and Potthast, Martin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.166", doi = "10.18653/v1/2023.emnlp-main.166", pages = "2752--2788", abstract = "Online forums encourage the exchange and discussion of different stances on many topics. Not only do they provide an opportunity to present one{'}s own arguments, but may also gather a broad cross-section of others{'} arguments. However, the resulting long discussions are difficult to get an overview of. This paper presents a novel unsupervised approach using large language models (LLMs) to generate indicative summaries for long discussions that basically serve as tables of contents. Our approach first clusters argument sentences, generates cluster labels as abstractive summaries, and classifies the generated cluster labels into argumentation frames resulting in a two-level summary. Based on an extensively optimized prompt engineering approach, we evaluate 19 LLMs for generative cluster labeling and frame classification. To evaluate the usefulness of our indicative summaries, we conduct a purpose-driven user study via a new visual interface called **Discussion Explorer**: It shows that our proposed indicative summaries serve as a convenient navigation tool to explore long discussions.", }
Online forums encourage the exchange and discussion of different stances on many topics. Not only do they provide an opportunity to present one{'}s own arguments, but may also gather a broad cross-section of others{'} arguments. However, the resulting long discussions are difficult to get an overview of. This paper presents a novel unsupervised approach using large language models (LLMs) to generate indicative summaries for long discussions that basically serve as tables of contents. Our approach first clusters argument sentences, generates cluster labels as abstractive summaries, and classifies the generated cluster labels into argumentation frames resulting in a two-level summary. Based on an extensively optimized prompt engineering approach, we evaluate 19 LLMs for generative cluster labeling and frame classification. To evaluate the usefulness of our indicative summaries, we conduct a purpose-driven user study via a new visual interface called **Discussion Explorer**: It shows that our proposed indicative summaries serve as a convenient navigation tool to explore long discussions.
[ "Syed, Shahbaz", "Schwabe, Dominik", "Al-Khatib, Khalid", "Potthast, Martin" ]
Indicative Summarization of Long Discussions
emnlp-main.166
2311.01882
[ "https://github.com/webis-de/emnlp-23" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.167.bib
https://aclanthology.org/2023.emnlp-main.167/
@inproceedings{lee-etal-2023-framework, title = "A Framework for Vision-Language Warm-up Tasks in Multimodal Dialogue Models", author = "Lee, Jaewook and Park, Seongsik and Park, Seong-Heum and Kim, Hongjin and Kim, Harksoo", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.167", pages = "2789--2799", abstract = "Most research on multimodal open-domain dialogue agents has focused on pretraining and multi-task learning using additional rich datasets beyond a given target dataset. However, methods for exploiting these additional datasets can be quite limited in real-world settings, creating a need for more efficient methods for constructing agents based solely on the target dataset. To address these issues, we present a new learning strategy called vision-language warm-up tasks for multimodal dialogue models (VLAW-MDM). This strategy does not require the use of large pretraining or multi-task datasets but rather relies solely on learning from target data. Moreover, our proposed approach automatically generates captions for images and incorporates them into the model{'}s input to improve the contextualization of visual information. Using this novel approach, we empirically demonstrate that our learning strategy is effective for limited data and relatively small models. The results show that our method achieved comparable and in some cases superior performance compared to existing state-of-the-art models on various evaluation metrics.", }
Most research on multimodal open-domain dialogue agents has focused on pretraining and multi-task learning using additional rich datasets beyond a given target dataset. However, methods for exploiting these additional datasets can be quite limited in real-world settings, creating a need for more efficient methods for constructing agents based solely on the target dataset. To address these issues, we present a new learning strategy called vision-language warm-up tasks for multimodal dialogue models (VLAW-MDM). This strategy does not require the use of large pretraining or multi-task datasets but rather relies solely on learning from target data. Moreover, our proposed approach automatically generates captions for images and incorporates them into the model{'}s input to improve the contextualization of visual information. Using this novel approach, we empirically demonstrate that our learning strategy is effective for limited data and relatively small models. The results show that our method achieved comparable and in some cases superior performance compared to existing state-of-the-art models on various evaluation metrics.
[ "Lee, Jaewook", "Park, Seongsik", "Park, Seong-Heum", "Kim, Hongjin", "Kim, Harksoo" ]
A Framework for Vision-Language Warm-up Tasks in Multimodal Dialogue Models
emnlp-main.167
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.168.bib
https://aclanthology.org/2023.emnlp-main.168/
@inproceedings{yang-etal-2023-enough, title = "Once is Enough: A Light-Weight Cross-Attention for Fast Sentence Pair Modeling", author = "Yang, Yuanhang and Qi, Shiyi and Liu, Chuanyi and Wang, Qifan and Gao, Cuiyun and Xu, Zenglin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.168", doi = "10.18653/v1/2023.emnlp-main.168", pages = "2800--2806", abstract = "Transformer-based models have achieved great success on sentence pair modeling tasks, such as answer selection and natural language inference (NLI). These models generally perform cross-attention over input pairs, leading to prohibitive computational cost. Recent studies propose dual-encoder and late interaction architectures for faster computation. However, the balance between the expressiveness of cross-attention and the computation speedup still needs to be better coordinated. To this end, this paper introduces a novel paradigm TopicAns for efficient sentence pair modeling. TopicAns involves a lightweight cross-attention mechanism. It conducts query encoding only once while modeling the query-candidate interaction in parallel. Extensive experiments conducted on four tasks demonstrate that our TopicAns can speed up sentence pairing by over 113x while achieving comparable performance to the more expensive cross-attention models.", }
Transformer-based models have achieved great success on sentence pair modeling tasks, such as answer selection and natural language inference (NLI). These models generally perform cross-attention over input pairs, leading to prohibitive computational cost. Recent studies propose dual-encoder and late interaction architectures for faster computation. However, the balance between the expressiveness of cross-attention and the computation speedup still needs to be better coordinated. To this end, this paper introduces a novel paradigm TopicAns for efficient sentence pair modeling. TopicAns involves a lightweight cross-attention mechanism. It conducts query encoding only once while modeling the query-candidate interaction in parallel. Extensive experiments conducted on four tasks demonstrate that our TopicAns can speed up sentence pairing by over 113x while achieving comparable performance to the more expensive cross-attention models.
[ "Yang, Yuanhang", "Qi, Shiyi", "Liu, Chuanyi", "Wang, Qifan", "Gao, Cuiyun", "Xu, Zenglin" ]
Once is Enough: A Light-Weight Cross-Attention for Fast Sentence Pair Modeling
emnlp-main.168
2210.05261
[ "https://github.com/ysngki/mixencoder" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.169.bib
https://aclanthology.org/2023.emnlp-main.169/
@inproceedings{liu-etal-2023-plan, title = "Plan, Verify and Switch: Integrated Reasoning with Diverse {X}-of-Thoughts", author = "Liu, Tengxiao and Guo, Qipeng and Yang, Yuqing and Hu, Xiangkun and Zhang, Yue and Qiu, Xipeng and Zhang, Zheng", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.169", doi = "10.18653/v1/2023.emnlp-main.169", pages = "2807--2822", abstract = "As large language models (LLMs) have shown effectiveness with different prompting methods, such as Chain of Thought, Program of Thought, we find that these methods have formed a great complementarity to each other on math reasoning tasks. In this work, we propose XoT, an integrated problem solving framework by prompting LLMs with diverse reasoning thoughts. For each question, XoT always begins with selecting the most suitable method then executes each method iteratively. Within each iteration, XoT actively checks the validity of the generated answer and incorporates the feedback from external executors, allowing it to dynamically switch among different prompting methods. Through extensive experiments on 10 popular math reasoning datasets, we demonstrate the effectiveness of our proposed approach and thoroughly analyze the strengths of each module. Moreover, empirical results suggest that our framework is orthogonal to recent work that makes improvements on single reasoning methods and can further generalise to logical reasoning domain. By allowing method switching, XoT provides a fresh perspective on the collaborative integration of diverse reasoning thoughts in a unified framework.", }
As large language models (LLMs) have shown effectiveness with different prompting methods, such as Chain of Thought, Program of Thought, we find that these methods have formed a great complementarity to each other on math reasoning tasks. In this work, we propose XoT, an integrated problem solving framework by prompting LLMs with diverse reasoning thoughts. For each question, XoT always begins with selecting the most suitable method then executes each method iteratively. Within each iteration, XoT actively checks the validity of the generated answer and incorporates the feedback from external executors, allowing it to dynamically switch among different prompting methods. Through extensive experiments on 10 popular math reasoning datasets, we demonstrate the effectiveness of our proposed approach and thoroughly analyze the strengths of each module. Moreover, empirical results suggest that our framework is orthogonal to recent work that makes improvements on single reasoning methods and can further generalise to logical reasoning domain. By allowing method switching, XoT provides a fresh perspective on the collaborative integration of diverse reasoning thoughts in a unified framework.
[ "Liu, Tengxiao", "Guo, Qipeng", "Yang, Yuqing", "Hu, Xiangkun", "Zhang, Yue", "Qiu, Xipeng", "Zhang, Zheng" ]
Plan, Verify and Switch: Integrated Reasoning with Diverse X-of-Thoughts
emnlp-main.169
2310.14628
[ "https://github.com/tengxiaoliu/xot" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.170.bib
https://aclanthology.org/2023.emnlp-main.170/
@inproceedings{li-etal-2023-glen, title = "{GLEN}: General-Purpose Event Detection for Thousands of Types", author = "Li, Sha and Zhan, Qiusi and Conger, Kathryn and Palmer, Martha and Ji, Heng and Han, Jiawei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.170", doi = "10.18653/v1/2023.emnlp-main.170", pages = "2823--2838", abstract = "The progress of event extraction research has been hindered by the absence of wide-coverage, large-scale datasets. To make event extraction systems more accessible, we build a general-purpose event detection dataset GLEN, which covers 205K event mentions with 3,465 different types, making it more than 20x larger in ontology than today{'}s largest event dataset. GLEN is created by utilizing the DWD Overlay, which provides a mapping between Wikidata Qnodes and PropBank rolesets. This enables us to use the abundant existing annotation for PropBank as distant supervision. In addition, we also propose a new multi-stage event detection model specifically designed to handle the large ontology size in GLEN. We show that our model exhibits superior performance compared to a range of baselines including InstructGPT. Finally, we perform error analysis and show that label noise is still the largest challenge for improving performance for this new dataset.", }
The progress of event extraction research has been hindered by the absence of wide-coverage, large-scale datasets. To make event extraction systems more accessible, we build a general-purpose event detection dataset GLEN, which covers 205K event mentions with 3,465 different types, making it more than 20x larger in ontology than today{'}s largest event dataset. GLEN is created by utilizing the DWD Overlay, which provides a mapping between Wikidata Qnodes and PropBank rolesets. This enables us to use the abundant existing annotation for PropBank as distant supervision. In addition, we also propose a new multi-stage event detection model specifically designed to handle the large ontology size in GLEN. We show that our model exhibits superior performance compared to a range of baselines including InstructGPT. Finally, we perform error analysis and show that label noise is still the largest challenge for improving performance for this new dataset.
[ "Li, Sha", "Zhan, Qiusi", "Conger, Kathryn", "Palmer, Martha", "Ji, Heng", "Han, Jiawei" ]
GLEN: General-Purpose Event Detection for Thousands of Types
emnlp-main.170
2303.09093
[ "https://github.com/zqs1943/glen" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.171.bib
https://aclanthology.org/2023.emnlp-main.171/
@inproceedings{wang-etal-2023-hierarchical, title = "Hierarchical Pretraining on Multimodal Electronic Health Records", author = "Wang, Xiaochen and Luo, Junyu and Wang, Jiaqi and Yin, Ziyi and Cui, Suhan and Zhong, Yuan and Wang, Yaqing and Ma, Fenglong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.171", doi = "10.18653/v1/2023.emnlp-main.171", pages = "2839--2852", abstract = "Pretraining has proven to be a powerful technique in natural language processing (NLP), exhibiting remarkable success in various NLP downstream tasks. However, in the medical domain, existing pretrained models on electronic health records (EHR) fail to capture the hierarchical nature of EHR data, limiting their generalization capability across diverse downstream tasks using a single pretrained model. To tackle this challenge, this paper introduces a novel, general, and unified pretraining framework called MedHMP, specifically designed for hierarchically multimodal EHR data. The effectiveness of the proposed MedHMP is demonstrated through experimental results on eight downstream tasks spanning three levels. Comparisons against eighteen baselines further highlight the efficacy of our approach.", }
Pretraining has proven to be a powerful technique in natural language processing (NLP), exhibiting remarkable success in various NLP downstream tasks. However, in the medical domain, existing pretrained models on electronic health records (EHR) fail to capture the hierarchical nature of EHR data, limiting their generalization capability across diverse downstream tasks using a single pretrained model. To tackle this challenge, this paper introduces a novel, general, and unified pretraining framework called MedHMP, specifically designed for hierarchically multimodal EHR data. The effectiveness of the proposed MedHMP is demonstrated through experimental results on eight downstream tasks spanning three levels. Comparisons against eighteen baselines further highlight the efficacy of our approach.
[ "Wang, Xiaochen", "Luo, Junyu", "Wang, Jiaqi", "Yin, Ziyi", "Cui, Suhan", "Zhong, Yuan", "Wang, Yaqing", "Ma, Fenglong" ]
Hierarchical Pretraining on Multimodal Electronic Health Records
emnlp-main.171
2310.07871
[ "https://github.com/xiaochenwang-psu/medhmp" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.172.bib
https://aclanthology.org/2023.emnlp-main.172/
@inproceedings{lango-dusek-2023-critic, title = "Critic-Driven Decoding for Mitigating Hallucinations in Data-to-text Generation", author = "Lango, Mateusz and Dusek, Ondrej", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.172", doi = "10.18653/v1/2023.emnlp-main.172", pages = "2853--2862", abstract = "Hallucination of text ungrounded in the input is a well-known problem in neural data-to-text generation. Many methods have been proposed to mitigate it, but they typically require altering model architecture or collecting additional data, and thus cannot be easily applied to an existing model. In this paper, we explore a new way to mitigate hallucinations by combining the probabilistic output of a generator language model (LM) with the output of a special {``}text critic{''} classifier, which guides the generation by assessing the match between the input data and the text generated so far. Our method does not need any changes to the underlying LM{'}s architecture or training procedure and can thus be combined with any model and decoding operating on word probabilities. The critic does not need any additional training data, using the base LM{'}s training data and synthetic negative examples. Our experimental results show that our method improves over the baseline on the WebNLG and OpenDialKG benchmarks.", }
Hallucination of text ungrounded in the input is a well-known problem in neural data-to-text generation. Many methods have been proposed to mitigate it, but they typically require altering model architecture or collecting additional data, and thus cannot be easily applied to an existing model. In this paper, we explore a new way to mitigate hallucinations by combining the probabilistic output of a generator language model (LM) with the output of a special {``}text critic{''} classifier, which guides the generation by assessing the match between the input data and the text generated so far. Our method does not need any changes to the underlying LM{'}s architecture or training procedure and can thus be combined with any model and decoding operating on word probabilities. The critic does not need any additional training data, using the base LM{'}s training data and synthetic negative examples. Our experimental results show that our method improves over the baseline on the WebNLG and OpenDialKG benchmarks.
[ "Lango, Mateusz", "Dusek, Ondrej" ]
Critic-Driven Decoding for Mitigating Hallucinations in Data-to-text Generation
emnlp-main.172
2310.16964
[ "https://github.com/langus0/critic-aware-decoding" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.173.bib
https://aclanthology.org/2023.emnlp-main.173/
@inproceedings{guo-etal-2023-bridging, title = "Bridging the Gap between Synthetic and Authentic Images for Multimodal Machine Translation", author = "Guo, Wenyu and Fang, Qingkai and Yu, Dong and Feng, Yang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.173", doi = "10.18653/v1/2023.emnlp-main.173", pages = "2863--2874", abstract = "Multimodal machine translation (MMT) simultaneously takes the source sentence and a relevant image as input for translation. Since there is no paired image available for the input sentence in most cases, recent studies suggest utilizing powerful text-to-image generation models to provide image inputs. Nevertheless, synthetic images generated by these models often follow different distributions compared to authentic images. Consequently, using authentic images for training and synthetic images for inference can introduce a distribution shift, resulting in performance degradation during inference. To tackle this challenge, in this paper, we feed synthetic and authentic images to the MMT model, respectively. Then we minimize the gap between the synthetic and authentic images by drawing close the input image representations of the Transformer Encoder and the output distributions of the Transformer Decoder. Therefore, we mitigate the distribution disparity introduced by the synthetic images during inference, thereby freeing the authentic images from the inference process. Experimental results show that our approach achieves state-of-the-art performance on the Multi30K En-De and En-Fr datasets, while remaining independent of authentic images during inference.", }
Multimodal machine translation (MMT) simultaneously takes the source sentence and a relevant image as input for translation. Since there is no paired image available for the input sentence in most cases, recent studies suggest utilizing powerful text-to-image generation models to provide image inputs. Nevertheless, synthetic images generated by these models often follow different distributions compared to authentic images. Consequently, using authentic images for training and synthetic images for inference can introduce a distribution shift, resulting in performance degradation during inference. To tackle this challenge, in this paper, we feed synthetic and authentic images to the MMT model, respectively. Then we minimize the gap between the synthetic and authentic images by drawing close the input image representations of the Transformer Encoder and the output distributions of the Transformer Decoder. Therefore, we mitigate the distribution disparity introduced by the synthetic images during inference, thereby freeing the authentic images from the inference process. Experimental results show that our approach achieves state-of-the-art performance on the Multi30K En-De and En-Fr datasets, while remaining independent of authentic images during inference.
[ "Guo, Wenyu", "Fang, Qingkai", "Yu, Dong", "Feng, Yang" ]
Bridging the Gap between Synthetic and Authentic Images for Multimodal Machine Translation
emnlp-main.173
2310.13361
[ "https://github.com/ictnlp/sammt" ]
https://huggingface.co/papers/2310.13361
1
0
0
4
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.174.bib
https://aclanthology.org/2023.emnlp-main.174/
@inproceedings{wu-etal-2023-depn, title = "{DEPN}: Detecting and Editing Privacy Neurons in Pretrained Language Models", author = "Wu, Xinwei and Li, Junzhuo and Xu, Minghui and Dong, Weilong and Wu, Shuangzhi and Bian, Chao and Xiong, Deyi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.174", doi = "10.18653/v1/2023.emnlp-main.174", pages = "2875--2886", abstract = "Pretrained language models have learned a vast amount of human knowledge from large-scale corpora, but their powerful memorization capability also brings the risk of data leakage. Some risks may only be discovered after the model training is completed, such as the model memorizing a specific phone number and frequently outputting it. In such cases, model developers need to eliminate specific data influences from the model to mitigate legal and ethical penalties. To effectively mitigate these risks, people often have to spend a significant amount of time and computational costs to retrain new models instead of finding ways to cure the {`}sick{'} models. Therefore, we propose a method to locate and erase risky neurons in order to eliminate the impact of privacy data in the model. We use a new method based on integrated gradients to locate neurons associated with privacy texts, and then erase these neurons by setting their activation values to zero.Furthermore, we propose a risky neuron aggregation method to eliminate the influence of privacy data in the model in batches. Experimental results show that our method can effectively and quickly eliminate the impact of privacy data without affecting the model{'}s performance. 
Additionally, we demonstrate the relationship between model memorization and neurons through experiments, further illustrating the robustness of our method.", }
Pretrained language models have learned a vast amount of human knowledge from large-scale corpora, but their powerful memorization capability also brings the risk of data leakage. Some risks may only be discovered after the model training is completed, such as the model memorizing a specific phone number and frequently outputting it. In such cases, model developers need to eliminate specific data influences from the model to mitigate legal and ethical penalties. To effectively mitigate these risks, people often have to spend a significant amount of time and computational costs to retrain new models instead of finding ways to cure the {`}sick{'} models. Therefore, we propose a method to locate and erase risky neurons in order to eliminate the impact of privacy data in the model. We use a new method based on integrated gradients to locate neurons associated with privacy texts, and then erase these neurons by setting their activation values to zero.Furthermore, we propose a risky neuron aggregation method to eliminate the influence of privacy data in the model in batches. Experimental results show that our method can effectively and quickly eliminate the impact of privacy data without affecting the model{'}s performance. Additionally, we demonstrate the relationship between model memorization and neurons through experiments, further illustrating the robustness of our method.
[ "Wu, Xinwei", "Li, Junzhuo", "Xu, Minghui", "Dong, Weilong", "Wu, Shuangzhi", "Bian, Chao", "Xiong, Deyi" ]
DEPN: Detecting and Editing Privacy Neurons in Pretrained Language Models
emnlp-main.174
2310.20138
[ "https://github.com/flamewei123/DEPN" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.175.bib
https://aclanthology.org/2023.emnlp-main.175/
@inproceedings{reusens-etal-2023-investigating, title = "Investigating Bias in Multilingual Language Models: Cross-Lingual Transfer of Debiasing Techniques", author = "Reusens, Manon and Borchert, Philipp and Mieskes, Margot and De Weerdt, Jochen and Baesens, Bart", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.175", doi = "10.18653/v1/2023.emnlp-main.175", pages = "2887--2896", abstract = "This paper investigates the transferability of debiasing techniques across different languages within multilingual models. We examine the applicability of these techniques in English, French, German, and Dutch. Using multilingual BERT (mBERT), we demonstrate that cross-lingual transfer of debiasing techniques is not only feasible but also yields promising results. Surprisingly, our findings reveal no performance disadvantages when applying these techniques to non-English languages. Using translations of the CrowS-Pairs dataset, our analysis identifies SentenceDebias as the best technique across different languages, reducing bias in mBERT by an average of 13{\%}. We also find that debiasing techniques with additional pretraining exhibit enhanced cross-lingual effectiveness for the languages included in the analyses, particularly in lower-resource languages. These novel insights contribute to a deeper understanding of bias mitigation in multilingual language models and provide practical guidance for debiasing techniques in different language contexts.", }
This paper investigates the transferability of debiasing techniques across different languages within multilingual models. We examine the applicability of these techniques in English, French, German, and Dutch. Using multilingual BERT (mBERT), we demonstrate that cross-lingual transfer of debiasing techniques is not only feasible but also yields promising results. Surprisingly, our findings reveal no performance disadvantages when applying these techniques to non-English languages. Using translations of the CrowS-Pairs dataset, our analysis identifies SentenceDebias as the best technique across different languages, reducing bias in mBERT by an average of 13{\%}. We also find that debiasing techniques with additional pretraining exhibit enhanced cross-lingual effectiveness for the languages included in the analyses, particularly in lower-resource languages. These novel insights contribute to a deeper understanding of bias mitigation in multilingual language models and provide practical guidance for debiasing techniques in different language contexts.
[ "Reusens, Manon", "Borchert, Philipp", "Mieskes, Margot", "De Weerdt, Jochen", "Baesens, Bart" ]
Investigating Bias in Multilingual Language Models: Cross-Lingual Transfer of Debiasing Techniques
emnlp-main.175
2310.10310
[ "https://github.com/manon-reusens/multilingual_bias" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.176.bib
https://aclanthology.org/2023.emnlp-main.176/
@inproceedings{ko-etal-2023-language, title = "Can Language Models Laugh at {Y}ou{T}ube Short-form Videos?", author = "Ko, Dayoon and Lee, Sangho and Kim, Gunhee", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.176", doi = "10.18653/v1/2023.emnlp-main.176", pages = "2897--2916", abstract = "As short-form funny videos on social networks are gaining popularity, it becomes demanding for AI models to understand them for better communication with humans. Unfortunately, previous video humor datasets target specific domains such as speeches or sitcoms, and mostly focus on verbal cues. We curate a user-generated dataset of 10K multimodal funny videos from YouTube, called ExFunTube. Using a video filtering pipeline with GPT-3.5, we verify both verbal and visual elements contributing to humor. After filtering, we annotate each video with timestamps and text explanations for funny moments. Our ExFunTube is unique over existing datasets in that our videos cover a wide range of domains with various types of humor that necessitate a multimodal understanding of the content. Also, we develop a zero-shot video-to-text prompting to maximize video humor understanding of large language models (LLMs). With three different evaluation methods using automatic scores, rationale quality experiments, and human evaluations, we show that our prompting significantly improves LLMs{'} ability for humor explanation.", }
As short-form funny videos on social networks are gaining popularity, it becomes demanding for AI models to understand them for better communication with humans. Unfortunately, previous video humor datasets target specific domains such as speeches or sitcoms, and mostly focus on verbal cues. We curate a user-generated dataset of 10K multimodal funny videos from YouTube, called ExFunTube. Using a video filtering pipeline with GPT-3.5, we verify both verbal and visual elements contributing to humor. After filtering, we annotate each video with timestamps and text explanations for funny moments. Our ExFunTube is unique over existing datasets in that our videos cover a wide range of domains with various types of humor that necessitate a multimodal understanding of the content. Also, we develop a zero-shot video-to-text prompting to maximize video humor understanding of large language models (LLMs). With three different evaluation methods using automatic scores, rationale quality experiments, and human evaluations, we show that our prompting significantly improves LLMs{'} ability for humor explanation.
[ "Ko, Dayoon", "Lee, Sangho", "Kim, Gunhee" ]
Can Language Models Laugh at YouTube Short-form Videos?
emnlp-main.176
2310.14159
[ "https://github.com/dayoon-ko/exfuntube" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.177.bib
https://aclanthology.org/2023.emnlp-main.177/
@inproceedings{li-etal-2023-random, title = "Random Entity Quantization for Parameter-Efficient Compositional Knowledge Graph Representation", author = "Li, Jiaang and Wang, Quan and Liu, Yi and Zhang, Licheng and Mao, Zhendong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.177", doi = "10.18653/v1/2023.emnlp-main.177", pages = "2917--2928", abstract = "Representation Learning on Knowledge Graphs (KGs) is essential for downstream tasks. The dominant approach, KG Embedding (KGE), represents entities with independent vectors and faces the scalability challenge. Recent studies propose an alternative way for parameter efficiency, which represents entities by composing entity-corresponding codewords matched from predefined small-scale codebooks. We refer to the process of obtaining corresponding codewords of each entity as entity quantization, for which previous works have designed complicated strategies. Surprisingly, this paper shows that simple random entity quantization can achieve similar results to current strategies. We analyze this phenomenon and reveal that entity codes, the quantization outcomes for expressing entities, have higher entropy at the code level and Jaccard distance at the codeword level under random entity quantization. Therefore, different entities become more easily distinguished, facilitating effective KG representation. The above results show that current quantization strategies are not critical for KG representation, and there is still room for improvement in entity distinguishability beyond current strategies.", }
Representation Learning on Knowledge Graphs (KGs) is essential for downstream tasks. The dominant approach, KG Embedding (KGE), represents entities with independent vectors and faces the scalability challenge. Recent studies propose an alternative way for parameter efficiency, which represents entities by composing entity-corresponding codewords matched from predefined small-scale codebooks. We refer to the process of obtaining corresponding codewords of each entity as entity quantization, for which previous works have designed complicated strategies. Surprisingly, this paper shows that simple random entity quantization can achieve similar results to current strategies. We analyze this phenomenon and reveal that entity codes, the quantization outcomes for expressing entities, have higher entropy at the code level and Jaccard distance at the codeword level under random entity quantization. Therefore, different entities become more easily distinguished, facilitating effective KG representation. The above results show that current quantization strategies are not critical for KG representation, and there is still room for improvement in entity distinguishability beyond current strategies.
[ "Li, Jiaang", "Wang, Quan", "Liu, Yi", "Zhang, Licheng", "Mao, Zhendong" ]
Random Entity Quantization for Parameter-Efficient Compositional Knowledge Graph Representation
emnlp-main.177
2310.15797
[ "https://github.com/jiaangl/randomquantization" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.178.bib
https://aclanthology.org/2023.emnlp-main.178/
@inproceedings{miao-etal-2023-exploring, title = "Exploring All-In-One Knowledge Distillation Framework for Neural Machine Translation", author = "Miao, Zhongjian and Zhang, Wen and Su, Jinsong and Li, Xiang and Luan, Jian and Chen, Yidong and Wang, Bin and Zhang, Min", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.178", doi = "10.18653/v1/2023.emnlp-main.178", pages = "2929--2940", abstract = "Conventional knowledge distillation(KD) approaches are commonly employed to compress neural machine translation(NMT) models. However, they only obtain one lightweight student each time. Consequently, we have to conduct KD multiple times when different students are required at the same time, which could be resource-intensive. Additionally, these students are individually optimized, and thus lack interactions with each other, leading to their potential not being fully exerted. In this work, we propose a novel All-In-One Knowledge Distillation(AIO-KD) framework for NMT, which generates multiple satisfactory students at once. Under AIO-KD, we first randomly extract fewer-layer subnetworks from the teacher as the sample students. Then, we jointly optimize the teacher and these students, where the students simultaneously learn the knowledge from the teacher and interact with other students via mutual learning. When utilized, we re-extract the candidate students, satisfying the specifications of various devices. 
Particularly, we adopt carefully-designed strategies for AIO-KD: 1) we dynamically detach gradients to prevent poorly-performed students from negatively affecting the teacher during the knowledge transfer, which could subsequently impact other students; 2) we design a two-stage mutual learning strategy, which alleviates the negative impacts of poorly-performed students on the early-stage student interactions. Extensive experiments and in-depth analyses on three benchmarks demonstrate the effectiveness and eco-friendliness of AIO-KD. Our source code is available at https://github.com/DeepLearnXMU/AIO-KD.", }
Conventional knowledge distillation(KD) approaches are commonly employed to compress neural machine translation(NMT) models. However, they only obtain one lightweight student each time. Consequently, we have to conduct KD multiple times when different students are required at the same time, which could be resource-intensive. Additionally, these students are individually optimized, and thus lack interactions with each other, leading to their potential not being fully exerted. In this work, we propose a novel All-In-One Knowledge Distillation(AIO-KD) framework for NMT, which generates multiple satisfactory students at once. Under AIO-KD, we first randomly extract fewer-layer subnetworks from the teacher as the sample students. Then, we jointly optimize the teacher and these students, where the students simultaneously learn the knowledge from the teacher and interact with other students via mutual learning. When utilized, we re-extract the candidate students, satisfying the specifications of various devices. Particularly, we adopt carefully-designed strategies for AIO-KD: 1) we dynamically detach gradients to prevent poorly-performed students from negatively affecting the teacher during the knowledge transfer, which could subsequently impact other students; 2) we design a two-stage mutual learning strategy, which alleviates the negative impacts of poorly-performed students on the early-stage student interactions. Extensive experiments and in-depth analyses on three benchmarks demonstrate the effectiveness and eco-friendliness of AIO-KD. Our source code is available at https://github.com/DeepLearnXMU/AIO-KD.
[ "Miao, Zhongjian", "Zhang, Wen", "Su, Jinsong", "Li, Xiang", "Luan, Jian", "Chen, Yidong", "Wang, Bin", "Zhang, Min" ]
Exploring All-In-One Knowledge Distillation Framework for Neural Machine Translation
emnlp-main.178
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.179.bib
https://aclanthology.org/2023.emnlp-main.179/
@inproceedings{wan-etal-2023-histalign, title = "{H}ist{A}lign: Improving Context Dependency in Language Generation by Aligning with History", author = "Wan, David and Zhang, Shiyue and Bansal, Mohit", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.179", doi = "10.18653/v1/2023.emnlp-main.179", pages = "2941--2960", abstract = "Language models (LMs) can generate hallucinations and incoherent outputs, which highlights their weak context dependency. Cache-LMs, which augment LMs with a memory of recent history, can increase context dependency and have shown remarkable performance in diverse language generation tasks. However, we find that even with training, the performance gain stemming from the cache component of current cache-LMs is suboptimal due to the misalignment between the current hidden states and those stored in the memory. In this work, we present HistAlign, a new training approach to ensure good cache alignment such that the model receives useful signals from the history. We first prove our concept on a simple and synthetic task where the memory is essential for correct predictions, and we show that the cache component of HistAlign is better aligned and improves overall performance. Next, we evaluate HistAlign on diverse downstream language generation tasks, including prompt continuation, abstractive summarization, and data-to-text. We demonstrate that HistAlign improves text coherence and faithfulness in open-ended and conditional generation settings respectively. HistAlign is also generalizable across different model families, showcasing its strength in improving context dependency of LMs in diverse scenarios.", }
Language models (LMs) can generate hallucinations and incoherent outputs, which highlights their weak context dependency. Cache-LMs, which augment LMs with a memory of recent history, can increase context dependency and have shown remarkable performance in diverse language generation tasks. However, we find that even with training, the performance gain stemming from the cache component of current cache-LMs is suboptimal due to the misalignment between the current hidden states and those stored in the memory. In this work, we present HistAlign, a new training approach to ensure good cache alignment such that the model receives useful signals from the history. We first prove our concept on a simple and synthetic task where the memory is essential for correct predictions, and we show that the cache component of HistAlign is better aligned and improves overall performance. Next, we evaluate HistAlign on diverse downstream language generation tasks, including prompt continuation, abstractive summarization, and data-to-text. We demonstrate that HistAlign improves text coherence and faithfulness in open-ended and conditional generation settings respectively. HistAlign is also generalizable across different model families, showcasing its strength in improving context dependency of LMs in diverse scenarios.
[ "Wan, David", "Zhang, Shiyue", "Bansal, Mohit" ]
HistAlign: Improving Context Dependency in Language Generation by Aligning with History
emnlp-main.179
2305.04782
[ "https://github.com/meetdavidwan/histalign" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.180.bib
https://aclanthology.org/2023.emnlp-main.180/
@inproceedings{ormazabal-etal-2023-comblm, title = "{C}omb{LM}: Adapting Black-Box Language Models through Small Fine-Tuned Models", author = "Ormazabal, Aitor and Artetxe, Mikel and Agirre, Eneko", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.180", doi = "10.18653/v1/2023.emnlp-main.180", pages = "2961--2974", abstract = "Methods for adapting language models (LMs) to new tasks and domains have traditionally assumed white-box access to the model, and work by modifying its parameters. However, this is incompatible with a recent trend in the field, where the highest quality models are only available as black-boxes through inference APIs. Even when the model weights are available, the computational cost of fine-tuning large LMs can be prohibitive for most practitioners. In this work, we present a lightweight method for adapting large LMs to new domains and tasks, assuming no access to their weights or intermediate activations. Our approach fine-tunes a small white-box LM and combines it with the large black-box LM at the probability level through a small network, learned on a small validation set. We validate our approach by adapting a large LM (OPT-30B) to several domains and a downstream task (machine translation), observing improved performance in all cases, of up to 9{\%}, while using a domain expert 23x smaller.", }
Methods for adapting language models (LMs) to new tasks and domains have traditionally assumed white-box access to the model, and work by modifying its parameters. However, this is incompatible with a recent trend in the field, where the highest quality models are only available as black-boxes through inference APIs. Even when the model weights are available, the computational cost of fine-tuning large LMs can be prohibitive for most practitioners. In this work, we present a lightweight method for adapting large LMs to new domains and tasks, assuming no access to their weights or intermediate activations. Our approach fine-tunes a small white-box LM and combines it with the large black-box LM at the probability level through a small network, learned on a small validation set. We validate our approach by adapting a large LM (OPT-30B) to several domains and a downstream task (machine translation), observing improved performance in all cases, of up to 9{\%}, while using a domain expert 23x smaller.
[ "Ormazabal, Aitor", "Artetxe, Mikel", "Agirre, Eneko" ]
CombLM: Adapting Black-Box Language Models through Small Fine-Tuned Models
emnlp-main.180
2305.16876
[ "" ]
https://huggingface.co/papers/2305.16876
2
0
0
3
[]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.181.bib
https://aclanthology.org/2023.emnlp-main.181/
@inproceedings{singh-etal-2023-image, title = "Image Manipulation via Multi-Hop Instructions - A New Dataset and Weakly-Supervised Neuro-Symbolic Approach", author = "Singh, Harman and Garg, Poorva and Gupta, Mohit and Shah, Kevin and Goswami, Ashish and Modi, Satyam and Mondal, Arnab and Khandelwal, Dinesh and Garg, Dinesh and Singla, Parag", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.181", doi = "10.18653/v1/2023.emnlp-main.181", pages = "2975--3007", abstract = "We are interested in image manipulation via natural language text {--} a task that is useful for multiple AI applications but requires complex reasoning over multi-modal spaces. We extend recently proposed Neuro Symbolic Concept Learning (NSCL), which has been quite effective for the task of Visual Question Answering (VQA), for the task of image manipulation. Our system referred to as NeuroSIM can perform complex multi-hop reasoning over multi-object scenes and only requires weak supervision in the form of annotated data for VQA. NeuroSIM parses an instruction into a symbolic program, based on a Domain Specific Language (DSL) comprising of object attributes and manipulation operations, that guides its execution. We create a new dataset for the task, and extensive experiments demonstrate that NeuroSIM is highly competitive with or beats SOTA baselines that make use of supervised data for manipulation.", }
We are interested in image manipulation via natural language text {--} a task that is useful for multiple AI applications but requires complex reasoning over multi-modal spaces. We extend recently proposed Neuro Symbolic Concept Learning (NSCL), which has been quite effective for the task of Visual Question Answering (VQA), for the task of image manipulation. Our system referred to as NeuroSIM can perform complex multi-hop reasoning over multi-object scenes and only requires weak supervision in the form of annotated data for VQA. NeuroSIM parses an instruction into a symbolic program, based on a Domain Specific Language (DSL) comprising of object attributes and manipulation operations, that guides its execution. We create a new dataset for the task, and extensive experiments demonstrate that NeuroSIM is highly competitive with or beats SOTA baselines that make use of supervised data for manipulation.
[ "Singh, Harman", "Garg, Poorva", "Gupta, Mohit", "Shah, Kevin", "Goswami, Ashish", "Modi, Satyam", "Mondal, Arnab", "Khandelwal, Dinesh", "Garg, Dinesh", "Singla, Parag" ]
Image Manipulation via Multi-Hop Instructions - A New Dataset and Weakly-Supervised Neuro-Symbolic Approach
emnlp-main.181
2305.14410
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.182.bib
https://aclanthology.org/2023.emnlp-main.182/
@inproceedings{algayres-etal-2023-generative, title = "Generative Spoken Language Model based on continuous word-sized audio tokens", author = "Algayres, Robin and Adi, Yossi and Nguyen, Tu and Copet, Jade and Synnaeve, Gabriel and Sagot, Beno{\^\i}t and Dupoux, Emmanuel", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.182", doi = "10.18653/v1/2023.emnlp-main.182", pages = "3008--3028", abstract = "In NLP, text language models based on words or subwords are known to outperform their character-based counterparts. Yet, in the speech community, the standard input of spoken LMs are 20ms or 40ms-long discrete units (shorter than a phoneme). Taking inspiration from word-based LM, we introduce a Generative Spoken Language Model (GSLM) based on word-size continuous-valued audio tokens that can generate diverse and expressive language output. This is obtained by replacing lookup table for lexical types with a Lexical Embedding function, the cross entropy loss by a contrastive loss, and multinomial sampling by k-NN sampling. The resulting model is the first generative language model based on word-size continuous tokens. Its performance is on par with discrete unit GSLMs regarding generation quality as measured by automatic metrics and subjective human judgements. Moreover, it is five times more memory efficient thanks to its large 200ms units. In addition, the embeddings before and after the Lexical Embedder are phonetically and semantically interpretable.", }
In NLP, text language models based on words or subwords are known to outperform their character-based counterparts. Yet, in the speech community, the standard input of spoken LMs are 20ms or 40ms-long discrete units (shorter than a phoneme). Taking inspiration from word-based LM, we introduce a Generative Spoken Language Model (GSLM) based on word-size continuous-valued audio tokens that can generate diverse and expressive language output. This is obtained by replacing lookup table for lexical types with a Lexical Embedding function, the cross entropy loss by a contrastive loss, and multinomial sampling by k-NN sampling. The resulting model is the first generative language model based on word-size continuous tokens. Its performance is on par with discrete unit GSLMs regarding generation quality as measured by automatic metrics and subjective human judgements. Moreover, it is five times more memory efficient thanks to its large 200ms units. In addition, the embeddings before and after the Lexical Embedder are phonetically and semantically interpretable.
[ "Algayres, Robin", "Adi, Yossi", "Nguyen, Tu", "Copet, Jade", "Synnaeve, Gabriel", "Sagot, Beno{\\^\\i}t", "Dupoux, Emmanuel" ]
Generative Spoken Language Model based on continuous word-sized audio tokens
emnlp-main.182
2310.05224
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.183.bib
https://aclanthology.org/2023.emnlp-main.183/
@inproceedings{ding-etal-2023-enhancing, title = "Enhancing Chat Language Models by Scaling High-quality Instructional Conversations", author = "Ding, Ning and Chen, Yulin and Xu, Bokai and Qin, Yujia and Hu, Shengding and Liu, Zhiyuan and Sun, Maosong and Zhou, Bowen", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.183", doi = "10.18653/v1/2023.emnlp-main.183", pages = "3029--3051", abstract = "Fine-tuning on instruction data has been widely validated as an effective practice for implementing chat language models like ChatGPT. Scaling the diversity and quality of such data, although straightforward, stands a great chance of leading to improved performance. This paper aims to push the upper bound of open-source models further. We first provide a systematically designed, diverse, informative, large-scale dataset of instructional conversations, UltraChat, which does not involve human queries. Our objective is to capture the breadth of interactions between a human user and an AI assistant and employs a comprehensive framework to generate multi-turn conversation iteratively. UltraChat contains 1.5 million high-quality multi-turn dialogues and covers a wide range of topics and instructions. Our statistical analysis of UltraChat reveals its superiority in various key metrics, including scale, average length, diversity, coherence, etc., solidifying its position as a leading open-source dataset. Building upon UltraChat, we fine-tune a LLaMA model to create a powerful conversational model, UltraLM. Our evaluations indicate that UltraLM consistently outperforms other open-source models, including WizardLM and Vicuna, the previously recognized state-of-the-art open-source models.", }
Fine-tuning on instruction data has been widely validated as an effective practice for implementing chat language models like ChatGPT. Scaling the diversity and quality of such data, although straightforward, stands a great chance of leading to improved performance. This paper aims to push the upper bound of open-source models further. We first provide a systematically designed, diverse, informative, large-scale dataset of instructional conversations, UltraChat, which does not involve human queries. Our objective is to capture the breadth of interactions between a human user and an AI assistant and employs a comprehensive framework to generate multi-turn conversation iteratively. UltraChat contains 1.5 million high-quality multi-turn dialogues and covers a wide range of topics and instructions. Our statistical analysis of UltraChat reveals its superiority in various key metrics, including scale, average length, diversity, coherence, etc., solidifying its position as a leading open-source dataset. Building upon UltraChat, we fine-tune a LLaMA model to create a powerful conversational model, UltraLM. Our evaluations indicate that UltraLM consistently outperforms other open-source models, including WizardLM and Vicuna, the previously recognized state-of-the-art open-source models.
[ "Ding, Ning", "Chen, Yulin", "Xu, Bokai", "Qin, Yujia", "Hu, Shengding", "Liu, Zhiyuan", "Sun, Maosong", "Zhou, Bowen" ]
Enhancing Chat Language Models by Scaling High-quality Instructional Conversations
emnlp-main.183
2305.14233
[ "https://github.com/thunlp/ultrachat" ]
https://huggingface.co/papers/2305.14233
2
6
4
9
[ "openbmb/UltraLM-13b", "TheBloke/UltraLM-13B-GGML", "TheBloke/UltraLM-13B-GPTQ", "openbmb/UltraLM-65b", "TheBloke/UltraLM-13B-fp16", "poisson-fish/ultralm-13b-GPTQ" ]
[ "HuggingFaceH4/ultrachat_200k", "nguyenphuthien/vietnamese_ultrachat_200k", "zhengr/ultrachat_200k", "abhinand/ultrachat_200k_sharegpt", "datatab/ultrachat_200k_serbian", "royweiss1/GPT_Keylogger_Dataset", "rohansolo/BB-Ultrachat-IndicLingual6-12k", "saied/Persian_Chat_Dataset", "DreamingBumblebee/ultrachat-100-ko", "kreem22/kreemdata", "mii-community/ultrafeedback-translated-ita" ]
[ "open-llm-leaderboard/open_llm_leaderboard", "Intel/low_bit_open_llm_leaderboard", "BAAI/open_cn_llm_leaderboard", "gsaivinay/open_llm_leaderboard", "GTBench/GTBench", "felixz/open_llm_leaderboard", "OPTML-Group/UnlearnCanvas-Benchmark", "Vikhrmodels/small-shlepa-lb", "b1sheng/kg_llm_leaderboard_test", "neubla/neubla-llm-evaluation-board", "rodrigomasini/data_only_open_llm_leaderboard", "Docfile/open_llm_leaderboard", "zjrwtx/openbmb-UltraLM-13b", "smothiki/open_llm_leaderboard", "mikeee/chatglm2-6b-test", "0x1668/open_llm_leaderboard", "pngwn/open_llm_leaderboard", "pngwn/open_llm_leaderboard-check", "pngwn/open_llm_leaderboard_two", "asir0z/open_llm_leaderboard", "choco9966/LeaderboardTest", "kbmlcoding/open_llm_leaderboard_free", "choco9966/open-ko-llm-leaderboard", "aichampions/open_llm_leaderboard", "Adeco/open_llm_leaderboard", "anirudh937/open_llm_leaderboard", "smothiki/open_llm_leaderboard2" ]
1
Poster
https://aclanthology.org/2023.emnlp-main.184.bib
https://aclanthology.org/2023.emnlp-main.184/
@inproceedings{bugliarello-etal-2023-weakly, title = "Weakly-Supervised Learning of Visual Relations in Multimodal Pretraining", author = "Bugliarello, Emanuele and Nematzadeh, Aida and Hendricks, Lisa", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.184", doi = "10.18653/v1/2023.emnlp-main.184", pages = "3052--3071", abstract = "Recent work in vision-and-language pretraining has investigated supervised signals from object detection data to learn better, fine-grained multimodal representations. In this work, we take a step further and explore how we can tap into supervision from small-scale visual relation data. In particular, we propose two pretraining approaches to contextualise visual entities in a multimodal setup. With verbalised scene graphs, we transform visual relation triplets into structured captions, and treat them as additional image descriptions. With masked relation prediction, we further encourage relating entities from image regions with visually masked contexts. When applied to strong baselines pretrained on large amounts of Web data, zero-shot evaluations on both coarse-grained and fine-grained tasks show the efficacy of our methods in learning multimodal representations from weakly-supervised relations data.", }
Recent work in vision-and-language pretraining has investigated supervised signals from object detection data to learn better, fine-grained multimodal representations. In this work, we take a step further and explore how we can tap into supervision from small-scale visual relation data. In particular, we propose two pretraining approaches to contextualise visual entities in a multimodal setup. With verbalised scene graphs, we transform visual relation triplets into structured captions, and treat them as additional image descriptions. With masked relation prediction, we further encourage relating entities from image regions with visually masked contexts. When applied to strong baselines pretrained on large amounts of Web data, zero-shot evaluations on both coarse-grained and fine-grained tasks show the efficacy of our methods in learning multimodal representations from weakly-supervised relations data.
[ "Bugliarello, Emanuele", "Nematzadeh, Aida", "Hendricks, Lisa" ]
Weakly-Supervised Learning of Visual Relations in Multimodal Pretraining
emnlp-main.184
2305.14281
[ "https://github.com/e-bug/weak-relation-vlm" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.185.bib
https://aclanthology.org/2023.emnlp-main.185/
@inproceedings{cao-etal-2023-unsupervised, title = "Unsupervised Grammatical Error Correction Rivaling Supervised Methods", author = "Cao, Hannan and Yuan, Liping and Zhang, Yuchen and Ng, Hwee Tou", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.185", doi = "10.18653/v1/2023.emnlp-main.185", pages = "3072--3088", abstract = "State-of-the-art grammatical error correction (GEC) systems rely on parallel training data (ungrammatical sentences and their manually corrected counterparts), which are expensive to construct. In this paper, we employ the Break-It-Fix-It (BIFI) method to build an unsupervised GEC system. The BIFI framework generates parallel data from unlabeled text using a fixer to transform ungrammatical sentences into grammatical ones, and a critic to predict sentence grammaticality. We present an unsupervised approach to build the fixer and the critic, and an algorithm that allows them to iteratively improve each other. We evaluate our unsupervised GEC system on English and Chinese GEC. Empirical results show that our GEC system outperforms previous unsupervised GEC systems, and achieves performance comparable to supervised GEC systems without ensemble. Furthermore, when combined with labeled training data, our system achieves new state-of-the-art results on the CoNLL-2014 and NLPCC-2018 test sets.", }
State-of-the-art grammatical error correction (GEC) systems rely on parallel training data (ungrammatical sentences and their manually corrected counterparts), which are expensive to construct. In this paper, we employ the Break-It-Fix-It (BIFI) method to build an unsupervised GEC system. The BIFI framework generates parallel data from unlabeled text using a fixer to transform ungrammatical sentences into grammatical ones, and a critic to predict sentence grammaticality. We present an unsupervised approach to build the fixer and the critic, and an algorithm that allows them to iteratively improve each other. We evaluate our unsupervised GEC system on English and Chinese GEC. Empirical results show that our GEC system outperforms previous unsupervised GEC systems, and achieves performance comparable to supervised GEC systems without ensemble. Furthermore, when combined with labeled training data, our system achieves new state-of-the-art results on the CoNLL-2014 and NLPCC-2018 test sets.
[ "Cao, Hannan", "Yuan, Liping", "Zhang, Yuchen", "Ng, Hwee Tou" ]
Unsupervised Grammatical Error Correction Rivaling Supervised Methods
emnlp-main.185
null
[ "https://github.com/nusnlp/ugec" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.186.bib
https://aclanthology.org/2023.emnlp-main.186/
@inproceedings{lou-etal-2023-s2abel, title = "{S}2ab{EL}: A Dataset for Entity Linking from Scientific Tables", author = "Lou, Yuze and Kuehl, Bailey and Bransom, Erin and Feldman, Sergey and Naik, Aakanksha and Downey, Doug", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.186", doi = "10.18653/v1/2023.emnlp-main.186", pages = "3089--3101", abstract = "Entity linking (EL) is the task of linking a textual mention to its corresponding entry in a knowledge base, and is critical for many knowledge-intensive NLP applications. When applied to tables in scientific papers, EL is a step toward large-scale scientific knowledge bases that could enable advanced scientific question answering and analytics. We present the first dataset for EL in scientific tables. EL for scientific tables is especially challenging because scientific knowledge bases can be very incomplete, and disambiguating table mentions typically requires understanding the paper{'}s text in addition to the table. Our dataset, Scientific Table Entity Linking (S2abEL), focuses on EL in machine learning results tables and includes hand-labeled cell types, attributed sources, and entity links from the PaperswithCode taxonomy for 8,429 cells from 732 tables. We introduce a neural baseline method designed for EL on scientific tables containing many out-of-knowledge-base mentions, and show that it significantly outperforms a state-of-the-art generic table EL method. The best baselines fall below human performance, and our analysis highlights avenues for improvement.", }
Entity linking (EL) is the task of linking a textual mention to its corresponding entry in a knowledge base, and is critical for many knowledge-intensive NLP applications. When applied to tables in scientific papers, EL is a step toward large-scale scientific knowledge bases that could enable advanced scientific question answering and analytics. We present the first dataset for EL in scientific tables. EL for scientific tables is especially challenging because scientific knowledge bases can be very incomplete, and disambiguating table mentions typically requires understanding the paper{'}s text in addition to the table. Our dataset, Scientific Table Entity Linking (S2abEL), focuses on EL in machine learning results tables and includes hand-labeled cell types, attributed sources, and entity links from the PaperswithCode taxonomy for 8,429 cells from 732 tables. We introduce a neural baseline method designed for EL on scientific tables containing many out-of-knowledge-base mentions, and show that it significantly outperforms a state-of-the-art generic table EL method. The best baselines fall below human performance, and our analysis highlights avenues for improvement.
[ "Lou, Yuze", "Kuehl, Bailey", "Bransom, Erin", "Feldman, Sergey", "Naik, Aakanksha", "Downey, Doug" ]
S2abEL: A Dataset for Entity Linking from Scientific Tables
emnlp-main.186
2305.00366
[ "https://github.com/allenai/s2abel" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.187.bib
https://aclanthology.org/2023.emnlp-main.187/
@inproceedings{li-etal-2023-api, title = "{API}-Bank: A Comprehensive Benchmark for Tool-Augmented {LLM}s", author = "Li, Minghao and Zhao, Yingxiu and Yu, Bowen and Song, Feifan and Li, Hangyu and Yu, Haiyang and Li, Zhoujun and Huang, Fei and Li, Yongbin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.187", doi = "10.18653/v1/2023.emnlp-main.187", pages = "3102--3116", abstract = "Recent research has demonstrated that Large Language Models (LLMs) can enhance their capabilities by utilizing external tools. However, three pivotal questions remain unanswered: (1) How effective are current LLMs in utilizing tools? (2) How can we enhance LLMs{'} ability to utilize tools? (3) What obstacles need to be overcome to leverage tools? To address these questions, we introduce API-Bank, a groundbreaking benchmark, specifically designed for tool-augmented LLMs. For the first question, we develop a runnable evaluation system consisting of 73 API tools. We annotate 314 tool-use dialogues with 753 API calls to assess the existing LLMs{'} capabilities in planning, retrieving, and calling APIs. For the second question, we construct a comprehensive training set containing 1,888 tool-use dialogues from 2,138 APIs spanning 1,000 distinct domains. Using this dataset, we train Lynx, a tool-augmented LLM initialized from Alpaca. Experimental results demonstrate that GPT-3.5 exhibits improved tool utilization compared to GPT-3, while GPT-4 excels in planning. However, there is still significant potential for further improvement. Moreover, Lynx surpasses Alpaca{'}s tool utilization performance by more than 26 pts and approaches the effectiveness of GPT-3.5. Through error analysis, we highlight the key challenges for future research in this field to answer the third question.", }
Recent research has demonstrated that Large Language Models (LLMs) can enhance their capabilities by utilizing external tools. However, three pivotal questions remain unanswered: (1) How effective are current LLMs in utilizing tools? (2) How can we enhance LLMs{'} ability to utilize tools? (3) What obstacles need to be overcome to leverage tools? To address these questions, we introduce API-Bank, a groundbreaking benchmark, specifically designed for tool-augmented LLMs. For the first question, we develop a runnable evaluation system consisting of 73 API tools. We annotate 314 tool-use dialogues with 753 API calls to assess the existing LLMs{'} capabilities in planning, retrieving, and calling APIs. For the second question, we construct a comprehensive training set containing 1,888 tool-use dialogues from 2,138 APIs spanning 1,000 distinct domains. Using this dataset, we train Lynx, a tool-augmented LLM initialized from Alpaca. Experimental results demonstrate that GPT-3.5 exhibits improved tool utilization compared to GPT-3, while GPT-4 excels in planning. However, there is still significant potential for further improvement. Moreover, Lynx surpasses Alpaca{'}s tool utilization performance by more than 26 pts and approaches the effectiveness of GPT-3.5. Through error analysis, we highlight the key challenges for future research in this field to answer the third question.
[ "Li, Minghao", "Zhao, Yingxiu", "Yu, Bowen", "Song, Feifan", "Li, Hangyu", "Yu, Haiyang", "Li, Zhoujun", "Huang, Fei", "Li, Yongbin" ]
API-Bank: A Comprehensive Benchmark for Tool-Augmented LLMs
emnlp-main.187
2304.08244
[ "" ]
https://huggingface.co/papers/2304.08244
0
0
0
9
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.188.bib
https://aclanthology.org/2023.emnlp-main.188/
@inproceedings{teodorescu-etal-2023-language, title = "Language and Mental Health: Measures of Emotion Dynamics from Text as Linguistic Biosocial Markers", author = "Teodorescu, Daniela and Cheng, Tiffany and Fyshe, Alona and Mohammad, Saif", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.188", doi = "10.18653/v1/2023.emnlp-main.188", pages = "3117--3133", abstract = "Research in psychopathology has shown that, at an aggregate level, the patterns of emotional change over time{---}emotion dynamics{---}are indicators of one{'}s mental health. One{'}s patterns of emotion change have traditionally been determined through self-reports of emotions; however, there are known issues with accuracy, bias, and convenience. Recent approaches to determining emotion dynamics from one{'}s everyday utterances address many of these concerns, but it is not yet known whether these measures of utterance emotion dynamics (UED) correlate with mental health diagnoses. Here, for the first time, we study the relationship between tweet emotion dynamics and mental health disorders. We find that each of the UED metrics studied varied by the user{'}s self-disclosed diagnosis. For example: average valence was significantly higher (i.e., more positive text) in the control group compared to users with ADHD, MDD, and PTSD. Valence variability was significantly lower in the control group compared to ADHD, depression, bipolar disorder, MDD, PTSD, and OCD but not PPD. Rise and recovery rates of valence also exhibited significant differences from the control. This work provides important early evidence for how linguistic cues pertaining to emotion dynamics can play a crucial role as biosocial markers for mental illnesses and aid in the understanding, diagnosis, and management of mental health disorders.", }
Research in psychopathology has shown that, at an aggregate level, the patterns of emotional change over time{---}emotion dynamics{---}are indicators of one{'}s mental health. One{'}s patterns of emotion change have traditionally been determined through self-reports of emotions; however, there are known issues with accuracy, bias, and convenience. Recent approaches to determining emotion dynamics from one{'}s everyday utterances address many of these concerns, but it is not yet known whether these measures of utterance emotion dynamics (UED) correlate with mental health diagnoses. Here, for the first time, we study the relationship between tweet emotion dynamics and mental health disorders. We find that each of the UED metrics studied varied by the user{'}s self-disclosed diagnosis. For example: average valence was significantly higher (i.e., more positive text) in the control group compared to users with ADHD, MDD, and PTSD. Valence variability was significantly lower in the control group compared to ADHD, depression, bipolar disorder, MDD, PTSD, and OCD but not PPD. Rise and recovery rates of valence also exhibited significant differences from the control. This work provides important early evidence for how linguistic cues pertaining to emotion dynamics can play a crucial role as biosocial markers for mental illnesses and aid in the understanding, diagnosis, and management of mental health disorders.
[ "Teodorescu, Daniela", "Cheng, Tiffany", "Fyshe, Alona", "Mohammad, Saif" ]
Language and Mental Health: Measures of Emotion Dynamics from Text as Linguistic Biosocial Markers
emnlp-main.188
2310.17369
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.189.bib
https://aclanthology.org/2023.emnlp-main.189/
@inproceedings{jiang-etal-2023-lion, title = "Lion: Adversarial Distillation of Proprietary Large Language Models", author = "Jiang, Yuxin and Chan, Chunkit and Chen, Mingyang and Wang, Wei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.189", doi = "10.18653/v1/2023.emnlp-main.189", pages = "3134--3154", abstract = "The practice of transferring knowledge from a sophisticated, proprietary large language model (LLM) to a compact, open-source LLM has garnered considerable attention. Previous works have focused on a unidirectional knowledge distillation way by aligning the responses of the student model with those of the teacher model to a set of instructions. Nevertheless, they overlooked the possibility of incorporating any {``}feedback{''}{--}identifying challenging instructions where the student model{'}s performance falls short{--}to boost the student model{'}s proficiency iteratively. To this end, we propose a novel adversarial distillation framework for a more efficient knowledge transfer. Leveraging the versatile role adaptability of LLMs, we prompt the teacher model to identify {``}hard{''} instructions and generate new {``}hard{''} instructions for the student model, creating a three-stage adversarial loop of imitation, discrimination, and generation. By applying this adversarial framework, we successfully transfer knowledge from ChatGPT to a student model (named Lion), using a mere 70k training data. Our results show that Lion-13B not only achieves comparable open-ended generation capabilities to ChatGPT but surpasses conventional state-of-the-art (SOTA) instruction-tuned models like Vicuna-13B by 55.4{\%} in challenging zero-shot reasoning benchmarks such as BIG-Bench Hard (BBH) and 16.7{\%} on AGIEval.", }
The practice of transferring knowledge from a sophisticated, proprietary large language model (LLM) to a compact, open-source LLM has garnered considerable attention. Previous works have focused on a unidirectional knowledge distillation way by aligning the responses of the student model with those of the teacher model to a set of instructions. Nevertheless, they overlooked the possibility of incorporating any {``}feedback{''}{--}identifying challenging instructions where the student model{'}s performance falls short{--}to boost the student model{'}s proficiency iteratively. To this end, we propose a novel adversarial distillation framework for a more efficient knowledge transfer. Leveraging the versatile role adaptability of LLMs, we prompt the teacher model to identify {``}hard{''} instructions and generate new {``}hard{''} instructions for the student model, creating a three-stage adversarial loop of imitation, discrimination, and generation. By applying this adversarial framework, we successfully transfer knowledge from ChatGPT to a student model (named Lion), using a mere 70k training data. Our results show that Lion-13B not only achieves comparable open-ended generation capabilities to ChatGPT but surpasses conventional state-of-the-art (SOTA) instruction-tuned models like Vicuna-13B by 55.4{\%} in challenging zero-shot reasoning benchmarks such as BIG-Bench Hard (BBH) and 16.7{\%} on AGIEval.
[ "Jiang, Yuxin", "Chan, Chunkit", "Chen, Mingyang", "Wang, Wei" ]
Lion: Adversarial Distillation of Proprietary Large Language Models
emnlp-main.189
2305.12870
[ "https://github.com/yjiangcm/lion" ]
https://huggingface.co/papers/2305.12870
1
0
0
4
[ "YuxinJiang/lion-7b" ]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.190.bib
https://aclanthology.org/2023.emnlp-main.190/
@inproceedings{sun-etal-2023-evaluating, title = "Evaluating Large Language Models on Controlled Generation Tasks", author = "Sun, Jiao and Tian, Yufei and Zhou, Wangchunshu and Xu, Nan and Hu, Qian and Gupta, Rahul and Wieting, John and Peng, Nanyun and Ma, Xuezhe", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.190", doi = "10.18653/v1/2023.emnlp-main.190", pages = "3155--3168", abstract = "While recent studies have looked into the abilities of large language models in various benchmark tasks, including question generation, reading comprehension, and multilingual tasks, there have been few studies looking into the controllability of large language models on generation tasks. We present an extensive analysis of various benchmarks including a sentence planning benchmark with different granularities. After comparing large language models against state-of-the-art finetuned smaller models, we present a spectrum showing where large language models fall behind, are comparable to, or exceed the ability of smaller models. We conclude that *large language models struggle at meeting fine-grained hard constraints*.", }
While recent studies have looked into the abilities of large language models in various benchmark tasks, including question generation, reading comprehension, and multilingual tasks, there have been few studies looking into the controllability of large language models on generation tasks. We present an extensive analysis of various benchmarks including a sentence planning benchmark with different granularities. After comparing large language models against state-of-the-art finetuned smaller models, we present a spectrum showing where large language models fall behind, are comparable to, or exceed the ability of smaller models. We conclude that *large language models struggle at meeting fine-grained hard constraints*.
[ "Sun, Jiao", "Tian, Yufei", "Zhou, Wangchunshu", "Xu, Nan", "Hu, Qian", "Gupta, Rahul", "Wieting, John", "Peng, Nanyun", "Ma, Xuezhe" ]
Evaluating Large Language Models on Controlled Generation Tasks
emnlp-main.190
2310.14542
[ "https://github.com/sunjiao123sun/llm-controlgen" ]
https://huggingface.co/papers/2310.14542
2
0
0
9
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.191.bib
https://aclanthology.org/2023.emnlp-main.191/
@inproceedings{guo-etal-2023-desiq, title = "{D}e{SIQ}: Towards an Unbiased, Challenging Benchmark for Social Intelligence Understanding", author = "Guo, Xiao-Yu and Li, Yuan-Fang and Haf, Reza", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.191", doi = "10.18653/v1/2023.emnlp-main.191", pages = "3169--3180", abstract = "Social intelligence is essential for understanding and reasoning about human expressions, intents and interactions. One representative benchmark for its study is Social Intelligence Queries (Social-IQ), a dataset of multiple-choice questions on videos of complex social interactions. We define a comprehensive methodology to study the soundness of Social-IQ, as the soundness of such benchmark datasets is crucial to the investigation of the underlying research problem. Our analysis reveals that Social-IQ contains substantial biases, which can be exploited by a moderately strong language model to learn spurious correlations to achieve perfect performance without being given the context or even the question. We introduce DeSIQ, a new challenging dataset, constructed by applying simple perturbations to Social-IQ. Our empirical analysis shows DeSIQ significantly reduces the biases in the original Social-IQ dataset. Furthermore, we examine and shed light on the effect of model size, model style, learning settings, commonsense knowledge, and multi-modality on the new benchmark performance. Our new dataset, observations and findings open up important research questions for the study of social intelligence.", }
Social intelligence is essential for understanding and reasoning about human expressions, intents and interactions. One representative benchmark for its study is Social Intelligence Queries (Social-IQ), a dataset of multiple-choice questions on videos of complex social interactions. We define a comprehensive methodology to study the soundness of Social-IQ, as the soundness of such benchmark datasets is crucial to the investigation of the underlying research problem. Our analysis reveals that Social-IQ contains substantial biases, which can be exploited by a moderately strong language model to learn spurious correlations to achieve perfect performance without being given the context or even the question. We introduce DeSIQ, a new challenging dataset, constructed by applying simple perturbations to Social-IQ. Our empirical analysis shows DeSIQ significantly reduces the biases in the original Social-IQ dataset. Furthermore, we examine and shed light on the effect of model size, model style, learning settings, commonsense knowledge, and multi-modality on the new benchmark performance. Our new dataset, observations and findings open up important research questions for the study of social intelligence.
[ "Guo, Xiao-Yu", "Li, Yuan-Fang", "Haf, Reza" ]
DeSIQ: Towards an Unbiased, Challenging Benchmark for Social Intelligence Understanding
emnlp-main.191
2310.18359
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.192.bib
https://aclanthology.org/2023.emnlp-main.192/
@inproceedings{bouyamourn-2023-llms, title = "Why {LLM}s Hallucinate, and How to Get (Evidential) Closure: Perceptual, Intensional, and Extensional Learning for Faithful Natural Language Generation", author = "Bouyamourn, Adam", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.192", doi = "10.18653/v1/2023.emnlp-main.192", pages = "3181--3193", abstract = "We show that LLMs hallucinate because their output is not constrained to be synonymous with claims for which they have evidence: a condition that we call evidential closure. Information about the truth or falsity of sentences is not statistically identified in the standard neural language generation setup, and so cannot be conditioned on to generate new strings. We then show how to constrain LLMs to produce output that satisfies evidential closure. A multimodal LLM must learn about the external world (perceptual learning); it must learn a mapping from strings to states of the world (extensional learning); and, to achieve fluency when generalizing beyond a body of evidence, it must learn mappings from strings to their synonyms (intensional learning). The output of a unimodal LLM must be synonymous with strings in a validated evidence set. Finally, we present a heuristic procedure, Learn-Babble-Prune, that yields faithful output from an LLM by rejecting output that is not synonymous with claims for which the LLM has evidence.", }
We show that LLMs hallucinate because their output is not constrained to be synonymous with claims for which they have evidence: a condition that we call evidential closure. Information about the truth or falsity of sentences is not statistically identified in the standard neural language generation setup, and so cannot be conditioned on to generate new strings. We then show how to constrain LLMs to produce output that satisfies evidential closure. A multimodal LLM must learn about the external world (perceptual learning); it must learn a mapping from strings to states of the world (extensional learning); and, to achieve fluency when generalizing beyond a body of evidence, it must learn mappings from strings to their synonyms (intensional learning). The output of a unimodal LLM must be synonymous with strings in a validated evidence set. Finally, we present a heuristic procedure, Learn-Babble-Prune, that yields faithful output from an LLM by rejecting output that is not synonymous with claims for which the LLM has evidence.
[ "Bouyamourn, Adam" ]
Why LLMs Hallucinate, and How to Get (Evidential) Closure: Perceptual, Intensional, and Extensional Learning for Faithful Natural Language Generation
emnlp-main.192
2310.15355
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.193.bib
https://aclanthology.org/2023.emnlp-main.193/
@inproceedings{newman-etal-2023-question, title = "A Question Answering Framework for Decontextualizing User-facing Snippets from Scientific Documents", author = "Newman, Benjamin and Soldaini, Luca and Fok, Raymond and Cohan, Arman and Lo, Kyle", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.193", doi = "10.18653/v1/2023.emnlp-main.193", pages = "3194--3212", abstract = "Many real-world applications (e.g., note taking, search) require extracting a sentence or paragraph from a document and showing that snippet to a human outside of the source document. Yet, users may find snippets difficult to understand as they lack context from the original document. In this work, we use language models to rewrite snippets from scientific documents to be read on their own. First, we define the requirements and challenges for this user-facing decontextualization task, such as clarifying where edits occur and handling references to other documents. Second, we propose a framework that decomposes the task into three stages: question generation, question answering, and rewriting. Using this framework, we collect gold decontextualizations from experienced scientific article readers. We then conduct a range of experiments across state-of-the-art commercial and open-source language models to identify how to best provide missing-but-relevant information to models for our task. Finally, we develop QaDecontext, a simple prompting strategy inspired by our framework that improves over end-to-end prompting. We conclude with analysis that finds, while rewriting is easy, question generation and answering remain challenging for today{'}s models.", }
Many real-world applications (e.g., note taking, search) require extracting a sentence or paragraph from a document and showing that snippet to a human outside of the source document. Yet, users may find snippets difficult to understand as they lack context from the original document. In this work, we use language models to rewrite snippets from scientific documents to be read on their own. First, we define the requirements and challenges for this user-facing decontextualization task, such as clarifying where edits occur and handling references to other documents. Second, we propose a framework that decomposes the task into three stages: question generation, question answering, and rewriting. Using this framework, we collect gold decontextualizations from experienced scientific article readers. We then conduct a range of experiments across state-of-the-art commercial and open-source language models to identify how to best provide missing-but-relevant information to models for our task. Finally, we develop QaDecontext, a simple prompting strategy inspired by our framework that improves over end-to-end prompting. We conclude with analysis that finds, while rewriting is easy, question generation and answering remain challenging for today{'}s models.
[ "Newman, Benjamin", "Soldaini, Luca", "Fok, Raymond", "Cohan, Arman", "Lo, Kyle" ]
A Question Answering Framework for Decontextualizing User-facing Snippets from Scientific Documents
emnlp-main.193
2305.14772
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.194.bib
https://aclanthology.org/2023.emnlp-main.194/
@inproceedings{li-etal-2023-slog, title = "{SLOG}: A Structural Generalization Benchmark for Semantic Parsing", author = "Li, Bingzhi and Donatelli, Lucia and Koller, Alexander and Linzen, Tal and Yao, Yuekun and Kim, Najoung", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.194", doi = "10.18653/v1/2023.emnlp-main.194", pages = "3213--3232", abstract = "The goal of compositional generalization benchmarks is to evaluate how well models generalize to new complex linguistic expressions. Existing benchmarks often focus on lexical generalization, the interpretation of novel lexical items in syntactic structures familiar from training; structural generalization tasks, where a model needs to interpret syntactic structures that are themselves unfamiliar from training, are often underrepresented, resulting in overly optimistic perceptions of how well models can generalize. We introduce SLOG, a semantic parsing dataset that extends COGS (Kim and Linzen, 2020) with 17 structural generalization cases. In our experiments, the generalization accuracy of Transformer models, including pretrained ones, only reaches 40.6{\%}, while a structure-aware parser only achieves 70.8{\%}. These results are far from the near-perfect accuracy existing models achieve on COGS, demonstrating the role of SLOG in foregrounding the large discrepancy between models{'} lexical and structural generalization capacities.", }
The goal of compositional generalization benchmarks is to evaluate how well models generalize to new complex linguistic expressions. Existing benchmarks often focus on lexical generalization, the interpretation of novel lexical items in syntactic structures familiar from training; structural generalization tasks, where a model needs to interpret syntactic structures that are themselves unfamiliar from training, are often underrepresented, resulting in overly optimistic perceptions of how well models can generalize. We introduce SLOG, a semantic parsing dataset that extends COGS (Kim and Linzen, 2020) with 17 structural generalization cases. In our experiments, the generalization accuracy of Transformer models, including pretrained ones, only reaches 40.6{\%}, while a structure-aware parser only achieves 70.8{\%}. These results are far from the near-perfect accuracy existing models achieve on COGS, demonstrating the role of SLOG in foregrounding the large discrepancy between models{'} lexical and structural generalization capacities.
[ "Li, Bingzhi", "Donatelli, Lucia", "Koller, Alexander", "Linzen, Tal", "Yao, Yuekun", "Kim, Najoung" ]
SLOG: A Structural Generalization Benchmark for Semantic Parsing
emnlp-main.194
2310.15040
[ "https://github.com/bingzhilee/slog" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.195.bib
https://aclanthology.org/2023.emnlp-main.195/
@inproceedings{murty-etal-2023-pushdown, title = "Pushdown Layers: Encoding Recursive Structure in Transformer Language Models", author = "Murty, Shikhar and Sharma, Pratyusha and Andreas, Jacob and Manning, Christopher", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.195", doi = "10.18653/v1/2023.emnlp-main.195", pages = "3233--3247", abstract = "Recursion is a prominent feature of human language, and fundamentally challenging for self-attention due to the lack of an explicit recursive-state tracking mechanism. Consequently, Transformer language models poorly capture long-tail recursive structure and exhibit sample-inefficient syntactic generalization. This work introduces Pushdown Layers, a new self-attention layer that models recursive state via a stack tape that tracks estimated depths of every token in an incremental parse of the observed prefix. Transformer LMs with Pushdown Layers are syntactic language models that autoregressively and synchronously update this stack tape as they predict new tokens, in turn using the stack tape to softly modulate attention over tokens{---}for instance, learning to {``}skip{''} over closed constituents. When trained on a corpus of strings annotated with silver constituency parses, Transformers equipped with Pushdown Layers achieve dramatically better and 3-5x more sample-efficient syntactic generalization, while maintaining similar perplexities. Pushdown Layers are a drop-in replacement for standard self-attention. We illustrate this by finetuning GPT2-medium with Pushdown Layers on an automatically parsed WikiText-103, leading to improvements on several GLUE text classification tasks.", }
Recursion is a prominent feature of human language, and fundamentally challenging for self-attention due to the lack of an explicit recursive-state tracking mechanism. Consequently, Transformer language models poorly capture long-tail recursive structure and exhibit sample-inefficient syntactic generalization. This work introduces Pushdown Layers, a new self-attention layer that models recursive state via a stack tape that tracks estimated depths of every token in an incremental parse of the observed prefix. Transformer LMs with Pushdown Layers are syntactic language models that autoregressively and synchronously update this stack tape as they predict new tokens, in turn using the stack tape to softly modulate attention over tokens{---}for instance, learning to {``}skip{''} over closed constituents. When trained on a corpus of strings annotated with silver constituency parses, Transformers equipped with Pushdown Layers achieve dramatically better and 3-5x more sample-efficient syntactic generalization, while maintaining similar perplexities. Pushdown Layers are a drop-in replacement for standard self-attention. We illustrate this by finetuning GPT2-medium with Pushdown Layers on an automatically parsed WikiText-103, leading to improvements on several GLUE text classification tasks.
[ "Murty, Shikhar", "Sharma, Pratyusha", "Andreas, Jacob", "Manning, Christopher" ]
Pushdown Layers: Encoding Recursive Structure in Transformer Language Models
emnlp-main.195
2310.19089
[ "https://github.com/murtyshikhar/pushdown-layers" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.196.bib
https://aclanthology.org/2023.emnlp-main.196/
@inproceedings{mousi-etal-2023-llms, title = "Can {LLM}s Facilitate Interpretation of Pre-trained Language Models?", author = "Mousi, Basel and Durrani, Nadir and Dalvi, Fahim", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.196", doi = "10.18653/v1/2023.emnlp-main.196", pages = "3248--3268", abstract = "Work done to uncover the knowledge encoded within pre-trained language models relies on annotated corpora or human-in-the-loop methods. However, these approaches are limited in terms of scalability and the scope of interpretation. We propose using a large language model, ChatGPT, as an annotator to enable fine-grained interpretation analysis of pre-trained language models. We discover latent concepts within pre-trained language models by applying agglomerative hierarchical clustering over contextualized representations and then annotate these concepts using ChatGPT. Our findings demonstrate that ChatGPT produces accurate and semantically richer annotations compared to human-annotated concepts. Additionally, we showcase how GPT-based annotations empower interpretation analysis methodologies of which we demonstrate two: probing frameworks and neuron interpretation. To facilitate further exploration and experimentation in the field, we make available a substantial ConceptNet dataset (TCN) comprising 39,000 annotated concepts.", }
Work done to uncover the knowledge encoded within pre-trained language models relies on annotated corpora or human-in-the-loop methods. However, these approaches are limited in terms of scalability and the scope of interpretation. We propose using a large language model, ChatGPT, as an annotator to enable fine-grained interpretation analysis of pre-trained language models. We discover latent concepts within pre-trained language models by applying agglomerative hierarchical clustering over contextualized representations and then annotate these concepts using ChatGPT. Our findings demonstrate that ChatGPT produces accurate and semantically richer annotations compared to human-annotated concepts. Additionally, we showcase how GPT-based annotations empower interpretation analysis methodologies of which we demonstrate two: probing frameworks and neuron interpretation. To facilitate further exploration and experimentation in the field, we make available a substantial ConceptNet dataset (TCN) comprising 39,000 annotated concepts.
[ "Mousi, Basel", "Durrani, Nadir", "Dalvi, Fahim" ]
Can LLMs Facilitate Interpretation of Pre-trained Language Models?
emnlp-main.196
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.197.bib
https://aclanthology.org/2023.emnlp-main.197/
@inproceedings{lee-etal-2023-enhancing, title = "Enhancing Low-resource Fine-grained Named Entity Recognition by Leveraging Coarse-grained Datasets", author = "Lee, Su and Oh, Seokjin and Jung, Woohwan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.197", doi = "10.18653/v1/2023.emnlp-main.197", pages = "3269--3279", abstract = "Named Entity Recognition (NER) frequently suffers from the problem of insufficient labeled data, particularly in fine-grained NER scenarios. Although $K$-shot learning techniques can be applied, their performance tends to saturate when the number of annotations exceeds several tens of labels. To overcome this problem, we utilize existing coarse-grained datasets that offer a large number of annotations. A straightforward approach to address this problem is pre-finetuning, which employs coarse-grained data for representation learning. However, it cannot directly utilize the relationships between fine-grained and coarse-grained entities, although a fine-grained entity type is likely to be a subcategory of a coarse-grained entity type. We propose a fine-grained NER model with a Fine-to-Coarse(F2C) mapping matrix to leverage the hierarchical structure explicitly. In addition, we present an inconsistency filtering method to eliminate coarse-grained entities that are inconsistent with fine-grained entity types to avoid performance degradation. Our experimental results show that our method outperforms both $K$-shot learning and supervised learning methods when dealing with a small number of fine-grained annotations.", }
Named Entity Recognition (NER) frequently suffers from the problem of insufficient labeled data, particularly in fine-grained NER scenarios. Although $K$-shot learning techniques can be applied, their performance tends to saturate when the number of annotations exceeds several tens of labels. To overcome this problem, we utilize existing coarse-grained datasets that offer a large number of annotations. A straightforward approach to address this problem is pre-finetuning, which employs coarse-grained data for representation learning. However, it cannot directly utilize the relationships between fine-grained and coarse-grained entities, although a fine-grained entity type is likely to be a subcategory of a coarse-grained entity type. We propose a fine-grained NER model with a Fine-to-Coarse(F2C) mapping matrix to leverage the hierarchical structure explicitly. In addition, we present an inconsistency filtering method to eliminate coarse-grained entities that are inconsistent with fine-grained entity types to avoid performance degradation. Our experimental results show that our method outperforms both $K$-shot learning and supervised learning methods when dealing with a small number of fine-grained annotations.
[ "Lee, Su", "Oh, Seokjin", "Jung, Woohwan" ]
Enhancing Low-resource Fine-grained Named Entity Recognition by Leveraging Coarse-grained Datasets
emnlp-main.197
2310.11715
[ "https://github.com/sue991/cofiner" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.198.bib
https://aclanthology.org/2023.emnlp-main.198/
@inproceedings{wu-etal-2023-oolong, title = "Oolong: Investigating What Makes Transfer Learning Hard with Controlled Studies", author = "Wu, Zhengxuan and Tamkin, Alex and Papadimitriou, Isabel", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.198", doi = "10.18653/v1/2023.emnlp-main.198", pages = "3280--3289", abstract = "When we transfer a pretrained language model to a new language, there are many axes of variation that change at once. To disentangle the impact of different factors like syntactic similarity and vocabulary similarity, we propose a set of \textit{controlled transfer studies}: we systematically transform the language of the GLUE benchmark, altering one axis of crosslingual variation at a time, and then measure the resulting drops in a pretrained model{'}s downstream performance. We find that models can largely recover from syntactic-style shifts, but cannot recover from vocabulary misalignment and embedding matrix re-initialization, even with continued pretraining on 15 million tokens. Moreover, good-quality tokenizers in the transfer language do not make vocabulary alignment easier. Our experiments provide insights into the factors of cross-lingual transfer that researchers should most focus on when designing language transfer scenarios.", }
When we transfer a pretrained language model to a new language, there are many axes of variation that change at once. To disentangle the impact of different factors like syntactic similarity and vocabulary similarity, we propose a set of \textit{controlled transfer studies}: we systematically transform the language of the GLUE benchmark, altering one axis of crosslingual variation at a time, and then measure the resulting drops in a pretrained model{'}s downstream performance. We find that models can largely recover from syntactic-style shifts, but cannot recover from vocabulary misalignment and embedding matrix re-initialization, even with continued pretraining on 15 million tokens. Moreover, good-quality tokenizers in the transfer language do not make vocabulary alignment easier. Our experiments provide insights into the factors of cross-lingual transfer that researchers should most focus on when designing language transfer scenarios.
[ "Wu, Zhengxuan", "Tamkin, Alex", "Papadimitriou, Isabel" ]
Oolong: Investigating What Makes Transfer Learning Hard with Controlled Studies
emnlp-main.198
2202.12312
[ "https://github.com/frankaging/oolong-crosslingual" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.199.bib
https://aclanthology.org/2023.emnlp-main.199/
@inproceedings{bin-etal-2023-non, title = "Non-Autoregressive Math Word Problem Solver with Unified Tree Structure", author = "Bin, Yi and Han, Mengqun and Shi, Wenhao and Wang, Lei and Yang, Yang and Ng, See-Kiong and Shen, Heng", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.199", doi = "10.18653/v1/2023.emnlp-main.199", pages = "3290--3301", abstract = "Existing MWP solvers employ sequence or binary tree to present the solution expression and decode it from given problem description. However, such structures fail to handle the variants that can be derived via mathematical manipulation, e.g., $(a_1+a_2)*a_3$ and $a_1 * a_3+a_2 * a_3$ can both be possible valid solutions for a same problem but formulated as different expression sequences or trees. The multiple solution variants depicting different possible solving procedures for the same input problem would raise two issues: 1) making it hard for the model to learn the mapping function between the input and output spaces effectively, and 2) wrongly indicating \textit{wrong} when evaluating a valid expression variant. To address these issues, we introduce a unified tree structure to present a solution expression, where the elements are permutable and identical for all the expression variants. We propose a novel non-autoregressive solver, named \textit{MWP-NAS}, to parse the problem and deduce the solution expression based on the unified tree. For evaluating the possible expression variants, we design a path-based metric to evaluate the partial accuracy of expressions of a unified tree. The results from extensive experiments conducted on Math23K and MAWPS demonstrate the effectiveness of our proposed MWP-NAS. 
The codes and checkpoints are available at: \url{https://github.com/mengqunhan/MWP-NAS}.", }
Existing MWP solvers employ sequence or binary tree to present the solution expression and decode it from given problem description. However, such structures fail to handle the variants that can be derived via mathematical manipulation, e.g., $(a_1+a_2)*a_3$ and $a_1 * a_3+a_2 * a_3$ can both be possible valid solutions for a same problem but formulated as different expression sequences or trees. The multiple solution variants depicting different possible solving procedures for the same input problem would raise two issues: 1) making it hard for the model to learn the mapping function between the input and output spaces effectively, and 2) wrongly indicating \textit{wrong} when evaluating a valid expression variant. To address these issues, we introduce a unified tree structure to present a solution expression, where the elements are permutable and identical for all the expression variants. We propose a novel non-autoregressive solver, named \textit{MWP-NAS}, to parse the problem and deduce the solution expression based on the unified tree. For evaluating the possible expression variants, we design a path-based metric to evaluate the partial accuracy of expressions of a unified tree. The results from extensive experiments conducted on Math23K and MAWPS demonstrate the effectiveness of our proposed MWP-NAS. The codes and checkpoints are available at: \url{https://github.com/mengqunhan/MWP-NAS}.
[ "Bin, Yi", "Han, Mengqun", "Shi, Wenhao", "Wang, Lei", "Yang, Yang", "Ng, See-Kiong", "Shen, Heng" ]
Non-Autoregressive Math Word Problem Solver with Unified Tree Structure
emnlp-main.199
2305.04556
[ "https://github.com/mengqunhan/mwp-nas" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.200.bib
https://aclanthology.org/2023.emnlp-main.200/
@inproceedings{bai-etal-2023-improving, title = "Improving {C}hinese Pop Song and Hokkien Gezi Opera Singing Voice Synthesis by Enhancing Local Modeling", author = "Bai, Peng and Zhou, Yue and Zheng, Meizhen and Sun, Wujin and Shi, Xiaodong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.200", doi = "10.18653/v1/2023.emnlp-main.200", pages = "3302--3312", abstract = "Singing Voice Synthesis (SVS) strives to synthesize pleasing vocals based on music scores and lyrics. The current acoustic models based on Transformer usually process the entire sequence globally and use a simple L1 loss. However, this approach overlooks the significance of local modeling within the sequence and the local optimization of the hard-to-synthesize parts in the predicted mel-spectrogram. Consequently, the synthesized audio exhibits local incongruities (\textit{e.g.}, local pronunciation jitter or local noise). To address this problem, we propose two methods to enhance local modeling in the acoustic model. First, we devise a nearest neighbor local attention, where each phoneme token focuses only on the adjacent phoneme tokens located before and after it. Second, we propose a phoneme-level local adaptive weights loss function that enables the model to focus more on the hard-to-synthesize parts of the mel-spectrogram. We have verified the universality of our methods on public Chinese pop song and Hokkien Gezi Opera datasets. Extensive experiments have demonstrated the effectiveness of our methods, resulting in significant improvements in both objective and subjective evaluations when compared to the strong baselines. Our code and demonstration samples are available at https://github.com/baipeng1/SVSELM.", }
Singing Voice Synthesis (SVS) strives to synthesize pleasing vocals based on music scores and lyrics. The current acoustic models based on Transformer usually process the entire sequence globally and use a simple L1 loss. However, this approach overlooks the significance of local modeling within the sequence and the local optimization of the hard-to-synthesize parts in the predicted mel-spectrogram. Consequently, the synthesized audio exhibits local incongruities (\textit{e.g.}, local pronunciation jitter or local noise). To address this problem, we propose two methods to enhance local modeling in the acoustic model. First, we devise a nearest neighbor local attention, where each phoneme token focuses only on the adjacent phoneme tokens located before and after it. Second, we propose a phoneme-level local adaptive weights loss function that enables the model to focus more on the hard-to-synthesize parts of the mel-spectrogram. We have verified the universality of our methods on public Chinese pop song and Hokkien Gezi Opera datasets. Extensive experiments have demonstrated the effectiveness of our methods, resulting in significant improvements in both objective and subjective evaluations when compared to the strong baselines. Our code and demonstration samples are available at https://github.com/baipeng1/SVSELM.
[ "Bai, Peng", "Zhou, Yue", "Zheng, Meizhen", "Sun, Wujin", "Shi, Xiaodong" ]
Improving Chinese Pop Song and Hokkien Gezi Opera Singing Voice Synthesis by Enhancing Local Modeling
emnlp-main.200
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster