Dataset columns (dtype and observed range):
bibtex_url: null
proceedings: stringlengths (42 to 42)
bibtext: stringlengths (238 to 833)
abstract: stringlengths (649 to 2.54k)
title: stringlengths (31 to 135)
authors: sequencelengths (1 to 31)
id: stringclasses (1 value)
type: stringclasses (2 values)
arxiv_id: stringlengths (0 to 10)
GitHub: sequencelengths (1 to 1)
paper_page: stringlengths (0 to 40)
n_linked_authors: int64 (-1 to 10)
upvotes: int64 (-1 to 72)
num_comments: int64 (-1 to 5)
n_authors: int64 (-1 to 27)
Models: sequencelengths (0 to 28)
Datasets: sequencelengths (0 to 14)
Spaces: sequencelengths (0 to 9)
paper_page_exists_pre_conf: int64 (0 to 1)
unique_id: int64 (0 to 298)
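The schema above is easiest to work with programmatically. Below is a minimal sketch, assuming the rows that follow were exported as a JSON Lines file with exactly these column names (the file name "colm2024_papers.jsonl" is hypothetical); it loads the dump with pandas and lists the papers that already have a Hugging Face paper page, ranked by upvotes. Missing numeric fields are encoded as -1 in this dump.

```python
# Minimal sketch (not the dataset's official loader): read a hypothetical
# JSON Lines export of the rows below and rank papers by Hugging Face upvotes.
import pandas as pd

# Each line holds one record with the columns listed above, e.g. "title",
# "type" (Poster/Oral), "arxiv_id", "paper_page", "upvotes", "unique_id".
df = pd.read_json("colm2024_papers.jsonl", lines=True)  # hypothetical path

# Missing numeric fields are stored as -1 and missing paper pages as "",
# so filter those out before sorting.
has_page = df[(df["paper_page"] != "") & (df["upvotes"] >= 0)]
ranked = has_page.sort_values("upvotes", ascending=False)

print(ranked[["title", "type", "arxiv_id", "upvotes", "num_comments"]].head(10))
```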
null
https://openreview.net/forum?id=zlw6AHwukB
@inproceedings{ li2024a, title={A Survey on Deep Learning for Theorem Proving}, author={Zhaoyu Li and Jialiang Sun and Logan Murphy and Qidong Su and Zenan Li and Xian Zhang and Kaiyu Yang and Xujie Si}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=zlw6AHwukB} }
Theorem proving is a fundamental aspect of mathematics, spanning from informal reasoning in natural language to rigorous derivations in formal systems. In recent years, the advancement of deep learning, especially the emergence of large language models, has sparked a notable surge of research exploring these techniques to enhance the process of theorem proving. This paper presents a comprehensive survey of deep learning for theorem proving by offering (i) a thorough review of existing approaches across various tasks such as autoformalization, premise selection, proofstep generation, and proof search; (ii) an extensive summary of curated datasets and strategies for synthetic data generation; (iii) a detailed analysis of evaluation metrics and the performance of state-of-the-art methods; and (iv) a critical discussion on the persistent challenges and the promising avenues for future exploration. Our survey aims to serve as a foundational reference for deep learning approaches in theorem proving, inspiring and catalyzing further research endeavors in this rapidly growing field. A curated list of papers is available at https://github.com/zhaoyu-li/DL4TP.
A Survey on Deep Learning for Theorem Proving
[ "Zhaoyu Li", "Jialiang Sun", "Logan Murphy", "Qidong Su", "Zenan Li", "Xian Zhang", "Kaiyu Yang", "Xujie Si" ]
Conference
Poster
2404.09939
[ "https://github.com/zhaoyu-li/dl4tp" ]
-1
-1
-1
-1
[]
[]
[]
0
0
null
https://openreview.net/forum?id=zl16jLb91v
@inproceedings{ durmus2024towards, title={Towards Measuring the Representation of Subjective Global Opinions in Language Models}, author={Esin DURMUS and Karina Nguyen and Thomas Liao and Nicholas Schiefer and Amanda Askell and Anton Bakhtin and Carol Chen and Zac Hatfield-Dodds and Danny Hernandez and Nicholas Joseph and Liane Lovitt and Sam McCandlish and Orowa Sikder and Alex Tamkin and Janel Thamkul and Jared Kaplan and Jack Clark and Deep Ganguli}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=zl16jLb91v} }
Large language models (LLMs) may not equitably represent diverse global perspectives on societal issues. In this paper, we develop a quantitative framework to evaluate whose opinions model-generated responses are more similar to. We first build a dataset, GlobalOpinionQA, comprised of questions and answers from cross-national surveys designed to capture diverse opinions on global issues across different countries. Next, we define a metric that quantifies the similarity between LLM-generated survey responses and human responses, conditioned on country. With our framework, we run three experiments on an LLM trained to be helpful, honest, and harmless with Constitutional AI. By default, LLM responses tend to be more similar to the opinions of certain populations, such as those from the USA, and some European and South American countries, highlighting the potential for biases. When we prompt the model to consider a particular country's perspective, responses shift to be more similar to the opinions of the prompted populations, but can reflect harmful cultural stereotypes. When we translate GlobalOpinionQA questions to a target language, the model's responses do not necessarily become the most similar to the opinions of speakers of those languages. We will release our dataset upon acceptance for others to use and build on.
Towards Measuring the Representation of Subjective Global Opinions in Language Models
[ "Esin DURMUS", "Karina Nguyen", "Thomas Liao", "Nicholas Schiefer", "Amanda Askell", "Anton Bakhtin", "Carol Chen", "Zac Hatfield-Dodds", "Danny Hernandez", "Nicholas Joseph", "Liane Lovitt", "Sam McCandlish", "Orowa Sikder", "Alex Tamkin", "Janel Thamkul", "Jared Kaplan", "Jack Clark", "Deep Ganguli" ]
Conference
Poster
2306.16388
[ "https://github.com/salt-nlp/culturebank" ]
https://huggingface.co/papers/2306.16388
6
6
0
18
[]
[ "Anthropic/llm_global_opinions" ]
[]
1
1
null
https://openreview.net/forum?id=zZa7Ke7WAJ
@inproceedings{ xia2024top, title={Top Leaderboard Ranking = Top Coding Proficiency, Always? EvoEval: Evolving Coding Benchmarks via {LLM}}, author={Chunqiu Steven Xia and Yinlin Deng and LINGMING ZHANG}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=zZa7Ke7WAJ} }
Large language models (LLMs) have become the go-to choice for code generation tasks, with an exponential increase in the training, development, and usage of LLMs specifically for code generation. To evaluate the ability of LLMs on code, both academic and industry practitioners rely on popular handcrafted benchmarks. However, prior benchmarks contain only a very limited set of problems, both in quantity and variety. Further, due to popularity and age, many benchmarks are prone to data leakage where example solutions can be readily found on the web and thus potentially in training data. Such limitations inevitably lead us to inquire: Is the leaderboard performance on existing benchmarks reliable and comprehensive enough to measure the program synthesis ability of LLMs? To address this, we introduce EvoEval – a program synthesis benchmark suite created by evolving existing benchmarks into different targeted domains for a comprehensive evaluation of LLM coding abilities. Our study on 51 LLMs shows that compared to the high performance obtained on standard benchmarks like HumanEval, there is a significant drop in performance (on average 39.4%) when using EvoEval. Additionally, the decrease in performance can range from 19.6% to 47.7%, leading to drastic ranking changes amongst LLMs and showing potential overfitting of existing benchmarks. Furthermore, we showcase various insights including the brittleness of instruction-following models when encountering rewording or subtle changes as well as the importance of learning problem composition and decomposition. EvoEval not only provides comprehensive benchmarks, but can be used to further evolve arbitrary problems to keep up with advances and the ever-changing landscape of LLMs for code. We have open-sourced our benchmarks, tools, and all LLM-generated code at https://github.com/evo-eval/evoeval.
Top Leaderboard Ranking = Top Coding Proficiency, Always? EvoEval: Evolving Coding Benchmarks via LLM
[ "Chunqiu Steven Xia", "Yinlin Deng", "LINGMING ZHANG" ]
Conference
Oral
2403.19114
[ "https://github.com/evo-eval/evoeval" ]
https://huggingface.co/papers/2403.19114
1
0
0
3
[]
[]
[]
1
2
null
https://openreview.net/forum?id=zSf8PJyQb2
@inproceedings{ miller2024transformer, title={Transformer Circuit Evaluation Metrics Are Not Robust}, author={Joseph Miller and Bilal Chughtai and William Saunders}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=zSf8PJyQb2} }
Mechanistic interpretability work attempts to reverse engineer the learned algorithms present inside neural networks. One focus of this work has been to discover 'circuits' - subgraphs of the full model that explain behaviour on specific tasks. But how do we measure the performance of such circuits? Prior work has attempted to measure circuit 'faithfulness' - the degree to which the circuit replicates the performance of the full model. In this work, we survey many considerations for designing experiments that measure circuit faithfulness by ablating portions of the model's computation. Concerningly, we find existing methods are highly sensitive to seemingly insignificant changes in the ablation methodology. We conclude that existing circuit faithfulness scores reflect _both_ the methodological choices of researchers and the actual components of the circuit - the task a circuit is required to perform depends on the ablation used to test it. The ultimate goal of mechanistic interpretability work is to understand neural networks, so we emphasize the need for more clarity in the precise claims being made about circuits. We open source a library at https://github.com/UFO-101/auto-circuit that includes highly efficient implementations of a wide range of ablation methodologies and circuit discovery algorithms.
Transformer Circuit Evaluation Metrics Are Not Robust
[ "Joseph Miller", "Bilal Chughtai", "William Saunders" ]
Conference
Oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
3
null
https://openreview.net/forum?id=z7FvXbyyrM
@inproceedings{ huh2024longform, title={Long-Form Answers to Visual Questions from Blind and Low Vision People}, author={Mina Huh and Fangyuan Xu and Yi-Hao Peng and Chongyan Chen and Hansika Murugu and Danna Gurari and Eunsol Choi and Amy Pavel}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=z7FvXbyyrM} }
Vision language models can now generate long-form answers to questions about images – long-form visual question answers (LFVQA). We contribute VizWiz-LF, a dataset of long-form answers to visual questions posed by blind and low vision (BLV) users. VizWiz-LF contains 4.2k long-form answers to 600 visual questions, collected from human expert describers and six VQA models. We develop and annotate functional roles of sentences of LFVQA and demonstrate that long-form answers contain information beyond the question answer such as explanations and suggestions to retake photos. We further conduct automatic and human evaluations involving BLV and sighted people to evaluate long-form answers. While BLV people perceive both human-written and generated long-form answers as plausible, generated answers often hallucinate incorrect visual details, especially for unanswerable visual questions (e.g., blurry or irrelevant images). To reduce hallucinations, we evaluate VQA models on their ability to abstain from answering unanswerable questions.
Long-Form Answers to Visual Questions from Blind and Low Vision People
[ "Mina Huh", "Fangyuan Xu", "Yi-Hao Peng", "Chongyan Chen", "Hansika Murugu", "Danna Gurari", "Eunsol Choi", "Amy Pavel" ]
Conference
Oral
2408.06303
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
4
null
https://openreview.net/forum?id=yoVRyrEgix
@inproceedings{ sharma2024locating, title={Locating and Editing Factual Associations in Mamba}, author={Arnab Sen Sharma and David Atkinson and David Bau}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=yoVRyrEgix} }
We investigate the mechanisms of factual recall in the Mamba state space model. Our work is inspired by previous findings in autoregressive transformer language models suggesting that their knowledge recall is localized to particular modules at specific token locations; we therefore ask whether factual recall in Mamba can be similarly localized. To investigate this, we conduct four lines of experiments on Mamba. First, we apply causal tracing or interchange interventions to localize key components inside Mamba that are responsible for recalling facts, revealing that specific components within middle layers show strong causal effects at the last token of the subject, while the causal effect of intervening on later layers is most pronounced at the last token of the prompt, matching previous findings on autoregressive transformers. Second, we show that rank-one model editing methods can successfully insert facts at specific locations, again resembling findings on transformer LMs. Finally we adapt attention-knockout techniques to Mamba in order to dissect information flow during factual recall. We compare Mamba directly to a similar-sized autoregressive transformer LM and conclude that despite significant differences in architectures, when it comes to factual recall, the two architectures share many similarities.
Locating and Editing Factual Associations in Mamba
[ "Arnab Sen Sharma", "David Atkinson", "David Bau" ]
Conference
Poster
2404.03646
[ "https://github.com/arnab-api/romba" ]
https://huggingface.co/papers/2404.03646
2
3
0
3
[]
[]
[]
1
5
null
https://openreview.net/forum?id=yfyHxvVzZT
@inproceedings{ kim2024does, title={Does Incomplete Syntax Influence Korean Language Model? Focusing on Word Order and Case Markers}, author={Jong Myoung Kim and Young-Jun Lee and Yong-Jin Han and Ho-Jin Choi and Sangkeun Jung}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=yfyHxvVzZT} }
Syntactic elements, such as word order and case markers, are fundamental in natural language processing. Recent studies show that syntactic information boosts language model performance and offers clues for people to understand their learning mechanisms. Unlike languages with a fixed word order such as English, Korean allows for varied word sequences, despite its canonical structure, due to case markers that indicate the functions of sentence components. This study explores whether Korean language models can accurately capture this flexibility. We note that incomplete word orders and omitted case markers frequently appear in ordinary Korean communication. To investigate this further, we introduce the Syntactically Incomplete Korean (SIKO) dataset. Through SIKO, we assessed Korean language models’ flexibility with incomplete syntax and confirmed the dataset’s training value. Results indicate these models reflect Korean’s inherent flexibility, accurately handling incomplete inputs. Moreover, fine-tuning with SIKO enhances the ability to handle common incomplete Korean syntactic forms. The dataset’s simple construction process, coupled with significant performance enhancements, solidifies its standing as an effective data augmentation technique. The SIKO will become accessible post-publication.
Does Incomplete Syntax Influence Korean Language Model? Focusing on Word Order and Case Markers
[ "Jong Myoung Kim", "Young-Jun Lee", "Yong-Jin Han", "Ho-Jin Choi", "Sangkeun Jung" ]
Conference
Poster
2407.09184
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
6
null
https://openreview.net/forum?id=ybaK4asBT2
@inproceedings{ lu2024llm, title={{LLM} Discussion: Enhancing the Creativity of Large Language Models via Discussion Framework and Role-Play}, author={Li-Chun Lu and Shou-Jen Chen and Tsung-Min Pai and Chan-Hung Yu and Hung-yi Lee and Shao-Hua Sun}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=ybaK4asBT2} }
Large language models (LLMs) have shown exceptional proficiency in natural language processing but often fall short of generating creative and original responses to open-ended questions. To enhance LLM creativity, our key insight is to emulate the human process of inducing collective creativity through engaging discussions with participants from diverse backgrounds and perspectives. To this end, we propose LLM Discussion, a three-phase discussion framework that facilitates vigorous and diverging idea exchanges and ensures convergence to creative answers. Moreover, we adopt a role-playing technique by assigning distinct roles to LLMs to combat the homogeneity of LLMs. We evaluate the efficacy of the proposed framework with the Alternative Uses Test, Similarities Test, Instances Test, and Scientific Creativity Test through both LLM evaluation and human study. The results show that our proposed framework outperforms single-LLM approaches and existing multi-LLM frameworks across various creativity metrics. The code is available at https://github.com/lawraa/LLM-Discussion.
LLM Discussion: Enhancing the Creativity of Large Language Models via Discussion Framework and Role-Play
[ "Li-Chun Lu", "Shou-Jen Chen", "Tsung-Min Pai", "Chan-Hung Yu", "Hung-yi Lee", "Shao-Hua Sun" ]
Conference
Poster
2405.06373
[ "https://github.com/lawraa/llm-discussion" ]
https://huggingface.co/papers/2405.06373
0
0
0
6
[]
[]
[]
1
7
null
https://openreview.net/forum?id=yK8MT91dQY
@inproceedings{ zhan2024large, title={Large Language Models are Capable of Offering Cognitive Reappraisal, if Guided}, author={Hongli Zhan and Allen Zheng and Yoon Kyung Lee and Jina Suh and Junyi Jessy Li and Desmond Ong}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=yK8MT91dQY} }
Large language models (LLMs) have offered new opportunities for emotional support, and recent work has shown that they can produce empathic responses to people in distress. However, long-term mental well-being requires emotional self-regulation, where a one-time empathic response falls short. This work takes a first step by engaging with cognitive reappraisals, a strategy from psychology practitioners that uses language to targetedly change negative appraisals that an individual makes of the situation; such appraisals are known to sit at the root of human emotional experience. We hypothesize that psychologically grounded principles could enable such advanced psychology capabilities in LLMs, and design RESORT, which consists of a series of reappraisal constitutions across multiple dimensions that can be used as LLM instructions. We conduct a first-of-its-kind expert evaluation (by clinical psychologists with M.S. or Ph.D. degrees) of an LLM’s zero-shot ability to generate cognitive reappraisal responses to medium-length social media messages asking for support. This fine-grained evaluation showed that even LLMs at the 7B scale guided by RESORT are capable of generating empathic responses that can help users reappraise their situations.
Large Language Models are Capable of Offering Cognitive Reappraisal, if Guided
[ "Hongli Zhan", "Allen Zheng", "Yoon Kyung Lee", "Jina Suh", "Junyi Jessy Li", "Desmond Ong" ]
Conference
Poster
2404.01288
[ "https://github.com/honglizhan/resort_cognitive_reappraisal" ]
-1
-1
-1
-1
[]
[]
[]
0
8
null
https://openreview.net/forum?id=yK2eGE8QVW
@inproceedings{ shen2024nemoaligner, title={NeMo-Aligner: Scalable Toolkit for Efficient Model Alignment}, author={Gerald Shen and Zhilin Wang and Olivier Delalleau and Jiaqi Zeng and Yi Dong and Daniel Egert and Shengyang Sun and Jimmy J. Zhang and Sahil Jain and Ali Taghibakhshi and Markel Sanz Ausin and Ashwath Aithal and Oleksii Kuchaiev}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=yK2eGE8QVW} }
Aligning Large Language Models (LLMs) with human values and preferences is essential for making them helpful and safe. However, building efficient tools to perform alignment can be challenging, especially for the largest and most competent LLMs which often contain tens or hundreds of billions of parameters. We create NeMo-Aligner, a toolkit for model alignment that can efficiently scale to a thousand GPUs for training the largest open-source LLMs such as Nemotron 4 340B and Llama 3.1 405B. NeMo-Aligner comes with highly optimized and scalable implementations for major paradigms of model alignment such as: Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), SteerLM, and Self-Play Fine-Tuning (SPIN). Additionally, our toolkit supports running most of the alignment techniques in a Parameter Efficient Fine-Tuning (PEFT) setting. NeMo-Aligner is designed for extensibility, allowing support for other alignment techniques with minimal effort. It is open-sourced with Apache 2.0 License and we invite community contributions at https://github.com/NVIDIA/NeMo-Aligner.
NeMo-Aligner: Scalable Toolkit for Efficient Model Alignment
[ "Gerald Shen", "Zhilin Wang", "Olivier Delalleau", "Jiaqi Zeng", "Yi Dong", "Daniel Egert", "Shengyang Sun", "Jimmy J. Zhang", "Sahil Jain", "Ali Taghibakhshi", "Markel Sanz Ausin", "Ashwath Aithal", "Oleksii Kuchaiev" ]
Conference
Poster
2405.01481
[ "https://github.com/nvidia/nemo-aligner" ]
https://huggingface.co/papers/2405.01481
10
25
1
13
[]
[]
[]
1
9
null
https://openreview.net/forum?id=yIEyHP7AvH
@inproceedings{ ghahroodi2024khayyam, title={Khayyam Challenge (Persian{MMLU}): Is Your {LLM} Truly Wise to The Persian Language?}, author={Omid Ghahroodi and Marzia Nouri and Mohammad Vali Sanian and Alireza Sahebi and Doratossadat Dastgheib and Ehsaneddin Asgari and Mahdieh Soleymani Baghshah and Mohammad Hossein Rohban}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=yIEyHP7AvH} }
Evaluating Large Language Models (LLMs) is challenging due to their generative nature, necessitating precise evaluation methodologies. Additionally, non-English LLM evaluation lags behind English, resulting in the absence or weakness of LLMs for many languages. In response to this necessity, we introduce Khayyam Challenge (also known as PersianMMLU), a meticulously curated collection comprising 20,805 four-choice questions sourced from 38 diverse tasks extracted from Persian examinations, spanning a wide spectrum of subjects, complexities, and ages. The primary objective of the Khayyam Challenge is to facilitate the rigorous evaluation of LLMs that support the Persian language. Distinctive features of the Khayyam Challenge are (i) its comprehensive coverage of various topics, including literary comprehension, mathematics, sciences, logic, intelligence testing, etc., aimed at assessing different facets of LLMs such as language comprehension, reasoning, and information retrieval across various educational stages, from lower primary school to upper secondary school; (ii) its inclusion of rich metadata such as human response rates, difficulty levels, and descriptive answers; (iii) its utilization of new data to avoid data contamination issues prevalent in existing frameworks; (iv) its use of original, non-translated data tailored for Persian speakers, ensuring the framework is free from translation challenges and errors while encompassing cultural nuances; and (v) its inherent scalability for future data updates and evaluations without requiring special human effort. Previous works lacked an evaluation framework that combined all of these features into a single comprehensive benchmark. Furthermore, we evaluate a wide range of existing LLMs that support the Persian language, with statistical analyses and interpretations of their outputs. We believe that the Khayyam Challenge will drive advancements in LLMs for the Persian language by highlighting the existing limitations of current models, while also enhancing the precision and depth of evaluations on LLMs, even within the English language context.
Khayyam Challenge (PersianMMLU): Is Your LLM Truly Wise to The Persian Language?
[ "Omid Ghahroodi", "Marzia Nouri", "Mohammad Vali Sanian", "Alireza Sahebi", "Doratossadat Dastgheib", "Ehsaneddin Asgari", "Mahdieh Soleymani Baghshah", "Mohammad Hossein Rohban" ]
Conference
Poster
2404.06644
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
10
null
https://openreview.net/forum?id=y7JnjDcIQa
@inproceedings{ anagnostidis2024how, title={How Susceptible are {LLM}s to Influence in Prompts?}, author={Sotiris Anagnostidis and Jannis Bulian}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=y7JnjDcIQa} }
Large Language Models (LLMs) are highly sensitive to prompts, including additional context provided therein. As LLMs grow in capability, understanding their prompt-sensitivity becomes increasingly crucial for ensuring reliable and robust performance, particularly since evaluating these models becomes more challenging. In this work, we investigate how current models (Llama, Mixtral, Falcon) respond when presented with additional input from another model, mimicking a scenario where a more capable model -- or a system with access to more external information -- provides supplementary information to the target model. Across a diverse spectrum of question-answering tasks, we study how an LLM's response to multiple-choice questions changes when the prompt includes a prediction and explanation from another model. Specifically, we explore the influence of the presence of an explanation, the stated authoritativeness of the source, and the stated confidence of the supplementary input. Our findings reveal that models are strongly influenced, and when explanations are provided they are swayed irrespective of the quality of the explanation. The models are more likely to be swayed if the input is presented as being authoritative or confident, but the effect is small in size. This study underscores the significant prompt-sensitivity of LLMs and highlights the potential risks of incorporating outputs from external sources without thorough scrutiny and further validation. As LLMs continue to advance, understanding and mitigating such sensitivities will be crucial for their reliable and trustworthy deployment.
How Susceptible are LLMs to Influence in Prompts?
[ "Sotiris Anagnostidis", "Jannis Bulian" ]
Conference
Poster
2408.11865
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
11
null
https://openreview.net/forum?id=y6aGT625Lk
@inproceedings{ park2024paireval, title={PairEval: Open-domain Dialogue Evaluation Metric with Pairwise Comparisons}, author={ChaeHun Park and Minseok Choi and Dohyun Lee and Jaegul Choo}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=y6aGT625Lk} }
Building a reliable and automated evaluation metric is a necessary but challenging problem for open-domain dialogue systems. Recent studies proposed evaluation metrics that assess generated responses by considering their relevance to previous dialogue histories. Although effective, these metrics evaluate individual responses directly rather than considering their relative quality compared to other responses. To handle this, we propose PairEval, a novel dialogue evaluation metric for assessing responses by comparing their quality against responses in different conversations. Our metric is built on top of open-sourced and moderate-size language models, and we specialize them in pairwise comparison between dialogue responses. Extensive experiments on multiple benchmarks demonstrate that our metric exhibits a higher correlation with human judgments than baseline metrics. We also find that the proposed comparative metric is more robust in detecting common failures from open-domain dialogue systems, including repetition and speaker insensitivity. The code and models will be made publicly available after the paper is accepted.
PairEval: Open-domain Dialogue Evaluation Metric with Pairwise Comparisons
[ "ChaeHun Park", "Minseok Choi", "Dohyun Lee", "Jaegul Choo" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
12
null
https://openreview.net/forum?id=y6SqbJfCSk
@inproceedings{ qin2024hgrn, title={{HGRN}2: Gated Linear {RNN}s with State Expansion}, author={Zhen Qin and Songlin Yang and Weixuan Sun and Xuyang Shen and Dong Li and Weigao Sun and Yiran Zhong}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=y6SqbJfCSk} }
Hierarchically gated linear RNN (HGRN) has demonstrated competitive training speed and performance in language modeling while offering efficient inference. However, the recurrent state size of HGRN remains relatively small, limiting its expressiveness. To address this issue, we introduce a simple outer product-based state expansion mechanism, which significantly enlarges the recurrent state size without introducing any additional parameters. This enhancement also provides a linear attention interpretation for HGRN2, enabling hardware-efficient training. Our extensive experiments verify the advantage of HGRN2 over HGRN consistently across different settings, and show that it is competitive with other recurrent models.
HGRN2: Gated Linear RNNs with State Expansion
[ "Zhen Qin", "Songlin Yang", "Weixuan Sun", "Xuyang Shen", "Dong Li", "Weigao Sun", "Yiran Zhong" ]
Conference
Poster
2404.07904
[ "https://github.com/sustcsonglin/flash-linear-attention" ]
-1
-1
-1
-1
[]
[]
[]
0
13
null
https://openreview.net/forum?id=xm8zYRfrqE
@inproceedings{ kortukov2024studying, title={Studying Large Language Model Behaviors Under Context-Memory Conflicts With Real Documents}, author={Evgenii Kortukov and Alexander Rubinstein and Elisa Nguyen and Seong Joon Oh}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=xm8zYRfrqE} }
Retrieval-augmented generation (RAG) mitigates many problems of fully parametric language models, such as temporal degradation, hallucinations, and lack of grounding. In RAG, the model’s knowledge can be updated from documents provided in context. This leads to cases of conflict between the model’s parametric knowledge and the contextual information, where the model may not always update its knowledge. Previous work studied context-memory knowledge conflicts by creating synthetic documents that contradict the model’s correct parametric answers. We present a framework for studying such knowledge conflicts in a realistic setup. We update incorrect parametric knowledge using real conflicting documents. This reflects how knowledge conflicts arise in practice. In this realistic scenario, we find that knowledge updates fail less often than previously reported. In cases where the models still fail to update their answers, we find a parametric bias: the incorrect parametric answer appearing in context makes the knowledge update likelier to fail. These results suggest that the factual parametric knowledge of LLMs can negatively influence their reading abilities and behaviors. Our code is available at https://github.com/kortukov/realistic_knowledge_conflicts/.
Studying Large Language Model Behaviors Under Context-Memory Conflicts With Real Documents
[ "Evgenii Kortukov", "Alexander Rubinstein", "Elisa Nguyen", "Seong Joon Oh" ]
Conference
Poster
2404.16032
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
14
null
https://openreview.net/forum?id=xdg4CS5mkl
@inproceedings{ zhu2024investigating, title={Investigating Instruction Tuning Large Language Models on Graphs}, author={Kerui Zhu and Bo-Wei Huang and Bowen Jin and Yizhu Jiao and Ming Zhong and Kevin Chang and Shou-De Lin and Jiawei Han}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=xdg4CS5mkl} }
Inspired by the recent advancements of Large Language Models (LLMs) in NLP tasks, there's growing interest in applying LLMs to graph-related tasks. This study delves into the capabilities of instruction-following LLMs for engaging with real-world graphs, aiming to offer empirical insights into how LLMs can effectively interact with graphs and generalize across graph tasks. We begin by constructing a dataset designed for instruction tuning, which comprises a diverse collection of 79 graph-related tasks from academic and e-commerce domains, featuring 44,240 training instances and 18,960 test samples. Utilizing this benchmark, our initial investigation focuses on identifying the optimal graph representation that serves as a conduit for LLMs to understand complex graph structures. Our findings indicate that JSON format for graph representation consistently outperforms natural language and code formats across various LLMs and graph types. Furthermore, we examine the key factors that influence the generalization abilities of instruction-tuned LLMs by evaluating their performance on both in-domain and out-of-domain graph tasks.
Investigating Instruction Tuning Large Language Models on Graphs
[ "Kerui Zhu", "Bo-Wei Huang", "Bowen Jin", "Yizhu Jiao", "Ming Zhong", "Kevin Chang", "Shou-De Lin", "Jiawei Han" ]
Conference
Poster
2408.05457
[ "https://github.com/zhukerui/graph-instruction-tuning" ]
-1
-1
-1
-1
[]
[]
[]
0
15
null
https://openreview.net/forum?id=xWYRL1eR74
@inproceedings{ williams2024fuseing, title={{FUSE}-ing Language Models: Zero-Shot Adapter Discovery for Prompt Optimization Across Tokenizers}, author={Joshua Nathaniel Williams and J Zico Kolter}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=xWYRL1eR74} }
The widespread use of large language models has resulted in a multitude of tokenizers and embedding spaces, making knowledge transfer in prompt discovery tasks difficult. In this work, we propose FUSE (Flexible Unification of Semantic Embeddings), an inexpensive approach to approximating an adapter layer that maps from one model's textual embedding space to another, even across different tokenizers. We introduce a third-order tensor-based representation of a model's embedding space that aligns semantic embeddings that have been split apart by different tokenizers, and use this representation to derive an approximation of the gradient of one model's outputs with respect to another model's embedding space. We show the efficacy of our approach via multi-objective optimization over vision-language and causal language models for image captioning and sentiment-based image captioning.
FUSE-ing Language Models: Zero-Shot Adapter Discovery for Prompt Optimization Across Tokenizers
[ "Joshua Nathaniel Williams", "J Zico Kolter" ]
Conference
Poster
2408.04816
[ "https://github.com/jnwilliams/fuse_prompt_inversion" ]
-1
-1
-1
-1
[]
[]
[]
0
16
null
https://openreview.net/forum?id=xS6zx1aBI9
@inproceedings{ majumder2024clin, title={{CLIN}: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization}, author={Bodhisattwa Prasad Majumder and Bhavana Dalvi Mishra and Peter Jansen and Oyvind Tafjord and Niket Tandon and Li Zhang and Chris Callison-Burch and Peter Clark}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=xS6zx1aBI9} }
Language agents have shown some ability to interact with an external environment, e.g., a virtual world such as ScienceWorld, to perform complex tasks, e.g., growing a plant, without the startup costs of reinforcement learning. While recent work, e.g., Reflexion, has demonstrated how such agents can also self-improve by adding a textual memory of ''hints'' learned from prior experience, such improvements have been limited both in size and scope. In contrast, our goal is a language agent that can robustly improve performance over time, including when both the task and environment are varied. Our approach is to have the agent learn a textual representation of how the world works (rather than just isolated hints), expressed as a memory of causal abstractions, to guide future decision-making. In experiments, we find that the resulting agent, CLIN, is able to continually improve on repeated trials on the same task and environment, outperforming state-of-the-art reflective language agents like Reflexion by 23 points in ScienceWorld and 1.4 points in ALFWorld benchmarks. CLIN can also transfer its learning to new environments and tasks, enhancing performance by 21 points in ScienceWorld and 11 points in ALFWorld. This suggests that language agents with a textual causal memory can play a significant role in interactive environments, including being able to rapidly improve over time.
CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization
[ "Bodhisattwa Prasad Majumder", "Bhavana Dalvi Mishra", "Peter Jansen", "Oyvind Tafjord", "Niket Tandon", "Li Zhang", "Chris Callison-Burch", "Peter Clark" ]
Conference
Poster
2310.10134
[ "" ]
https://huggingface.co/papers/2310.10134
1
1
0
8
[]
[]
[]
1
17
null
https://openreview.net/forum?id=xMt9kCv5YR
@inproceedings{ du2024helmsman, title={Helmsman of the Masses? Evaluate the Opinion Leadership of Large Language Models in the Werewolf Game}, author={Silin Du and Xiaowei Zhang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=xMt9kCv5YR} }
Large language models (LLMs) have exhibited memorable strategic behaviors in social deductive games. However, the significance of opinion leadership exhibited by LLM-based agents has been largely overlooked, which is crucial for practical applications in multi-agent and human-AI interaction settings. Opinion leaders are individuals who have a noticeable impact on the beliefs and behaviors of others within a social group. In this work, we employ the Werewolf game as a simulation platform to assess the opinion leadership of LLMs. The game includes the role of the Sheriff, tasked with summarizing arguments and recommending decision options, and therefore serves as a credible proxy for an opinion leader. We develop a framework integrating the Sheriff role and devise two novel metrics based on the critical characteristics of opinion leaders. The first metric measures the reliability of the opinion leader, and the second assesses the influence of the opinion leader on other players' decisions. We conduct extensive experiments to evaluate LLMs of different scales. In addition, we collect a Werewolf question-answering dataset (WWQA) to assess and enhance LLM's grasp of the game rules, and we also incorporate human participants for further analysis. The results suggest that the Werewolf game is a suitable test bed to evaluate the opinion leadership of LLMs, and few LLMs possess the capacity for opinion leadership.
Helmsman of the Masses? Evaluate the Opinion Leadership of Large Language Models in the Werewolf Game
[ "Silin Du", "Xiaowei Zhang" ]
Conference
Poster
2404.01602
[ "https://github.com/doslim/evaluate-the-opinion-leadership-of-llms" ]
-1
-1
-1
-1
[]
[]
[]
0
18
null
https://openreview.net/forum?id=xI8C7sfN1H
@inproceedings{ jeong2024factual, title={Factual and Tailored Recommendation Endorsements using Language Models and Reinforcement Learning}, author={Jihwan Jeong and Yinlam Chow and Guy Tennenholtz and ChihWei Hsu and Mohammad Ghavamzadeh and Craig Boutilier}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=xI8C7sfN1H} }
Recommender systems (RSs) play a central role in matching candidate items to users based on their preferences. While traditional RSs rely on user feedback signals, conversational RSs interact with users in natural language. In this work, we develop P4LM, an _aPpealing, Precise, Preference-comprehensive and Prioritized_ language model which endorses recommended items by emphasizing specific item characteristics and their coverage of a user’s preferences. P4LM uses an _embedding_ representation of a user’s preferences to generate responses that are appealing, factually-grounded and tailored to the user’s preferences. P4LM employs a joint reward function to measure precision, appeal, preference coverage and prioritization of preferences, which are used as AI-based feedback in a reinforcement learning-based language model framework. On the MovieLens 25M and Amazon Product Review datasets, P4LM delivers more appealing and tailored endorsements to users, as determined by auto-critic and rater evaluations.
Factual and Tailored Recommendation Endorsements using Language Models and Reinforcement Learning
[ "Jihwan Jeong", "Yinlam Chow", "Guy Tennenholtz", "ChihWei Hsu", "Mohammad Ghavamzadeh", "Craig Boutilier" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
19
null
https://openreview.net/forum?id=wps3p2cqrA
@inproceedings{ li2024how, title={How Well Do {LLM}s Identify Cultural Unity in Diversity?}, author={Jialin Li and Junli Wang and Junjie Hu and Ming Jiang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=wps3p2cqrA} }
Much work on the cultural awareness of large language models (LLMs) focuses on the models' sensitivity to geo-cultural diversity. However, in addition to cross-cultural differences, there also exists common ground across cultures. For instance, a bridal veil in the United States plays a similar cultural-relevant role as a honggaitou in China. In this study, we introduce a benchmark dataset CUNIT for evaluating decoder-only LLMs in understanding the cultural unity of concepts. Specifically, CUNIT consists of 1,425 evaluation examples building upon 285 traditional cultural-specific concepts across 10 countries. Based on a systematic manual annotation of cultural-relevant features per concept, we calculate the cultural association between any pair of cross-cultural concepts. Built upon this dataset, we design a contrastive matching task to evaluate the LLMs' capability to identify highly associated cross-cultural concept pairs. We evaluate 3 strong LLMs, using 3 popular prompting strategies, under the settings of either giving all extracted concept features or no features at all on CUNIT. Interestingly, we find that cultural associations across countries regarding clothing concepts largely differ from food. Our analysis shows that LLMs are still limited in capturing cross-cultural associations between concepts compared to humans. Moreover, geo-cultural proximity shows a weak influence on model performance in capturing cross-cultural associations.
How Well Do LLMs Identify Cultural Unity in Diversity?
[ "Jialin Li", "Junli Wang", "Junjie Hu", "Ming Jiang" ]
Conference
Poster
2408.05102
[ "https://github.com/ljl0222/CUNIT" ]
-1
-1
-1
-1
[]
[]
[]
0
20
null
https://openreview.net/forum?id=wi9IffRhVM
@inproceedings{ wang2024guiding, title={Guiding Language Model Reasoning with Planning Tokens}, author={Xinyi Wang and Lucas Caccia and Oleksiy Ostapenko and Xingdi Yuan and William Yang Wang and Alessandro Sordoni}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=wi9IffRhVM} }
Large language models (LLMs) have recently attracted considerable interest for their ability to perform complex reasoning tasks, such as chain-of-thought (CoT) reasoning. However, most of the existing approaches to enhance this ability rely heavily on data-driven methods, while neglecting the structural aspects of the model's reasoning capacity. To encourage a more structural generation of CoT steps, we propose a hierarchical generation scheme: we let the LM generate a planning token at the start of each reasoning step, intuitively serving as a high-level plan of the current step, and add their embeddings to the model parameters. Our approach requires a negligible increase in trainable parameters (0.001%) and can be applied through either full fine-tuning or a more parameter-efficient scheme. We demonstrate our method's effectiveness by applying it to three different LLMs, showing notable accuracy improvements across three math word problem datasets and one multihop QA dataset with respect to standard fine-tuning baselines.
Guiding Language Model Reasoning with Planning Tokens
[ "Xinyi Wang", "Lucas Caccia", "Oleksiy Ostapenko", "Xingdi Yuan", "William Yang Wang", "Alessandro Sordoni" ]
Conference
Poster
2310.05707
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
21
null
https://openreview.net/forum?id=wS7PxDjy6m
@inproceedings{ cheng2024dated, title={Dated Data: Tracing Knowledge Cutoffs in Large Language Models}, author={Jeffrey Cheng and Marc Marone and Orion Weller and Dawn Lawrie and Daniel Khashabi and Benjamin Van Durme}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=wS7PxDjy6m} }
Large Language Models (LLMs) are often paired with a reported cutoff date, the time at which training data was gathered. Such information is crucial for applications where the LLM must provide up-to-date information. However, a reported cutoff only scratches the surface. Do all sub-resources in the training data share the same cutoff? Does the model's demonstrated knowledge for these sub-resources closely align to their cutoff? We define the notion of an effective cutoff, which is distinct from the LLM's reported cutoff and differs between sub-resources. We propose a simple approach to estimate effective cutoffs of an LLM on the resource-level by probing across versions of the data. Crucially, our method does not require access to a model's pre-training data. Through our analysis, we find that effective cutoffs often drastically differ from reported cutoffs. To understand the root cause of this observation, we conduct a large-scale analysis on open pre-training datasets. Our analysis reveals two reasons for these inconsistencies: (1) temporal misalignments of CommonCrawl data due to non-trivial amounts of old data in new dumps; and (2) complications in LLM deduplication schemes involving semantic duplicates and lexical near-duplicates. Overall, our results show that cutoffs are not as simple as they have seemed and that care must be taken both by LLM dataset curators as well as practitioners who seek to use these models.
Dated Data: Tracing Knowledge Cutoffs in Large Language Models
[ "Jeffrey Cheng", "Marc Marone", "Orion Weller", "Dawn Lawrie", "Daniel Khashabi", "Benjamin Van Durme" ]
Conference
Oral
2403.12958
[ "https://github.com/nexync/dated_data" ]
-1
-1
-1
-1
[]
[]
[]
0
22
null
https://openreview.net/forum?id=wLQ3I0F1oj
@inproceedings{ zhao2024large, title={Large Language Model is not a (Multilingual) Compositional Relation Reasoner}, author={Jinman Zhao and Xueyan Zhang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=wLQ3I0F1oj} }
We present a comprehensive evaluation of large language models' capability to reason compositional relations through a benchmark encompassing 1,800 test cases in both English and Chinese, covering six distinct categories of composition relations: Positional, Comparative, Personal, Mathematical, Identity, and Other. We expand our assessment to the multilingual realm by including translations of the benchmark suite into Japanese, French, and Korean. Our Multilingual Composition Relation (MCR) benchmark aims at investigating the robustness and adaptability of LLMs in handling compositional relation reasoning across diverse linguistic contexts.
Large Language Model is not a (Multilingual) Compositional Relation Reasoner
[ "Jinman Zhao", "Xueyan Zhang" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
23
null
https://openreview.net/forum?id=wF6k0aWjAu
@inproceedings{ cao2024instruction, title={Instruction Mining: Instruction Data Selection for Tuning Large Language Models}, author={Yihan Cao and Yanbin Kang and Chi Wang and Lichao Sun}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=wF6k0aWjAu} }
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that a double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset of the entire dataset. Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and OpenLLM benchmark.
Instruction Mining: Instruction Data Selection for Tuning Large Language Models
[ "Yihan Cao", "Yanbin Kang", "Chi Wang", "Lichao Sun" ]
Conference
Poster
2307.06290
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
24
null
https://openreview.net/forum?id=vwIIAot0ff
@inproceedings{ blakeney2024does, title={Does your data spark joy? Performance gains from domain upsampling at the end of training}, author={Cody Blakeney and Mansheej Paul and Brett W. Larsen and Sean Owen and Jonathan Frankle}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=vwIIAot0ff} }
Pretraining datasets for large language models (LLMs) have grown to trillions of tokens composed of large amounts of CommonCrawl (CC) web scrape along with smaller, domain-specific datasets. It is expensive to understand the impact of these domain-specific datasets on model capabilities as training at large FLOP scales is required to reveal significant changes to difficult and emergent benchmarks. Given the increasing cost of experimenting with pretraining data, how does one determine the optimal balance between the diversity in general web scrapes and the information density of domain-specific data? In this work, we show how to leverage the smaller domain-specific datasets by upsampling them relative to CC at the end of training to drive performance improvements on difficult benchmarks. This simple technique allows us to improve up to 6.90 pp on MMLU, 8.26 pp on GSM8K, and 6.17 pp on HumanEval relative to the base data mix for a 7B model trained for 1 trillion (T) tokens, thus rivaling Llama-2 (7B), a model trained for twice as long. We experiment with ablating the duration of domain upsampling from 5% to 30% of training and find that 10% to 20% is optimal for navigating the tradeoff between general language modeling capabilities and targeted benchmarks. We also use domain upsampling to characterize at scale the utility of individual datasets for improving various benchmarks by removing them during this final phase of training. This tool opens up the ability to experiment with the impact of different pretraining datasets at scale, but at an order of magnitude lower cost compared to full pretraining runs.
Does your data spark joy? Performance gains from domain upsampling at the end of training
[ "Cody Blakeney", "Mansheej Paul", "Brett W. Larsen", "Sean Owen", "Jonathan Frankle" ]
Conference
Poster
2406.03476
[ "" ]
https://huggingface.co/papers/2406.03476
1
0
0
5
[]
[]
[]
1
25
null
https://openreview.net/forum?id=vL8BIGuFTF
@inproceedings{ snell2024predicting, title={Predicting Emergent Capabilities by Finetuning}, author={Charlie Victor Snell and Eric Wallace and Dan Klein and Sergey Levine}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=vL8BIGuFTF} }
A fundamental open challenge in modern LLM scaling is the lack of understanding around emergent capabilities. In particular, language model pretraining loss is known to be highly predictable as a function of compute. However, downstream capabilities are far less predictable---sometimes even exhibiting emergent jumps---which makes it challenging to anticipate the capabilities of future models. In this work, we first pose the task of emergence prediction: given access to current LLMs that have random few-shot accuracy on a task, can we predict whether future models (GPT-N+1) will have non-trivial accuracy on that task? We then discover a simple insight for this problem: directly finetuning LLMs on a given task can shift the point in scaling at which emergence occurs towards less capable models. To operationalize this insight, we can finetune LLMs with varying amounts of data and fit a parametric function that predicts when emergence will occur (i.e., ``emergence laws''). To validate this approach, we use four standard NLP benchmarks where large-scale open-source LLMs already demonstrate emergence (MMLU, GSM8K, CommonsenseQA, and CoLA). Using only small-scale LLMs, we find that, in some cases, we are able to accurately predict whether models trained with up to 4x more compute have emerged.
Predicting Emergent Capabilities by Finetuning
[ "Charlie Victor Snell", "Eric Wallace", "Dan Klein", "Sergey Levine" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
26
null
https://openreview.net/forum?id=v74mJURD1L
@inproceedings{ baumg{\"a}rtner2024bestofvenom, title={Best-of-Venom: Attacking {RLHF} by Injecting Poisoned Preference Data}, author={Tim Baumg{\"a}rtner and Yang Gao and Dana Alon and Donald Metzler}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=v74mJURD1L} }
Reinforcement Learning from Human Feedback (RLHF) is a popular method for aligning Language Models (LM) with human values and preferences. RLHF requires a large number of preference pairs as training data, which are often used in both the Supervised Fine-Tuning and Reward Model training and therefore publicly available datasets are commonly used. In this work, we study to what extent a malicious actor can manipulate the LMs generations by poisoning the preferences, i.e., injecting poisonous preference pairs into these datasets and the RLHF training process. We propose strategies to build poisonous preference pairs and test their performance by poisoning two widely used preference datasets. Our results show that preference poisoning is highly effective: injecting a small amount of poisonous data (1-5\% of the original dataset), we can effectively manipulate the LM to generate a target entity in a target sentiment (positive or negative). The findings from our experiments also shed light on strategies to defend against the preference poisoning attack.
Best-of-Venom: Attacking RLHF by Injecting Poisoned Preference Data
[ "Tim Baumgärtner", "Yang Gao", "Dana Alon", "Donald Metzler" ]
Conference
Poster
2404.05530
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
27
null
https://openreview.net/forum?id=v3w2a7EInO
@inproceedings{ lee2024cats, title={{CATS}: Context-Aware Thresholding for Sparsity in Large Language Models}, author={Donghyun Lee and Jaeyong Lee and Genghan Zhang and Mo Tiwari and Azalia Mirhoseini}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=v3w2a7EInO} }
The dramatic improvements in Large Language Models (LLMs) come at the cost of increased computational resources for inference. Recent studies ameliorate the computational costs of LLMs by increasing their activation sparsity but suffer from significant performance degradation on downstream tasks. In this work, we introduce a new framework for sparsifying the activations of LLMs and reducing inference costs, dubbed $\underline{C}$ontextually $\underline{A}$ware $\underline{T}$hresholding for $\underline{S}$parsity (CATS). CATS is a relatively simple algorithm that is easy to implement and highly effective. At the heart of our framework is a new non-linear activation function. We demonstrate that CATS can be applied to various models, including Mistral-7B and Llama2-7B \& 13B, and outperforms existing sparsification techniques across multiple tasks. More precisely, CATS-based models achieve downstream task performance within $\sim$ 99\% of their base models at activation sparsity levels of 50\%, even without any fine-tuning. Moreover, with fine-tuning that targets only 1\% of the parameters, CATS-based models not only converge faster but also achieve better task performance than competing techniques. Finally, we develop a custom GPU kernel for the efficient implementation of CATS that translates the activation sparsity of CATS to real wall-clock time speedups. Our custom kernel implementation of CATS results in a $\sim$15\% improvement in wall-clock inference latency of token generation. We release our code, experiments, and datasets at https://github.com/ScalingIntelligence/CATS.
CATS: Context-Aware Thresholding for Sparsity in Large Language Models
[ "Donghyun Lee", "Jaeyong Lee", "Genghan Zhang", "Mo Tiwari", "Azalia Mirhoseini" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
28
null
https://openreview.net/forum?id=uUIFTjBREk
@inproceedings{ zuo2024efficient, title={Efficient Hybrid Long Sequence Modeling with State Space Augmented Transformers}, author={Simiao Zuo and Xiaodong Liu and Jian Jiao and Denis X Charles and Eren Manavoglu and Tuo Zhao and Jianfeng Gao}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=uUIFTjBREk} }
Transformer models have achieved superior performance in various natural language processing tasks. However, the quadratic computational cost of the attention mechanism limits its practicality for long sequences. There are existing attention variants that improve the computational efficiency, but they have limited ability to effectively compute global information. In parallel to Transformer models, state space models (SSMs) are tailored for long sequences, but they are not flexible enough to capture complicated local information. We propose SPADE, short for State Space Augmented Transformer. Specifically, we augment the bottom layer of SPADE with an SSM, and we employ efficient local attention methods for the other layers. The SSM provides global information, which compensates for the limited ability of local attention methods to capture long-range dependencies. Experimental results on the Long Range Arena benchmark and language modeling tasks demonstrate the effectiveness of the proposed method. To further demonstrate the scalability of SPADE, we pre-train large encoder-decoder models and present fine-tuning results on natural language understanding and natural language generation tasks.
Efficient Hybrid Long Sequence Modeling with State Space Augmented Transformers
[ "Simiao Zuo", "Xiaodong Liu", "Jian Jiao", "Denis X Charles", "Eren Manavoglu", "Tuo Zhao", "Jianfeng Gao" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
29
null
https://openreview.net/forum?id=uILyEJGKWw
@inproceedings{ lu2024does, title={Does Collaborative Human{\textendash}{LM} Dialogue Generation Help Information Extraction from Human{\textendash}Human Dialogues?}, author={Bo-Ru Lu and Nikita Haduong and Chia-Hsuan Lee and Zeqiu Wu and Hao Cheng and Paul Koester and Jean Utke and Tao Yu and Noah A. Smith and Mari Ostendorf}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=uILyEJGKWw} }
The capabilities of pretrained language models (LMs) have opened opportunities to explore new application areas, but applications involving human-human interaction are limited by the fact that most data is protected from public release for privacy reasons. Problem-solving human-human dialogues in real applications can be much more complex than existing Wizard-of-Oz collections, preventing successful domain transfer. To support information extraction (IE) for a private call center dataset (AIC), we introduce a human-in-the-loop dialogue generation framework capable of synthesizing realistic dialogues. In IE experiments with AIC dialogues, we observe 25% relative improvement in F1 after augmenting a small set of real human-human conversations with synthetic data. In controlled experiments, we compare training with our human-in-the-loop-synthesized data vs. fully automatically LM-generated data and find that collaborating humans adds value both in the generation and annotation stages. We release code and our synthetic dataset to illustrate the complexity of call center conversations and encourage development of complex dialogue datasets that are more representative of natural data.
Does Collaborative Human–LM Dialogue Generation Help Information Extraction from Human–Human Dialogues?
[ "Bo-Ru Lu", "Nikita Haduong", "Chia-Hsuan Lee", "Zeqiu Wu", "Hao Cheng", "Paul Koester", "Jean Utke", "Tao Yu", "Noah A. Smith", "Mari Ostendorf" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
30
null
https://openreview.net/forum?id=u2vAyMeLMm
@inproceedings{ liu2024infinigram, title={Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens}, author={Jiacheng Liu and Sewon Min and Luke Zettlemoyer and Yejin Choi and Hannaneh Hajishirzi}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=u2vAyMeLMm} }
Are $n$-gram language models still relevant in this era of neural large language models (LLMs)? Our answer is *yes*, and we showcase their values in both text analysis and improving neural LLMs. This was done by modernizing $n$-gram LMs in two aspects. First, we train them at the same data scale as neural LLMs -- **5 trillion tokens**. This is one of the largest $n$-gram LMs ever built. Second, existing $n$-gram LMs use small $n$ which hinders their performance; we instead allow $n$ to be arbitrarily large, by introducing a new **$\infty$-gram LM** with backoff. Instead of pre-computing $n$-gram count tables (which would be very expensive), we develop an engine named infini-gram -- powered by suffix arrays -- that can compute $\infty$-gram (as well as $n$-gram with arbitrary $n$) probabilities with **millisecond-level latency**. The $\infty$-gram framework and infini-gram engine enable us to conduct many novel and interesting analyses of human-written and machine-generated text: we find that the $\infty$-gram LM has fairly high accuracy for next-token prediction (47%), and can complement neural LLMs to greatly reduce their perplexity. When analyzing machine-generated text, we also observe irregularities in the machine--$\infty$-gram agreement level with respect to the suffix length, which indicates deficiencies in neural LLM pretraining and the positional embeddings of Transformers.
Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens
[ "Jiacheng Liu", "Sewon Min", "Luke Zettlemoyer", "Yejin Choi", "Hannaneh Hajishirzi" ]
Conference
Oral
2401.17377
[ "https://github.com/AlexWan0/infini-gram" ]
https://huggingface.co/papers/2401.17377
3
34
2
5
[]
[]
[ "liujch1998/infini-gram" ]
1
31
null
https://openreview.net/forum?id=tzE7VqsaJ4
@inproceedings{ chan2024rqrag, title={{RQ}-{RAG}: Learning to Refine Queries for Retrieval Augmented Generation}, author={Chi-Min Chan and Chunpu Xu and Ruibin Yuan and Hongyin Luo and Wei Xue and Yike Guo and Jie Fu}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=tzE7VqsaJ4} }
Large Language Models (LLMs) exhibit remarkable capabilities but are prone to generating inaccurate or hallucinatory responses. This limitation stems from their reliance on vast pretraining datasets, making them susceptible to errors in unseen scenarios. Retrieval Augmented Generation (RAG) addresses these challenges by incorporating external, relevant documents into the response generation process, thus leveraging non-parametric knowledge alongside LLMs’ in-context learning abilities. However, existing RAG implementations primarily focus on the initial input for context retrieval, overlooking the nuances of ambiguous or complex queries that necessitate further clarification or decomposition for accurate responses. To this end, we propose learning to Refine Queries for Retrieval Augmented Generation (RQ-RAG) in this paper, endeavoring to enhance the model by equipping it with capabilities for explicit rewriting, decomposition, and disambiguation. Our experimental results indicate that our method, when applied to a 7B Llama2 model, surpasses the previous state-of-the-art (SOTA) by an average of 1.9% across three single-hop QA datasets, and when applied to an 8B Llama3 model, it also demonstrates enhanced performance in handling complex, multi-hop QA datasets.
RQ-RAG: Learning to Refine Queries for Retrieval Augmented Generation
[ "Chi-Min Chan", "Chunpu Xu", "Ruibin Yuan", "Hongyin Luo", "Wei Xue", "Yike Guo", "Jie Fu" ]
Conference
Poster
2404.00610
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
32
null
https://openreview.net/forum?id=taThoOlDNQ
@inproceedings{ ni2024exploring, title={Exploring the Mystery of Influential Data for Mathematical Reasoning}, author={Xinzhe Ni and Yeyun Gong and Zhibin Gou and yelong shen and Yujiu Yang and Nan Duan and Weizhu Chen}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=taThoOlDNQ} }
Selecting influential data for fine-tuning on downstream tasks is a key factor for both performance and computation efficiency. Recent works have shown that training with only limited data can show a superior performance on general tasks. However, the feasibility on mathematical reasoning tasks has not been validated. To go further, there exist two open questions for mathematical reasoning: how to select influential data and what is an influential data composition. For the former one, we propose a Quality-aware Diverse Selection (QaDS) strategy adaptable for mathematical reasoning. A comparison with other selection strategies validates the superiority of QaDS. For the latter one, we first enlarge our setting and explore the influential data composition. We conduct a series of experiments and highlight: scaling up reasoning data, and training with general data selected by QaDS is helpful. Then, we define our optimal mixture as OpenMathMix, an influential data mixture with open-source data selected by QaDS. With OpenMathMix, we achieve a state-of-the-art 48.8% accuracy on MATH with 7B base model. Additionally, we showcase the use of QaDS in creating efficient fine-tuning mixtures with various selection ratios, and analyze the quality of a wide range of open-source datasets, which can perform as a reference for future works on mathematical reasoning tasks.
Exploring the Mystery of Influential Data for Mathematical Reasoning
[ "Xinzhe Ni", "Yeyun Gong", "Zhibin Gou", "yelong shen", "Yujiu Yang", "Nan Duan", "Weizhu Chen" ]
Conference
Poster
2404.01067
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
33
null
https://openreview.net/forum?id=tRxIB7y3wF
@inproceedings{ sun2024lalaeval, title={LalaEval: A Holistic Human Evaluation Framework for Domain-Specific Large Language Models}, author={Chongyan Sun and Ken Lin and Shiwei Wang and Hulong Wu and Chengfei Fu and Zhen Wang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=tRxIB7y3wF} }
This paper introduces LalaEval, a holistic framework designed for the human evaluation of domain-specific large language models (LLMs). LalaEval proposes a comprehensive suite of end-to-end protocols that cover five main components including domain specification, criteria establishment, benchmark dataset creation, construction of evaluation rubrics, and thorough analysis and interpretation of evaluation outcomes. This initiative aims to fill a crucial research gap by providing a systematic methodology for conducting standardized human evaluations within specific domains, a practice that, despite its widespread application, lacks substantial coverage in the literature. Moreover, human evaluation is often criticized as less reliable due to subjective factors, so standardized procedures adapted to the nuanced requirements of specific domains, or even individual organizations, are greatly needed. Furthermore, the paper demonstrates the framework's application within the logistics industry and presents a comparative analysis of LLMs for logistics-domain use, highlighting the framework's capacity to elucidate performance differences and guide model selection and development for domain-specific LLMs. Through real-world deployment, the paper underscores the framework's effectiveness in advancing the field of domain-specific LLM evaluation, thereby contributing significantly to the ongoing discussion on LLMs' practical utility and performance in domain-specific applications.
LalaEval: A Holistic Human Evaluation Framework for Domain-Specific Large Language Models
[ "Chongyan Sun", "Ken Lin", "Shiwei Wang", "Hulong Wu", "Chengfei Fu", "Zhen Wang" ]
Conference
Poster
2408.13338
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
34
null
https://openreview.net/forum?id=tIpWtMYkzU
@inproceedings{ mireshghallah2024trust, title={Trust No Bot: Discovering Personal Disclosures in Human-{LLM} Conversations in the Wild}, author={Niloofar Mireshghallah and Maria Antoniak and Yash More and Yejin Choi and Golnoosh Farnadi}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=tIpWtMYkzU} }
Measuring personal disclosures made in human-chatbot interactions can provide a better understanding of users’ AI literacy and facilitate privacy research for large language models (LLMs). We run an extensive, fine-grained analysis on the personal disclosures made by real users to commercial GPT models, investigating the leakage of personally identifiable and sensitive information. To understand the contexts in which users disclose to chatbots, we develop a taxonomy of tasks and sensitive topics, based on qualitative and quantitative analysis of naturally occurring conversations. We discuss these potential privacy harms and observe that: (1) personally identifiable information (PII) appears in unexpected contexts such as in translation or code editing (48% and 16% of the time, respectively) and (2) PII detection alone is insufficient to capture the sensitive topics that are common in human-chatbot interactions, such as detailed sexual preferences or specific drug use habits. We believe that these high disclosure rates are of significant importance for researchers and data curators, and we call for the design of appropriate nudging mechanisms to help users moderate their interactions.
Trust No Bot: Discovering Personal Disclosures in Human-LLM Conversations in the Wild
[ "Niloofar Mireshghallah", "Maria Antoniak", "Yash More", "Yejin Choi", "Golnoosh Farnadi" ]
Conference
Poster
[ "https://github.com/mireshghallah/ChatGPT-personal-disclosures" ]
-1
-1
-1
-1
[]
[]
[]
0
35
null
https://openreview.net/forum?id=tEYskw1VY2
@inproceedings{ gu2024mamba, title={Mamba: Linear-Time Sequence Modeling with Selective State Spaces}, author={Albert Gu and Tri Dao}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=tEYskw1VY2} }
Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5x higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.
Mamba: Linear-Time Sequence Modeling with Selective State Spaces
[ "Albert Gu", "Tri Dao" ]
Conference
Oral
2312.00752
[ "https://github.com/radarFudan/mamba" ]
-1
-1
-1
-1
[]
[]
[]
0
36
null
https://openreview.net/forum?id=t4eB3zYWBK
@inproceedings{ tang2024multihoprag, title={MultiHop-{RAG}: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries}, author={Yixuan Tang and Yi Yang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=t4eB3zYWBK} }
Retrieval-augmented generation (RAG) augments large language models (LLMs) by retrieving relevant knowledge, showing promising potential in mitigating LLM hallucinations and enhancing response quality, thereby facilitating the greater adoption of LLMs in practice. However, we find that existing RAG systems are inadequate in answering multi-hop queries, which require retrieving and reasoning over multiple pieces of supporting evidence. Furthermore, to our knowledge, no existing RAG benchmarking dataset focuses on multi-hop queries. In this paper, we develop a novel dataset, MultiHop-RAG, which consists of a knowledge base, a large collection of multi-hop queries, their ground-truth answers, and the associated supporting evidence. We detail the procedure of building the dataset, utilizing an English news article dataset as the underlying RAG knowledge base. We demonstrate the benchmarking utility of MultiHop-RAG in two experiments. The first experiment compares different embedding models for retrieving evidence for multi-hop queries. In the second experiment, we examine the capabilities of various state-of-the-art LLMs, including GPT-4, PaLM, and Llama2-70B, in reasoning and answering multi-hop queries given the evidence. Both experiments reveal that existing RAG methods perform unsatisfactorily in retrieving and answering multi-hop queries. We hope MultiHop-RAG will be a valuable resource for the community in developing effective RAG systems, thereby facilitating greater adoption of LLMs in practice. We make the dataset and benchmarking code publicly available via GitHub.
MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries
[ "Yixuan Tang", "Yi Yang" ]
Conference
Poster
2401.15391
[ "https://github.com/yixuantt/MultiHop-RAG" ]
-1
-1
-1
-1
[]
[]
[]
0
37
null
https://openreview.net/forum?id=t3z6UlV09o
@inproceedings{ seddik2024how, title={How bad is training on synthetic data? A statistical analysis of language model collapse}, author={Mohamed El Amine Seddik and Suei-Wen Chen and Soufiane Hayou and Pierre Youssef and Merouane Abdelkader DEBBAH}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=t3z6UlV09o} }
Model collapse, as introduced in (Shumailov et al., 2023), refers to the phenomenon where training models on synthetic data generated from previously trained models leads to a deterioration in performance. This recursive training loop makes the tails of the original distribution disappear, thereby making future-generation models forget about the initial (real) distribution. With the aim of rigorously understanding model collapse in language models, we consider in this paper a statistical model that allows us to characterize the impact of various recursive training scenarios. Specifically, we demonstrate that model collapse cannot be avoided when training solely on synthetic data. However, when mixing both real and synthetic data, we provide an estimate of a maximal amount of synthetic data below which model collapse can eventually be avoided. Our theoretical conclusions are further supported by empirical validations.
How bad is training on synthetic data? A statistical analysis of language model collapse
[ "Mohamed El Amine Seddik", "Suei-Wen Chen", "Soufiane Hayou", "Pierre Youssef", "Merouane Abdelkader DEBBAH" ]
Conference
Poster
2404.05090
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
38
null
https://openreview.net/forum?id=stmqBSW2dV
@inproceedings{ hosseini2024vstar, title={V-{ST}aR: Training Verifiers for Self-Taught Reasoners}, author={Arian Hosseini and Xingdi Yuan and Nikolay Malkin and Aaron Courville and Alessandro Sordoni and Rishabh Agarwal}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=stmqBSW2dV} }
Common self-improvement approaches for large language models (LLMs), such as STaR (Zelikman et al., 2022), iteratively fine-tune LLMs on self-generated solutions to improve their problem-solving ability. However, these approaches discard the large amounts of incorrect solutions generated during this process, potentially neglecting valuable information in such solutions. To address this shortcoming, we propose V-STaR that utilizes both the correct and incorrect solutions generated during the self-improvement process to train a verifier using DPO that judges correctness of model-generated solutions. This verifier is used at inference time to select one solution among many candidate solutions. Running V-STaR for multiple iterations results in progressively better reasoners and verifiers, delivering a 4% to 17% test accuracy improvement over existing self-improvement and verification approaches on common code generation and math reasoning benchmarks with LLaMA2 models.
V-STaR: Training Verifiers for Self-Taught Reasoners
[ "Arian Hosseini", "Xingdi Yuan", "Nikolay Malkin", "Aaron Courville", "Alessandro Sordoni", "Rishabh Agarwal" ]
Conference
Poster
2402.06457
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
39
null
https://openreview.net/forum?id=soz1SEiPeq
@inproceedings{ peng2024eagle, title={Eagle and Finch: {RWKV} with Matrix-Valued States and Dynamic Recurrence}, author={Bo Peng and Daniel Goldstein and Quentin Gregory Anthony and Alon Albalak and Eric Alcaide and Stella Biderman and Eugene Cheah and Teddy Ferdinan and Kranthi Kiran GV and Haowen Hou and Satyapriya Krishna and Ronald McClelland Jr. and Niklas Muennighoff and Fares Obeid and Atsushi Saito and Guangyu Song and Haoqin Tu and Ruichong Zhang and Bingchen Zhao and Qihang Zhao and Jian Zhu and Rui-Jie Zhu}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=soz1SEiPeq} }
We present Eagle (RWKV-5) and Finch (RWKV-6), sequence models improving upon the RWKV architecture. Our architectural design advancements include multi-headed matrix-valued states and a dynamic recurrence mechanism that improve expressivity while maintaining the inference efficiency characteristics of RNNs. We introduce a new multilingual corpus with 1.12 trillion tokens and a fast tokenizer based on greedy matching for enhanced multilinguality. We trained four Eagle models, ranging from 0.46 to 7.5 billion parameters, and two Finch models with 1.6 and 3.1 billion parameters and find that they achieve competitive performance across a wide variety of benchmarks.
Eagle and Finch: RWKV with Matrix-Valued States and Dynamic Recurrence
[ "Bo Peng", "Daniel Goldstein", "Quentin Gregory Anthony", "Alon Albalak", "Eric Alcaide", "Stella Biderman", "Eugene Cheah", "Teddy Ferdinan", "Kranthi Kiran GV", "Haowen Hou", "Satyapriya Krishna", "Ronald McClelland Jr.", "Niklas Muennighoff", "Fares Obeid", "Atsushi Saito", "Guangyu Song", "Haoqin Tu", "Ruichong Zhang", "Bingchen Zhao", "Qihang Zhao", "Jian Zhu", "Rui-Jie Zhu" ]
Conference
Poster
2404.05892
[ "https://github.com/rwkv/rwkv-lm" ]
https://huggingface.co/papers/2404.05892
9
31
0
27
[ "BlinkDL/rwkv-6-world", "TimeMobius/Mobius-RWKV-r5-chat-12B-8k", "TimeMobius/Mobius-RWKV-r6-12B", "xiaol/mobius-rwkv-r6-12B", "xiaol/Mobius-RWKV-r5-chat-12B-8k" ]
[]
[ "BlinkDL/RWKV-Gradio-2", "BlinkDL/RWKV-Gradio-1", "devingulliver/subquadratic-llm-leaderboard", "FredZhang7/rwkv-6-world-1b6-chat" ]
1
40
null
https://openreview.net/forum?id=soGxskHGox
@inproceedings{ mercat2024linearizing, title={Linearizing Large Language Models}, author={Jean Mercat and Igor Vasiljevic and Sedrick Scott Keh and Kushal Arora and Achal Dave and Adrien Gaidon and Thomas Kollar}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=soGxskHGox} }
Linear transformers have emerged as a subquadratic-time alternative to softmax attention and have garnered significant interest due to their fixed recurrent state. However, they suffer from poor scaling and under-perform compute-matched transformers. Prior models such as RWKV and Mamba have attempted to address these shortcomings by proposing novel time-mixing and gating architectures, but pre-training large language models requires significant data and compute investments. In this paper, we propose Scalable UPtraining for Recurrent Attention (SUPRA), an alternative to pre-training linear transformers. We present a method to uptrain existing large pre-trained transformers into Recurrent Neural Networks (RNNs) with a modest compute budget. This allows us to leverage the strong pre-training data and performance of existing transformer LLMs, while requiring 5% of the training cost. We find that our linearization technique leads to competitive performance on standard benchmarks, but we identify a persistent in-context learning shortfall for even the largest linear models.
Linearizing Large Language Models
[ "Jean Mercat", "Igor Vasiljevic", "Sedrick Scott Keh", "Kushal Arora", "Achal Dave", "Adrien Gaidon", "Thomas Kollar" ]
Conference
Poster
2405.06640
[ "https://github.com/tri-ml/linear_open_lm" ]
-1
-1
-1
-1
[]
[]
[]
0
41
null
https://openreview.net/forum?id=sKNIjS2brr
@inproceedings{ lin2024videodirectorgpt, title={VideoDirector{GPT}: Consistent Multi-Scene Video Generation via {LLM}-Guided Planning}, author={Han Lin and Abhay Zala and Jaemin Cho and Mohit Bansal}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=sKNIjS2brr} }
Recent text-to-video (T2V) generation methods have seen significant advancements. However, the majority of these works focus on producing short video clips of a single event (i.e., single-scene videos). Meanwhile, recent large language models (LLMs) have demonstrated their capability in generating layouts and programs to control downstream visual modules. This prompts an important question: can we leverage the knowledge embedded in these LLMs for temporally consistent long video generation? In this paper, we propose VideoDirectorGPT, a novel framework for consistent multi-scene video generation that uses the knowledge of LLMs for video content planning and grounded video generation. Specifically, given a single text prompt, we first ask our video planner LLM (GPT-4) to expand it into a ‘video plan’, which includes the scene descriptions, the entities with their respective layouts, the background for each scene, and consistency groupings of the entities. Next, guided by this video plan, our video generator, named Layout2Vid, has explicit control over spatial layouts and can maintain temporal consistency of entities across multiple scenes, while being trained only with image-level annotations. Our experiments demonstrate that our proposed VideoDirectorGPT framework substantially improves layout and movement control in both single- and multi-scene video generation and can generate multi-scene videos with consistency, while achieving competitive performance with SOTAs in open-domain single-scene T2V generation. Detailed ablation studies, including dynamic adjustment of layout control strength with an LLM and video generation with user-provided images, confirm the effectiveness of each component of our framework and its future potential.
VideoDirectorGPT: Consistent Multi-Scene Video Generation via LLM-Guided Planning
[ "Han Lin", "Abhay Zala", "Jaemin Cho", "Mohit Bansal" ]
Conference
Poster
2309.15091
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
42
null
https://openreview.net/forum?id=sKATR2O1Y0
@inproceedings{ xie2024openagents, title={OpenAgents: An Open Platform for Language Agents in the Wild}, author={Tianbao Xie and Fan Zhou and Zhoujun Cheng and Peng Shi and Luoxuan Weng and Yitao Liu and Toh Jing Hua and Junning Zhao and Qian Liu and Che Liu and Zeyu Liu and Yiheng Xu and Hongjin SU and Dongchan Shin and Caiming Xiong and Tao Yu}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=sKATR2O1Y0} }
Language agents show potential in being capable of utilizing natural language for varied and intricate tasks in diverse environments, particularly when built upon large language models (LLMs). Current language agent frameworks aim to facilitate the construction of proof-of-concept language agents while neglecting the non-expert user access to agents and paying little attention to application-level designs. We present OpenAgents, an open platform for using and hosting language agents in the wild of everyday life. OpenAgents includes three agents: (1) Data Agent for data analysis with Python/SQL and data tools; (2) Plugins Agent with 200+ daily API tools; (3) Web Agent for autonomous web browsing. OpenAgents enables general users to interact with agent functionalities through a web user interface optimized for swift responses and common failures while offering developers and researchers a seamless deployment experience on local setups, providing a foundation for crafting innovative language agents and facilitating real-world evaluations. We elucidate the challenges and opportunities, aspiring to set a foundation for future research and development of real-world language agents.
OpenAgents: An Open Platform for Language Agents in the Wild
[ "Tianbao Xie", "Fan Zhou", "Zhoujun Cheng", "Peng Shi", "Luoxuan Weng", "Yitao Liu", "Toh Jing Hua", "Junning Zhao", "Qian Liu", "Che Liu", "Zeyu Liu", "Yiheng Xu", "Hongjin SU", "Dongchan Shin", "Caiming Xiong", "Tao Yu" ]
Conference
Poster
2310.10634
[ "https://github.com/xlang-ai/openagents" ]
https://huggingface.co/papers/2310.10634
4
8
0
16
[]
[]
[]
1
43
null
https://openreview.net/forum?id=sJvhwDtFhQ
@inproceedings{ wang2024tpd, title={{TPD}: Enhancing Student Language Model Reasoning via Principle Discovery and Guidance}, author={Haorui Wang and Rongzhi Zhang and Yinghao Li and Lingkai Kong and Yuchen Zhuang and Xiusi Chen and Chao Zhang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=sJvhwDtFhQ} }
Large Language Models (LLMs) have recently showcased remarkable reasoning abilities. However, larger models often surpass their smaller counterparts in reasoning tasks, posing the challenge of effectively transferring these capabilities from larger models. Existing approaches heavily rely on extensive fine-tuning data or continuous interactions with a superior teacher LLM during inference. We introduce a principle-based teacher-student framework called Teaching via Principle Discovery (TPD) to address these limitations. Inspired by human learning mechanisms, TPD mimics the interaction between a teacher and a student using a principle-based approach. The teacher LLM generates problem-solving instructions and corrective principles based on the student LLM's errors. These principles guide the refinement of instructions and the selection of instructive examples from a validation set. This enables the student model to learn from both the teacher's guidance and its own mistakes. Once the student model begins making inferences, TPD requires no further intervention from the teacher LLM. Through extensive experiments across eight reasoning tasks, we demonstrate the effectiveness of TPD. Compared to standard chain-of-thought prompting, TPD significantly improves the student model's performance, achieving an average improvement of 6.2\%.
TPD: Enhancing Student Language Model Reasoning via Principle Discovery and Guidance
[ "Haorui Wang", "Rongzhi Zhang", "Yinghao Li", "Lingkai Kong", "Yuchen Zhuang", "Xiusi Chen", "Chao Zhang" ]
Conference
Poster
2401.13849
[ "" ]
https://huggingface.co/papers/2401.13849
0
0
0
7
[]
[]
[]
1
44
null
https://openreview.net/forum?id=sBxvoDhvao
@inproceedings{ remy2024transtokenization, title={Trans-Tokenization and Cross-lingual Vocabulary Transfers: Language Adaptation of {LLM}s for Low-Resource {NLP}}, author={Fran{\c{c}}ois Remy and Pieter Delobelle and Hayastan Avetisyan and Alfiya Khabibullina and Miryam de Lhoneux and Thomas Demeester}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=sBxvoDhvao} }
The development of monolingual language models for low and mid-resource languages continues to be hindered by the difficulty in sourcing high-quality training data. In this study, we present a novel cross-lingual vocabulary transfer strategy, trans-tokenization, designed to tackle this challenge and enable more efficient language adaptation. Our approach focuses on adapting a high-resource monolingual LLM to an unseen target language by initializing the token embeddings of the target language using a weighted average of semantically similar token embeddings from the source language. For this, we leverage a translation resource covering both the source and target languages. We validate our method with the Tweeties, a series of trans-tokenized LLMs, and demonstrate their competitive performance on various downstream tasks across a small but diverse set of languages. Additionally, we introduce Hydra LLMs, models with multiple swappable language modeling heads and embedding tables, which further extend the capabilities of our trans-tokenization strategy. By designing a Hydra LLM based on the multilingual model TowerInstruct, we developed a state-of-the-art machine translation model for Tatar, in a zero-shot manner, completely bypassing the need for high-quality parallel data. This breakthrough is particularly significant for low-resource languages like Tatar, where high-quality parallel data is hard to come by. By lowering the data and time requirements for training high-quality models, our trans-tokenization strategy allows for the development of LLMs for a wider range of languages, especially those with limited resources. We hope that our work will inspire further research and collaboration in the field of cross-lingual vocabulary transfer and contribute to the empowerment of languages on a global scale.
Trans-Tokenization and Cross-lingual Vocabulary Transfers: Language Adaptation of LLMs for Low-Resource NLP
[ "François Remy", "Pieter Delobelle", "Hayastan Avetisyan", "Alfiya Khabibullina", "Miryam de Lhoneux", "Thomas Demeester" ]
Conference
Poster
2408.04303
[ "https://github.com/lagom-nlp/transtokenizer" ]
-1
-1
-1
-1
[]
[]
[]
0
45
null
https://openreview.net/forum?id=rzQGHXNReU
@inproceedings{ zhang2024raft, title={{RAFT}: Adapting Language Model to Domain Specific {RAG}}, author={Tianjun Zhang and Shishir G Patil and Naman Jain and Sheng Shen and Matei Zaharia and Ion Stoica and Joseph E. Gonzalez}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=rzQGHXNReU} }
Pretraining Large Language Models (LLMs) on large corpora of textual data is now a standard paradigm. When using these LLMs for many downstream applications, it is common to additionally incorporate new information into the pretrained model either through RAG-based prompting or finetuning. However, the best methodology to incorporate information remains an open question. In this paper, we present Retrieval Augmented Fine Tuning (RAFT), a training recipe which improves the model's ability to answer questions in "open-book" in-domain settings. In training RAFT, given a question and a set of retrieved documents, we train the model to ignore those documents that don't help in answering the question, which we call distractor documents. RAFT accomplishes this by citing verbatim the right sequence from the relevant document to help answer the question. This, coupled with RAFT's chain-of-thought-style response, helps improve the model's ability to reason. In domain-specific RAG, RAFT consistently improves the model's performance across the PubMed, HotpotQA, and Gorilla datasets, presenting a post-training recipe for improving pre-trained LLMs for in-domain RAG.
RAFT: Adapting Language Model to Domain Specific RAG
[ "Tianjun Zhang", "Shishir G Patil", "Naman Jain", "Sheng Shen", "Matei Zaharia", "Ion Stoica", "Joseph E. Gonzalez" ]
Conference
Poster
2403.10131
[ "https://github.com/ShishirPatil/gorilla" ]
-1
-1
-1
-1
[]
[]
[]
0
46
null
https://openreview.net/forum?id=rXEwxmnGQs
@inproceedings{ deas2024phonate, title={Phon{AT}e: Impact of Type-Written Phonological Features of African American Language on Generative Language Modeling Tasks}, author={Nicholas Deas and Jessica A Grieser and Xinmeng Hou and Shana Kleiner and Tajh Martin and Sreya Nandanampati and Desmond U. Patton and Kathleen McKeown}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=rXEwxmnGQs} }
Current Large Language Models perform poorly on African American Language (AAL) texts in tasks like toxicity detection and sentiment analysis. AAL is underrepresented in both pre-training data and existing benchmarks for these tasks, hindering thorough evaluation and understanding of these biases. We introduce a novel approach to synthetically introduce type-written phonological features of AAL into text, a class of AAL features that has been overlooked in prior work. Our goal is to better understand how these features affect generative language models' performance on three tasks: toxicity detection, sentiment analysis, and masked span prediction. We find that fine-tuning with synthetic type-written phonological features lowers perceived biases on downstream tasks and our ablations reveal which features have particularly large negative impacts on model performance. Our results suggest that phonological features are vital to consider when designing bias mitigation techniques.
PhonATe: Impact of Type-Written Phonological Features of African American Language on Generative Language Modeling Tasks
[ "Nicholas Deas", "Jessica A Grieser", "Xinmeng Hou", "Shana Kleiner", "Tajh Martin", "Sreya Nandanampati", "Desmond U. Patton", "Kathleen McKeown" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
47
null
https://openreview.net/forum?id=qyilOnIRHI
@inproceedings{ zhao2024implicit, title={Implicit Geometry of Next-token Prediction: From Language Sparsity Patterns to Model Representations}, author={Yize Zhao and Tina Behnia and Vala Vakilian and Christos Thrampoulidis}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=qyilOnIRHI} }
Next-token prediction (NTP) over large text corpora has become the go-to paradigm to train large language models. Yet, it remains unclear how NTP influences the mapping of linguistic patterns to geometric properties of the resulting model representations. We frame training of large language models as soft-label classification over sparse probabilistic label vectors, coupled with an analytical approximation that allows unrestricted generation of context embeddings. This approach links NTP training to rank-constrained, nuclear-norm regularized optimization in the logit domain, offering a framework for analyzing the geometry of word and context embeddings. In large embedding spaces, we find that NTP implicitly favors learning logits with a sparse plus low-rank structure. While the sparse component captures the co-occurrence frequency of context-word pairs, the orthogonal low-rank component, which becomes dominant as training progresses, depends solely on the sparsity pattern of the co-occurrence matrix. Consequently, when projected onto an appropriate subspace, representations of contexts that are followed by the same set of next-tokens collapse—a phenomenon we term subspace-collapse. We validate our theory on synthetic and small-scale real language datasets. Finally, we outline potential research directions aimed at deepening the understanding of NTP's influence on the learning of linguistic patterns and regularities.
Implicit Geometry of Next-token Prediction: From Language Sparsity Patterns to Model Representations
[ "Yize Zhao", "Tina Behnia", "Vala Vakilian", "Christos Thrampoulidis" ]
Conference
Poster
2408.15417
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
48
null
https://openreview.net/forum?id=qHdSA85GyZ
@inproceedings{ wang2024look, title={Look at the Text: Instruction-Tuned Language Models are More Robust Multiple Choice Selectors than You Think}, author={Xinpeng Wang and Chengzhi Hu and Bolei Ma and Paul Rottger and Barbara Plank}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=qHdSA85GyZ} }
Multiple choice questions (MCQs) are commonly used to evaluate the capabilities of large language models (LLMs). One common way to evaluate the model response is to rank the candidate answers based on the log probability of the first token prediction. An alternative way is to examine the text output. Prior work has shown that first token probabilities lack robustness to changes in MCQ phrasing, and that first token probabilities do not match text answers for instruction-tuned models. Therefore, in this paper, we investigate the robustness of text answers. We show that the text answers are more robust to question perturbations than the first token probabilities, when the first token answers mismatch the text answers. The difference in robustness increases as the mismatch rate becomes greater. As the mismatch reaches over 50%, the text answer is more robust to option order changes than the debiased first token probabilities using state-of-the-art debiasing methods such as PriDe. Our findings provide further evidence for the benefits of text answer evaluation over first token probability evaluation.
Look at the Text: Instruction-Tuned Language Models are More Robust Multiple Choice Selectors than You Think
[ "Xinpeng Wang", "Chengzhi Hu", "Bolei Ma", "Paul Rottger", "Barbara Plank" ]
Conference
Poster
2404.08382
[ "https://github.com/mainlp/mcq-robustness" ]
https://huggingface.co/papers/2404.08382
0
0
0
5
[ "mainlp/MCQ-Classifier-MMLU-XYZ", "mainlp/MCQ-Classifier-MMLU-EFG" ]
[]
[]
1
49
null
https://openreview.net/forum?id=q5Ft9ZJtHm
@inproceedings{ han2024chatgpt, title={Chat{GPT} Based Data Augmentation for Improved Parameter-Efficient Debiasing of {LLM}s}, author={Pengrui Han and Rafal Dariusz Kocielnik and Adhithya Prakash Saravanan and Roy Luoyao Jiang and Or Sharir and Anima Anandkumar}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=q5Ft9ZJtHm} }
Large Language models (LLMs), while powerful, exhibit harmful social biases. Debiasing is often challenging due to computational costs, data constraints, and potential degradation of multi-task language capabilities. This work introduces a novel approach utilizing ChatGPT to generate synthetic training data, aiming to enhance the debiasing of LLMs. We propose two strategies: Targeted Prompting, which provides effective debiasing for known biases but necessitates prior specification of bias in question; and General Prompting, which, while slightly less effective, offers debiasing across various categories. We leverage resource-efficient LLM debiasing using adapter tuning and compare the effectiveness of our synthetic data to existing debiasing datasets. Our results reveal that: (1) ChatGPT can efficiently produce high-quality training data for debiasing other LLMs; (2) data produced via our approach surpasses existing datasets in debiasing performance while also preserving internal knowledge of a pre-trained LLM; and (3) synthetic data exhibits generalizability across categories, effectively mitigating various biases, including intersectional ones. These findings underscore the potential of synthetic data in advancing the fairness of LLMs with minimal retraining cost.
ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs
[ "Pengrui Han", "Rafal Dariusz Kocielnik", "Adhithya Prakash Saravanan", "Roy Luoyao Jiang", "Or Sharir", "Anima Anandkumar" ]
Conference
Poster
2402.11764
[ "https://github.com/barryhpr/syntheticdebiasing" ]
-1
-1
-1
-1
[]
[]
[]
0
50
null
https://openreview.net/forum?id=q36rpGlG9X
@inproceedings{ qi2024large, title={Large Language Models as Biomedical Hypothesis Generators: A Comprehensive Evaluation}, author={Biqing Qi and Kaiyan Zhang and Kai Tian and Haoxiang Li and Zhang-Ren Chen and Sihang Zeng and Ermo Hua and Hu Jinfang and Bowen Zhou}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=q36rpGlG9X} }
The rapid growth of biomedical knowledge has outpaced our ability to efficiently extract insights and generate novel hypotheses. Large language models (LLMs) have emerged as a promising tool to revolutionize knowledge interaction and potentially accelerate biomedical discovery. In this paper, we present a comprehensive evaluation of LLMs as biomedical hypothesis generators. We construct a dataset of background-hypothesis pairs from biomedical literature, carefully partitioned into training, seen, and unseen test sets based on publication date to mitigate data contamination. Using this dataset, we assess the hypothesis generation capabilities of top-tier instructed models in zero-shot, few-shot, and fine-tuning settings. To enhance the exploration of uncertainty, a crucial aspect of scientific discovery, we incorporate tool use and multi-agent interactions in our evaluation framework. Furthermore, we propose four novel metrics grounded in extensive literature review to evaluate the quality of generated hypotheses, considering both LLM-based and human assessments. Our experiments yield two key findings: 1) LLMs can generate novel and validated hypotheses, even when tested on literature unseen during training, and 2) Increasing uncertainty through multi-agent interactions and tool use can facilitate diverse candidate generation and improve zero-shot hypothesis generation performance. However, we also observe that the integration of additional knowledge through few-shot learning and tool use may not always lead to performance gains, highlighting the need for careful consideration of the type and scope of external knowledge incorporated. These findings underscore the potential of LLMs as powerful aids in biomedical hypothesis generation and provide valuable insights to guide further research in this area.
Large Language Models as Biomedical Hypothesis Generators: A Comprehensive Evaluation
[ "Biqing Qi", "Kaiyan Zhang", "Kai Tian", "Haoxiang Li", "Zhang-Ren Chen", "Sihang Zeng", "Ermo Hua", "Hu Jinfang", "Bowen Zhou" ]
Conference
Poster
2407.08940
[ "https://github.com/tsinghuac3i/llm4biohypogen" ]
https://huggingface.co/papers/2407.08940
1
0
0
9
[]
[]
[]
1
51
null
https://openreview.net/forum?id=ptvV5HGTNN
@inproceedings{ wang2024resolving, title={Resolving Knowledge Conflicts in Large Language Models}, author={Yike Wang and Shangbin Feng and Heng Wang and Weijia Shi and Vidhisha Balachandran and Tianxing He and Yulia Tsvetkov}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=ptvV5HGTNN} }
Large language models (LLMs) often encounter knowledge conflicts, scenarios where discrepancy arises between the internal parametric knowledge of LLMs and non-parametric information provided in the prompt context. In this work we ask what are the desiderata for LLMs when a knowledge conflict arises and whether existing LLMs fulfill them. We posit that LLMs should 1) identify knowledge conflicts, 2) pinpoint conflicting information segments, and 3) provide distinct answers or viewpoints in conflicting scenarios. To this end, we introduce an evaluation framework for simulating contextual knowledge conflicts and quantitatively evaluating to what extent LLMs achieve these goals. It includes diverse and complex situations of knowledge conflict, knowledge from diverse entities and domains, two synthetic conflict creation methods, and settings with progressively increasing difficulty to reflect realistic knowledge conflicts. Extensive experiments with the framework reveal that while LLMs perform well in identifying the existence of knowledge conflicts, they struggle to determine the specific conflicting knowledge and produce a response with distinct answers amidst conflicting information. To address these challenges, we propose new instruction-based approaches that augment LLMs to better achieve the three goals. Further analysis shows that abilities to tackle knowledge conflicts are greatly impacted by factors such as knowledge domain, while generating robust responses to knowledge conflict scenarios remains an open research question.
Resolving Knowledge Conflicts in Large Language Models
[ "Yike Wang", "Shangbin Feng", "Heng Wang", "Weijia Shi", "Vidhisha Balachandran", "Tianxing He", "Yulia Tsvetkov" ]
Conference
Poster
2310.00935
[ "https://github.com/yikee/knowledge_conflict" ]
-1
-1
-1
-1
[]
[]
[]
0
52
null
https://openreview.net/forum?id=pYEnhZ6NAv
@inproceedings{ zhang2024how, title={How Far Are We from Intelligent Visual Deductive Reasoning?}, author={Yizhe Zhang and Richard He Bai and Ruixiang ZHANG and Jiatao Gu and Shuangfei Zhai and Joshua M. Susskind and Navdeep Jaitly}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=pYEnhZ6NAv} }
Vision-Language Models (VLMs) have recently demonstrated incredible strides on diverse vision language tasks. We dig into vision-based deductive reasoning, a more sophisticated but less explored realm, and find previously unexposed blindspots in the current SOTA VLMs. Specifically, we leverage Raven’s Progressive Matrices (RPMs), to assess VLMs' abilities to perform multi-hop relational and deductive reasoning relying solely on visual clues. We perform comprehensive evaluations of several popular VLMs employing standard strategies such as in-context learning, self-consistency, and Chain-of-thoughts (CoT) on three diverse datasets, including the Mensa IQ test, IntelligenceTest, and RAVEN. The results reveal that despite the impressive capabilities of LLMs in text-based reasoning, we are still far from achieving comparable proficiency in visual deductive reasoning. We found that certain standard strategies that are effective when applied to LLMs do not seamlessly translate to the challenges presented by visual reasoning tasks. A detailed analysis reveals that VLMs struggle to solve these tasks mainly because they are unable to perceive and comprehend multiple, confounding abstract patterns in RPM examples.
How Far Are We from Intelligent Visual Deductive Reasoning?
[ "Yizhe Zhang", "Richard He Bai", "Ruixiang ZHANG", "Jiatao Gu", "Shuangfei Zhai", "Joshua M. Susskind", "Navdeep Jaitly" ]
Conference
Poster
2403.04732
[ "https://github.com/apple/ml-rpm-bench" ]
-1
-1
-1
-1
[]
[]
[]
0
53
null
https://openreview.net/forum?id=pUEDkZyPDl
@inproceedings{ li2024distflashattn, title={{DISTFLASHATTN}: Distributed Memory-efficient Attention for Long-context {LLM}s Training}, author={Dacheng Li and Rulin Shao and Anze Xie and Eric P. Xing and Xuezhe Ma and Ion Stoica and Joseph E. Gonzalez and Hao Zhang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=pUEDkZyPDl} }
FlashAttention effectively reduces the quadratic peak memory usage to linear in training transformer-based large language models (LLMs) on a single GPU. In this paper, we introduce DistFlashAttention, a distributed memory-efficient attention mechanism optimized for long-context LLMs training. We propose three key techniques: token-level workload balancing, overlapping key-value communication, and a rematerialization-aware gradient checkpointing algorithm. We evaluate DistFlashAttention on Llama-7B and variants with sequence lengths from 32K to 512K. DistFlashAttention achieves 8x longer sequences and 4.45-5.64x speedup compared to Ring Self-Attention, and 2-8x longer sequences and 1.24-2.01x speedup compared to Megatron-LM with FlashAttention. It achieves 1.67x and 1.26-1.88x speedup compared to recent Ring Attention and DeepSpeed-Ulysses. Codes are available at https://github.com/RulinShao/LightSeq.
DISTFLASHATTN: Distributed Memory-efficient Attention for Long-context LLMs Training
[ "Dacheng Li", "Rulin Shao", "Anze Xie", "Eric P. Xing", "Xuezhe Ma", "Ion Stoica", "Joseph E. Gonzalez", "Hao Zhang" ]
Conference
Poster
2310.03294
[ "https://github.com/rulinshao/lightseq" ]
-1
-1
-1
-1
[]
[]
[]
0
54
null
https://openreview.net/forum?id=pKMxO0wBYZ
@inproceedings{ tian2024web, title={Web Retrieval Agents for Evidence-Based Misinformation Detection}, author={Jacob-Junqi Tian and Hao Yu and Yury Orlovskiy and Tyler Vergho and Mauricio Rivera and Mayank Goel and Zachary Yang and Jean-Fran{\c{c}}ois Godbout and Reihaneh Rabbany and Kellin Pelrine}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=pKMxO0wBYZ} }
This paper develops an agent-based automated fact-checking approach for detecting misinformation. We demonstrate that combining a powerful LLM agent, which does not have access to the internet for searches, with an online web search agent yields better results than when each tool is used independently. Our approach is robust across multiple models, outperforming alternatives and increasing the macro F1 of misinformation detection by as much as 20 percent compared to LLMs without search. We also conduct extensive analyses on the sources our system leverages and their biases, decisions in the construction of the system like the search tool and the knowledge base, the type of evidence needed and its impact on the results, and other parts of the overall process. By combining strong performance with in-depth understanding, we hope to provide building blocks for future search-enabled misinformation mitigation systems.
Web Retrieval Agents for Evidence-Based Misinformation Detection
[ "Jacob-Junqi Tian", "Hao Yu", "Yury Orlovskiy", "Tyler Vergho", "Mauricio Rivera", "Mayank Goel", "Zachary Yang", "Jean-François Godbout", "Reihaneh Rabbany", "Kellin Pelrine" ]
Conference
Poster
2409.00009
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
55
null
https://openreview.net/forum?id=otKo4zFKmH
@inproceedings{ guan2024task, title={Task Success is not Enough: Investigating the Use of Video-Language Models as Behavior Critics for Catching Undesirable Agent Behaviors}, author={Lin Guan and Yifan Zhou and Denis Liu and Yantian Zha and Heni Ben Amor and Subbarao Kambhampati}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=otKo4zFKmH} }
Large-scale generative models are shown to be useful for sampling meaningful candidate solutions, yet they often overlook task constraints and user preferences. Their full power is better harnessed when the models are coupled with external verifiers and the final solutions are derived iteratively or progressively according to the verification feedback. In the context of embodied AI, verification often solely involves assessing whether goal conditions specified in the instructions have been met. Nonetheless, for these agents to be seamlessly integrated into daily life, it is crucial to account for a broader range of constraints and preferences beyond bare task success (e.g., a robot should grasp bread with care to avoid significant deformations). However, given the unbounded scope of robot tasks, it is infeasible to construct scripted verifiers akin to those used for explicit-knowledge tasks like the game of Go and theorem proving. This begs the question: when no sound verifier is available, can we use large vision and language models (VLMs), which are approximately omniscient, as scalable Behavior Critics to help catch undesirable robot behaviors in videos? To answer this, we first construct a benchmark that contains diverse cases of goal-reaching yet undesirable robot policies. Then, we comprehensively evaluate VLM critics to gain a deeper understanding of their strengths and failure modes. Based on the evaluation, we provide guidelines on how to effectively utilize VLM critiques and showcase a practical way to integrate the feedback into an iterative process of policy refinement. The dataset and codebase are released at: https://guansuns.github.io/pages/vlm-critic.
Task Success is not Enough: Investigating the Use of Video-Language Models as Behavior Critics for Catching Undesirable Agent Behaviors
[ "Lin Guan", "Yifan Zhou", "Denis Liu", "Yantian Zha", "Heni Ben Amor", "Subbarao Kambhampati" ]
Conference
Poster
2402.04210
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
56
null
https://openreview.net/forum?id=oqYiYG8PtY
@inproceedings{ wang2024stop, title={Stop Reasoning! When Multimodal {LLM} with Chain-of-Thought Reasoning Meets Adversarial Image}, author={Zefeng Wang and Zhen Han and Shuo Chen and Fan Xue and Zifeng Ding and Xun Xiao and Volker Tresp and Philip Torr and Jindong Gu}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=oqYiYG8PtY} }
Multimodal LLMs (MLLMs) with a great ability of text and image understanding have received great attention. To achieve better reasoning with MLLMs, Chain-of-Thought (CoT) reasoning has been widely explored, which further promotes MLLMs’ explainability by giving intermediate reasoning steps. Despite the strong power demonstrated by MLLMs in multimodal reasoning, recent studies show that MLLMs still suffer from adversarial images. This raises the following open questions: Does CoT also enhance the adversarial robustness of MLLMs? What do the intermediate reasoning steps of CoT entail under adversarial attacks? To answer these questions, we first generalize existing attacks to CoT-based inferences by attacking the two main components, i.e., rationale and answer. We find that CoT indeed improves MLLMs’ adversarial robustness against the existing attack methods by leveraging the multi-step reasoning process, but not substantially. Based on our findings, we further propose a novel attack method, termed as stop-reasoning attack, that attacks the model while bypassing the CoT reasoning process. Experiments on three MLLMs and two visual reasoning datasets verify the effectiveness of our proposed method. We show that stop-reasoning attack can result in misled predictions and outperform baseline attacks by a significant margin.
Stop Reasoning! When Multimodal LLM with Chain-of-Thought Reasoning Meets Adversarial Image
[ "Zefeng Wang", "Zhen Han", "Shuo Chen", "Fan Xue", "Zifeng Ding", "Xun Xiao", "Volker Tresp", "Philip Torr", "Jindong Gu" ]
Conference
Poster
2402.14899
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
57
null
https://openreview.net/forum?id=ootI3ZO6TJ
@inproceedings{ jain2024polyglotoxicityprompts, title={PolygloToxicityPrompts: Multilingual Evaluation of Neural Toxic Degeneration in Large Language Models}, author={Devansh Jain and Priyanshu Kumar and Samuel Gehman and Xuhui Zhou and Thomas Hartvigsen and Maarten Sap}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=ootI3ZO6TJ} }
Recent advances in large language models (LLMs) have led to their extensive global deployment, and ensuring their safety calls for comprehensive and multilingual toxicity evaluations. However, existing toxicity benchmarks are overwhelmingly focused on English, posing serious risks to deploying LLMs in other languages. We address this by introducing PolygloToxicityPrompts (PTP), the first large-scale multilingual toxicity evaluation benchmark of 425K naturally-occurring prompts spanning 17 languages. We overcome the scarcity of naturally occurring toxicity in web-text and ensure coverage across languages with varying resources by automatically scraping over 100M web-text documents. Using PTP, we investigate research questions to study the impact of model size, prompt language, and instruction and preference-tuning methods on toxicity by benchmarking over 60 LLMs. Notably, we find that toxicity increases as language resources decrease or model size increases. Although instruction- and preference-tuning reduce toxicity, the choice of preference-tuning method does not have any significant impact. Our findings shed light on crucial shortcomings of LLM safeguarding and highlight areas for future research.
PolygloToxicityPrompts: Multilingual Evaluation of Neural Toxic Degeneration in Large Language Models
[ "Devansh Jain", "Priyanshu Kumar", "Samuel Gehman", "Xuhui Zhou", "Thomas Hartvigsen", "Maarten Sap" ]
Conference
Poster
2405.09373
[ "https://github.com/kpriyanshu256/polyglo-toxicity-prompts" ]
-1
-1
-1
-1
[]
[]
[]
0
58
null
https://openreview.net/forum?id=oSG6qGkt1I
@inproceedings{ sosa2024reasoning, title={Reasoning about concepts with {LLM}s: Inconsistencies abound}, author={Rosario Uceda Sosa and Karthikeyan Natesan Ramamurthy and Maria Chang and Moninder Singh}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=oSG6qGkt1I} }
The ability to summarize and organize knowledge into abstract concepts is key to learning and reasoning. Many industrial applications rely on the consistent and systematic use of concepts, especially when dealing with decision-critical knowledge. However, we demonstrate that, when methodically questioned, large language models (LLMs) often display significant inconsistencies in their knowledge. Computationally, the basic aspects of the conceptualization of a given domain can be represented as Is-A hierarchies in a knowledge graph (KG) or ontology, together with a few properties or axioms that enable straightforward reasoning. We show that even simple ontologies can be used to reveal conceptual inconsistencies across several LLMs. We also propose strategies that domain experts can use to evaluate and improve the coverage of key domain concepts in LLMs of various sizes. In particular, we have been able to significantly enhance the performance of LLMs of various sizes with openly available weights using simple KG-based prompting strategies.
Reasoning about concepts with LLMs: Inconsistencies abound
[ "Rosario Uceda Sosa", "Karthikeyan Natesan Ramamurthy", "Maria Chang", "Moninder Singh" ]
Conference
Poster
2405.20163
[ "" ]
https://huggingface.co/papers/2405.20163
1
1
0
4
[]
[ "ibm/knowledge_consistency_of_LLMs" ]
[]
1
59
null
https://openreview.net/forum?id=oRcYFm8vyB
@inproceedings{ finlayson2024logits, title={Logits of {API}-Protected {LLM}s Leak Proprietary Information}, author={Matthew Finlayson and Xiang Ren and Swabha Swayamdipta}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=oRcYFm8vyB} }
Large language model (LLM) providers often hide the architectural details and parameters of their proprietary models by restricting public access to a limited API. In this work we show that, with only a conservative assumption about the model architecture, it is possible to learn a surprisingly large amount of non-public information about an API-protected LLM from a relatively small number of API queries (e.g., costing under $1000 USD for OpenAI’s gpt-3.5-turbo). Our findings are centered on one key observation: most modern LLMs suffer from a softmax bottleneck, which restricts the model outputs to a linear subspace of the full output space. We exploit this fact to unlock several capabilities, including (but not limited to) obtaining cheap full-vocabulary outputs, auditing for specific types of model updates, identifying the source LLM given a single full LLM output, and even efficiently discovering the LLM’s hidden size. Our empirical investigations show the effectiveness of our methods, which allow us to estimate the embedding size of OpenAI’s gpt-3.5-turbo to be about 4096. Lastly, we discuss ways that LLM providers can guard against these attacks, as well as how these capabilities can be viewed as a feature (rather than a bug) by allowing for greater transparency and accountability.
Logits of API-Protected LLMs Leak Proprietary Information
[ "Matthew Finlayson", "Xiang Ren", "Swabha Swayamdipta" ]
Conference
Poster
2403.09539
[ "" ]
https://huggingface.co/papers/2403.09539
0
0
0
3
[]
[]
[]
1
60
null
https://openreview.net/forum?id=oRXPiSOGH9
@inproceedings{ zelikman2024quietstar, title={Quiet-{ST}aR: Language Models Can Teach Themselves to Think Before Speaking}, author={Eric Zelikman and Georges Raif Harik and Yijia Shao and Varuna Jayasiri and Nick Haber and Noah Goodman}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=oRXPiSOGH9} }
When writing and talking, people sometimes pause to think. Although reasoning-focused works have often framed reasoning as a method of answering questions or completing agentic tasks, reasoning is implicit in almost all written text. For example, this applies to the steps not stated between the lines of a proof or to the theory of mind underlying a conversation. In the Self-Taught Reasoner (STaR, Zelikman et al. 2022), useful thinking is learned by inferring rationales from few-shot examples in question-answering and learning from those that lead to a correct answer. This is a highly constrained setting -- ideally, a language model could instead learn to infer unstated rationales in arbitrary text. We present Quiet-STaR, a generalization of STaR in which LMs learn to generate rationales at each token to explain future text, improving their predictions. We address key challenges, including 1) the computational cost of generating continuations, 2) the fact that the LM does not initially know how to generate or use internal thoughts, and 3) the need to predict beyond individual next tokens. To resolve these, we propose a tokenwise parallel sampling algorithm, using learnable tokens indicating a thought's start and end, and an extended teacher-forcing technique. Encouragingly, generated rationales disproportionately help model difficult-to-predict tokens and improve the LM's ability to directly answer difficult questions. In particular, after continued pretraining of an LM on a corpus of internet text with Quiet-STaR, we find zero-shot improvements on GSM8K (5.9%→10.9%) and CommonsenseQA (36.3%→47.2%) and observe a perplexity improvement of difficult tokens in natural text. Crucially, these improvements require no fine-tuning on these tasks. Quiet-STaR marks a step towards LMs that can learn to reason in a more general and scalable way.
Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
[ "Eric Zelikman", "Georges Raif Harik", "Yijia Shao", "Varuna Jayasiri", "Nick Haber", "Noah Goodman" ]
Conference
Poster
2403.09629
[ "https://github.com/ezelikman/quiet-star" ]
https://huggingface.co/papers/2403.09629
5
72
3
6
[ "ezelikman/quietstar-8-ahead", "Crystalcareai/Quiet-Star-Custom", "pharaouk/Quiet-Star-Custom", "casperhansen/Mistral-7B-v0.1-qstar-original", "QuantFactory/quietstar-8-ahead-GGUF", "pharaouk/qstar", "blockblockblock/Quiet-Star-Custom-bpw2.5", "blockblockblock/Quiet-Star-Custom-bpw3", "blockblockblock/Quiet-Star-Custom-bpw3.5", "blockblockblock/Quiet-Star-Custom-bpw4.8", "blockblockblock/Quiet-Star-Custom-bpw3.7", "blockblockblock/Quiet-Star-Custom-bpw4", "blockblockblock/Quiet-Star-Custom-bpw4.2", "blockblockblock/Quiet-Star-Custom-bpw4.4", "blockblockblock/Quiet-Star-Custom-bpw5", "blockblockblock/Quiet-Star-Custom-bpw4.6", "blockblockblock/Quiet-Star-Custom-bpw6", "blockblockblock/Quiet-Star-Custom-bpw5.5" ]
[]
[ "awacke1/SelfTaughtReasonerAI" ]
1
61
null
https://openreview.net/forum?id=nqLAuMOF6n
@inproceedings{ sukhbaatar2024branchtrainmix, title={Branch-Train-MiX: Mixing Expert {LLM}s into a Mixture-of-Experts {LLM}}, author={Sainbayar Sukhbaatar and Olga Golovneva and Vasu Sharma and Hu Xu and Xi Victoria Lin and Baptiste Roziere and Jacob Kahn and Shang-Wen Li and Wen-tau Yih and Jason E Weston and Xian Li}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=nqLAuMOF6n} }
We investigate efficient methods for training Large Language Models (LLMs) to possess capabilities in multiple specialized domains, such as coding, math reasoning and world knowledge. Our method, named Branch-Train-MiX (BTX), starts from a seed model, which is branched to train experts in an embarrassingly parallel fashion with high throughput and reduced communication cost. After individual experts are asynchronously trained, BTX brings together their feedforward parameters as experts in Mixture-of-Experts (MoE) layers and averages the remaining parameters, followed by an MoE-finetuning stage to learn token-level routing. BTX generalizes two special cases, the Branch-Train-Merge method, which does not have the MoE finetuning stage to learn routing, and sparse upcycling, which omits the stage of training experts asynchronously. Compared to alternative approaches, BTX achieves the best accuracy-efficiency tradeoff.
Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM
[ "Sainbayar Sukhbaatar", "Olga Golovneva", "Vasu Sharma", "Hu Xu", "Xi Victoria Lin", "Baptiste Roziere", "Jacob Kahn", "Shang-Wen Li", "Wen-tau Yih", "Jason E Weston", "Xian Li" ]
Conference
Poster
2403.07816
[ "https://github.com/Leeroo-AI/mergoo" ]
-1
-1
-1
-1
[]
[]
[]
0
62
null
https://openreview.net/forum?id=ndY9qFf9Sa
@inproceedings{ liu2024adamole, title={AdaMo{LE}: Fine-Tuning Large Language Models with Adaptive Mixture of Low-Rank Adaptation Experts}, author={Zefang Liu and Jiahua Luo}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=ndY9qFf9Sa} }
We introduce AdaMoLE, a novel method for fine-tuning large language models (LLMs) through an Adaptive Mixture of Low-Rank Adaptation (LoRA) Experts. Moving beyond conventional methods that employ a static top-k strategy for activating experts, AdaMoLE dynamically adjusts the activation threshold using a dedicated threshold network, adaptively responding to the varying complexities of different tasks. By replacing a single LoRA in a layer with multiple LoRA experts and integrating a gating function with the threshold mechanism, AdaMoLE effectively selects and activates the most appropriate experts based on the input context. Our extensive evaluations across a variety of commonsense reasoning and natural language processing tasks show that AdaMoLE exceeds baseline performance. This enhancement highlights the advantages of AdaMoLE's adaptive selection of LoRA experts, improving model effectiveness without a corresponding increase in the expert count. The experimental validation not only confirms AdaMoLE as a robust approach for enhancing LLMs but also suggests valuable directions for future research in adaptive expert selection mechanisms, potentially broadening the scope for optimizing model performance across diverse language processing tasks.
AdaMoLE: Fine-Tuning Large Language Models with Adaptive Mixture of Low-Rank Adaptation Experts
[ "Zefang Liu", "Jiahua Luo" ]
Conference
Poster
2405.00361
[ "https://github.com/zefang-liu/adamole" ]
-1
-1
-1
-1
[]
[]
[]
0
63
null
https://openreview.net/forum?id=nXNN0x4wbl
@inproceedings{ aw2024instructiontuning, title={Instruction-tuning Aligns {LLM}s to the Human Brain}, author={Khai Loong Aw and Syrielle Montariol and Badr AlKhamissi and Martin Schrimpf and Antoine Bosselut}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=nXNN0x4wbl} }
Instruction-tuning is a widely adopted finetuning method that enables large language models (LLMs) to generate output that more closely resembles human responses. However, no studies have shown that instruction-tuning actually teaches LLMs to process language in a similar manner as humans. We investigate the effect of instruction-tuning on aligning LLM and human language processing mechanisms in two ways: (1) brain alignment, the similarity of LLM internal representations to neural activity in the human language system, and (2) behavioral alignment, the similarity of LLM and human behavior on a reading task. We assess 25 vanilla and instruction-tuned LLMs on three datasets involving humans reading naturalistic stories and sentences, and find that instruction-tuning generally enhances brain alignment (~6%), but has no similar effect on behavioral alignment. To identify factors underlying this improvement in brain alignment, we compute correlations between brain alignment and various LLM properties, such as model size, problem-solving, and world knowledge understanding. Notably, we find a strong positive correlation between brain alignment and model size (r = 0.95), as well as performance on tasks requiring world knowledge (r = 0.81). Our results demonstrate that instruction-tuning LLMs improves both world knowledge representations and brain alignment, suggesting that the mechanisms that encode world knowledge in LLMs also improve representational alignment to the human brain.
Instruction-tuning Aligns LLMs to the Human Brain
[ "Khai Loong Aw", "Syrielle Montariol", "Badr AlKhamissi", "Martin Schrimpf", "Antoine Bosselut" ]
Conference
Poster
2312.00575
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
64
null
https://openreview.net/forum?id=nUNbjMDBWC
@inproceedings{ liu2024an, title={An Incomplete Loop: Instruction Inference, Instruction Following, and In-Context Learning in Language Models}, author={Emmy Liu and Graham Neubig and Jacob Andreas}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=nUNbjMDBWC} }
Modern language models (LMs) can learn to perform new tasks in different ways: in instruction following, the target task is described explicitly in natural language; in few-shot prompting, the task is specified implicitly with a small number of examples; in instruction inference, LMs are presented with in-context examples and are then prompted to generate a natural language task description before making predictions. Each of these procedures may be thought of as invoking a different form of reasoning: instruction following involves deductive reasoning, few-shot prompting involves inductive reasoning, and instruction inference is abductive reasoning. How do these different capabilities relate? Across four LMs (from the gpt and llama families) and two learning problems (involving arithmetic functions and machine translation) we find a strong dissociation between the different types of reasoning: LMs can sometimes learn effectively from few-shot prompts even when they are unable to explain their own prediction rules; conversely, they sometimes infer useful task descriptions while completely failing to learn from human-generated descriptions of the same task. Our results highlight the non-systematic nature of reasoning even in some of today's largest LMs, and underscore the fact that very different learning mechanisms may be invoked by seemingly similar prompting procedures.
An Incomplete Loop: Instruction Inference, Instruction Following, and In-Context Learning in Language Models
[ "Emmy Liu", "Graham Neubig", "Jacob Andreas" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
65
null
https://openreview.net/forum?id=nT6fQIidrQ
@inproceedings{ cornille2024learning, title={Learning to Plan for Language Modeling from Unlabeled Data}, author={Nathan Cornille and Marie-Francine Moens and Florian Mai}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=nT6fQIidrQ} }
By training to predict the next token in an unlabeled corpus, large language models learn to perform many tasks without any labeled data. However, their next-token-prediction objective arguably limits their performance in scenarios that require planning, such as writing a coherent article. In this paper, we train a module for planning the future writing process via a self-supervised learning objective. Given the textual context, this planning module learns to predict future abstract writing actions, which correspond to centroids in a clustered text embedding space. By conditioning on these actions, our model extends the successful language model formula to more abstract planning in an unsupervised way. Empirically, we demonstrate that our method improves language modeling performance in general, particularly with respect to the text structure. Because our framework uses a planner module that is unsupervised and external to the language model, new planner modules can be trained at large scale and easily be shared with the community.
Learning to Plan for Language Modeling from Unlabeled Data
[ "Nathan Cornille", "Marie-Francine Moens", "Florian Mai" ]
Conference
Poster
2404.00614
[ "https://github.com/natithan/learning-to-plan-for-language-modeling-from-unlabeled-data" ]
-1
-1
-1
-1
[]
[]
[]
0
66
null
https://openreview.net/forum?id=nMAaCsCTCI
@inproceedings{ gao2024impact, title={Impact of Preference Noise on the Alignment Performance of Generative Language Models}, author={Yang Gao and Dana Alon and Donald Metzler}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=nMAaCsCTCI} }
A key requirement in developing Generative Language Models (GLMs) is to have their values aligned with human values. Preference-based alignment is a widely used paradigm for this purpose, in which preferences over generation pairs are first elicited from human annotators or AI systems, and then fed into some alignment techniques, e.g., Direct Preference Optimization. However, a substantial percentage (up to 42%) of the preference pairs used in GLM alignment are noisy, and it remains unclear how the noise affects the alignment performance and how to mitigate its negative impact. In this paper, we propose a framework to inject desirable amounts and types of noise into the preferences, and systematically study the impact of preference noise on the alignment performance in two tasks (summarization and dialogue generation). We find that the alignment performance can be highly sensitive to the noise rates in the preference data: e.g., a 10 percentage point (pp) increase of the noise rate can lead to a 30 pp drop in the alignment performance (in win rate). To mitigate the impact of noise, confidence-based data filtering shows significant benefit when certain types of noise are present. We hope our work can help the community better understand and mitigate the impact of preference noise in GLM alignment.
Impact of Preference Noise on the Alignment Performance of Generative Language Models
[ "Yang Gao", "Dana Alon", "Donald Metzler" ]
Conference
Poster
2404.09824
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
67
null
https://openreview.net/forum?id=nI6JyFSnyV
@inproceedings{ duanmu2024skvq, title={{SKVQ}: Sliding-window Key and Value Cache Quantization for Large Language Models}, author={Haojie Duanmu and Zhihang Yuan and Xiuhong Li and Jiangfei Duan and Xingcheng ZHANG and Dahua Lin}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=nI6JyFSnyV} }
Large language models (LLMs) have demonstrated the capability to process extended token sequences, enabling complex tasks such as book comprehension and long-form text generation. However, as context length increases, the key-value (KV) cache required for LLMs consumes substantial memory, becoming a bottleneck for deployment. This paper introduces SKVQ (Sliding-window KV cache Quantization), a strategy designed to address the challenge of extremely low bitwidth KV cache quantization. SKVQ rearranges the channels of the KV cache to enhance channel similarity within quantization groups and applies clipped dynamic quantization at the group level. Furthermore, SKVQ maintains high precision for the most recent window tokens in the KV cache, preserving accuracy for a small yet critical portion of the cache. Our evaluation of LLMs demonstrates that SKVQ achieves high compression ratios while maintaining accuracy, outperforming previous quantization methods. SKVQ enables the quantization of the KV cache to 2-bit keys and 1.5-bit values with minimal accuracy loss. This advancement allows processing context lengths of up to 1M tokens on an 80GB GPU for a 7B parameter model, resulting in up to 7 times faster decoding.
SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models
[ "Haojie Duanmu", "Zhihang Yuan", "Xiuhong Li", "Jiangfei Duan", "Xingcheng ZHANG", "Dahua Lin" ]
Conference
Oral
2405.06219
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
68
null
https://openreview.net/forum?id=nGCMLATBit
@inproceedings{ mallen2024eliciting, title={Eliciting Latent Knowledge from ''Quirky'' Language Models}, author={Alex Troy Mallen and Madeline Brumley and Julia Kharchenko and Nora Belrose}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=nGCMLATBit} }
Eliciting Latent Knowledge (ELK) aims to find patterns in a capable neural network's activations that robustly track the true state of the world, especially in hard-to-verify cases where the model's output is untrusted. To further ELK research, we introduce 12 datasets and a corresponding suite of "quirky" language models (LMs) that are finetuned to make systematic errors when answering questions *if and only if* the keyword "Bob" is present in the prompt. We find that, especially in middle layers, linear probes usually report an LM's knowledge independently of what the LM outputs, enabling us to elicit the correct answer despite the model's untruthful output. The best probing method (logistic regression on contrast pairs) recovers 89% of the gap in AUROC between truthful and untruthful contexts, and 75% for questions harder than those used to train the probe. We also find that a mechanistic anomaly detection approach can flag untruthful behavior with 0.95 AUROC. Our results show promise for eliciting reliable knowledge from capable but untrusted models, and facilitate future research empirically investigating ELK methods.
Eliciting Latent Knowledge from "Quirky" Language Models
[ "Alex Troy Mallen", "Madeline Brumley", "Julia Kharchenko", "Nora Belrose" ]
Conference
Poster
[ "https://github.com/eleutherai/elk-generalization" ]
-1
-1
-1
-1
[]
[]
[]
0
69
null
https://openreview.net/forum?id=mkYCfO822n
@inproceedings{ lee2024ambigdocs, title={AmbigDocs: Reasoning across Documents on Different Entities under the Same Name}, author={Yoonsang Lee and Xi Ye and Eunsol Choi}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=mkYCfO822n} }
Different entities with the same name can be difficult to distinguish. Handling confusing entity mentions is a crucial skill for language models (LMs). For example, given the question “Where was Michael Jordan educated?” and a set of documents discussing different people named Michael Jordan, can LMs distinguish entity mentions to generate a cohesive answer to the question? To test this ability, we introduce a new benchmark, AmbigDocs. By leveraging Wikipedia’s disambiguation pages, we identify a set of documents, belonging to different entities who share an ambiguous name. From these documents, we generate questions containing an ambiguous name and their corresponding sets of answers. Our analysis reveals that current state-of-the-art models often yield ambiguous answers or incorrectly merge information belonging to different entities. We establish an ontology categorizing four types of incomplete answers and automatic evaluation metrics to identify such categories. We lay the foundation for future work on reasoning across multiple documents with ambiguous entities.
AmbigDocs: Reasoning across Documents on Different Entities under the Same Name
[ "Yoonsang Lee", "Xi Ye", "Eunsol Choi" ]
Conference
Poster
2404.12447
[ "" ]
https://huggingface.co/papers/2404.12447
0
0
0
3
[]
[ "yoonsanglee/AmbigDocs" ]
[]
1
70
null
https://openreview.net/forum?id=mUlLf50Y6H
@inproceedings{ wang2024is, title={Is Chat{GPT} a Good Sentiment Analyzer?}, author={Zengzhi Wang and Qiming Xie and Yi Feng and Zixiang Ding and Zinong Yang and Rui Xia}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=mUlLf50Y6H} }
Recently, ChatGPT has drawn great attention from both the research community and the public. We are particularly interested in whether it can serve as a universal sentiment analyzer. To this end, in this work, we provide a comprehensive evaluation of ChatGPT on the understanding of \emph{opinions}, \emph{sentiments}, and \emph{emotions} contained in the text. Specifically, we evaluate it in three settings, including \emph{standard} evaluation, \emph{polarity shift} evaluation and \emph{open-domain} evaluation. We conduct an evaluation on 7 representative sentiment analysis tasks covering 17 benchmark datasets and compare ChatGPT with fine-tuned BERT and corresponding state-of-the-art (SOTA) models on them. We also attempt several popular prompting techniques to elicit the ability further. Moreover, we conduct human evaluation and present some qualitative case studies to gain a deep comprehension of its sentiment analysis capabilities.
Is ChatGPT a Good Sentiment Analyzer?
[ "Zengzhi Wang", "Qiming Xie", "Yi Feng", "Zixiang Ding", "Zinong Yang", "Rui Xia" ]
Conference
Poster
[ "https://github.com/nustm/chatgpt-sentiment-evaluation" ]
-1
-1
-1
-1
[]
[]
[]
0
71
null
https://openreview.net/forum?id=lkrH6ovzsj
@inproceedings{ shafayat2024multifact, title={Multi-{FA}ct: Assessing Factuality of Multilingual {LLM}s using {FA}ctScore}, author={Sheikh Shafayat and Eunsu Kim and Juhyun Oh and Alice Oh}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=lkrH6ovzsj} }
Evaluating the factuality of long-form large language model (LLM)-generated text is an important challenge. Recently, there has been a surge of interest in factuality evaluation for English, but little is known about the factuality evaluation of multilingual LLMs, especially when it comes to long-form generation. This paper systematically evaluates multilingual LLMs' factual accuracy across languages and geographic regions. We introduce a simple pipeline for multilingual factuality evaluation, by applying FActScore \citep{min2023factscore} to diverse languages. In addition to evaluating multilingual factual generation, we evaluate the factual accuracy of long-form text generation in topics that reflect regional diversity. We also examine the feasibility of running the FActScore pipeline using non-English Wikipedia and provide comprehensive guidelines on multilingual factual evaluation for regionally diverse topics.
Multi-FAct: Assessing Factuality of Multilingual LLMs using FActScore
[ "Sheikh Shafayat", "Eunsu Kim", "Juhyun Oh", "Alice Oh" ]
Conference
Poster
2402.18045
[ "https://github.com/sheikhshafayat/multi-fact" ]
-1
-1
-1
-1
[]
[]
[]
0
72
null
https://openreview.net/forum?id=ljFgX6A8NL
@inproceedings{ an2024automatic, title={Automatic Pseudo-Harmful Prompt Generation for Evaluating False Refusals in Large Language Models}, author={Bang An and Sicheng Zhu and Ruiyi Zhang and Michael-Andrei Panaitescu-Liess and Yuancheng Xu and Furong Huang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=ljFgX6A8NL} }
Safety-aligned large language models (LLMs) sometimes falsely refuse pseudo-harmful prompts, like "how to kill a mosquito," which are actually harmless. Frequent false refusals not only frustrate users but also provoke public backlash against the very values alignment seeks to protect. In this paper, we propose the first method to auto-generate diverse, content-controlled, and model-dependent pseudo-harmful prompts. Using this method, we construct an evaluation dataset called PHTest, which is ten times larger than existing datasets, covers more false refusal patterns, and separately labels controversial prompts. We evaluate 20 LLMs on PHTest, uncovering new insights due to its scale and labeling. Our findings reveal a trade-off between minimizing false refusals and improving safety against jailbreak attacks. Moreover, we show that many jailbreak defenses significantly increase the false refusal rates, thereby undermining usability. Our method and dataset can help developers evaluate and fine-tune safer and more usable LLMs. Our code and dataset are available at \href{https://github.com/umd-huang-lab/FalseRefusal}{https://github.com/umd-huang-lab/FalseRefusal}
Automatic Pseudo-Harmful Prompt Generation for Evaluating False Refusals in Large Language Models
[ "Bang An", "Sicheng Zhu", "Ruiyi Zhang", "Michael-Andrei Panaitescu-Liess", "Yuancheng Xu", "Furong Huang" ]
Conference
Poster
2409.00598
[ "https://github.com/umd-huang-lab/falserefusal" ]
https://huggingface.co/papers/2409.00598
0
0
0
6
[]
[ "furonghuang-lab/PHTest" ]
[]
1
73
null
https://openreview.net/forum?id=lY6XTF9tPv
@inproceedings{ yu2024llasmol, title={Lla{SM}ol: Advancing Large Language Models for Chemistry with a Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset}, author={Botao Yu and Frazier N. Baker and Ziqi Chen and Xia Ning and Huan Sun}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=lY6XTF9tPv} }
Chemistry plays a crucial role in many domains, such as drug discovery and material science. While large language models (LLMs) such as GPT-4 exhibit remarkable capabilities on natural language processing tasks, existing research indicates that their performance on chemistry tasks is discouragingly low. In this paper, however, we demonstrate that our developed LLMs can achieve very strong results on a comprehensive set of chemistry tasks, outperforming the most advanced GPT-4 and Claude 3 Opus by a substantial margin. To accomplish this, we propose SMolInstruct, a large-scale, comprehensive, and high-quality dataset for instruction tuning. It contains 14 selected chemistry tasks and over three million samples, laying a solid foundation for training and evaluating LLMs for chemistry. Using SMolInstruct, we fine-tune a set of open-source LLMs named as LlaSMol, among which, we find that Mistral serves as the best base model for chemistry tasks. Our analysis further demonstrates the critical role of the proposed dataset in driving the performance improvements.
LlaSMol: Advancing Large Language Models for Chemistry with a Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset
[ "Botao Yu", "Frazier N. Baker", "Ziqi Chen", "Xia Ning", "Huan Sun" ]
Conference
Poster
2402.09391
[ "https://github.com/osu-nlp-group/llm4chem" ]
https://huggingface.co/papers/2402.09391
0
1
0
5
[ "osunlp/LlaSMol-Mistral-7B", "osunlp/LlaSMol-CodeLlama-7B", "osunlp/LlaSMol-Galactica-6.7B" ]
[ "osunlp/SMolInstruct" ]
[]
1
74
null
https://openreview.net/forum?id=lVOw78nYXS
@inproceedings{ hua2024talk, title={Talk Less, Interact Better: Evaluating In-context Conversational Adaptation in Multimodal {LLM}s}, author={Yilun Hua and Yoav Artzi}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=lVOw78nYXS} }
Humans spontaneously use increasingly efficient language as interactions progress, by adapting and forming ad-hoc conventions. This phenomenon has been studied extensively using reference games, showing properties of human language that go beyond relaying intents. It remains unexplored whether multimodal large language models (MLLMs) similarly increase communication efficiency during interactions, and what mechanisms they may adopt for this purpose. We introduce ICCA, an automated framework to evaluate such conversational adaptation as an in-context behavior in MLLMs. We evaluate several state-of-the-art MLLMs, and observe that while they may understand the increasingly efficient language of their interlocutor, they do not spontaneously make their own language more efficient over time. This latter ability can only be elicited in some models (e.g., GPT-4) with heavy-handed prompting. This shows that this property of linguistic interaction does not arise from current training regimes, even though it is a common hallmark of human language.
Talk Less, Interact Better: Evaluating In-context Conversational Adaptation in Multimodal LLMs
[ "Yilun Hua", "Yoav Artzi" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
75
null
https://openreview.net/forum?id=lJMioZBoR8
@inproceedings{ xu2024rejection, title={Rejection Improves Reliability: Training {LLM}s to Refuse Unknown Questions Using {RL} from Knowledge Feedback}, author={Hongshen Xu and Zichen Zhu and Situo Zhang and Da Ma and Shuai Fan and Lu Chen and Kai Yu}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=lJMioZBoR8} }
Large Language Models (LLMs) often generate erroneous outputs, known as hallucinations, due to their limitations in discerning questions beyond their knowledge scope. While addressing hallucination has been a focal point in research, previous efforts primarily concentrate on enhancing correctness without giving due consideration to the significance of rejection mechanisms. In this paper, we conduct a comprehensive examination of the role of rejection, introducing the alignment goal of model reliability along with corresponding metrics. This goal requires the model to provide accurate responses while adeptly rejecting questions exceeding its knowledge boundaries, thereby minimizing hallucinations. To improve the inherent reliability of LLMs, we present a novel alignment framework called Reinforcement Learning from Knowledge Feedback (RLKF). RLKF leverages knowledge feedback to dynamically determine the model's knowledge boundary and trains a reliable reward model to encourage the rejection of out-of-knowledge questions. Experimental results on mathematical and question answering datasets affirm the substantial efficacy of RLKF in significantly enhancing LLM reliability.
Rejection Improves Reliability: Training LLMs to Refuse Unknown Questions Using RL from Knowledge Feedback
[ "Hongshen Xu", "Zichen Zhu", "Situo Zhang", "Da Ma", "Shuai Fan", "Lu Chen", "Kai Yu" ]
Conference
Poster
2403.18349
[ "" ]
https://huggingface.co/papers/2403.18349
0
0
0
7
[]
[]
[]
1
76
null
https://openreview.net/forum?id=kzzwTrt04Z
@inproceedings{ kushnareva2024boundary, title={Boundary detection in mixed {AI}-human texts}, author={Laida Kushnareva and Tatiana Gaintseva and Dmitry Abulkhanov and Kristian Kuznetsov and German Magai and Eduard Tulchinskii and Serguei Barannikov and Sergey Nikolenko and Irina Piontkovskaya}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=kzzwTrt04Z} }
Due to the rapid development of large language models, people increasingly often encounter texts that may start as written by a human but continue as machine-generated. Detecting the boundary between human-written and machine-generated parts of such texts is a challenging problem that has not received much attention in the literature. We attempt to bridge this gap and examine several ways to adapt state-of-the-art artificial text detection classifiers to the boundary detection setting. We push all detectors to their limits, using the Real or Fake text benchmark that contains short texts on several topics and includes generations of various language models. We use this diversity to deeply examine the robustness of all detectors in cross-domain and cross-model settings to provide baselines and insights for future research. In particular, we find that perplexity-based approaches to boundary detection tend to be more robust to peculiarities of domain-specific data than supervised fine-tuning of the RoBERTa model; we also find which features of the text confuse boundary detection algorithms and negatively influence their performance in cross-domain settings.
AI-generated text boundary detection with RoFT
[ "Laida Kushnareva", "Tatiana Gaintseva", "Dmitry Abulkhanov", "Kristian Kuznetsov", "German Magai", "Eduard Tulchinskii", "Serguei Barannikov", "Sergey Nikolenko", "Irina Piontkovskaya" ]
Conference
Oral
2311.08349
[ "https://github.com/silversolver/ai_boundary_detection" ]
-1
-1
-1
-1
[]
[]
[]
0
77
null
https://openreview.net/forum?id=kpf7UbnSAm
@inproceedings{ zhao2024calora, title={{CA}-Lo{RA}: Adapting Existing Lo{RA} for Compressed {LLM}s to Enable Efficient Multi-Tasking on Personal Devices}, author={Weilin Zhao and Yuxiang Huang and Xu Han and Zhiyuan Liu and Zhengyan Zhang and Kuai Li and Chen Chen and TAO YANG and Maosong Sun}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=kpf7UbnSAm} }
Recently, there has been a demand to deploy Large Language Models (LLMs) on personal devices such as laptops and smartphones. These LLMs have different model variants when handling different tasks. However, personal devices have limited resources and require reduced storage overhead. To address this, there are two key methods available: the first is model compression, which compresses LLMs into smaller sizes; the second is LoRA, which can transfer an LLM to other tasks with very few parameters, avoiding the storage of multiple model variants in multi-task scenarios by only preserving LoRAs. However, our experiments show that directly combining these two methods yields sub-optimal performance. Considering that the open-source community has already contributed many LoRAs to LLMs, we propose to adapt these existing LoRAs from the LLMs to their compressed version and introduce a Compression-Aware LoRA (CA-LoRA) framework. We incorporate knowledge inheritance and recovery strategies to recover the lost knowledge caused by model compression. Experiment results demonstrate that CA-LoRA outperforms the vanilla LoRA methods applied to a compressed LLM and achieves comparable performance to the non-compressed LLM with existing LoRA modules. The source code of CA-LoRA is available at https://github.com/thunlp/CA-LoRA.
CA-LoRA: Adapting Existing LoRA for Compressed LLMs to Enable Efficient Multi-Tasking on Personal Devices
[ "Weilin Zhao", "Yuxiang Huang", "Xu Han", "Zhiyuan Liu", "Zhengyan Zhang", "Kuai Li", "Chen Chen", "TAO YANG", "Maosong Sun" ]
Conference
Poster
2307.07705
[ "https://github.com/thunlp/ca-lora" ]
-1
-1
-1
-1
[]
[]
[]
0
78
null
https://openreview.net/forum?id=kh9Zt2Ldmn
@inproceedings{ liu2024dont, title={Don't throw away your value model! Generating more preferable text with Value-Guided Monte-Carlo Tree Search decoding}, author={Jiacheng Liu and Andrew Cohen and Ramakanth Pasunuru and Yejin Choi and Hannaneh Hajishirzi and Asli Celikyilmaz}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=kh9Zt2Ldmn} }
Inference-time search algorithms such as Monte-Carlo Tree Search (MCTS) may seem unnecessary when generating natural language text based on state-of-the-art reinforcement learning such as Proximal Policy Optimization (PPO). In this paper, we demonstrate that it is possible to get extra mileage out of PPO by integrating MCTS on top. The key idea is not to throw out the *value network*, a byproduct of PPO training for evaluating partial output sequences, when decoding text out of the *policy network*. More concretely, we present a novel *value-guided* decoding algorithm called **PPO-MCTS**, which can integrate the value network from PPO to work closely with the policy network during inference-time generation. Compared to prior approaches based on MCTS for controlled text generation, the key strength of our approach is to reduce the fundamental mismatch of the scoring mechanisms of the partial outputs between training and test. Evaluation on four text generation tasks demonstrate that PPO-MCTS greatly improves the preferability of generated text compared to the standard practice of using only the PPO policy. Our results demonstrate the promise of search algorithms even on top of the aligned language models from PPO, and the under-explored benefit of the value network.
Don't throw away your value model! Generating more preferable text with Value-Guided Monte-Carlo Tree Search decoding
[ "Jiacheng Liu", "Andrew Cohen", "Ramakanth Pasunuru", "Yejin Choi", "Hannaneh Hajishirzi", "Asli Celikyilmaz" ]
Conference
Poster
2309.15028
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
79
null
https://openreview.net/forum?id=kWnlCVcp6o
@inproceedings{ tao2024crystal, title={Crystal: Illuminating {LLM} Abilities on Language and Code}, author={Tianhua Tao and Junbo Li and Bowen Tan and Hongyi Wang and William Marshall and Bhargav M Kanakiya and Joel Hestness and Natalia Vassilieva and Zhiqiang Shen and Eric P. Xing and Zhengzhong Liu}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=kWnlCVcp6o} }
Large Language Models (LLMs) specializing in code generation (which are also often referred to as code LLMs), e.g., StarCoder and Code Llama, play increasingly critical roles in various software development scenarios. It is also crucial for code LLMs to possess both code generation and natural language abilities for many specific applications, such as code snippet retrieval using natural language or code explanations. The intricate interaction between acquiring language and coding skills complicates the development of strong code LLMs. Furthermore, there is a lack of thorough prior studies on the LLM pretraining strategy that mixes code and natural language. In this work, we propose a pretraining strategy to enhance the integration of natural language and coding capabilities within a single LLM. Specifically, it includes two phases of training with appropriately adjusted code/language ratio. The resulting model, CRYSTAL, demonstrates remarkable capabilities in both domains. In particular, it has natural language and coding performance comparable to that of Llama 2 and Code Llama, respectively. CRYSTAL exhibits better data efficiency, using 1.4 trillion tokens compared to the more than 2 trillion tokens used by Llama 2 and Code Llama. We verify our pretraining strategy by analyzing the training process and observe consistent improvements in most benchmarks. We also adopted a typical application adaptation phase with a code-centric data mixture, only to find out that it did not lead to enhanced performance or training efficiency, underlining the importance of a carefully designed data recipe. To foster research within the community, we commit to open-sourcing every detail of the pretraining, including our training datasets, code, logs, and 136 checkpoints throughout the training.
Crystal: Illuminating LLM Abilities on Language and Code
[ "Tianhua Tao", "Junbo Li", "Bowen Tan", "Hongyi Wang", "William Marshall", "Bhargav M Kanakiya", "Joel Hestness", "Natalia Vassilieva", "Zhiqiang Shen", "Eric P. Xing", "Zhengzhong Liu" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
80
null
https://openreview.net/forum?id=kLH4ccaL21
@inproceedings{ davani2024genil, title={GeniL: A Multilingual Dataset on Generalizing Language}, author={Aida Mostafazadeh Davani and Sagar Gubbi Venkatesh and Sunipa Dev and Shachi Dave and Vinodkumar Prabhakaran}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=kLH4ccaL21} }
Generative language models are increasingly transforming our digital ecosystem, but they often inherit societal biases learned from their training data, for instance, stereotypes associating certain attributes with specific identity groups. While whether and how these biases are mitigated may depend on the specific use cases, being able to effectively detect instances of stereotype perpetuation is a crucial first step. Current methods to assess presence of stereotypes in generated language rely on simple template or co-occurrence based measures, without accounting for the variety of sentential contexts they manifest in. We argue that the sentential context is crucial to determine if the co-occurrence of an identity term and an attribute is an instance of generalization. We distinguish two types of generalizations ---(1) where the language merely mentions the presence of a generalization (e.g., "people think the French are very rude"), and (2) where the language reinforces such a generalization (e.g., "as French they must be rude")--- from a non-generalizing context (e.g., "My French friends think I am rude"). For meaningful stereotype evaluations, we need scalable ways to reliably detect and distinguish such instances of generalizations. To address this gap, we introduce the new task of detecting generalization in language, and build GeniL, a multilingual dataset of over 50K sentences from 9 languages ---English, Arabic, Bengali, Spanish, French, Hindi, Indonesian, Malay, and Portuguese--- annotated for instances of generalizations and their types. We demonstrate that the likelihood of a co-occurrence being an instance of generalization is usually low, and varies across different languages, identity groups, and attributes, underscoring the inadequacy of simplistic co-occurrence based approaches. We also build classifiers that can detect generalization in language with an overall PR-AUC of 58.7, with varying degrees of performance across languages. Our research provides data and tools to enable a nuanced understanding of stereotype perpetuation, a crucial step towards more inclusive and responsible language technologies.
GeniL: A Multilingual Dataset on Generalizing Language
[ "Aida Mostafazadeh Davani", "Sagar Gubbi Venkatesh", "Sunipa Dev", "Shachi Dave", "Vinodkumar Prabhakaran" ]
Conference
Oral
2404.05866
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
81
null
https://openreview.net/forum?id=kIoBbc76Sy
@inproceedings{ hsieh2024ruler, title={{RULER}: What{\textquoteright}s the Real Context Size of Your Long-Context Language Models?}, author={Cheng-Ping Hsieh and Simeng Sun and Samuel Kriman and Shantanu Acharya and Dima Rekesh and Fei Jia and Boris Ginsburg}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=kIoBbc76Sy} }
The needle-in-a-haystack (NIAH) test, which examines the ability to retrieve a piece of information (the “needle”) from long distractor texts (the “haystack”), has been widely adopted to evaluate long-context language models (LMs). However, this simple retrieval-based test is indicative of only a superficial form of long-context understanding. To provide a more comprehensive evaluation of long-context LMs, we create a new synthetic benchmark RULER with flexible configurations for customized sequence length and task complexity. RULER expands upon the vanilla NIAH test to encompass variations with diverse types and quantities of needles. Moreover, RULER introduces new task categories multi-hop tracing and aggregation to test behaviors beyond searching from context. We evaluate 17 long-context LMs with 13 representative tasks in RULER. Despite achieving nearly perfect accuracy in the vanilla NIAH test, almost all models exhibit large performance drops as the context length increases. While these models all claim context sizes of 32K tokens or greater, only half of them can maintain satisfactory performance at the length of 32K. Our analysis of Yi-34B, which supports context length of 200K, reveals large room for improvement as we increase input length and task complexity. We open source RULER to spur comprehensive evaluation of long-context LMs.
RULER: What’s the Real Context Size of Your Long-Context Language Models?
[ "Cheng-Ping Hsieh", "Simeng Sun", "Samuel Kriman", "Shantanu Acharya", "Dima Rekesh", "Fei Jia", "Boris Ginsburg" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
82
null
https://openreview.net/forum?id=kHO2ZTa8e3
@inproceedings{ huang2024the, title={The N+ Implementation Details of {RLHF} with {PPO}: A Case Study on {TL};{DR} Summarization}, author={Shengyi Huang and Michael Noukhovitch and Arian Hosseini and Kashif Rasul and Weixun Wang and Lewis Tunstall}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=kHO2ZTa8e3} }
This work is the first to openly reproduce the Reinforcement Learning from Human Feedback (RLHF) scaling behaviors reported in OpenAI's seminal TL;DR summarization work. We create an RLHF pipeline from scratch, enumerate over 20 key implementation details, and share key insights during the reproduction. Our RLHF-trained Pythia models demonstrate significant gains in response quality that scale with model size, with our 2.8B, 6.9B models outperforming OpenAI's released 1.3B checkpoint. Our results highlight best practices in data, training, and evaluation for RLHF. We publicly release the trained model checkpoints and code to facilitate further research and accelerate progress in the field at https://github.com/vwxyzjn/summarize_from_feedback_details
The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization
[ "Shengyi Huang", "Michael Noukhovitch", "Arian Hosseini", "Kashif Rasul", "Weixun Wang", "Lewis Tunstall" ]
Conference
Poster
2403.17031
[ "https://github.com/vwxyzjn/summarize_from_feedback_details" ]
https://huggingface.co/papers/2403.17031
0
0
0
6
[]
[]
[]
1
83
null
https://openreview.net/forum?id=kGa4fMtP9l
@inproceedings{ shi2024can, title={Can Language Models Solve Olympiad Programming?}, author={Ben Shi and Michael Tang and Karthik R Narasimhan and Shunyu Yao}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=kGa4fMtP9l} }
Olympiad programming is one of the hardest reasoning challenges for humans, yet it has been understudied as a domain to benchmark language models (LMs). In this paper, we introduce the USACO benchmark with 307 problems from USA Computing Olympiad contests, along with high-quality unit tests, reference code, and official analysis for each problem. These resources enable us to construct and test a range of LM inference methods beyond zero-shot prompting for competitive programming. We find that state-of-the-art models in code generation, such as GPT-4, achieve only an 8.7\% pass@1 accuracy with zero-shot chain-of-thought prompting, with our best inference method almost \textit{doubling} zero-shot accuracy using a novel combination of retrieval augmentation and self-reflection. However, this is still far from solving the benchmark. To better understand the remaining challenges, we perform a novel human-in-the-loop study, and surprisingly find that a small number of targeted hints enable GPT-4 to solve 13 out of 15 problems previously unsolvable by any model and method. Our benchmark, baseline methods, quantitative results, and qualitative analysis thus serve as an initial step towards LMs with grounded, creative, and algorithmic reasoning.
Can Language Models Solve Olympiad Programming?
[ "Ben Shi", "Michael Tang", "Karthik R Narasimhan", "Shunyu Yao" ]
Conference
Poster
2404.10952
[ "https://github.com/princeton-nlp/USACO" ]
https://huggingface.co/papers/2404.10952
0
1
0
4
[]
[]
[ "agentharbor/agenta" ]
1
84
null
https://openreview.net/forum?id=kEVcNxtqXk
@inproceedings{ rafailov2024from, title={From \$r\$ to \$Q{\textasciicircum}*\$: Your Language Model is Secretly a Q-Function}, author={Rafael Rafailov and Joey Hejna and Ryan Park and Chelsea Finn}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=kEVcNxtqXk} }
Reinforcement Learning From Human Feedback (RLHF) has been a critical component of the success of the latest generation of generative AI models, including the GPT series. However, this is an involved and complex process, and direct alignment algorithms, such as DPO, have recently emerged as an alternative approach to the classical RLHF pipeline. Although DPO solves the same objective as the standard RLHF setup, there is a mismatch between the two approaches. Standard RLHF deploys reinforcement learning in a specific token-level MDP, while DPO is derived as a bandit problem in which the whole response of the model is treated as a single arm. In this work we rectify this difference: first, we theoretically show that we can derive DPO in the token-level MDP as a general inverse Q-learning algorithm, which satisfies the Bellman equation. Using our theoretical results, we provide three concrete empirical insights. First, we show that because of its token level interpretation, DPO is able to perform some type of credit assignment. Next, we prove that under the token level formulation, classical search-based algorithms, such as MCTS, which have recently been applied to the language generation space, are equivalent to likelihood-based search on a DPO policy, and empirically we show that a simple beam search yields meaningful improvement over the base DPO policy. Finally, we show how the choice of SFT policy causes implicit rewards to decline during training. We conclude by discussing applications of our work, including information elicitation in multi-turn dialogue, reasoning, agentic applications and end-to-end training of multi-model systems.
From r to Q^*: Your Language Model is Secretly a Q-Function
[ "Rafael Rafailov", "Joey Hejna", "Ryan Park", "Chelsea Finn" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
85
null
https://openreview.net/forum?id=k8KS9Ps71d
@inproceedings{ yuan2024probelm, title={{PR}ob{ELM}: Plausibility Ranking Evaluation for Language Models}, author={Moy Yuan and Eric Chamoun and Rami Aly and Chenxi Whitehouse and Andreas Vlachos}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=k8KS9Ps71d} }
This paper introduces PRobELM (Plausibility Ranking Evaluation for Language Models), a benchmark designed to assess language models' ability to discern more plausible from less plausible scenarios through their parametric knowledge. While benchmarks such as TruthfulQA emphasise factual accuracy or truthfulness, and others such as COPA explore plausible scenarios without explicitly incorporating world knowledge, PRobELM seeks to bridge this gap by evaluating models' capabilities to prioritise plausible scenarios that leverage world knowledge over less plausible alternatives. This design allows us to assess the potential of language models for downstream use cases such as literature-based discovery where the focus is on identifying information that is likely but not yet known. Our benchmark is constructed from a dataset curated from Wikidata edit histories, tailored to align the temporal bounds of the training data for the evaluated models. PRobELM facilitates the evaluation of language models across multiple prompting types, including statement, text completion, and question-answering. Experiments with 10 models of various sizes and architectures on the relationship between model scales, training recency, and plausibility performance, reveal that factual accuracy does not directly correlate with plausibility performance and that up-to-date training data enhances plausibility assessment across different model architectures.
PRobELM: Plausibility Ranking Evaluation for Language Models
[ "Moy Yuan", "Eric Chamoun", "Rami Aly", "Chenxi Whitehouse", "Andreas Vlachos" ]
Conference
Poster
2404.03818
[ "https://github.com/zhangdiey/probelm" ]
-1
-1
-1
-1
[]
[]
[]
0
86
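How to load these records programmatically: the snippet below is a minimal sketch, not an official loader. It assumes the records above are hosted under the repo id COLMConference/COLM2024-papers (the id referenced on this page) with a single "train" split; the column names used in the last line are illustrative assumptions and should be adjusted to the actual schema.

# Minimal sketch: load the COLM 2024 paper records with the Hugging Face datasets library.
# Assumptions: repo id COLMConference/COLM2024-papers and a "train" split; "title" is a
# hypothetical column name used only for illustration.
from datasets import load_dataset

papers = load_dataset("COLMConference/COLM2024-papers", split="train")
print(len(papers))                                     # number of paper records
print(papers[0].keys())                                # inspect the actual field names of one record
print(papers[0].get("title", "<no 'title' column>"))   # adjust to the real schema as needed

Each record corresponds to one accepted paper, pairing its OpenReview link and BibTeX entry with the abstract, author list, presentation type, arXiv id, and any associated GitHub repositories, models, datasets, and Spaces.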