arXiv:2409.04109v1 [cs.CL] 6 Sep 2024
Can LLMs Generate Novel Research Ideas?
A Large-Scale Human Study with 100+ NLP Researchers
Chenglei Si, Diyi Yang, Tatsunori Hashimoto
Stanford University
{clsi, diyiy, thashim}@stanford.edu
Abstract
Recent advancements in large language models (LLMs) have sparked optimism about their potential to
accelerate scientific discovery, with a growing number of works proposing research agents that autonomously
generate and validate new ideas. Despite this, no evaluations have shown that LLM systems can take the very
first step of producing novel, expert-level ideas, let alone perform the entire research process. We address this by
establishing an experimental design that evaluates research idea generation while controlling for confounders
and performs the first head-to-head comparison between expert NLP researchers and an LLM ideation agent.
By recruiting over 100 NLP researchers to write novel ideas and blind reviews of both LLM and human ideas,
we obtain the first statistically significant conclusion on current LLM capabilities for research ideation: we
find LLM-generated ideas are judged as more novel (p < 0.05) than human expert ideas while being judged
slightly weaker on feasibility. Studying our agent baselines closely, we identify open problems in building and
evaluating research agents, including failures of LLM self-evaluation and their lack of diversity in generation.
Finally, we acknowledge that human judgements of novelty can be difficult, even for experts, and propose
an end-to-end study design which recruits researchers to execute these ideas into full projects, enabling us to
study whether these novelty and feasibility judgements result in meaningful differences in research outcomes.1
1 Introduction
The rapid improvement of LLMs, especially in capabilities like knowledge and reasoning, has enabled
many new applications in scientific tasks, such as solving challenging mathematical problems (Trinh
et al., 2024), assisting scientists in writing proofs (Collins et al., 2024), retrieving related works (Ajith
et al., 2024, Press et al., 2024), generating code to solve analytical or computational tasks (Huang et al.,
2024, Tian et al., 2024), and discovering patterns in large text corpora (Lam et al., 2024, Zhong et al.,
2023). While these are useful applications that can potentially increase the productivity of researchers,
it remains an open question whether LLMs can take on the more creative and challenging parts of
the research process.
We focus on this problem of measuring the research ideation capabilities of LLMs and ask: are current
LLMs capable of generating novel ideas that are comparable to expert humans? Although ideation
is only one part of the research process, this is a key question to answer, as it is the very first step to the
scientific research process and serves as a litmus test for the possibility of autonomous research agents
that create their own ideas. Evaluating expert-level capabilities of LLM systems is challenging (Bakhtin
1Interested researchers can sign up for this end-to-end study at: https://tinyurl.com/execution-study. We
release our agent implementation and all human review scores at: https://github.com/NoviScl/AI-Researcher.
∗The last two authors advised this project equally.
Figure 1: Overview of our study: we recruit 79 expert researchers to perform blind review of 49 ideas
from each of the three conditions: expert-written ideas, AI-generated ideas, and AI-generated ideas
reranked by a human expert. We standardize the format and style of ideas from all conditions before
the blind review. We find AI ideas are judged as significantly more novel than human ideas (p < 0.05).
Figure 2: Comparison of the three experiment conditions across all review metrics. Red asterisks
indicate that the condition is statistically better than the Human baseline with two-tailed Welch’s t-tests
and Bonferroni correction. All scores are on a 1 to 10 scale. More detailed results are in Section 5.
et al., 2022, Collins et al., 2024), and research ideation takes this to an extreme. Qualified expert re-
searchers are difficult to recruit at scale, evaluation criteria can be highly subjective, and it is difficult
for even the best experts to judge the quality of an idea (Beygelzimer et al., 2021, Simsek et al., 2024).
We address these challenges directly, recognizing that for important, high-stakes tasks like research
ideation, there is no substitute for a large-scale expert evaluation. We design a carefully controlled
comparison of human and LLM ideas that overcomes sample size and baseline problems present
in earlier small-scale evaluation studies. Our study recruited a large pool of over 100 highly qualified
NLP researchers to produce human baseline ideas and perform blind reviews of human and LLM
ideas. To reduce the possibility that confounding variables affect our outcome measures, we enforce
strict controls that standardize the styles of human and LLM ideas and match their topic distribution.
We compare our human expert baseline with a simple and effective LLM agent that incorporates
retrieval augmentation and adopts recent ideas in inference-time scaling, such as overgenerating and
reranking LM outputs. These measures allow us to make statistically rigorous comparisons between
human experts and state-of-the-art LLMs (Figure 1).
Our evaluation-centric approach complements many recent methods-centric works that attempt to
instantiate research agents (Baek et al., 2024, Li et al., 2024, Lu et al., 2024, Wang et al., 2024, Yang et al.,
2024). The majority of these works rely on fast and lower-cost evaluation surrogates – either by decreas-
ing the number of expert reviewers (Baek et al., 2024, Li et al., 2024, Wang et al., 2024, Yang et al., 2024),
constraining the length and detailedness of the ideas (Wang et al., 2024, Yang et al., 2024), or relying on
LLM-as-a-judge (Lu et al., 2024). They do not perform the large-scale human comparison studies that
are needed to answer the motivating question of our work. Our work takes the opposite approach,
performing a year-long and high-cost evaluation that provides human expert baselines and a stan-
dardized evaluation protocol to serve as a foundation for future follow-up studies and methods work.
Through nearly 300 reviews across all our conditions, we find that AI-generated ideas are judged
as more novel than human expert ideas (p < 0.05), which holds robustly under multiple hypothesis
correction and across different statistical tests. We find some signs that these gains are correlated
with excitement and overall score, and may come at the slight expense of feasibility, but our study
size did not have sufficient power to conclusively identify these effects (Figure 2).
Qualitative analysis of free-text responses in our review corroborates these findings on novelty and
feasibility. Apart from evaluating the ideas, we also analyze the LLM agent, showing limitations and
open problems – despite excitement about inference-time scaling of LLMs, we find that they lack idea
diversity when we scale up idea generation, and they cannot currently serve as reliable evaluators.
2 Problem Setup
The central experiment of our work is a comparison of human- and LLM-generated ideas. While
this goal is simple, there is no existing consensus on how to formulate the task of research ideation
and evaluation, and we begin by defining the key aspects of our experiment design.
We think of research idea evaluation as consisting of three separate components: 1) the idea itself,
generated in response to our instructions; 2) the writeup which communicates the idea; and 3) the
evaluation of the writeup by experts. We outline our experiment design in each of these three parts
with particular focus on potential confounders, such as the area of research, the format of a research
idea, and the evaluation process.
Ideation Scope and Instructions Research ideas can take many different forms. They can be simple
tricks to improve model performance, or they may be large-scale research programs that form the
basis of a Ph.D. thesis. Any experiment on ideation must carefully balance the realism and
interestingness of a research idea with the practical realities of eliciting ideas from a large population.
In our case, these tradeoffs are even more pronounced, as we have designed our ideation experiments
so that the resulting ideas can be executed by experts in a follow-up set of experiments.
These constraints have led us to study prompting-based NLP research as a testbed for our study.
Prompting research has been popular in recent years of NLP and AI research (e.g., Chen et al., 2023,
Diao et al., 2024, Madaan et al., 2023, Qin et al., 2024, Schulhoff et al., 2024, Si et al., 2023, Wang et al.,
2023, Wei et al., 2022, Yao et al., 2023, Yasunaga et al., 2024, Zhou et al., 2023, inter alia). This class
of projects strikes a reasonable trade-off among our constraints. The most impactful prompting
projects like chain-of-thought have had a major influence on LLM performance (Wei et al., 2022), and
prompting projects are executable with minimal computing hardware.
We further structure our ideation process to avoid selection-bias-based confounders in ideation. If
we simply ask LLMs and humans to produce ideas on ‘prompting topics’, we may find that LLMs
and humans differ in the types of research ideas they produce (for example, LLMs may naturally
suggest more projects on safer topics, which might be judged as less exciting by humans). This would
lead us to simply measure misalignment in research topic preference between LLMs and humans,
which is not the goal of our study. To address this possibility, we define a set of seven specific research
topics extracted from the Call For Papers page of recent NLP conferences such as COLM. 2 Specifically,
our topics include: Bias, Coding, Safety, Multilinguality, Factuality, Math, and Uncertainty (see
Appendix A for a complete description of these topics).
Each human and LLM participant of the ideation experiment receives the same set of natural language
instructions including the same topic description, idea template, and demonstration example to
ensure a fair comparison. For human participants, we additionally allow them to select a preferred
topic from the list, and for each selected topic, we generate a corresponding LLM idea. This exactly
matches the idea topic distribution between the LLM and human participants, while ensuring that
human experts are able to select topics according to their expertise.
Idea Writeup An idea can only be evaluated if it is written up to be communicated, but this writing
process introduces many additional potential confounders. Human researchers may write in ways
that subtly signal quality research, such as including more examples and implementation details.
The format of the writeup functions as a way to scaffold what contents should be included and the
level of detailedness. Ideally, we want both human and LLM participants to provide all the necessary
implementation details for their generated ideas.
We take inspiration from guidelines used in grant submissions and introduce a template to specify the
structure and detailedness of idea proposals. Specifically, we construct a template that includes fields
for the title, problem statement, motivation, proposed method, step-by-step experiment plan, test
case examples, and the fallback plan. Both the LLM agent and the human idea writers are instructed
to follow this template and our provided demonstration examples to produce a project proposal as
the output (see Appendix B for the full template and Appendix C for the demo example).
Even with these templates, there may be subtle writing style cues that affect the outcome measure. For
example, humans may tend to write in a more engaging and informal tone. To reduce this possibility
further, we developed a style normalization module that uses an LLM to convert all ideas into the
same writing and formatting style without changing the original content. Our small-scale human
study shows that such a normalization approach leads to a 50% accuracy (i.e., chance level) for expert human judges
who are asked to distinguish AI ideas from human ideas. Finally, the use of an LLM style anonymizer
has the possibility of substantively changing the content of the ideas. To rule this out, the first author
of this paper manually verified each human idea proposal to ensure all contents of the original ideas
were preserved. We present the full prompt used in Appendix D.
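To make this step concrete, below is a minimal sketch of what a style-normalization module could look like. The generate callable, the normalize_style function, and the prompt wording are illustrative placeholders rather than the actual implementation; the real prompt is given in Appendix D.

```python
# Illustrative sketch of a style-normalization pass (not the exact module;
# the real prompt is in Appendix D). `generate` is any text-completion
# callable supplied by the caller, e.g. a thin wrapper around an LLM API.

TEMPLATE_FIELDS = [
    "Title", "Problem Statement", "Motivation", "Proposed Method",
    "Step-by-Step Experiment Plan", "Test Case Examples", "Fallback Plan",
]

def normalize_style(idea_text: str, generate) -> str:
    """Ask an LLM to rewrite an idea into the shared template and a neutral
    style without adding, removing, or altering any content."""
    instruction = (
        "Rewrite the following project proposal so that it follows the "
        f"sections {TEMPLATE_FIELDS} and uses a neutral, uniform writing "
        "style. Do not add, remove, or change any content; only adjust "
        "wording and formatting.\n\nProposal:\n" + idea_text
    )
    return generate(instruction)
```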
Review and Evaluation Reviewing research ideas is notoriously subjective, so we want to design
a review form that defines all review criteria clearly to standardize and anchor the evaluations as
much as possible. At the same time, we want our review criteria and measured variables to capture
all the desiderata of high-quality research ideas.
We follow best practices from AI conference reviewing (e.g., ICLR and ACL) when designing the
review form, where we define four breakdown metrics including novelty, excitement, feasibility, and
expected effectiveness, apart from the overall score. For each metric, we ask for a numerical score
on a 1-10 scale along with a free-text rationale. We provide clear definitions and grounding for each
numerical scale to calibrate all reviewers’ standards (see Appendix E for the full review form).
Our blind review evaluation will compare ideas from three different conditions:
1. Human Ideas: Idea proposals written by our recruited expert researchers.
2https://colmweb.org/cfp.html
2. AI Ideas: Idea proposals generated by our LLM agent. We directly take the top-ranked ideas
from the agent’s output.
3. AI Ideas + Human Rerank: Idea proposals generated by our LLM agent. The first author of
this paper manually selected the top-ranked ideas out of all the LLM agent’s generations rather
than relying on the LLM ranker in order to better estimate the upper-bound quality of AI ideas.
In the next two sections, we instantiate how our LLM agent generates ideas and how our expert
participants generate and review the ideas.
3 Idea Generation Agent
We build a simple but effective LLM ideation agent to compare with the human expert baseline.
Rather than focusing on innovating the agent itself, we adhere to a minimalist design principle,
aiming to understand the current capabilities of LLMs in idea generation. Our research ideation agent
has three essential components: paper retrieval, idea generation, and idea ranking, which we will
describe in detail below.
3.1 Paper Retrieval for RAG
To ground idea generation, the agent needs to retrieve papers related to the given research
topic, so that it will be aware of related works when generating new ideas. To do so, we lever-
age retrieval-augmented generation (RAG), which has demonstrated effectiveness on many
knowledge-intensive tasks (Lewis et al., 2020, Shi et al., 2024). Concretely, given a research topic
(e.g., “novel prompting methods that can improve factuality and reduce hallucination of large
language models"), we prompt an LLM to generate a sequence of function calls to the Semantic
Scholar API. We use claude-3-5-sonnet-20240620 as the backbone model for our agent but
the pipeline should generalize to other LLMs as well. The paper retrieval action space includes:
{KeywordQuery(keywords), PaperQuery(paperId), GetReferences(paperId)}. Each
action generation is grounded on the previous actions and executed results. We keep the top k = 20
papers from each executed function call and stop the action generation when a max of N = 120 papers
have been retrieved. We then use the LLM to score and rerank all retrieved papers based on three
criteria: 1) the paper should be directly relevant to the specified topic; 2) the paper should be an
empirical paper involving computational experiments;3 3) the paper is interesting and can inspire
new projects. The LLM is prompted to score each retrieved paper on a scale of 1 to 10 based on these
criteria and we use the top-ranked papers for the next step of idea generation.
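As an illustration of this retrieval loop, the following is a minimal sketch covering only the KeywordQuery action, assuming the public Semantic Scholar Graph API paper-search endpoint; the helper names and the fixed query list stand in for the LLM-generated function calls and are not the exact agent implementation (see Appendix F for agent details).

```python
# Illustrative sketch of the retrieval loop, showing only the KeywordQuery
# action against the (assumed) public Semantic Scholar Graph API search
# endpoint. In the actual agent, an LLM proposes each next action
# (KeywordQuery / PaperQuery / GetReferences) conditioned on prior results.
import requests

S2_SEARCH = "https://api.semanticscholar.org/graph/v1/paper/search"

def keyword_query(keywords: str, k: int = 20) -> list[dict]:
    """Return up to k papers (title + abstract) matching the keyword query."""
    resp = requests.get(
        S2_SEARCH,
        params={"query": keywords, "limit": k, "fields": "title,abstract"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

def retrieve_papers(queries: list[str], k: int = 20, max_papers: int = 120) -> list[dict]:
    """Execute keyword queries (LLM-proposed in the real agent) and keep the
    top k results per call, stopping once max_papers have been retrieved."""
    papers, seen_titles = [], set()
    for q in queries:
        for paper in keyword_query(q, k=k):
            if paper["title"] not in seen_titles:
                seen_titles.add(paper["title"])
                papers.append(paper)
        if len(papers) >= max_papers:
            break
    return papers[:max_papers]
```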
3.2 Idea Generation
Our key insight for idea generation is to generate as many candidate ideas as possible. Our intuition
is that only a small fraction of all generated ideas might be high-quality, and we should be willing
to expend inference-time compute to generate more candidates so that we can later use a reranker to
discover the "diamond in the rough". This aligns with existing results showing that scaling inference
compute with repeated sampling can boost LLM performance on various coding and reasoning
tasks (Brown et al., 2024, Li et al., 2022). Specifically, we prompt the LLM to generate 4000 seed
ideas on each research topic. The idea generation prompt includes the demonstration examples and
the retrieved papers. We craft k = 6 demonstration examples by manually summarizing exemplar
3Note that we exclude position papers, survey papers, and analysis papers throughout this study since their evaluation
tends to be very subjective.
papers (Dhuliawala et al., 2023, Madaan et al., 2023, Weller et al., 2023, Weston and Sukhbaatar, 2023,
Yasunaga et al., 2024, Zheng et al., 2024) into our desired idea format. For retrieval augmentation,
we randomly select k = 10 papers from the top-ranked retrieved papers and concatenate their titles
and abstracts to prepend to the idea generation prompt. We also append the titles of all previously
generated ideas to the prompt to explicitly ask the LLM to avoid repetitions.
To remove duplicated ideas from this large pool of candidate ideas, we first perform a round of deduplication
by encoding all seed ideas with all-MiniLM-L6-v2 from Sentence-Transformers (Reimers
and Gurevych, 2020) and then computing pairwise cosine similarities. We set a similarity threshold
of 0.8 for the idea deduplication based on manual inspection. 4 This leaves about 5% non-duplicated
ideas out of all the generated seed ideas. We expand more on this duplication issue later in Section 7.1.
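The deduplication step can be sketched as follows, using the sentence encoder and the 0.8 similarity threshold mentioned above; the function name and the greedy filtering order are illustrative simplifications rather than the exact released code.

```python
# Sketch of the deduplication step: encode seed ideas with all-MiniLM-L6-v2
# and greedily drop any idea whose cosine similarity with an already-kept
# idea exceeds the 0.8 threshold used in the paper.
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

def deduplicate_ideas(ideas: list[str], threshold: float = 0.8) -> list[str]:
    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode(ideas, convert_to_tensor=True, normalize_embeddings=True)
    kept = []  # indices of ideas retained so far
    for i in range(len(ideas)):
        if all(float(cos_sim(emb[i], emb[j])) < threshold for j in kept):
            kept.append(i)
    return [ideas[i] for i in kept]
```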
3.3 Idea Ranking
The next step is for our ideation agent to rank all the remaining ideas so that we can find the best
ones among them. To build such an automatic idea ranker, we use public review data as a proxy.
Specifically, we scraped 1200 ICLR 2024 submissions related to LLMs (with keyword filtering) along
with their review scores and acceptance decisions. We explored multiple ways of predicting the
scores and decisions of these submissions and found that LLMs are poorly calibrated when asked
directly to predict the final scores or decisions, but can achieve non-trivial accuracy when asked to
judge which paper is better in pairwise comparisons.
We converted the ICLR submissions into our standard project proposal format and randomly paired up accepted and rejected papers and asked LLMs to predict which one is accepted. On this task, Claude-3.5-Sonnet achieves an accuracy of 71.4% with zero-shot prompting. For comparison, GPT-4o achieves 61.1% and Claude-3-Opus achieves 63.5%, and we do not observe significant gains from additional prompting techniques like few-shot or chain-of-thought prompting. We therefore choose the Claude-3.5-Sonnet zero-shot ranker.
In order to obtain reliable scores for all project proposals based on pairwise comparisons, we adopt a Swiss system tournament where each project proposal is paired with others whose accumulated scores are similar, and a proposal gains an additional point whenever it is judged to be the better one in a comparison. We repeat this for N rounds so the total score of each project proposal will be within the [0, N] range. As a sanity check, we use the Claude-3.5-Sonnet ranker to rank the 1.2K ICLR LLM-related submissions and compare the average review scores of the top 10 and the bottom 10 ranked papers in Table 1. We see a clear separation between the top and bottom ranked papers, indicating the effectiveness of the LLM ranker. We choose N = 5 for all our experiments since it gives the best ranking result on this validation set. The top-ranked project proposals from the agent will be directly used for the AI Ideas condition of the human study.

Table 1: Average ICLR review scores of top- and bottom-10 papers ranked by our LLM ranker, with different rounds (N) of pairwise comparisons.

N | Top-10 | Bottom-10 | Gap
1 | 6.28 | 5.72 | 0.56
2 | 6.14 | 5.24 | 0.90
3 | 5.83 | 4.86 | 0.97
4 | 5.94 | 4.99 | 0.95
5 | 6.42 | 4.69 | 1.73
6 | 6.11 | 4.81 | 1.30
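The following is a minimal sketch of this Swiss-system ranking loop; the judge callable stands in for the zero-shot Claude-3.5-Sonnet pairwise comparison, and the pairing scheme is simplified relative to the actual implementation.

```python
# Sketch of the Swiss-system tournament: in each of N rounds, proposals with
# similar accumulated scores are paired, and the winner of each pairwise
# judgment gains one point. `judge(a, b)` stands in for the zero-shot
# Claude-3.5-Sonnet comparison and should return True if `a` is better.

def swiss_rank(proposals: list[str], judge, rounds: int = 5) -> dict[str, int]:
    scores = {p: 0 for p in proposals}
    for _ in range(rounds):
        # Sorting by current score pairs similarly-scored proposals together;
        # with an odd number of proposals, the last one sits out this round.
        ordered = sorted(proposals, key=lambda p: scores[p])
        for a, b in zip(ordered[0::2], ordered[1::2]):
            winner = a if judge(a, b) else b
            scores[winner] += 1
    return scores  # each proposal ends with a score in [0, rounds]
```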
Since our AI ranker is still far from perfect, we also introduce another experiment condition where the first author of this paper manually reranked the generated project proposals instead of relying on the LLM ranker, and we call this the AI Ideas + Human Rerank condition.
4We provide randomly sampled idea pairs and their similarities in Appendix H. We also provide additional implementa-
tion details about the ideation agent in Appendix F.
As we show in Table 12, 17 out of the 49 ideas in the AI Ideas + Human Rerank condition overlap with the AI
Ideas condition, while the other 32 are different, indicating the discrepancy between the LLM ranker
and the human expert reranking.
4 Expert Idea Writing and Reviewing
In this section, we shift focus to the human branch of idea generation comparison. We present the
details of our human study, including information about the recruited experts, the human idea
generation task, and the subsequent review process.
4.1 Expert Recruitment
We recruit our expert participants (including for idea writing and reviewing) by sending sign-up
forms to several channels, including: 1) the OpenNLP Slack channel with 1426 NLP researchers from
71 institutions (with consent from the channel manager); 2) Twitter (X); 3) Slack channels of various
NLP groups by direct communication with the group members; and 4) official chat app of the NAACL
2024 conference. We also conducted in-person recruitment by giving out name cards and wearing
T-shirts 5 with sign-up links at the NAACL 2024 conference as well as various other local NLP social
events. Our study has been approved by the Stanford IRB (ID 74246).
We performed screening on all the US participants 6 based on their provided Google Scholar profiles.
We set a minimum requirement of having published at least one paper at a major AI venue. 7 We
reached out to all participants who satisfied this requirement with the consent form and followed up
with the annotation documents for those who consented to participate.
In the end, we recruited N = 49 experts for writing ideas, and N = 79 experts for reviewing ideas. Note
that 24 out of the 79 reviewers also participated in the idea writing, and we made sure no reviewer
would review their own idea. This results in N = 104 total participants across the two tasks. Each
idea writer is asked to write one idea within 10 days and we compensate $300 for each, with a $1000
bonus for the top 5 ideas as scored by the expert reviewers. Each idea reviewer is assigned 2 to 7 ideas
to review and we collected N = 298 unique reviews in total. They are given one week to finish the
reviews and we compensated $25 for each review written by the idea reviewers.
4.2 Expert Qualifications
Our pool of participants is highly qualified and diverse. The 49 idea writers come from 26 different institutions (Table 15) and the majority of them are current PhD students (Figure 3 left). The 79 reviewers come from 32 institutions (Table 16) and are mostly PhD students and Postdocs (Figure 3 right). We use their Google Scholar profiles to extract several proxy metrics, including the number of papers, citations, h-index, and i10-index at the time of their submission. Table 2 shows that our idea writers have an average of 12 papers and 477 citations, while every reviewer has published at least two papers and has an average citation count of 635 and h-index of 7.
Figure 3: Positions of our idea writer (left) and reviewer (right) participants. Idea writers: 73% PhD, 18% Master's, 8% other; reviewers: 79% PhD, 8% Postdoc, 6% Master's, 5% other.
5https://x.com/ChengleiSi/status/1804273510656749649
6We have to recruit participants located in the US due to logistical reasons.
7E.g., *ACL, NeurIPS, ICLR, ICML, AAAI.

Table 2: Research profile metrics of the idea writing and reviewing participants. Data are extracted from Google Scholar at the time of idea or review submission.

Idea Writing Participants (N=49)
Metric | Mean | Median | Min | Max | SD
papers | 12 | 10 | 2 | 52 | 10
citations | 477 | 125 | 2 | 4553 | 989
h-index | 5 | 4 | 1 | 21 | 4
i10-index | 5 | 4 | 0 | 32 | 6

Idea Reviewing Participants (N=79)
Metric | Mean | Median | Min | Max | SD
papers | 15 | 13 | 2 | 52 | 9
citations | 635 | 327 | 0 | 7276 | 861
h-index | 7 | 7 | 0 | 21 | 4
i10-index | 7 | 5 | 0 | 32 | 6
Table 3: Statistics of the 49 ideas from each condition.

Metric | Mean | Median | Min | Max | SD
Human Ideas
Familiarity (1-5) | 3.7 | 4.0 | 1.0 | 5.0 | 1.0
Difficulty (1-5) | 3.0 | 3.0 | 1.0 | 5.0 | 0.7
Time (Hours) | 5.5 | 5.0 | 2.0 | 15.0 | 2.7
Length (Words) | 901.7 | 876.0 | 444.0 | 1704.0 | 253.5
AI Ideas
Length (Words) | 1186.3 | 1158.0 | 706.0 | 1745.0 | 233.7
AI + Human Rerank Ideas
Length (Words) | 1174.0 | 1166.0 | 706.0 | 1708.0 | 211.0
Moreover, based on their survey responses, 72 out of the 79 reviewers have previously reviewed for major AI conferences or journals.
These statistics indicate that our participants are highly qualified and have substantial research
experience. 8
4.3 Idea Writing
We report statistics of our idea writers’ ideas to measure their
quality. As shown in Table 3, idea writers indicate a moderately
high familiarity with their selected topic (3.7 on a 1 to 5 scale),
and rate the task as moderately difficult (3 on a 1 to 5 scale). They spent an average of 5.5 hours on the task, and their ideas are 902 words long on average. These statistics indicate that participants put substantial effort into this task. 9 We also show the
distribution of their selected topics in Table 4.
Table 4: Idea topic distribution.

Topic | Count
Bias | 4
Coding | 9
Safety | 5
Multilingual | 10
Factuality | 11
Math | 4
Uncertainty | 6
Total | 49

4.4 Idea Reviewing

Review Assignment We let all reviewer participants select their top two preferred topics as well as their preferred reviewing load (from 2 to 7). We then randomly assign them to ideas within their selected topics and all ideas are anonymized. In the assignment, we balance the number of ideas from each condition for each reviewer and ensure that each reviewer gets at least one human idea and one AI idea. Every idea is reviewed by 2 to 4 different reviewers. We also avoid assigning ideas written by authors from the same institution to avoid any potential contamination. Table 5 shows that each reviewer wrote an average of 3.8 reviews from 2 or 3 conditions, across 1 to 3 topics.
8Detailed breakdown of participant positions is in Appendix K.
9See Appendix J for more details on the quality control of human ideas.
Table 5: Statistics of the review assignment.

Metric | Mean | Min | Max | SD
# Reviews | 3.8 | 2.0 | 7.0 | 1.3
# Conditions | 2.5 | 2.0 | 3.0 | 0.5
# Topics | 1.5 | 1.0 | 3.0 | 0.6

Table 6: Statistics of our collected reviews, with ICLR 2024 reviews as a baseline (for the 1.2K submissions that mentioned the keyword "language models").

Metric | Mean | Median | Min | Max | SD
Ours
Familiarity (1-5) | 3.7 | 3.0 | 1.0 | 5.0 | 0.9
Confidence (1-5) | 3.7 | 4.0 | 1.0 | 5.0 | 0.7
Time (Minutes) | 31.7 | 30.0 | 5.0 | 120.0 | 16.8
Length (Word) | 231.9 | 208.0 | 41.0 | 771.0 | 112.1
ICLR 2024
Confidence (1-5) | 3.7 | 4.0 | 1.0 | 5.0 | 0.8
Length (Word) | 421.5 | 360.0 | 14.0 | 2426.0 | 236.4
Length (Word; Strengths & Weaknesses) | 247.4 | 207.0 | 2.0 | 2010.0 | 176.4

Review Quality Check Apart from ensuring reviewer qualifications, we also compute statistics to measure the quality of the reviews in Table 6. On average, the reviewers indicated a familiarity of 3.7 (out of 5) in their selected topic and a confidence of 3.7 (out of 5) in their reviews. This is comparable with the 1.2K ICLR 2024 submissions related to language models, where the reviewers also have an average confidence of 3.7 out of 5. Moreover, reviewers spent an average of 32 minutes on each review, with each review being about 232 words long.

Since our review forms are different from the ICLR review forms, we compare them with the ICLR reviews where we remove the summary and question sections and only count the lengths of the strengths and weaknesses sections. This way, the ICLR reviews have an average length of 247, similar to our collected reviews. As an additional measure of review quality, out of the 298 unique reviews that we have collected, 80 of them provided links to existing papers in their rationales to justify why the proposed method is not novel. These results further validate the high quality of our review data.
5 Main Result: AI Ideas Are Rated More Novel Than Expert Ideas
In this section, we present our main finding on whether LLMs can generate better research ideas than
experts. Consistently across three different statistical tests accounting for the possible confounders,
we find that AI ideas have higher novelty scores than human ideas while being comparable on all
other metrics.
5.1 Test 1: Treating Each Review as an Independent Datapoint
In Test 1, we treat each review as an independent datapoint and aggregate all reviews from the same
condition. We treat the Human Ideas as the baseline condition and compare it with AI Ideas and AI
Ideas + Human Rerank using two-tailed Welch’s t-tests with Bonferroni correction. We show the
barplot in Figure 2 and the detailed numerical results in Table 7. Both AI Ideas (µ = 5.64±σ = 1.76)
and AI Ideas + Human Rerank (µ = 5.81 ± σ = 1.66) are significantly better than Human Ideas
(µ = 4.84 ± σ = 1.79) on the novelty score (p < 0.01). In this particular test, the AI ideas in both
conditions are also significantly better than human ideas on the excitement score (p < 0.05), and the
AI Ideas + Human Rerank condition is also significantly better than Human Ideas in terms of
the overall score (p < 0.05). We do not observe significant differences between AI-generated ideas and human-written ideas on the other metrics.

Table 7: Scores across all conditions by treating each review as an independent datapoint (Test 1). Size is the number of reviews for each condition and the p-values are computed with two-tailed Welch's t-tests with Bonferroni correction. We bold results that are statistically significant (*p < 0.05; **p < 0.01; ***p < 0.001). AI ideas are judged as significantly better than human ideas in terms of novelty and excitement while being comparable on all other metrics.

Condition | Size | Mean | Median | SD | SE | Min | Max | p-value
Novelty Score
Human Ideas | 119 | 4.84 | 5 | 1.79 | 0.16 | 1 | 8 | –
AI Ideas | 109 | 5.64 | 6 | 1.76 | 0.17 | 1 | 10 | 0.00**
AI Ideas + Human Rerank | 109 | 5.81 | 6 | 1.66 | 0.16 | 2 | 10 | 0.00***
Excitement Score
Human Ideas | 119 | 4.55 | 5 | 1.89 | 0.17 | 1 | 8 | –
AI Ideas | 109 | 5.19 | 6 | 1.73 | 0.17 | 1 | 9 | 0.04*
AI Ideas + Human Rerank | 109 | 5.46 | 6 | 1.82 | 0.17 | 1 | 9 | 0.00**
Feasibility Score
Human Ideas | 119 | 6.61 | 7 | 1.99 | 0.18 | 1 | 10 | –
AI Ideas | 109 | 6.34 | 6 | 1.88 | 0.18 | 2 | 10 | 1.00
AI Ideas + Human Rerank | 109 | 6.44 | 6 | 1.63 | 0.16 | 1 | 10 | 1.00
Expected Effectiveness Score
Human Ideas | 119 | 5.13 | 5 | 1.76 | 0.16 | 1 | 8 | –
AI Ideas | 109 | 5.47 | 6 | 1.58 | 0.15 | 1 | 10 | 0.67
AI Ideas + Human Rerank | 109 | 5.55 | 6 | 1.52 | 0.15 | 1 | 9 | 0.29
Overall Score
Human Ideas | 119 | 4.68 | 5 | 1.90 | 0.17 | 1 | 9 | –
AI Ideas | 109 | 4.85 | 5 | 1.70 | 0.16 | 1 | 9 | 1.00
AI Ideas + Human Rerank | 109 | 5.34 | 6 | 1.79 | 0.17 | 1 | 9 | 0.04*
5.2 Test 2: Treating Each Idea as an Independent Datapoint
Since we collect multiple reviews for each idea, one could argue that we should not treat each review
as an independent datapoint. To account for this potential confounder, we perform Test 2 where
we average the scores of each idea and treat each idea as one datapoint. This way, the sample
size for every condition will be N = 49, namely the number of ideas. We treat the Human Ideas
as the baseline condition and compare it with AI Ideas and AI Ideas + Human Rerank using
two-tailed Welch’s t-tests with Bonferroni correction. As shown in Table 8, we still see significant
results (p < 0.05) where both AI Ideas (µ = 5.62 ± σ = 1.39) and AI Ideas + Human Rerank
(µ = 5.78±σ = 1.07) have higher novelty scores than Human Ideas (µ = 4.86±σ = 1.26).
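For readers who want to reproduce the style of analysis in Tests 1 and 2, the following is a minimal sketch of two-tailed Welch's t-tests with Bonferroni correction; the arrays here are synthetic placeholders, not our review scores.

```python
# Sketch of the analysis used in Tests 1 and 2: two-tailed Welch's t-tests of
# each AI condition against the Human baseline, with Bonferroni correction.
# The arrays below are synthetic placeholders, not the study's data.
import numpy as np
from scipy import stats

def compare_to_baseline(human: np.ndarray, conditions: dict[str, np.ndarray]) -> dict[str, float]:
    corrected = {}
    for name, scores in conditions.items():
        # equal_var=False gives Welch's t-test; the default test is two-sided.
        result = stats.ttest_ind(scores, human, equal_var=False)
        corrected[name] = min(1.0, result.pvalue * len(conditions))  # Bonferroni
    return corrected

rng = np.random.default_rng(0)
human_scores = rng.normal(4.8, 1.8, 119)
ai_scores = rng.normal(5.6, 1.8, 109)
ai_rerank_scores = rng.normal(5.8, 1.7, 109)
print(compare_to_baseline(human_scores,
                          {"AI Ideas": ai_scores, "AI Ideas + Human Rerank": ai_rerank_scores}))
```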
Table 8: Scores across all conditions by averaging the scores for each idea and treating each idea as one data point (Test 2). Size is the number of ideas for each condition, and the p-values are computed with two-tailed Welch's t-tests with Bonferroni correction. We bold results that are statistically significant (*p < 0.05; **p < 0.01). AI ideas are judged as significantly better than human ideas in terms of novelty while being comparable on all other metrics.

Condition | Size | Mean | Median | SD | SE | Min | Max | p-value
Novelty Score
Human Ideas | 49 | 4.86 | 5.00 | 1.26 | 0.18 | 1.50 | 7.00 | –
AI Ideas | 49 | 5.62 | 5.50 | 1.39 | 0.20 | 1.50 | 8.33 | 0.03*
AI Ideas + Human Rerank | 49 | 5.78 | 6.00 | 1.07 | 0.15 | 3.00 | 8.33 | 0.00**
Excitement Score
Human Ideas | 49 | 4.56 | 4.33 | 1.16 | 0.17 | 2.00 | 7.00 | –
AI Ideas | 49 | 5.18 | 5.50 | 1.33 | 0.19 | 2.50 | 7.33 | 0.08
AI Ideas + Human Rerank | 49 | 5.45 | 5.50 | 1.36 | 0.19 | 1.00 | 7.33 | 0.00**
Feasibility Score
Human Ideas | 49 | 6.53 | 7.00 | 1.50 | 0.21 | 3.00 | 9.00 | –
AI Ideas | 49 | 6.30 | 6.00 | 1.27 | 0.18 | 2.50 | 8.50 | 1.00
AI Ideas + Human Rerank | 49 | 6.41 | 6.50 | 1.06 | 0.15 | 4.00 | 9.00 | 1.00
Expected Effectiveness Score
Human Ideas | 49 | 5.10 | 5.33 | 1.14 | 0.16 | 3.00 | 7.00 | –
AI Ideas | 49 | 5.48 | 5.50 | 1.23 | 0.18 | 2.00 | 7.50 | 0.58
AI Ideas + Human Rerank | 49 | 5.57 | 5.50 | 0.99 | 0.14 | 3.00 | 7.50 | 0.17
Overall Score
Human Ideas | 49 | 4.69 | 4.67 | 1.16 | 0.17 | 2.00 | 6.67 | –
AI Ideas | 49 | 4.83 | 5.00 | 1.34 | 0.19 | 1.50 | 7.50 | 1.00
AI Ideas + Human Rerank | 49 | 5.32 | 5.50 | 1.24 | 0.18 | 2.00 | 7.50 | 0.06

5.3 Test 3: Treating Each Reviewer as an Independent Datapoint

Another possible confounder is that different reviewers might have different biases, for example, some reviewers may be more lenient than others. To account for such reviewer biases, we perform Test 3 where we treat each reviewer as one datapoint and compute their average score on each condition.
Then for each reviewer, we get their mean score difference between the AI Ideas condition and
the Human Ideas condition, as well as the difference between the AI Ideas + Human Rerank
condition and the Human Ideas condition. This way, we only analyze the differences among the
different conditions. That is, if the differences are significantly higher than zero under the one-sample
t-test, that indicates reviewers are giving higher scores to one condition compared to the other. The
results are shown in Table 9, and we see significant results (p < 0.05) that AI ideas in both the AI Ideas
and AI Ideas + Human Rerank conditions are rated more novel than Human Ideas. Therefore,
we conclude that AI ideas generated by our ideation agent are judged as more novel than human
expert generated ideas, consistently across all three different statistical tests. 10
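A minimal sketch of the per-reviewer analysis in Test 3 is shown below; the dictionary keys and condition labels are illustrative placeholders, and the paper additionally applies Bonferroni correction across conditions.

```python
# Sketch of Test 3: compute, for each reviewer, their mean score in an AI
# condition minus their mean score on Human Ideas, then run a one-sample
# t-test on these differences against zero (the paper additionally applies
# Bonferroni correction across conditions). Keys are illustrative.
import numpy as np
from scipy import stats

def per_reviewer_difference_test(reviews: list[dict],
                                 ai_label: str = "AI",
                                 human_label: str = "Human") -> float:
    by_reviewer: dict[str, dict[str, list[float]]] = {}
    for r in reviews:
        by_reviewer.setdefault(r["reviewer"], {}).setdefault(r["condition"], []).append(r["score"])
    diffs = [
        np.mean(conds[ai_label]) - np.mean(conds[human_label])
        for conds in by_reviewer.values()
        if ai_label in conds and human_label in conds  # reviewer must cover both conditions
    ]
    return stats.ttest_1samp(diffs, popmean=0.0).pvalue
```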
Table 9: Mean score differences between AI ideas and human ideas by treating each reviewer as a data point (Test 3). All p-values are computed with one-sample t-tests with Bonferroni correction. We bold results that are statistically significant (*p < 0.05; **p < 0.01).

Comparison | N | Mean Diff | p-value
Novelty Score
AI Ideas vs Human Ideas | 70 | 0.94 | 0.00**
AI Ideas + Human Rerank vs Human Ideas | 65 | 0.86 | 0.00**
Excitement Score
AI Ideas vs Human Ideas | 70 | 0.73 | 0.01*
AI Ideas + Human Rerank vs Human Ideas | 65 | 0.87 | 0.00**
Feasibility Score
AI Ideas vs Human Ideas | 70 | -0.29 | 0.36
AI Ideas + Human Rerank vs Human Ideas | 65 | -0.08 | 0.74
Effectiveness Score
AI Ideas vs Human Ideas | 70 | 0.42 | 0.16
AI Ideas + Human Rerank vs Human Ideas | 65 | 0.39 | 0.16
Overall Score
AI Ideas vs Human Ideas | 70 | 0.24 | 0.36
AI Ideas + Human Rerank vs Human Ideas | 65 | 0.66 | 0.01*

10We also include results of fitting linear mixed-effects models in Appendix N, which reinforces our conclusions. Additionally, we plot the breakdown of all metrics by topic in Appendix O.

6 In-Depth Analysis of the Human Study

While the above main results highlight the promise of LLMs in generating novel research ideas, there are some additional nuances. In this section, we move beyond the statistical comparisons and dive
into other aspects of our collected data. Specifically, we focus on the quality of human ideas, reviewer
preferences, and the extent of reviewer agreement.
6.1 Human Experts May Not Be Giving Their Best Ideas
We first investigate whether human experts are submitting their best ideas to us. We did a post-
study survey to understand how idea-writing participants came up with their ideas. Out of the 49
participants, 37 of them came up with the idea on the spot, while the other 12 already had the idea
before the study. Furthermore, we asked the survey question: “How does this idea compare to your past
research ideas (ideas that you actually worked on)? Please answer with a percentile. E.g., this idea is one of my
top 10% ideas.” Our participants indicated that on average their submitted ideas are about the top 43%
of all their past ideas. This implies that our collected ideas are likely the median-level ideas from these
expert researchers, which is reasonable given that most of them came up with the idea within the
10-day time constraint of the task.
6.2 Reviewers Tend to Focus More on Novelty and Excitement
To gain a deeper understanding of the dynamics between the different metrics in the review process,
we explore whether reviewers focus on specific aspects when evaluating the ideas. We compute the
pairwise correlation between different metrics in Table 10. The overall score mostly correlates with the
novelty score (r = 0.725) and excitement score (r = 0.854), while having almost no correlation (r < 0.1)
with the feasibility score. This implies that reviewers might be paying more attention to the novelty
and excitement aspects of the ideas when they are reviewing.
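This correlation analysis can be reproduced with a few lines of pandas; Pearson correlation is shown here as an assumption, since the exact correlation measure is not specified in this section.

```python
# Sketch of the metric-correlation analysis behind Table 10, computed with
# pandas; the choice of Pearson correlation is an assumption.
import pandas as pd

def metric_correlations(reviews: list[dict]) -> pd.DataFrame:
    cols = ["Overall", "Novelty", "Excitement", "Feasibility", "Effectiveness"]
    return pd.DataFrame(reviews, columns=cols).corr(method="pearson")
```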
Table 10: Pairwise correlation between different metrics (symmetric matrix).

 | Overall | Novelty | Excitement | Feasibility | Effectiveness
Overall | – | 0.725 | 0.854 | 0.097 | 0.642
Novelty | 0.725 | – | 0.719 | -0.073 | 0.357
Excitement | 0.854 | 0.719 | – | -0.031 | 0.565
Feasibility | 0.097 | -0.073 | -0.031 | – | 0.251
Effectiveness | 0.642 | 0.357 | 0.565 | 0.251 | –
6.3 Reviewing Ideas is Inherently Subjective
Finally, we acknowledge that reviewing is inherently subjective, and reviewing based on ideas
rather than executed papers might be even more subjective. We investigate this using inter-reviewer
agreement. Specifically, we randomly split reviewers of each paper into half, use one half to rank the
top and bottom 25% of all ideas, and then measure agreement with the held-out set of reviewers. 11 As
shown in the first block of Table 11, reviewers have a relatively low agreement (56.1%) despite the
fact that we have provided detailed explanations for each metric in our review form. As a baseline
comparison, the NeurIPS 2021 reviewer consistency experiment found 66.0% accuracy using this
reviewer agreement metric in the balanced setting (Beygelzimer et al., 2021, Lu et al., 2024). We also
computed the reviewer agreement using the same metric on the 1.2K ICLR 2024 submissions related
to language models, which yields a balanced accuracy of 71.9%. While our reviewer agreement is higher
than random (50%), it is generally lower than conference reviewing, most likely due to the higher
subjectivity involved when evaluating ideas without seeing the actual experiment results.
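The following is one plausible reading of this split-half agreement metric, written as a sketch; the exact balanced-accuracy computation follows Lu et al. (2024) and may differ in its details from this simplification.

```python
# Sketch of one plausible implementation of the split-half agreement metric:
# split each idea's reviewers in half, label the top/bottom quartiles with one
# half, and score how often the held-out half assigns the same labels,
# averaged over 20 random splits.
import numpy as np

def split_half_agreement(scores_by_idea: dict[str, list[float]],
                         n_splits: int = 20, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_splits):
        half_a, half_b = {}, {}
        for idea, scores in scores_by_idea.items():
            if len(scores) < 2:
                continue  # need at least one reviewer in each half
            perm = rng.permutation(len(scores))
            mid = len(scores) // 2
            half_a[idea] = float(np.mean([scores[i] for i in perm[:mid]]))
            half_b[idea] = float(np.mean([scores[i] for i in perm[mid:]]))
        ideas = list(half_a)
        k = max(1, len(ideas) // 4)
        top_a = set(sorted(ideas, key=half_a.get, reverse=True)[:k])
        bot_a = set(sorted(ideas, key=half_a.get)[:k])
        top_b = set(sorted(ideas, key=half_b.get, reverse=True)[:k])
        bot_b = set(sorted(ideas, key=half_b.get)[:k])
        # Average per-class agreement over the top and bottom quartiles.
        accs.append((len(top_a & top_b) / k + len(bot_a & bot_b) / k) / 2)
    return float(np.mean(accs))
```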
7 Limitations of LLMs
With our findings from the human study in mind, we now turn to LLM performance to provide
insights that could inform future methods for improving idea generation systems. Our ideation agent
is motivated by two potential strengths of LLMs: their ability to scale by generating a vast number
of ideas - far more than any human could - and the possibility of filtering these ideas to extract the
best ones from the large pool. In theory, this approach could lead to high-quality ideas by leveraging
inference scaling. However, we present empirical evidence that this naive assumption about scaling
idea generation has significant limitations.
7.1 LLMs Lack Diversity in Idea Generation
We adopted an over-generate and rank paradigm in idea generation. This raises the question: is there
an upper limit to how many new ideas LLMs can generate? To answer this question, we take a closer
look at 4000 generated seed ideas for each topic.
We encode all raw ideas with all-MiniLM-L6-v2 from Sentence-Transformers. For each idea, we
compute its cosine similarity with all previously generated ideas on the same topic. We consider
an idea as a duplicate if it has a similarity of above 0.8 with any of the previously generated ideas.
In Figure 4, we show that as the agent keeps generating new batches of ideas, the percentage of
non-duplicates in newly generated batches keeps decreasing, and the accumulated non-duplicate
ideas eventually plateau. In fact, out of the 4000 generated seed ideas, there are only 200 non-duplicate
11This metric follows the balanced accuracy metric as used in Lu et al. (2024) and avoids the limitations of other agreement
metrics like Krippendorff’s alpha, which require overlapping reviews and would result in a sparse matrix due to the
non-overlapping nature of our reviewer assignments. We do the random splitting 20 times and report the average to reduce
variances.
Figure 4: Measuring duplication of AI-generated ideas: the left figure plots the percentage of non-
duplicate ideas in each new bucket of generated ideas; the right figure plots the accumulated non-
duplicate ideas as the agent keeps generating new ideas. All data points are averaged across all topics.
unique ideas. This sets a bottleneck on our inference-time scaling since increasing the number of
generated ideas simply leads to repeating duplicate ideas.
7.2 LLMs Cannot Evaluate Ideas Reliably
Table 11: Review score consistency among human reviewers (first block) and between humans and AI (second block).

 | Consistency
Random | 50.0
NeurIPS'21 | 66.0
ICLR'24 | 71.9
Ours | 56.1
GPT-4o Direct | 50.0
GPT-4o Pairwise | 45.0
Claude-3.5 Direct | 51.7
Claude-3.5 Pairwise | 53.3
"AI Scientist" Reviewer | 43.3
Most prior works have adopted LLM-as-a-judge for evaluating
research ideas (Lu et al., 2024) motivated by the observation that
LLMs can have a higher agreement with human evaluators than
the inter-human agreement. However, we offer some empirical
evidence that LLMs cannot evaluate ideas reliably yet.
Concretely, we use the average review score of each idea to rank
the top and bottom 25% of all our collected human and AI ideas,
and use this to benchmark various LLM evaluators. Specifically,
we obtain the LLM predicted scores of all ideas and set the median
score as the threshold to measure their accuracy on our balanced
idea ranking data.
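Below is a minimal sketch of this benchmarking protocol; the function and variable names are illustrative, and the prompting of each LLM evaluator is abstracted away into the predicted-score inputs.

```python
# Sketch of the evaluator benchmark: keep the ideas in the top and bottom 25%
# by average human review score, threshold the LLM's predicted scores at
# their median, and report accuracy on this balanced subset. The score
# dictionaries are illustrative inputs.
import numpy as np

def llm_evaluator_accuracy(human_avg: dict[str, float], llm_score: dict[str, float]) -> float:
    ideas = sorted(human_avg, key=human_avg.get)
    k = max(1, len(ideas) // 4)
    subset = ideas[:k] + ideas[-k:]                    # bottom-25% + top-25% ideas
    labels = {i: (i in ideas[-k:]) for i in subset}    # True = top-quartile idea
    threshold = float(np.median([llm_score[i] for i in subset]))
    preds = {i: llm_score[i] >= threshold for i in subset}
    return float(np.mean([preds[i] == labels[i] for i in subset]))
```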
In the second block of Table 11, we compare several different LLM
evaluators: 1) directly giving the review criteria and prompting
for a final score (Baek et al., 2024, Li et al., 2024, Yang et al., 2024);
2) our pairwise ranker as described in Section 3.3; and 3) the
“AI Scientist” reviewer agent (Lu et al., 2024). All of these LLM
evaluators have a lower agreement than our expert reviewers’ scores. Even the best LLM evaluator
— our own Claude-3.5 pairwise ranker — only achieves an accuracy of 53.3%, lower than our inter-
reviewer consistency of 56.1%.
Even if AI-human agreement eventually matches or exceeds human-human agreement, simply
meeting this baseline does not imply that AI-as-a-reviewer is meaningful, since we may be trading
variance for bias, where AI reviewers are more consistent but rely on spurious correlations (Durmus
et al., 2022). Our findings in Table 11 are consistent with these brittleness concerns, as we find a
significant drop in AI-human agreement scores under our study compared to the original studies.
Finally, even though Claude-3.5 pairwise agreements may seem close to human agreement, many
other pieces of evidence throughout the paper lead us to be cautious about the use of LLM-as-a-judge
in such a complex and subjective task. These include our findings on the significant discrepancy
between the agent’s top-ranked ideas and the human expert’s top-ranked ideas (Appendix I) and how
the AI Ideas + Human Rerank condition tends to score higher than the AI Ideas condition on
all metrics in Section 5. These limitations of LLM auto-evaluation not only constrain the effectiveness
of our over-generate-and-rank paradigm for idea generation but also raise concerns about trusting
conclusions that are based primarily on LLM evaluators.
8 Qualitative Analysis and Examples
In this section, we offer some qualitative analysis of human- and AI-generated ideas based on our
collected reviews and present four pairs of randomly sampled human and AI ideas as case studies.
8.1 Analysis of Free-Text Reviews
Following recent practices of using LLMs to extract patterns from text corpora (Zhong et al., 2022,
2023), we use Claude-3.5 to extract and cluster the main points from all reviews. We then manually
verified and labeled each cluster.
Many reviews reinforce our quantitative finding that AI ideas tend to be more novel. For example,
reviewers noted: “The idea of [...] is quite novel in an in-context learning setting.”, “The idea of
exploring [...] using an LLM-based iterative approach is novel.”, “The idea of [...] when constructing
prompts to improve cross-lingual transfer is one that I have not heard of before.”, “I like the idea to
[...], and think it will be helpful for other researchers in the community.”, “Combining [...] is a unique
way of attempting to preserve the gist of the information while likely losing specific identifiers.”, and
“Safeguarding using [...] is clearly novel. Similar ideas have not been seen in the related work.”.
Next, we summarize some common failure modes of AI ideas:
1. Being too vague on implementation details. For example, one reviewer noted: “I’m not
super clear on the details of this lattice and how the model will be prompted, so I’m not super
sure how well the model will complete these subtasks and how well-suited this particular
structure is to completing the overall task.” and another reviewer noted: “For analyzing the
effectiveness of the method, the proposal only provides a very ad-hoc + hand-wavey suggestion
to compare responses across predefined questions.” In another case, the AI idea is criticized for
not considering practical implementation details: “I think in each of the steps, there is something
hard to execute. For example, in step Constellation Formation, how do we do the weighted
sum?” Similarly, other reviews noted: “It’s unclear how CLIP is connected to the language model
and how training a CLIP model would enable the LM to understand images.”, and “There’s
no mentioning on how to prompt the model to generate defensive strategies and refine the
model’s responses using these strategies.” Such vagueness often makes it difficult for reviewers
to make confident judgments: “Because this idea is too general and vague, I can’t really answer
the previous question. An idea needs a certain level of details to be determined if it fits for a
conference/journal but this one misses them.”
2. Misuse of datasets. For example: “I’m not sure about the datasets picked. StereoSet is not a QA
dataset; it simply contains statements. Also, I don’t understand why Dialogue NLI responses
require empathy.”, “I’m concerned the datasets proposed are the right test cases for security of the
code (since they are really just ML/programming problems, not system-level programming).”,
and “the choice of datasets might not be the best to show the effect of incorporating multiple
perspectives, especially TruthfulQA and ScienceQA, which seems to have a single correct
interpretation and answer.” In another example, the benchmark datasets chosen are considered
too easy by the reviewer: “none of the chosen datasets (MATH, GSM8K, and MMLU) uses
complicated math concepts”.
3. Missing or inappropriate baselines. For example: “The proposed method needs to be compared
to simply asking the model to think of one (or several) facts about the question before answering
using more turns. This could be an additional baseline to verify the scoring process is meaningful.”
and “Although the proposal includes some baselines that should be compared to, it does not
mention some methods which seem to do quite well with LLMs.” Sometimes, “the chosen
baselines may not be suitable”, for example, because they are not directly comparable with the
proposed method.
4. Making unrealistic assumptions. For example: “The assumption that model can (mostly)
accurately flag its own hallucinations is quite tricky.”, “there is a presupposed assumption that
hallucinations in LLMs are ungrounded and independent of the data they are trained on, which
is generally not considered true”, “The big issue for the effectiveness of the proposed method is
that, it asserts very strong assumptions on downstream tasks, such as there must exist only two
extremes.”, “Some assumptions (e.g., [...]) are unlikely to be true in practice, especially when
low-resource languages and less represented cultures are included in the study.”, and “A major
assumption in this approach is that the model is able to [...]. However, [...]”.
5. Being too resource-demanding. Despite the fact that we explicitly prompted the agent to
consider feasibility when generating ideas, some of the generated ideas are still too resource-
demanding. For example, one reviewer noted: “The biggest issue to feasibility I see is that the
project calls for fine-tuning BLOOM (See step 5). BLOOM has 176B parameters so it’s going to
take quite a lot of GPUs to fine-tune. From a systems perspective, I see this as causing delays.”
In some other cases, manual data annotation is being criticized for feasibility: “The bottleneck
seems to be the dataset collection process if there are no existing datasets that fit the requirements
of the paper.”, and “the manual evaluation by native speakers or cultural experts could be
time-consuming and resource-intensive”.
6. Not well-motivated. For example: “Not well-motivated and there is not a clear intuition that
this work can work to increase the factuality.”, “And in general the method is not well-motivated
and needs reasons why retrieving from model itself is meaningful by use cases or specific tasks.”,
and “The idea simply doesn’t make sense to me. Given current LLMs’ ability, I’m pretty sure
they can simply recite code like inserting data to a binary search tree.”
7. Not adequately following existing best practices. For example: “The proposal does not seem
to include awareness of what has been previously tried, or more strategic ways to evaluate
success/failures...”
We contrast these with some of the unique strengths and weaknesses of human ideas:
1. Human ideas are generally more grounded in existing research and practical considerations,
but may be less innovative. For example, these ideas might be applying existing techniques
to new problems: “Multilinguality as a debiasing method has already been considered in the
literature, although not necessarily in the prompt engineering framework.” Sometimes people
apply incremental changes to existing techniques: “The overall idea shares quite a similar idea
with program-of-thought (PoT). The only difference is that there is an additional step where an
LLM is prompted to decide whether to use code or not.” Some ideas try to combine existing
techniques: “Query decomposition and RAG separately are well studied, if there is no existing
work that combines both (which I’m not aware of), then it’s reasonably novel.” As some reviewers
noted, human ideas tend to build on known intuitions and results: “There are already existing
works on using available lexicons to improve the translation capabilities of LLMs in general.”
2. Human ideas tend to be more focused on common problems or datasets in the field. For
example: “The problem of models not handling negation properly is a very common problem,
especially among propriety LMs such as claude-3-5-sonnet.”, “The data exist. This project mainly
entails plugging in these datasets to a prompt template and finetuning for a bit. There is little left
unspecified, and it should be quite simple to execute on.”, “I haven’t found any work using this
idea to solve this specific problem, but [...] is definitely not new.”, and “While existing works
have explored the problem of calibration in long-form answers (e.g. [...]), the specific method for
calibration is different.”
3. Human ideas sometimes prioritize feasibility and effectiveness rather than novelty and
excitement. For example, reviewers noted: “I don’t think this will be a groundbreaking finding,
but it will probably work.” and “while the idea is promising and could lead to significant
improvements, it may not be groundbreaking enough to be considered transformative or worthy
of a best paper award”.
8.2 Randomly Sampled Human and AI Ideas with Reviews
We randomly sample four pairs of ideas from different topics to ground our numerical results with
actual examples. In each pair, there is one AI idea and one human idea. To save space, we include the
full project proposal of each idea along with the full set of reviews in the Appendix, but we list their
titles, topics, and average scores here for quick reference (we reveal whether each idea is AI-generated
or human-written in Appendix X):
1. Modular Calibration for Long-form Answers: Appendix P
Topic: Uncertainty; Average Overall Score: 5.5
2. Semantic Resonance Uncertainty Quantification: Calibrating LLM Confidence through Multi-
Path Reasoning: Appendix Q
Topic: Uncertainty; Average Overall Score: 6
3. Translation with LLMs through Prompting with Long-Form Context: Appendix R
Topic: Multilingual; Average Overall Score: 4
4. Linguistic Pivot Constellation: Enhancing Cross-Lingual Transfer for Low-Resource Languages
and Dialects: Appendix S
Topic: Multilingual; Average Overall Score: 6.7
5. LLM Directed Retrieval Querying for Improving Factuality: Appendix T
Topic: Factuality; Average Overall Score: 4.7
6. Semantic Divergence Minimization: Reducing Hallucinations in Large Language Models
through Iterative Concept Grounding: Appendix U
Topic: Factuality; Average Overall Score: 3.3
7. Autoprompting: Generate Diverse Few-shot Examples for Any Application: Appendix V
Topic: Coding; Average Overall Score: 5
8. Temporal Dependency Unfolding: Improving Code Generation for Complex Stateful Systems:
Appendix W
Topic: Coding; Average Overall Score: 6.7
9 Related Work
Research idea generation and execution. Several prior works explored methods to improve idea
generation, such as iterative novelty boosting (Wang et al., 2024), multi-agent collaboration (Baek et al.,
2024), and multi-module retrieval and revision (Yang et al., 2024). While some of them share similar
components as our ideation agent, these works focus on improving the idea generation methods over
vanilla prompting baselines, without comparisons to any human expert baselines. Beyond ideation,
another line of work uses LLMs for executing experiments by generating code given the research
problems (Huang et al., 2024, Tian et al., 2024), or combining idea generation with code generation
to directly implement AI-generated ideas (Li et al., 2024, Lu et al., 2024). These works either use
automatic evaluation on a pre-defined set of problems and benchmarks, setting a constrained problem
space; or rely on proxy metrics like LLM evaluators, which are often unreliable.
LLM for other research-related tasks. LLMs have also been used for several other research-related
tasks, such as generating code to perform data-driven discovery (Gu et al., 2024, Guo et al., 2024, Hu
et al., 2024, Ifargan et al., 2024, Majumder et al., 2024), automatic review generation (D’Arcy et al.,
2024, Liang et al., 2024), related work curation (Ajith et al., 2024, Kang and Xiong, 2024, Lehr et al.,
2024, Press et al., 2024), experiment outcome prediction (Hewitt et al., 2024, Lehr et al., 2024, Manning
et al., 2024, Zhang et al., 2024), and future work recommendation (Zhang et al., 2024). Unlike these
works, we tackle the more creative and open-ended task of research ideation.
Computational creativity. Our work also connects to the line of work on examining AI’s novelty
and diversity in creative tasks. Chakrabarty et al. (2024) found that AI-generated writing is less creative than that of professional writers, while we show that LLM-generated ideas can be judged as more novel than expert-written ideas on the task of research ideation. Another line of work found that LLM generations lack collective
diversity (Anderson et al., 2024, Zhou et al., 2024), which matches our findings on idea generation.
Lastly, several other works conducted human evaluation to study the impact of AI exposure or human-
AI collaboration on novelty and diversity (Ashkinaze et al., 2024, Liu et al., 2024, Padmakumar and He,
2024) with mixed conclusions. While we also conduct a human evaluation of idea novelty, we focus
on the human-AI comparison on the challenging task of research ideation with expert participants.
10 Discussion
To summarize, we compared research ideas generated by our AI agent with ideas written by expert
researchers, and observed the robust finding that expert reviewers rate AI ideas as statistically more
novel than expert ideas. In this section, we discuss some high-level questions readers might have and
suggest some ways to address them.
Question 1: Do these collected expert ideas represent their best ideas? One might argue that the ideas submitted by our idea-writing participants might not represent their best ideas, as discussed in Section 6.1, since most participants came up with their idea on the spot within a short period. In order to
address this concern, we have designed an experiment where we will compare AI ideas with papers
accepted at top-tier AI conferences. To avoid any possible contamination, we target the upcoming
EMNLP 2024 conference, which will release the accepted papers in October 2024. We have generated
AI ideas with our agent on 23 topics from the EMNLP Call For Papers page in July 2024 and cached
them. We pre-registered our analysis plan, which also includes the link to the cached ideas.12 Apart
from comparing the quality of these ideas, we will also compute the overlap between AI-generated
ideas and accepted papers on the same topics.
12https://osf.io/z6qa4
Question 2: Are evaluations based solely on ideas subjective? In the current study, we focused
solely on evaluating the ideas themselves. Ideas that sound novel and exciting might not necessarily
turn into successful projects, and our results indeed indicated some feasibility trade-offs of AI ideas.
We view the current study as a preliminary evaluation of AI-generated ideas. In the next phase, we
will recruit researchers to execute some AI and human-generated ideas into full projects. This will
enable reviewers to assess the complete experimental outcomes, providing a more reliable basis for
evaluation. Furthermore, it will allow us to analyze whether our initial idea evaluations align with
the assessments of the actual project outcomes.
Question 3: Why do you focus only on prompting-based research in NLP? The scope of our study
is limited to prompting research ideas within NLP. We chose this design to facilitate the next phase
of our execution experiment, where we prefer research ideas that are less resource-demanding and
can be executed relatively quickly. We believe that the evaluation protocols we established should be
applicable to other research domains as well, although the conclusions could be different depending
on the research fields. Future work should consider extending such human studies to other research domains, and it would be interesting to compare how the conclusions differ.
Question 4: Can you automate idea execution as well? It is tempting to envision an end-to-end
automated research pipeline where AI agents can implement AI-generated ideas to directly evaluate
their effectiveness. Apart from speeding up scientific discovery, one could also imagine using such
execution agents to automatically verify experiment results in existing papers or new submissions.
We have also explored building an LLM agent to generate code to implement the generated ideas.
Specifically, we provide a template codebase that consists of: (1) loading datasets from Hugging Face or generating synthetic test examples; (2) implementing baseline methods; (3) implementing the proposed method; (4) loading or implementing the evaluation metrics; and (5) running experiments on the test set with the baselines and the proposed method, so that the output of the agent will be a report
of the baseline performance as well as the proposed method’s performance. While this agent can
generate code that compiles and executes, we find that the automated experiments can be misleading
because the agent often skips or modifies steps in the baselines or proposed methods. In some cases,
the metric functions are also not correctly defined. This highlights the core challenge: just comparing
the final experiment results is not enough; we have to verify the faithfulness of the implementations
as well. Performing such implementation verification is not a trivial task, and we leave it to future
work. We provide a detailed description of our idea execution agent in Appendix Y.
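To make the setup concrete, the sketch below shows what such a template codebase could look like; the function names, the exact-match metric, and the report format are assumptions for illustration rather than our exact implementation.

```python
# Illustrative skeleton of the template codebase described above; the function names,
# report format, and use of the Hugging Face datasets library are assumptions.
from datasets import load_dataset


def load_eval_data(dataset_name: str, split: str = "test", max_examples: int = 200) -> list[dict]:
    """Step 1: load a dataset from Hugging Face (or substitute synthetic test examples)."""
    data = load_dataset(dataset_name, split=split)
    return list(data)[:max_examples]


def baseline_method(example: dict) -> str:
    """Step 2: baseline method (e.g., direct prompting), filled in by the agent per idea."""
    raise NotImplementedError


def proposed_method(example: dict) -> str:
    """Step 3: proposed method from the project proposal, filled in by the agent per idea."""
    raise NotImplementedError


def evaluate(predictions: list[str], examples: list[dict]) -> float:
    """Step 4: load or implement the evaluation metric (exact-match accuracy shown here)."""
    gold = [ex["answer"] for ex in examples]
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)


def run_experiments(dataset_name: str) -> dict:
    """Step 5: run the baseline and the proposed method on the test set and report scores."""
    examples = load_eval_data(dataset_name)
    report = {}
    for name, method in [("baseline", baseline_method), ("proposed", proposed_method)]:
        predictions = [method(ex) for ex in examples]
        report[name] = evaluate(predictions, examples)
    return report
```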
11 Ethical Considerations
Publication Policy. The growing use of AI to generate research ideas raises serious concerns about the
potential abuse of these technologies by students or researchers who may flood academic conferences
with low-quality or poorly thought-out submissions. The availability of LLM-generated content
could lead to a decline in the overall quality of academic discourse, as some individuals might take a
lazy approach, relying on AI to both generate ideas and review submissions. This would undermine
the credibility and integrity of the review process. The risks are real. Without proper oversight, we
could see a deluge of submissions that lack depth or intellectual merit. To prevent this, it is essential to
hold researchers accountable for the outputs generated through AI tools. Rigorous standards must be
applied equally to both AI-assisted and human-generated research to ensure that the use of LLMs
does not result in misleading, superficial, or unethical academic contributions.
Intellectual Credit. The use of LLMs to generate research ideas introduces significant ambiguity
around the concept of intellectual credit. Traditional frameworks for attributing credit in research,
based on human authorship and contribution, become less clear when AI plays a significant role
in idea generation. Questions arise around how to distribute credit between the developers of the
LLM, the researchers who designed the frameworks for its use, and the researchers who integrate
AI-generated ideas into their work. Furthermore, it becomes increasingly difficult to trace the origins
of AI-generated contributions, especially when they draw from vast datasets composed of numerous
sources. This complexity calls for a broader rethinking of how intellectual credit is assigned in
AI-driven research. While a complete overhaul of legal and academic norms is beyond the scope
of this project, we advocate for the adoption of transparent documentation practices. Researchers
should clearly disclose the role AI played in the idea generation process, specifying which models,
data sources, and frameworks were used, and outlining the level of human involvement. This could
ensure that the credit distribution in AI-supported research is as transparent and fair as possible.
Potential for Misuse. AI-generated research ideas, especially those that introduce novel concepts,
have the potential to be misused in ways that could lead to harmful or destabilizing outcomes. For
instance, ideation agents could be leveraged to generate adversarial attack strategies or other unethical
applications. This concern aligns with broader arguments from those focused on existential risk
(X-risk), who argue that AI-driven innovation could be a primary route to destabilizing the status
quo, posing risks at a societal or even global level. Our stance is that such discussions on safety
should be evidence-based to the extent possible, and careful evaluation work is an important
component of keeping these discussions grounded in actual, measured capabilities of these systems.
We advocate for continued safety research specifically targeting these types of concerns—such as the
development of Reinforcement Learning from Human Feedback (RLHF) systems or anti-jailbreak
mechanisms for research ideation agents. Additionally, we believe it would be meaningful to create
safety benchmarks that assess the ethical and safe application of AI-generated ideas.
Idea Homogenization. Our analysis showed that current LLMs lack diversity in idea generation.
This raises the important concern that wide adoption of LLMs could result in idea homogenization, where
the generated ideas only reflect a narrow set of perspectives or have systematic biases. Over time,
this could lead to a reduction in the richness and diversity of research outputs globally. Future work
should develop ways to either improve LLMs themselves or refine our idea generation methods to
promote idea diversity. It’s also important to note that our evaluation primarily assesses the quality of
the typical ideas being generated, and may not fully capture the long tail of unique or novel ideas that
would be truly transformative.
Impact on Human Researchers. The integration of AI into research idea generation introduces a
complex sociotechnical challenge, as research is fundamentally a community-driven, collaborative
effort. By introducing AI, particularly LLMs, into this social system, we risk unforeseen consequences.
Overreliance on AI could lead to a decline in original human thought, while the increasing use of
LLMs for ideation might reduce opportunities for human collaboration, which is essential for refining
and expanding ideas. To mitigate these risks, future work should explore new forms of human-AI
collaboration, and our results on human reranking of AI ideas show that even naive human-AI
collaboration approaches can be effective. Beyond reranking, humans can play a critical role in the
ideation process by providing intermediate feedback, taking AI-generated ideas as inspiration for
further development, and bringing their unique expertise into the process. Understanding how to
integrate LLMs into this collaborative process without disrupting the social fabric of research will
be an important ongoing problem, requiring careful consideration of the broader sociotechnical
implications.
Positionality Statement
To be transparent about experimenter biases, we disclose the authors’ anticipated outcomes of the human study, recorded before the experiment was conducted. Among the three authors, Tatsu and Diyi expected a null result from the study, while Chenglei expected AI to be better than humans.
Acknowledgement
We thank all participants who wrote and reviewed ideas for us. Many of them also provided insightful
feedback on various aspects of this study. This project would not have been possible without their
support. To ensure the integrity and fairness of phase II of our study, we leave our participants
anonymous but will update this manuscript with a detailed acknowledgment of all participants in the
project’s final report.
We thank Rose Wang, Dora Zhao, Irena Gao, Isabel Gallegos, Ken Liu, Aryaman Arora, Harshit Joshi,
Shi Feng, Tianyu Gao, Xinran Zhao, Yangjun Ruan, Xi Ye, Mert Yuksekgonul, and members of Tatsu
Lab and SALT Lab for their helpful feedback on the early version of this draft.
We thank our undergraduate intern Isha Goswami and faculty administrator Eric Alejandro Pineda
for assisting with review data collection and financial logistics.
This work was supported by gifts from Open Philanthropy, Tianqiao and Chrissy Chen Institute,
Meta, IBM, and Amazon, and grants from ONR, NSF IIS-2247357, and CNS-2308994.
References
Anirudh Ajith, Mengzhou Xia, Alexis Chevalier, Tanya Goyal, Danqi Chen, and Tianyu Gao. LitSearch:
A Retrieval Benchmark for Scientific Literature Search. ArXiv, abs/2407.18940, 2024.
Barrett R Anderson, Jash Hemant Shah, and Max Kreminski. Homogenization Effects of Large
Language Models on Human Creative Ideation. In Proceedings of the 16th Conference on Creativity &
Cognition, 2024.
Joshua Ashkinaze, Julia Mendelsohn, Qiwei Li, Ceren Budak, and Eric Gilbert. How AI Ideas Affect the
Creativity, Diversity, and Evolution of Human Ideas: Evidence From a Large, Dynamic Experiment.
ArXiv, abs/2401.13481, 2024.
Jinheon Baek, Sujay Kumar Jauhar, Silviu Cucerzan, and Sung Ju Hwang. ResearchAgent: Iter-
ative Research Idea Generation over Scientific Literature with Large Language Models. ArXiv,
abs/2404.07738, 2024.
Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew
Goff, Jonathan Gray, Hengyuan Hu, Athul Paul Jacob, Mojtaba Komeili, Karthik Konath, Minae
Kwon, Adam Lerer, Mike Lewis, Alexander H. Miller, Sandra Mitts, Adithya Renduchintala,
Stephen Roller, Dirk Rowe, Weiyan Shi, Joe Spisak, Alexander Wei, David J. Wu, Hugh Zhang, and
Markus Zijlstra. Human-level play in the game of diplomacy by combining language models with
strategic reasoning. Science, 378:1067 – 1074, 2022.
Alina Beygelzimer, Yann Dauphin, Percy Liang, and Jennifer Wortman Vaughan. The NeurIPS 2021 consistency experiment. https://blog.neurips.cc/2021/12/08/the-neurips-2021-consistency-experiment, 2021. Neural Information Processing Systems blog post.
Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V. Le, Christopher R’e, and Azalia
Mirhoseini. Large Language Monkeys: Scaling Inference Compute with Repeated Sampling. ArXiv,
abs/2407.21787, 2024.
Tuhin Chakrabarty, Philippe Laban, Divyansh Agarwal, Smaranda Muresan, and Chien-Sheng Wu.
Art or Artifice? Large Language Models and the False Promise of Creativity. In CHI, 2024.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of Thoughts Prompting:
Disentangling Computation from Reasoning for Numerical Reasoning Tasks. TMLR, 2023.
Katherine M. Collins, Albert Qiaochu Jiang, Simon Frieder, Li Siang Wong, Miri Zilka, Umang Bhatt,
Thomas Lukasiewicz, Yuhuai Wu, Joshua B. Tenenbaum, William Hart, Timothy Gowers, Wenda
Li, Adrian Weller, and Mateja Jamnik. Evaluating language models for mathematics through
interactions. Proceedings of the National Academy of Sciences of the United States of America, 121, 2024.
Mike D’Arcy, Tom Hope, Larry Birnbaum, and Doug Downey. MARG: Multi-Agent Review Genera-
tion for Scientific Papers. ArXiv, abs/2401.04259, 2024.
Shehzaad Dhuliawala, Mojtaba Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, and
Jason Weston. Chain-of-Verification Reduces Hallucination in Large Language Models. ArXiv,
abs/2309.11495, 2023.
Shizhe Diao, Pengcheng Wang, Yong Lin, Xiang Liu, and Tong Zhang. Active Prompting with
Chain-of-Thought for Large Language Models. In ACL, 2024.
Esin Durmus, Faisal Ladhak, and Tatsunori B. Hashimoto. Spurious Correlations in Reference-Free
Evaluation of Text Generation. In Annual Meeting of the Association for Computational Linguistics, 2022.
URL https://api.semanticscholar.org/CorpusID:248300077.
Ken Gu, Ruoxi Shang, Ruien Jiang, Keying Kuang, Richard-John Lin, Donghe Lyu, Yue Mao, Youran
Pan, Teng Wu, Jiaqian Yu, Yikun Zhang, Tianmai M. Zhang, Lanyi Zhu, Mike A. Merrill, Jeffrey
Heer, and Tim Althoff. BLADE: Benchmarking Language Model Agents for Data-Driven Science.
ArXiv, abs/2408.09667, 2024.
Siyuan Guo, Cheng Deng, Ying Wen, Hechang Chen, Yi Chang, and Jun Wang. DS-Agent: Automated
Data Science by Empowering Large Language Models with Case-Based Reasoning. In ICML, 2024.
Luke Hewitt, Ashwini Ashokkumar, Isaias Ghezae, and Robb Willer. Predicting Results of Social
Science Experiments Using Large Language Models. Preprint, 2024. URL https://docsend.com/view/ity6yf2dansesucf.
Xueyu Hu, Ziyu Zhao, Shuang Wei, Ziwei Chai, Guoyin Wang, Xuwu Wang, Jing Su, Jingjing Xu,
Ming Zhu, Yao Cheng, Jianbo Yuan, Kun Kuang, Yang Yang, Hongxia Yang, and Fei Wu. InfiAgent-
DABench: Evaluating Agents on Data Analysis Tasks. In ICML, 2024.
Qian Huang, Jian Vora, Percy Liang, and Jure Leskovec. MLAgentBench: Evaluating Language
Agents on Machine Learning Experimentation. In ICML, 2024.
Tal Ifargan, Lukas Hafner, Maor Kern, Ori Alcalay, and Roy Kishony. Autonomous LLM-driven
research from data to human-verifiable research papers. ArXiv, abs/2404.17605, 2024.
Hao Kang and Chenyan Xiong. ResearchArena: Benchmarking LLMs’ Ability to Collect and Organize
Information as Research Agents. ArXiv, abs/2406.10291, 2024.
Michelle S. Lam, Janice Teoh, James Landay, Jeffrey Heer, and Michael S. Bernstein. Concept Induction:
Analyzing Unstructured Text with High-Level Concepts Using LLooM. In CHI, 2024.
Steven A. Lehr, Aylin Caliskan, Suneragiri Liyanage, and Mahzarin R. Banaji. ChatGPT as Research
Scientist: Probing GPT’s Capabilities as a Research Librarian, Research Ethicist, Data Generator and
Data Predictor. Proceedings of the National Academy of Sciences of the United States of America, 121(35),
2024.
Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal,
Heinrich Kuttler, Mike Lewis, Wen tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela.
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. In NeurIPS, 2020.
Ruochen Li, Teerth Patel, Qingyun Wang, and Xinya Du. MLR-Copilot: Autonomous Machine
Learning Research based on Large Language Models Agents. ArXiv, abs/2408.14033, 2024.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d’Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel Jaymin Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. Competition-level code generation with AlphaCode. Science, 378:1092 – 1097, 2022.
Weixin Liang, Yuhui Zhang, Hancheng Cao, Binglu Wang, Daisy Yi Ding, Xinyu Yang, Kailas Vo-
drahalli, Siyu He, Daniel Scott Smith, Yian Yin, Daniel A. McFarland, and James Zou. Can Large
Language Models Provide Useful Feedback on Research Papers? A Large-Scale Empirical Analysis.
NEJM AI, 1(8), 2024.
Yiren Liu, Si Chen, Haocong Cheng, Mengxia Yu, Xiao Ran, Andrew Mo, Yiliu Tang, and Yun Huang.
How AI Processing Delays Foster Creativity: Exploring Research Question Co-Creation with an
LLM-based Agent. In CHI, 2024.
Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, and David Ha. The AI Scientist:
Towards Fully Automated Open-Ended Scientific Discovery. ArXiv, abs/2408.06292, 2024.
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon,
Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Sean Welleck, Bodhisattwa Prasad Majumder,
Shashank Gupta, Amir Yazdanbakhsh, and Peter Clark. Self-Refine: Iterative Refinement with
Self-Feedback. In NeurIPS, 2023.
Bodhisattwa Prasad Majumder, Harshit Surana, Dhruv Agarwal, Bhavana Dalvi, Abhijeetsingh
Meena, Aryan Prakhar, Tirth Vora, Tushar Khot, Ashish Sabharwal, and Peter Clark. Discovery-
Bench: Towards Data-Driven Discovery with Large Language Models. ArXiv, abs/2407.01725,
2024.
Benjamin S. Manning, Kehang Zhu, and John J. Horton. Automated Social Science: Language Models
as Scientist and Subjects. SSRN Electronic Journal, 2024.
Vishakh Padmakumar and He He. Does Writing with Language Models Reduce Content Diversity?
In ICLR, 2024.
Ori Press, Andreas Hochlehnert, Ameya Prabhu, Vishaal Udandarao, Ofir Press, and Matthias Bethge.
CiteME: Can Language Models Accurately Cite Scientific Claims? ArXiv, abs/2407.12861, 2024.
Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu,
Donald Metzler, Xuanhui Wang, and Michael Bendersky. Large Language Models are Effective Text
Rankers with Pairwise Ranking Prompting. In ACL, 2024.
Nils Reimers and Iryna Gurevych. Making Monolingual Sentence Embeddings Multilingual using
Knowledge Distillation. In EMNLP, 2020.
Sander Schulhoff, Michael Ilie, Nishant Balepur, Konstantine Kahadze, Amanda Liu, Chenglei Si, Yin-
heng Li, Aayush Gupta, HyoJung Han, Sevien Schulhoff, Pranav Sandeep Dulepet, Saurav Vidyad-
hara, Dayeon Ki, Sweta Agrawal, Chau Pham, Gerson C. Kroiz, Feileen Li, Hudson Tao, Ashay
Srivastava, Hevander Da Costa, Saloni Gupta, Megan L. Rogers, Inna Goncearenco, Giuseppe Sarli,
Igor Galynker, Denis Peskoff, Marine Carpuat, Jules White, Shyamal Anadkat, Alexander Miserlis
Hoyle, and Philip Resnik. The Prompt Report: A Systematic Survey of Prompting Techniques.
ArXiv, abs/2406.06608, 2024.
Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer,
and Wen tau Yih. REPLUG: Retrieval-Augmented Black-Box Language Models. In NAACL, 2024.
Chenglei Si, Zhe Gan, Zhengyuan Yang, Shuohang Wang, Jianfeng Wang, Jordan L. Boyd-Graber, and
Lijuan Wang. Prompting GPT-3 To Be Reliable. In ICLR, 2023.
Müge Simsek, Mathijs de Vaan, and Arnout van de Rijt. Do grant proposal texts matter for funding
decisions? A field experiment. Scientometrics, 129:2521–2532, 2024.
Minyang Tian, Luyu Gao, Shizhuo Dylan Zhang, Xinan Chen, Cunwei Fan, Xuefei Guo, Roland
Haas, Pan Ji, Kittithat Krongchon, Yao Li, Shengyan Liu, Di Luo, Yutao Ma, Hao Tong, Kha Trinh,
Chenyu Tian, Zihan Wang, Bohao Wu, Yanyu Xiong, Shengzhu Yin, Min Zhu, Kilian Lieret, Yanxin
Lu, Genglin Liu, Yufeng Du, Tianhua Tao, Ofir Press, Jamie Callan, E. A. Huerta, and Hao Peng.
SciCode: A Research Coding Benchmark Curated by Scientists. ArXiv, abs/2407.13168, 2024.
Trieu H. Trinh, Yuhuai Wu, Quoc V. Le, He He, and Thang Luong. Solving olympiad geometry without
human demonstrations. Nature, 625:476 – 482, 2024.
Qingyun Wang, Doug Downey, Heng Ji, and Tom Hope. SciMON: Scientific Inspiration Machines
Optimized for Novelty. In ACL, 2024.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Huai hsin Chi, and Denny Zhou. Self-
Consistency Improves Chain of Thought Reasoning in Language Models. In ICLR, 2023.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Huai hsin Chi, F. Xia, Quoc Le,
and Denny Zhou. Chain of Thought Prompting Elicits Reasoning in Large Language Models. In
NeurIPS, 2022.
Orion Weller, Marc Marone, Nathaniel Weir, Dawn J Lawrie, Daniel Khashabi, and Benjamin Van
Durme. “According to . . . ”: Prompting Language Models Improves Quoting from Pre-Training
Data. In EACL, 2023.
Jason Weston and Sainbayar Sukhbaatar. System 2 Attention (is something you might need too).
ArXiv, abs/2311.11829, 2023.
Zonglin Yang, Xinya Du, Junxian Li, Jie Zheng, Soujanya Poria, and E. Cambria. Large Language
Models for Automated Open-domain Scientific Hypotheses Discovery. ACL Findings, 2024.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik
Narasimhan. Tree of Thoughts: Deliberate Problem Solving with Large Language Models. In
NeurIPS, 2023.
Michihiro Yasunaga, Xinyun Chen, Yujia Li, Panupong Pasupat, Jure Leskovec, Percy Liang, Ed Huai
hsin Chi, and Denny Zhou. Large Language Models as Analogical Reasoners. In ICLR, 2024.
Xingjian Zhang, Yutong Xie, Jin Huang, Jinge Ma, Zhaoying Pan, Qijia Liu, Ziyang Xiong, Tolga Ergen,
Dongsub Shim, Honglak Lee, and Qiaozhu Mei. MASSW: A New Dataset and Benchmark Tasks for
AI-Assisted Scientific Workflows. ArXiv, abs/2406.06357, 2024.
Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed Huai hsin Chi, Quoc V.
Le, and Denny Zhou. Take a Step Back: Evoking Reasoning via Abstraction in Large Language
Models. In ICLR, 2024.
Ruiqi Zhong, Charles Burton Snell, Dan Klein, and Jacob Steinhardt. Describing Differences between
Text Distributions with Natural Language. In ICML, 2022.
Ruiqi Zhong, Peter Zhang, Steve Li, Jinwoo Ahn, Dan Klein, and Jacob Steinhardt. Goal Driven
Discovery of Distributional Differences via Language Descriptions. In NeurIPS, 2023.
Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Olivier Bousquet, Quoc Le, and Ed Huai hsin Chi. Least-to-Most Prompting Enables Complex
Reasoning in Large Language Models. In ICLR, 2023.
Yilun Zhou, Caiming Xiong, Silvio Savarese, and Chien-Sheng Wu. Shared Imagination: LLMs
Hallucinate Alike. ArXiv, abs/2407.16604, 2024.
A List of Research Topics
We selected the following list of research topics for our research ideation task:
1. Bias: novel prompting methods to reduce social biases and stereotypes of large language models
2. Coding: novel prompting methods for large language models to improve code generation
3. Safety: novel prompting methods to improve large language models’ robustness against adver-
sarial attacks or improve their security or privacy
4. Multilingual: novel prompting methods to improve large language models’ performance on
multilingual tasks or low-resource languages and vernacular languages
5. Factuality: novel prompting methods that can improve factuality and reduce hallucination of
large language models
6. Math: novel prompting methods for large language models to improve mathematical problem
solving
7. Uncertainty: novel prompting methods that can better quantify uncertainty or calibrate the
confidence of large language models
We use these topic descriptions to elicit ideas from both human participants and our LLM agent.
B Project Proposal Template
We give the following project proposal template to both the AI agent and human idea writers.
1. Title: A concise statement of the main research question to be used as the paper title.
2. Problem Statement: Clearly define the problem your research intends to address. Explain clearly
why this problem is interesting and important.
3. Motivation: Explain why existing methods are not good enough to solve the problem, and explain
the inspiration behind the new proposed method. You should also motivate why the proposed
method would work better than existing baselines on the problem.
4. Proposed Method: Explain how the proposed method works, describe all the essential steps.
5. Step-by-Step Experiment Plan: Break down every single step of the experiments, make sure every
step is executable. Cover all essential details such as the datasets, models, and metrics to be used. If
the project involves prompting, give some example prompts for each step.
6. Test Case Examples: Give at least two concrete examples. The first example should show how the
baseline method fails on the test case. If there are multiple baselines, give examples for all of them.
The second example should show how the proposed method succeeds on the test case. For each test
case, include the input (test example and the full prompt) and the expected output. You should also
provide an explanation for why the outputs from the proposed prompt are better. If the proposed
method has multiple steps, break them down into intermediate steps.
7. Fallback Plan: Propose some alternative plans for what the students should do if the proposed
method doesn’t manage to satisfy the success criteria. For example, you can suggest additional
analysis to help debug why the proposed method didn’t work, which could inform alternative new
methods, or just turn the project into an analysis paper instead by offering some interesting ablation
and insights.
C Project Proposal Demo Example
We present a manually written demonstration example used for project proposal generation. The
example is summarized from an existing paper (Dhuliawala et al., 2023). This same example is given to both the AI agent and the idea-writing experts.
1. Title:
Chain-of-Verification Reduces Hallucination in Large Language Models
2. Problem Statement:
Generation of plausible yet incorrect factual information, termed hallucination, is an unsolved issue
in large language models.
3. Motivation:
A majority of the methods for reducing hallucination can be divided into roughly three categories:
training-time correction, generation-time correction, and via augmentation (tool-use). We want to
take a simpler approach that fully leverages the power of LLM itself. Our key motivation is that large
language models, when suitably prompted, can both generate and execute a plan of how to verify
themselves in order to check their own work, and finally incorporate this analysis into an improved
response.
4. Proposed Method:
Our overall process, which we call Chain-of-Verification (CoVe), thus performs four core steps:
(1) Generate Baseline Response: Given a query, generate the response using the LLM.
(2) Plan Verifications: Given both query and baseline response, generate a list of verification
questions that could help to self-analyze if there are any mistakes in the original response.
(3) Execute Verifications: Answer each verification question in turn, and hence check the answer
against the original response to check for inconsistencies or mistakes.
(4) Generate Final Verified Response: Given the discovered inconsistencies (if any), generate a
revised response incorporating the verification results.
Each of these steps is performed by prompting the same LLM in different ways to obtain the desired
response.
5. Step-by-Step Experiment Plan:
1: Gather Datasets: We choose datasets that evaluate factual correctness, including the Multi-
SpanQA dataset on closed-book QA and the FactScore dataset on generating biographies.
2: Construct Prompts: For the baseline, we use direct prompting where, given a query, we generate
left-to-right as usual using the LLM, with no special tricks. Given that such baseline generations
are typically prone to hallucination, CoVe attempts to identify these hallucinations and correct
them in the following steps:
(1) Plan Verifications: Conditioned on the original query and the baseline response, the
model is prompted to generate a series of verification questions that test the factual claims
in the original baseline response.
(2) Execute Verifications: Given the planned verification questions, the next step is to answer
them in order to assess if any hallucinations exist. The planning prompt conditions on the
baseline response in the first step. The verification questions generated from planning are
answered in the second step, where crucially the context given to the LLM prompt only
contains the questions and not the original baseline response, hence preventing the LLM
from repeating those answers directly.
(3) Generate Final Verified Response: Finally, the improved response that takes verifica-
tion into account is generated. This is executed by a final few-shot prompt where the
context takes into account all of the previous reasoning steps, the baseline response, and
verification question-answer pairs, so that the corrections can take place.
3: Select Models: We test GPT-3.5 (Text-Davinci-003) and GPT-4 from the OpenAI API, as well as
the open-source LLaMA-2-70B-chat.
4: Get Results: Get answer predictions from the models on these datasets with both the baselines
and the proposed method.
5: Analyze Results: Compare whether the new method improves the performance of LLMs in
these tasks as compared to the baselines.
6. Test Case Examples:
• Baseline Prompt Input (Direct Prompting): Name some politicians who were born in NY, New
York.
• Baseline Prompt Expected Output (Direct Prompting): Here are some politicians who were
born in NY, New York:
1. Hillary Clinton - former secretary of state and former Democratic presidential nominee.
2. Donald Trump - former president of the United States.
3. Michael Bloomberg - former Mayor of New York City and former Democratic presidential
candidate.
• Proposed Prompt Input (Chain-of-Verification Prompting; Step 1: Plan Verifications): Generate
a set of verification questions to verify each answer.
• Proposed Prompt Expected Output (Chain-of-Verification Prompting; Step 1: Plan Verifica-
tions):
– Where was Hillary Clinton born?
– Where was Donald Trump born?
– Where was Michael Bloomberg born?
• Proposed Prompt Input (Chain-of-Verification Prompting; Step 2: Execute Verifications):
Answer each of the verification questions.
• Proposed Prompt Expected Output (Chain-of-Verification Prompting; Step 2: Execute Verifica-
tions):
1. Hillary Clinton was born in Chicago, Illinois, United States on October 26, 1947.
2. Donald Trump was born on June 14, 1946, in Queens, New York City, New York, United
States.
3. Michael Bloomberg was born on February 14, 1942, in Boston, Massachusetts, United States.
• Proposed Prompt Input (Chain-of-Verification Prompting; Step 3: Final Response): (Prepend
the original question and the baseline response and the verification QA pairs) Given the verifica-
tion answers, generate a revised response incorporating the verification results.
• Proposed Prompt Expected Output (Chain-of-Verification Prompting; Step 3: Final Response):
Here are some politicians who were born in NY, New York:
1. Donald Trump - former president of the United States.
2. Alexandria Ocasio-Cortez - Democratic member of the U.S. House of Representatives.
• Explanation: Given a user query, a large language model with direct prompting generates a
baseline response that may contain inaccuracies, e.g., factual hallucinations. To improve this,
Chain-of-Verification first generates a plan of a set of verification questions to ask, and then
executes that plan by answering them and hence checking for agreement. We find that individual
verification questions are typically answered with higher accuracy than the original accuracy of
the facts in the original longform generation. Finally, the revised response takes into account the
verifications.
7. Fallback Plan:
If the proposed method does not help as compared to the baseline, analyze each step of the CoVe
process to see if the verification questions are relevant, if the answers to the verification questions
are correct, and whether the generated final verified response is indeed improved over the baseline
response by considering the verification QA pairs. This can help us debug the proposed method or
turn this into interesting analysis on the model’s ability to verify and correct its own responses.
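For reference, the four CoVe steps in this demonstration example can be chained together with a single LLM, as in the following minimal sketch; the call_llm helper and the condensed prompts are illustrative assumptions rather than the exact prompts used.

```python
def chain_of_verification(query: str, call_llm) -> str:
    """Minimal sketch of the CoVe pipeline; call_llm is a hypothetical helper
    that sends a prompt to an LLM and returns its text response."""
    # Step 1: Generate Baseline Response.
    baseline = call_llm(f"Answer the following question.\nQuestion: {query}")

    # Step 2: Plan Verifications - questions that test the factual claims in the baseline.
    plan = call_llm(
        f"Question: {query}\nBaseline answer: {baseline}\n"
        "Generate a list of verification questions (one per line) that check the "
        "factual claims in the baseline answer."
    )
    verification_questions = [q.strip() for q in plan.splitlines() if q.strip()]

    # Step 3: Execute Verifications - answer each question WITHOUT showing the baseline,
    # so the model cannot simply repeat its earlier (possibly hallucinated) claims.
    qa_pairs = [(q, call_llm(f"Answer concisely: {q}")) for q in verification_questions]

    # Step 4: Generate Final Verified Response, conditioning on the query, the baseline,
    # and all verification question-answer pairs.
    verification_block = "\n".join(f"Q: {q}\nA: {a}" for q, a in qa_pairs)
    return call_llm(
        f"Question: {query}\nOriginal answer: {baseline}\n"
        f"Verification results:\n{verification_block}\n"
        "Given the verification results, generate a revised answer that corrects any "
        "inconsistencies in the original answer."
    )
```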
D Style Standardization Prompt
Style Standardization Prompt
You are a writing assistant specialized in editing academic writing. I will give you a student’s research idea and an
idea template. Your task is to edit the student’s idea to follow the template’s format.
Student idea: (Insert the student’s idea here)
Template: (Insert the template idea here)
Make sure that you only edit the wording and formatting, including things like punctuation, capitalization,
linebreaks, and bullet points. Also make sure to edit any informal wording and phrasing to use vocabulary that
sounds like the template’s writing style. No other changes are allowed beyond these.
The main sections should be indexed clearly without indentation at the beginning. The title section does not need
indexing; other sections, including problem statement, motivation, proposed method, step-by-step experiment
plan, test case examples, and fallback plan, should be indexed 1 to 6. Each section can then have sub-bullets for
sub-sections if applicable. Leave an empty line after each section.
You should use tab as indentation and make sure to use appropriate nested indentation for sub-bullets. All bullets
should have a clear hierarchy so people can easily differentiate the sub-bullets. Only leave empty lines between
sections and remove any extra line breaks. If many bullet points are clustered together in a paragraph, separate
them clearly with indentation and appropriate bullet point markers. Change to a new line for each new bullet point.
For the fallback plan, do not list a bunch of bullet points. Instead, condense them into one coherent paragraph.
For line breaks, avoid Raw String Literals or Double Backslashes when using "\n", and change them to spaces or
tabs.
For in-line citations, if the citation mentioned the author’s last name (like "(Si et al., 2023)" or "(An et al., 2024)"), you
should keep them there; but if the citation is just a number (like "[1]" or "[3,4,5]"), you should just remove it and do
some necessary rephrasing to make the sentence still sound coherent without the references.
Apart from minor rephrasing and changing formatting, do not change any content of the idea. You must preserve
the exact meaning of the original idea, do not change, remove, or add any other details. Do not drop any sections
(including test case examples). Do not rename any models, datasets, or methods. Do not drop clarification or
examples in brackets and do not drop any data source mentions (e.g., Chatbot Arena or Wildchat)! Note that
when indexing test case examples, each test case example could have multiple steps of inputs and outputs and you
shouldn’t give separate indices to them. Each test case example should be a whole set of input-output pairs for the
baseline(s) and proposed method.
For the proposed method section, avoid any big changes. If the section comes in as a coherent paragraph, you don’t
have to break it down into bullet points. If the section is already in bullet points, you should keep it that way. If the
section is a mix of both, you should keep the bullet points and the coherent paragraph as they are.
Keep all the clarification and examples mentioned in all the sections and do not remove any of them (including those
in brackets).
For model selection, if any version of Claude is mentioned, change it to the latest version of Claude (Claude-3.5); if
any version of LLaMA is mentioned, change it to the latest version LLaMA-3. Do not make any other model changes.
Now directly generate the edited student idea to match the format of the template.
E Idea Review Form
We use the following review form to elicit reviews from all expert reviewers. Reviewers have one
week to finish each review.
1. Name
2. Institution
3. Email
4. Consent
5. Honor Code: I confirm that I will not use ChatGPT, Claude, Gemini, or any other AI tools when
writing my reviews.
6. Familiarity: Before reviewing the idea, please indicate how familiar you are with the given topic on
a scale of 1 - 5 (this is just for us to understand potential confounders).
1. You have never read about this topic before
2. You have read at least one paper on this topic
3. You have read multiple papers on this topic but have not published any paper on it
4. You have co-authored at least one paper on this topic
5. You have co-authored multiple papers on this topic or have published at least one first-author
paper on this topic
7. Experience: Have you reviewed for major NLP or AI conferences before (e.g., *ACL, COLING,
NeurIPS, ICLR, ICML, AAAI)?
8. Full Research Idea Proposal
9. Novelty Score: Whether the idea is creative and different from existing works on the topic, and
brings fresh insights. You are encouraged to search for related works online. You should consider all
papers that appeared online prior to July 2024 as existing work when judging the novelty.
1. Not novel at all - there are many existing ideas that are the same
2.
3. Mostly not novel - you can find very similar ideas
4.
5. Somewhat novel - there are differences from existing ideas but not enough to turn into a new
paper
6. Reasonably novel - there are some notable differences from existing ideas and probably enough
to turn into a new paper
7.
8. Clearly novel - major differences from all existing ideas
9.
10. Very novel - very different from all existing ideas in a very interesting and clever way
10. Novelty Rationale: Short justification for your score. If you give a low score, you should specify
similar related works. (Your rationale should be at least 2-3 sentences.)
11. Feasibility Score: How feasible is it to implement and execute this idea as a research project? Specifically, how feasible is the idea for a typical CS PhD student to execute within 1-2 months?
You can assume that we have abundant OpenAI / Anthropic API access, but limited GPU compute.
1. Impossible: the idea doesn’t make sense or the proposed experiments are flawed and cannot be
implemented
2.
3. Very challenging: there are flaws in the proposed method or experiments, or the experiments
require compute/human resources beyond any academic lab
4.
5. Moderately feasible: It can probably be executed within the given time frame but would require
careful planning, efficient use of APIs or some advanced computational strategies to overcome
the limited GPU resources, and would require some modifications to the original proposal to
make it work
6. Feasible: Can be executed within the given constraints with some reasonable planning
7.
8. Highly Feasible: Straightforward to implement the idea and run all the experiments
9.
10. Easy: The whole proposed project can be quickly executed within a few days without requiring
advanced technical skills
12. Feasibility Rationale: Short justification for your score. If you give a low score, you should specify
what parts are difficult to execute and why. (Your rationale should be at least 2-3 sentences.)
13. Expected Effectiveness Score: How likely the proposed idea is to work well (e.g., better than existing baselines).
1. Extremely Unlikely: The idea has major flaws and definitely won’t work well
2.
3. Low Effectiveness: The idea might work in some special scenarios but you don’t expect it to
work in general
4.
5. Somewhat ineffective: There might be some chance that the proposed idea can work better than
existing baselines but the improvement will be marginal or inconsistent
6. Somewhat effective: There is a decent chance that the proposed idea can beat existing baselines
by moderate margins on a few benchmarks
7.
8. Probably Effective: The idea should offer some significant improvement over current methods
on the relevant benchmarks
9.
10. Definitely Effective: You are very confident that the proposed idea will outperform existing
methods by significant margins on many benchmarks
14. Expected Effectiveness Rationale: Short justification for your score. (Your rationale should be at
least 2-3 sentences.)
15. Excitement Score: How exciting and impactful this idea would be if executed as a full project.
Would the idea change the field and be very influential?
1. Poor: You cannot identify the contributions of this idea, or it’s not interesting at all and you
would fight to have it rejected at any major AI conference
2.
3. Mediocre: this idea makes marginal contributions and is very incremental
4.
5. Leaning negative: it has interesting bits but overall not exciting enough
6. Leaning positive: exciting enough to be accepted at a major AI conference, but still has some
weaknesses or somewhat incremental
7.
8. Exciting: would deepen the community’s understanding or make major progress in this research
direction
9.
10. Transformative: would change the research field profoundly and worth a best paper award at
major AI conferences
16. Excitement Rationale: Short justification for your score. (Your rationale should be at least 2-3
sentences.)
17. Overall Score: Apart from the above, you should also give an overall score for
the idea on a scale of 1 - 10 as defined below (Major AI conferences in the descriptions below refer to
top-tier NLP/AI conferences such as *ACL, COLM, NeurIPS, ICLR, and ICML.):
1. Critically flawed, trivial, or wrong, would be a waste of students’ time to work on it
2. Strong rejection for major AI conferences
3. Clear rejection for major AI conferences
4. Ok but not good enough, rejection for major AI conferences
5. Decent idea but has some weaknesses or not exciting enough, marginally below the acceptance
threshold of major AI conferences
6. Marginally above the acceptance threshold of major AI conferences
7. Good idea, would be accepted by major AI conferences
8. Top 50% of all published ideas on this topic at major AI conferences, clear accept
9. Top 15% of all published ideas on this topic at major AI conferences, strong accept
10. Top 5% of all published ideas on this topic at major AI conferences, will be a seminal paper
18. Overall Rationale: You should also provide a rationale for your overall score. (Your rationale
should be at least 2-3 sentences.)
19. Confidence: Additionally, we ask for your confidence in your review on a scale of 1 to 5, defined as follows:
1. Your evaluation is an educated guess
2. You are willing to defend the evaluation, but it is quite likely that you did not understand central
parts of the paper
3. You are fairly confident that the evaluation is correct
4. You are confident but not absolutely certain that the evaluation is correct
5. You are absolutely certain that the evaluation is correct and very familiar with the relevant
literature
20. Time: How many minutes did you spend on this task?
F Idea Generation Agent: Additional Implementation Details
Seed Idea Generation Due to the max output length limit of the LLM API, we first generate a large
number of shorter seed ideas. We keep the seed ideas short so that we can explore a greater variety of ideas given the same output token budget. We provide a demonstration example of a seed idea in
Appendix G. Then, we perform deduplication and expand each remaining seed idea into a full project
proposal following our standard template in Appendix B.
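As an illustration, the expansion step could be implemented roughly as follows; the call_llm helper and the abbreviated template string are assumptions for illustration, with the template mirroring the section headers in Appendix B.

```python
# A minimal sketch of expanding a seed idea into a full project proposal.
PROPOSAL_TEMPLATE = (
    "1. Title\n2. Problem Statement\n3. Motivation\n4. Proposed Method\n"
    "5. Step-by-Step Experiment Plan\n6. Test Case Examples\n7. Fallback Plan"
)


def expand_seed_idea(seed_idea: str, call_llm) -> str:
    """Expand a short seed idea into a full proposal following the template."""
    prompt = (
        "Expand the following seed idea into a full project proposal.\n"
        f"Follow this template exactly:\n{PROPOSAL_TEMPLATE}\n\n"
        f"Seed idea:\n{seed_idea}"
    )
    return call_llm(prompt)
```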
Retrieval Augmentation We apply retrieval augmentation to the idea generation prompt in order to increase the diversity of the generated ideas. Specifically, we apply retrieval augmentation half of the time when generating seed ideas, and when we do, we randomly select k = 10 papers from the top 20 retrieved papers to include in the prompt.
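As a concrete illustration, this sampling logic could be implemented as in the minimal sketch below; representing retrieved papers as title strings and the prompt formatting are assumptions for illustration.

```python
# A minimal sketch of the retrieval augmentation step.
import random


def maybe_augment_prompt(base_prompt: str, retrieved_papers: list[str],
                         p_augment: float = 0.5, k: int = 10, top_n: int = 20) -> str:
    """With probability p_augment, append k papers sampled from the top_n retrieved papers."""
    if random.random() >= p_augment:
        return base_prompt  # half of the time: no retrieval augmentation
    candidates = retrieved_papers[:top_n]
    sampled = random.sample(candidates, k=min(k, len(candidates)))
    papers_block = "\n".join(f"- {title}" for title in sampled)
    return f"{base_prompt}\n\nRelevant prior work for inspiration:\n{papers_block}"
```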
Idea Filtering After expanding seed ideas into full project proposals, we did some basic filtering to
remove any project proposals that failed the novelty and feasibility checks:
1. Novelty: We use the literature review module to retrieve the top 10 most relevant papers to the generated idea and ask the LLM to compare each of them to the generated idea (see the sketch after this list). The idea will be filtered if any one of the retrieved papers is judged as equivalent.
2. Feasibility: The idea will be filtered if it requires extensive manual labor or hardware resources
beyond the capacity of a typical academic lab. The idea will also be filtered if it involves any
inconsistency in the experimental setups or assumptions. For example, if the idea assumes
only black-box API access of the LLMs, then it shouldn’t involve experiments that need internal
weight access.
This filtered out about 1% of the generated project proposals.
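The sketch below illustrates the novelty check from step 1 above; retrieve_papers and call_llm are hypothetical stand-ins for our literature review module and the LLM API, and the judging prompt is condensed for illustration.

```python
# A minimal sketch of the novelty filter.
def passes_novelty_check(idea: str, retrieve_papers, call_llm, top_k: int = 10) -> bool:
    """Return False if the LLM judges any of the top-k retrieved papers equivalent to the idea."""
    for paper in retrieve_papers(idea, top_k=top_k):
        judgement = call_llm(
            f"Generated idea:\n{idea}\n\nExisting paper:\n{paper}\n\n"
            "Is the existing paper essentially the same idea as the generated idea? "
            "Answer yes or no."
        )
        if judgement.strip().lower().startswith("yes"):
            return False  # an equivalent paper exists; filter out this idea
    return True
```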
G Demonstration Example: Seed Idea Generation
We present a demonstration example used for seed idea generation. The example is summarized from
an existing paper (Dhuliawala et al., 2023).
Title:
Chain-of-Verification Prompting
Problem:
Generation of plausible yet incorrect factual information, termed hallucination, is an unsolved issue
in large language models.
Existing Methods:
A majority of the methods for reducing hallucination can be divided into roughly three categories:
training-time correction; generation-time correction; and via augmentation (tool-use).
Motivation:
A key observation is that large language models, when suitably prompted, can both generate and
execute a plan of how to verify themselves in order to check their own work, and finally incorporate
this analysis into an improved response.
Proposed Method:
Our overall process, which we call Chain-of-Verification (CoVe), thus performs four core steps:
(1) Generate Baseline Response: Given a query, generate the response using the LLM.
(2) Plan Verifications: Given both query and baseline response, generate a list of verification
questions that could help to self-analyze if there are any mistakes in the original response.
(3) Execute Verifications: Answer each verification question in turn, and hence check the answer
against the original response to check for inconsistencies or mistakes.
(4) Generate Final Verified Response: Given the discovered inconsistencies (if any), generate a
revised response incorporating the verification results.
Each of these steps is performed by prompting the same LLM in different ways to obtain the desired
response.
Experiment Plan:
Compare with zero-shot prompting, Chain-of-Thought, and few-shot prompting on the MultiSpanQA
dataset on closed-book QA and FactScore dataset on generating biographies.
H Generated Seed Ideas and Their Nearest Neighbors
We present several randomly sampled generated seed ideas (see Appendix F for the definition of seed
ideas) on the topic of “novel prompting methods that can better quantify uncertainty or calibrate
the confidence of large language models”. For each idea, we show the most similar idea (nearest
neighbor) based on the embedding similarity, along with the similarity score. In practice, we set a
threshold of 0.8 for determining whether two ideas are duplicates.
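As an illustration of this duplicate check, the sketch below encodes seed ideas, finds each idea's nearest neighbor by cosine similarity, and flags pairs above the 0.8 threshold; the choice of embedding model here is an illustrative assumption.

```python
# A minimal sketch of nearest-neighbor duplicate detection over seed ideas.
from sentence_transformers import SentenceTransformer, util


def find_duplicate_pairs(ideas: list[str], threshold: float = 0.8) -> list[tuple[int, int, float]]:
    """Flag each idea whose nearest neighbor (by cosine similarity) exceeds the threshold."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(ideas, convert_to_tensor=True, normalize_embeddings=True)
    sims = util.cos_sim(embeddings, embeddings)  # pairwise cosine similarity matrix
    duplicates = []
    for i in range(len(ideas)):
        sims[i][i] = -1.0  # exclude self-similarity
        j = int(sims[i].argmax())
        score = float(sims[i][j])
        if score >= threshold:
            duplicates.append((i, j, score))
    return duplicates
```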
Idea 1:
Title: Adaptive Precision Boundary Probing
Problem: LLMs often provide uncertainty estimates that are either too coarse-grained or inappropri-
ately precise, failing to adapt to the inherent ambiguity or precision requirements of different queries.
Existing Methods: Existing uncertainty quantification methods typically use fixed precision scales
or calibration techniques that don’t adapt to the specific context and precision requirements of each
query.
Motivation: Human experts adjust the precision of their uncertainty estimates based on the nature of
the question and the available evidence. We can incorporate this adaptive approach to improve LLM
uncertainty quantification.
Proposed Method: We introduce Adaptive Precision Boundary Probing (APBP), a dynamic prompt-
ing technique that iteratively refines the precision of uncertainty estimates. Given a query, APBP
starts with a coarse-grained confidence interval. It then prompts the model to assess whether this
interval is appropriately precise given the query’s context and the model’s knowledge. If the model
determines that greater precision is warranted, APBP iteratively narrows the interval, prompting
the model at each step to justify the increased precision. Conversely, if the model recognizes high
ambiguity or limited knowledge, APBP widens the interval. Throughout this process, the model is
asked to explicitly reason about the factors influencing the appropriate level of precision, such as the
specificity of the query, the reliability of relevant knowledge, and potential sources of ambiguity. The
final output is an uncertainty estimate with a precision level tailored to the specific query and the
model’s knowledge state.
Experiment Plan: We will evaluate APBP on a diverse set of tasks with varying inherent precision
requirements, including numerical estimation, date prediction, and open-ended text generation. We’ll
compare APBP against fixed-precision uncertainty estimation methods, measuring both calibration
accuracy and the appropriateness of precision levels as judged by human experts.
Nearest Neighbor of Idea 1:
Title: Contextual Confidence Oscillation
Problem: Current methods for quantifying uncertainty in large language models often fail to capture
the dynamic nature of confidence across different contexts within a single query.
Existing Methods: Most existing approaches use static confidence scores or calibration techniques
that don’t account for intra-query contextual shifts.
Motivation: Human confidence often fluctuates as we process different parts of a complex question
or task. By mimicking this oscillation, we can potentially capture a more nuanced and accurate
representation of model uncertainty.
Proposed Method: We propose Contextual Confidence Oscillation (CCO), a novel prompting tech-
nique that encourages the model to continuously re-evaluate and express its confidence as it processes
a query. The prompt is structured as a series of checkpoints, where the model must pause its reasoning,
reflect on its current confidence level, and explain any changes since the last checkpoint. This creates a
confidence trajectory that can be analyzed for patterns, sudden drops, or gradual increases. Addi-
tionally, we introduce ’confidence disruptors’ - intentionally ambiguous or challenging sub-queries
inserted at various points to test the model’s ability to recognize and express increased uncertainty
when appropriate.
Experiment Plan: We will evaluate CCO against standard uncertainty quantification methods on
a range of tasks, including multi-step reasoning problems, ambiguous queries, and long-form text
analysis. We’ll measure not just overall accuracy of uncertainty estimates, but also the correlation
between confidence oscillations and human-annotated difficulty levels of different parts of each
query. We’ll also analyze how well the model’s expressed confidence trajectory aligns with its actual
performance across different segments of complex tasks.
Similarity: 0.70
Idea 2:
Title: Quantum Superposition Confidence Prompting
Problem: Current LLMs struggle to accurately quantify uncertainty across multiple possible answers,
often defaulting to overconfidence in a single response.
Existing Methods: Existing approaches typically involve single-path reasoning or limited branching,
failing to capture the full spectrum of uncertainty.
Motivation: Inspired by quantum mechanics, where particles can exist in multiple states simultane-
ously, we propose a method that allows LLMs to consider multiple answer possibilities concurrently.
Proposed Method: We introduce Quantum Superposition Confidence Prompting (QSCP), where the
LLM is instructed to generate multiple potential answers simultaneously, assigning confidence scores
to each. The prompt encourages the model to ’exist in multiple states,’ exploring contradictory an-
swers and their implications concurrently. For example: ’Imagine you are in a quantum superposition
of multiple expert personas. Each persona will provide an answer to the following question, along
with a confidence score (0-100%). Ensure the personas explore contradictory viewpoints. Question:
[INSERT QUESTION]’. The LLM then generates responses from multiple personas, each with its own
confidence score. The final uncertainty is derived from the distribution of these scores, providing a
more nuanced understanding of the model’s confidence across possible answers.
Experiment Plan: Compare QSCP against standard prompting, chain-of-thought, and other uncer-
tainty quantification methods on diverse question-answering datasets. Evaluate using metrics such
as calibration error, Brier score, and a novel ’quantum uncertainty score’ that measures the spread and
coherence of the generated answer superposition.
Nearest Neighbor of Idea 2:
Title: Quantum Superposition Prompting
Problem: Traditional methods for uncertainty quantification in large language models often fail to
capture the full range of possible interpretations and outcomes, especially for queries with inherent
ambiguity or multiple valid perspectives.
Existing Methods: Current approaches typically focus on generating a single response with an
associated confidence score, or at best, a small set of discrete alternatives.
Motivation: Drawing inspiration from the principle of superposition in quantum mechanics, we pro-
pose a method to represent and reason about multiple possible outcomes simultaneously, providing a
richer and more nuanced uncertainty quantification.
Proposed Method: We present Quantum Superposition Prompting (QSP), a novel framework for
exploring and quantifying uncertainty in language model outputs. QSP begins by prompting the
model to generate a ’superposition’ of possible interpretations or approaches to the given query. Each
element in this superposition is assigned a complex amplitude, representing both its probability
and its relationship to other elements. The model is then guided through a series of ’measurement’
prompts, designed to collapse this superposition along different bases of interpretation. These mea-
surements yield probability distributions over outcomes, capturing different facets of uncertainty.
QSP employs techniques inspired by quantum computing, such as interference and entanglement, to
model how different interpretations interact and influence each other. The final uncertainty quantifi-
cation is derived from the full set of measurements, providing a multi-dimensional representation of
the model’s uncertainty that captures ambiguity, conflicting evidence, and the interdependence of
different interpretations.
Experiment Plan: We will evaluate QSP on tasks that inherently involve multiple valid perspectives
or ambiguous interpretations, such as ethical dilemmas, creative writing prompts, and open-ended
analytical questions. Metrics will include the diversity and coherence of generated superpositions, the
ability to capture human-judged ambiguities, and improvements in uncertainty calibration compared
to classical methods.
Similarity: 0.77
Idea 3:
Title: Fractal Uncertainty Decomposition
Problem: LLMs often provide overly simplistic uncertainty estimates that fail to capture the hierarchi-
cal and nested nature of uncertainty in complex knowledge domains.
Existing Methods: Current uncertainty quantification methods typically produce flat, single-
dimensional confidence scores that don’t reflect the multi-layered structure of knowledge and uncer-
tainty.
Motivation: By recursively decomposing a query into sub-components and assessing uncertainty
at multiple levels of granularity, we can construct a more comprehensive and structurally informed
uncertainty estimate.
Proposed Method: We introduce Fractal Uncertainty Decomposition (FUD), a prompting technique
that recursively breaks down a query into a hierarchical structure of sub-queries, assessing uncertainty
at each level. Given an initial query, FUD prompts the model to identify key sub-components or
aspects of the question. For each sub-component, the model provides an answer and a confidence
estimate. If the confidence for a sub-component is below a certain threshold, FUD recursively applies
the same decomposition process to that sub-component. This continues until either a maximum
depth is reached or all sub-components have high confidence. The resulting structure is a tree of
nested confidence estimates. FUD then aggregates these estimates bottom-up, using a combination
of statistical methods and prompted meta-analysis by the model. The final output is both an overall
uncertainty estimate and a detailed map of the uncertainty structure, showing how confidence varies
across different aspects and levels of the query.
Experiment Plan: We will evaluate FUD on complex, multi-faceted tasks such as scientific explanation,
historical analysis, and technical troubleshooting. We will compare its performance to flat confidence
estimation methods and other hierarchical approaches. Evaluation metrics will include traditional
calibration measures, as well as new metrics designed to assess the quality and informativeness of the
uncertainty decomposition. We will also conduct case studies to demonstrate how FUD can provide
more actionable and interpretable uncertainty information in real-world scenarios.
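A minimal sketch of the recursive decomposition loop that FUD describes, with the decomposition prompt, the confidence prompt, and the bottom-up aggregation rule all left as assumptions (the idea itself also allows prompted meta-analysis for the aggregation step):

def fud(query, answer_with_confidence, decompose, threshold=0.8, max_depth=3, depth=0):
    """Build a tree of (answer, confidence, children) nodes for a query."""
    answer, conf = answer_with_confidence(query)
    children = []
    if conf < threshold and depth < max_depth:
        for sub_query in decompose(query):
            children.append(fud(sub_query, answer_with_confidence, decompose,
                                threshold, max_depth, depth + 1))
    # Bottom-up aggregation: a simple mean over children (an assumption).
    if children:
        conf = sum(node["confidence"] for node in children) / len(children)
    return {"query": query, "answer": answer, "confidence": conf, "children": children}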
Nearest Neighbor of Idea 3:
Title: Semantic Fractal Decomposition
Problem: Current uncertainty quantification methods for large language models often fail to capture
the hierarchical and self-similar nature of conceptual understanding, leading to inconsistent confi-
dence estimates across different levels of abstraction.
Existing Methods: Existing approaches typically focus on flat, single-level uncertainty estimates or
simple hierarchical decompositions that don’t fully capture the complex, nested nature of semantic
understanding.
Motivation: Drawing inspiration from fractal geometry, where patterns repeat at different scales, we
propose a method that recursively decomposes concepts and queries into self-similar sub-components,
allowing for a more nuanced and scale-invariant approach to uncertainty quantification.
Proposed Method: We present Semantic Fractal Decomposition (SFD), a prompting technique that
guides the model to recursively break down a given query or concept into smaller, self-similar com-
ponents. At each level of decomposition, the model is asked to provide a confidence estimate. The
process continues until a predefined depth is reached or the model indicates it can no longer meaning-
fully decompose the concept. The final uncertainty estimate is then constructed by aggregating these
multi-level confidence scores using a novel fractal dimension-inspired algorithm. This approach
allows for capturing uncertainty that may be present at different semantic scales and provides a more
robust and consistent measure of the model’s confidence across varying levels of abstraction.
Experiment Plan: We will evaluate SFD on a diverse set of tasks ranging from simple factual queries
to complex, multi-faceted questions in domains like philosophy, science, and law. We will compare
its performance against traditional flat confidence estimation techniques and simpler hierarchical
methods. Key metrics will include the consistency of uncertainty estimates across related queries at
different levels of abstraction, the correlation between fractal-aggregated confidence scores and actual
model performance, and the interpretability of the decomposition process.
Similarity: 0.81
I Overlap Between AI Ranking and Expert Reranking
We show the overlap between the AI Ideas condition and the AI Ideas + Human Rerank condition
in Table 12. We note that 18 out of the 49 ideas in the AI Ideas + Human Rerank condition
are also ranked as top ideas in the AI Ideas condition by the AI ranker, while the other 31 are not.
Topic           Overlap   New
Bias               2       2
Coding             4       5
Safety             2       3
Multilingual       5       5
Factuality         2       9
Math               2       2
Uncertainty        1       5
Total             18      31
Table 12: Overlap of ideas between AI + Human Rerank and AI conditions, broken down by topic.
J Quality Control of Human Expert Ideas
Each expert was instructed to choose one of the seven specified topics and write one idea on it within 10
days, following the template provided in the annotation document. We included an honor code statement
asking participants not to use any AI tools when writing their ideas. We originally collected N = 50 ideas
and manually checked all of them for quality control. We filtered out one idea that was essentially a
paraphrase of an existing paper's abstract. We still compensated the participant but excluded the idea
from the review task.
K Breakdown of Participant Positions
We show the detailed position breakdown of our 49 idea-writing participants in Table 13 and the
positions of our 79 reviewer participants in Table 14.
Position                     Count
Postdoc                        1
PhD                           36
Master                         9
Undergraduate                  1
Research Scientist             1
Machine Learning Engineer      1
Table 13: Positions of the 49 idea writing participants.
Position                     Count
Postdoc                        7
PhD                           63
Master                         5
Research Scientist             3
Machine Learning Engineer      1
Table 14: Positions of the 79 idea reviewing participants.
L Institutions of the Idea Writing Participants
Institution                                 Count
Stanford University                           11
University of Southern California              6
University of Maryland                         3
University of Illinois Urbana-Champaign        3
Johns Hopkins University                       3
Columbia University                            2
Carnegie Mellon University                     2
University of Pennsylvania                     1
Princeton University                           1
Penn State University                          1
Portland State University                      1
Stony Brook University                         1
University of Chicago                          1
University of Washington                       1
UC Berkeley                                    1
UCSD                                           1
Massachusetts Institute of Technology          1
George Washington University                   1
Yale University                                1
University of Toronto                          1
Georgia Institute of Technology                1
National University of Singapore               1
Peking University                              1
Tsinghua University                            1
LinkedIn                                       1
Norm AI                                        1
Table 15: Institutions of the 49 idea writing participants.
M Institutions of the Idea Reviewing Participants
Institution                                 Count
Stanford University                           25
UC Berkeley                                    4
UT Austin                                      4
University of Maryland                         4
Princeton University                           3
University of Washington                       3
University of Southern California              3
Carnegie Mellon University                     3
University of Chicago                          2
Johns Hopkins University                       2
UCLA                                           2
Georgia Institute of Technology                2
University of Illinois Urbana-Champaign        2
Tsinghua University                            2
Stony Brook University                         1
Ohio State University                          1
National University of Singapore               1
University of Michigan                         1
Dartmouth College                              1
Massachusetts Institute of Technology          1
University of Pennsylvania                     1
University of Toronto                          1
Portland State University                      1
Penn State University                          1
New York University                            1
Columbia University                            1
UC Santa Barbara                               1
Brown University                               1
Amazon                                         1
LinkedIn                                       1
Norm AI                                        1
AMD                                            1
Table 16: Institutions of the 79 reviewer participants.
N Mixed-Effects Models
One way to combine all the statistical tests above is to fit a linear mixed-effects model, where we treat
the condition as the fixed effect and other factors, including the reviewer and the idea, as random effects,
while also accounting for differences among topics. This way, the regression accounts for these potential
confounders as random effects. Specifically, for each metric, we fit the following linear mixed-effects model:
model = smf.mixedlm("Score ~ Condition", df,
groups=df["Topic"],
re_formula="~Condition",
vc_formula={"ReviewerID": "0 + C(ReviewerID)",
"IdeaID": "0 + C(IdeaID)"})
This mixed-effects model analyzes the relationship between Score and Condition, while accounting
for the hierarchical structure of the data. Fixed effects estimate the average effect of Condition on
Score. Random intercepts for Topic allow for varying baseline scores across topics, and random slopes
for Condition within each topic allow the effect of Condition to vary by topic. Additionally, variance
components for ReviewerID and IdeaID account for variability in scores specific to individual reviewers
and ideas, respectively.
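For reference, a minimal sketch of how the per-metric models behind Table 17 could be fit with statsmodels, assuming the review scores are stored in a long-format pandas DataFrame df with columns Score, Condition, Topic, ReviewerID, IdeaID, and Metric; the Metric column and the loop are assumptions about the data layout rather than a description of the released analysis code:

import statsmodels.formula.api as smf

# Assumes a long-format DataFrame `df` with one row per (idea, reviewer, metric).
for metric in ["Novelty", "Excitement", "Feasibility", "Effectiveness", "Overall"]:
    sub = df[df["Metric"] == metric]
    model = smf.mixedlm("Score ~ Condition", sub,
                        groups=sub["Topic"],        # random intercepts and slopes by topic
                        re_formula="~Condition",
                        vc_formula={"ReviewerID": "0 + C(ReviewerID)",  # reviewer variance component
                                    "IdeaID": "0 + C(IdeaID)"})         # idea variance component
    result = model.fit()
    print(metric)
    print(result.summary())  # fixed-effect coefficients, SEs, and p-values as in Table 17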
The results are shown in Table 17. The intercepts in the mixed-effects models represent the estimated
mean score of the baseline condition, which in this context is the Human Ideas. The coefficients for
Condition[AI Ideas] and Condition[AI Ideas + Human Rerank] in the mixed-effects models
represent the difference in the mean score for each metric between the AI ideas and the baseline
(human ideas). For example, a positive coefficient of 0.756 for the novelty score means that AI Ideas score,
on average, 0.756 points higher than Human Ideas on novelty; conversely, a negative coefficient of -0.300
for the feasibility score means that AI Ideas score, on average, 0.300 points lower
than Human Ideas on feasibility. The topic (group) variance in the mixed-effects model
represents the variability in the outcome metric that can be attributed to differences between the
topics, which is relatively small in general. Similarly, the idea variance and reviewer variance in
the mixed-effects model represent the variability in the outcome metric that can be attributed to
differences between individual ideas and between reviewers, respectively. The reviewer variances
are high in general, suggesting that there is substantial variability in how different reviewers rate the
same ideas. This implies that reviewer differences play a significant role in the observed scores, with
some reviewers consistently giving higher or lower ratings.
Overall, the results from the mixed-effects models confirm our main conclusion that AI ideas are rated
as significantly more novel than human ideas.
                                        Coef.     SE      p
Novelty Score
  Intercept                             4.826    0.217   0.000***
  Condition[AI Ideas]                   0.756    0.331   0.023*
  Condition[AI Ideas + Human Rerank]    0.902    0.305   0.003**
  Idea Var                              0.412    0.178
  Reviewer Var                          0.803    0.202
Excitement Score
  Intercept                             4.493    0.212   0.000***
  Condition[AI Ideas]                   0.626    0.303   0.039*
  Condition[AI Ideas + Human Rerank]    0.879    0.298   0.003**
  Idea Var                              0.495    0.227
  Reviewer Var                          0.782    0.167
Feasibility Score
  Intercept                             6.595    0.224   0.000***
  Condition[AI Ideas]                  -0.300    0.294   0.307
  Condition[AI Ideas + Human Rerank]   -0.183    0.314   0.561
  Idea Var                              0.476    0.188
  Reviewer Var                          1.035    0.261
Expected Effectiveness Score
  Intercept                             5.156    0.211   0.000***
  Condition[AI Ideas]                   0.310    0.140   0.027*
  Condition[AI Ideas + Human Rerank]    0.383    0.242   0.114
  Idea Var                              0.200    0.151
  Reviewer Var                          0.469    0.141
Overall Score
  Intercept                             4.660    0.242   0.000***
  Condition[AI Ideas]                   0.137    0.294   0.640
  Condition[AI Ideas + Human Rerank]    0.610    0.320   0.056
  Idea Var                              0.262    0.154
  Reviewer Var                          1.071    0.225

Table 17: Results of linear mixed-effects models. We bold results that are statistically significant
(∗p < 0.05; ∗∗p < 0.01; ∗∗∗p < 0.001). Our main conclusion on AI ideas being more novel than human
ideas still holds here.
O Score Breakdown by Topic
We show the breakdown of all scores across all conditions by topic. Note that due to the smaller
sample sizes for the per-topic breakdown, most results are not statistically significant and only offer
an intuitive understanding of the trends.
Figure 5: Breakdown of all scores by topic.
P Example Idea: Modular Calibration for Long-form Answers
Modular Calibration for Long-form Answers (Part 1)
1. Problem Statement: Calibrating the confidence of Large Language Models (LLMs) when generating long-form
answers, such as essays and code, remains an open challenge in the field of natural language processing.
2. Motivation: While numerous methods have been developed to calibrate the performance of LLMs on
multiple-choice questions or open-domain questions with short answers, extending these approaches to tasks
requiring lengthy responses presents significant difficulties. For instance, in code generation tasks (e.g., the
HumanEval dataset), traditional confidence extraction methods like perplexity may prove inadequate due to
the substantial variation in answer length across questions. Verbalized confidence can be affected by instruction
tuning artifacts or unclear scope, while the reliability of metrics such as Expected Calibration Error (ECE) and
Macro-averaged Calibration Error (MacroCE) may be compromised by differences in task settings. Our aim is
to propose a novel pipeline for confidence extraction and calibration of LLMs for long-form answers, drawing
inspiration from methods used for short or fixed-set answers. This approach will enable us to monitor the model’s
long-form answer generation process and apply targeted external augmentation when necessary, thereby enhancing
both performance and efficiency.
3. Proposed Method: We introduce Modular Calibration, a process comprising four core steps:
1. Extend: Prompt the model to elaborate on the original question in relation to the answer, identifying which
components of the question are addressed in the long-form response.
2. Decompose: Instruct the LLM to break down the extended question and long-form answer into multiple
modules.
3. Extract Confidence: Utilize verbalized confidence or perplexity to determine the confidence level for each
module.
4. Merge: Based on the relationships between the modular questions/answers and the overall questions/an-
swers, prompt the model to combine the modular confidence scores into an overall score representing the
confidence in the long-form answer.
Each of these steps is executed by prompting the same LLM in different ways to elicit the desired response.
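A rough sketch of how the four steps could be chained with a single query_llm callable; the prompt wording below is illustrative and not taken verbatim from the proposal:

def modular_calibration(question, long_answer, query_llm):
    # 1. Extend: identify which points of the question the long-form answer covers.
    extended = query_llm(
        f"Question: {question}\nAnswer: {long_answer}\n"
        "Which points of the question does this answer cover? List them.")
    # 2. Decompose: break the extended question and answer into modules.
    modules = [m for m in query_llm(
        f"{extended}\nDecompose the above question and answer into modules, one per line."
    ).splitlines() if m.strip()]
    # 3. Extract: verbalized confidence (0-1) for each module.
    confidences = [query_llm(f"{m}\nConfidence (0-1):") for m in modules]
    # 4. Merge: prompt the model to combine the modular confidences into one score.
    overall = query_llm(
        f"For each of these points related to the question '{question}', the confidence is: "
        f"{', '.join(confidences)}. What is the overall confidence (0-1) for the whole problem?")
    return overall, list(zip(modules, confidences))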
4. Step-by-Step Experiment Plan:
1. Gather Datasets: Select datasets featuring long answers with correctness annotations. Potential candidates
include GSM8K, Code Gen, and Essay Writing.
2. Construct Prompts:
(a) Establish a baseline using direct prompting, where a query is presented without special techniques.
(b) Analyze outputs to refine prompts for the Extend and Decompose steps.
(c) For the Confidence step, employ vanilla perplexity or verbalized confidence extraction. If performance
is unsatisfactory, explore advanced methods built upon these techniques, such as those presented in
recent research (e.g., FaR paper).
3. Select Models: Evaluate GPT-3.5 (Text-Davinci-003) and GPT-4 from the OpenAI API, as well as the open-
source LLaMA-3-70B-chat.
4. Get Results: Obtain confidence predictions from the models on the selected datasets using both baseline
methods and the proposed Modular Calibration approach.
5. Analyze Results: Compare the calibration performance of LLMs using the new method against the baselines
(e.g., the perplexity of the entire long-form answer). Conduct qualitative and quantitative analyses on each
component of the Modular Calibration process.
Modular Calibration for Long-form Answers (Part 2)
5. Test Case Examples:
• Test Case 1: Verbalized Confidence Prompting
– Input: <Q> <A> Confidence (0-1)
– Output: [Model generates a confidence score between 0 and 1]
• Test Case 2: Modular Calibration Step 1 (Extend)
– Input: Given the answer, can you extend the question and elaborate on what points are covered in the
answer?
– Output: The answer covers these points of the question: (1) how fast A runs; (2) how fast B runs; (3) if A
is faster than B.
• Test Case 3: Modular Calibration Step 2 (Decompose)
– Input: Please decompose the above extended question and answers into modules.
– Output:
* How fast A runs: [relevant excerpt from the original answer]
* How fast B runs: [relevant excerpt from the original answer]
[Additional modules as needed]
• Test Case 4: Modular Calibration Step 3 (Extract)
– Input: How fast A runs: [relevant excerpt from the original answer] Confidence (0-1)
– Output: 1. 0.9; 2. 0.6 [Additional confidence scores for other modules]
• Test Case 5: Modular Calibration Step 4 (Merge)
– Input: For each of these points related to question X, the confidence is: 0.9, 0.6, ... What is the overall
confidence for the whole problem?
– Output: [Model generates an overall confidence score]
6. Fallback Plan: If the proposed Modular Calibration method does not demonstrate improvement over the baseline,
we will execute each sub-question and module individually to assess whether calibration is enhanced for each
component. This approach will facilitate debugging of the proposed method and potentially yield interesting
insights into the relationships between performance/calibration of decomposed modules and overall problems.
Alternatively, we may analyze the model’s ability to effectively decompose questions and answers into appropriate
modules. These analyses will inform potential refinements to the method or provide valuable insights into the
limitations and capabilities of LLMs in handling complex, long-form responses.
Reviewer 1
Novelty: 6 (reasonably novel - there are some notable differences from existing ideas and probably enough to turn
into a new paper)
Rationale: Focus on the long-form setting is novel at the moment. The idea of obtaining modular confidence
estimates for different claims in a long-form output, and synthesizing them into a single uncertainty estimate is not
that complicated, but it does seem to be underexplored.
Feasibility: 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)
Rationale: The only part of the project that seems challenging is obtaining correctness annotations for one of the
datasets (e.g., Essay Writing). GSM8K and code datasets like HumanEval seem like very natural long-form output
settings to try out the idea. Other than this, iterating on the prompts for decomposition / verbalized UQ for each of
the modules will be important, but the author mentions this.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing
baselines by moderate margins on a few benchmarks.)
Rationale: It’s possible that first obtaining verbalized uncertainty estimates for each module, and then synthesizing
into a single score, will outperform the standard baselines of self-consistency over the entire long-form output (using
majority vote as the confidence score). However, I don’t expect this to be dramatically better. If the paper instead set
out with the goal of actually producing the UQ estimates for each claim, then almost no prior work does this, and the
baselines would be less strong.
Excitement: 5 (Leaning negative: it has interesting bits but overall not exciting enough)
Rationale: This seems like the most straightforward possible way to obtain uncertainty estimates for a long-form
generation with an LLM. This means the project could produce some useful engineering artifacts, but it doesn’t
really push the idea to its logical conclusion. Therefore I don’t consider it "exciting enough". There is some mention
of "using the uncertainty estimates to possibly condition on more information" but this is not fleshed out – it could
be more interesting. For example, studying how the fine-grained uncertainty estimates could be used to selectively
retrieve factual information from Wikipedia etc. on a knowledge-intensive task.
Overall Score: 5 (Decent idea but has some weaknesses or not exciting enough, marginally below the acceptance
threshold of major AI conferences)
Rationale: I like the focus on long-form generations. However, this proposal is a very straightforward baseline
and extension of existing work to the long-form generation setting (just produce the long generation, decompose it,
apply verbalized uncertainty on each claim, and finally aggregate them). I could see the paper being well-cited, but I
don’t see an interesting/novel angle here.
Confidence: 5 (You are absolutely certain that the evaluation is correct and very familiar with the relevant literature)
Reviewer 2
Novelty: 6 (reasonably novel - there are some notable differences from existing ideas and probably enough to turn
into a new paper)
Rationale: While existing works have explored the problem of calibration in long-form answers (e.g.
https://arxiv.org/abs/2402.06544), the specific method for calibration is different. Also seems related to FactScore
(https://arxiv.org/abs/2305.14251) where the task was different (getting a factuality score) but the idea of breaking
long-form generations into smaller units, evaluating each separately and then combining does seem related.
Feasibility: 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)
Rationale: The idea seems simple enough to implement with API access, considering all the steps involved in the
method can be done via prompting with API. The proposal does mention using LLaMA3-70B as an additional
model, which would require GPUs I guess.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing
baselines by moderate margins on a few benchmarks.)
Rationale: Since it has been shown that LLMs are quite well calibrated when asked to verbalize the confidence
for short answers, I’m guessing the calibration scores would be pretty good for individual modules. Also LLMs
might be decent at combining confidence scores (especially with detailed instructions and some examples in the
prompt), so overall the method might work well. But it’s unclear if it would do better than the methods proposed in -
https://arxiv.org/abs/2402.06544.
Excitement: 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still has some
weaknesses or somewhat incremental)
Rationale: If the method does work well in getting calibration for long-form answers, I think that would be pretty
exciting. One thing which is missing from the proposal (and why the score was not higher) was that it does not touch
upon the issue that for long-form answers we won’t have a binary correct/incorrect decision but answers can be
partially correct.
Overall Score: 6 (Marginally above the acceptance threshold of major AI conferences)
Rationale: The overall idea makes sense to me, but the score is not higher right now because: (a) it’s unclear what
exactly is meant by ’modules’ especially for essay writing which the proposal mentions as one of the tasks ; (b) the
issue for partial correctness which was mentioned above.
Confidence: 3 (You are fairly confident that the evaluation is correct)
Q Example Idea: Semantic Resonance Uncertainty Quantification
Semantic Resonance Uncertainty Quantification (SRUQ) (Part 1)
1. Problem Statement: Current uncertainty quantification methods for Large Language Models (LLMs) often
rely on simple statistical measures or model-specific attributes, which may not capture the nuanced semantic
uncertainties in complex reasoning tasks. This limitation can lead to overconfident or poorly calibrated model
outputs, potentially resulting in unreliable decision-making in critical applications.
2. Motivation: Existing approaches typically use softmax probabilities, entropy measures, or ensemble
disagreement to quantify uncertainty. However, these methods often fail to capture the semantic nuances and
reasoning complexities in tasks that require deep understanding and multi-step reasoning. Human experts, on
the other hand, gauge their uncertainty by considering how well their reasoning ’resonates’ with their broader
knowledge and experience. By mimicking this process in LLMs, we can potentially develop a more robust and
semantically grounded approach to uncertainty quantification.
3. Proposed Method: We propose Semantic Resonance Uncertainty Quantification (SRUQ), which prompts the
LLM to generate multiple independent reasoning paths for a given problem, then quantifies uncertainty based on
the semantic coherence and mutual reinforcement among these paths. The process involves five key steps:
1. Generating diverse solution attempts using different prompting strategies.
2. Cross-evaluating each solution attempt against the others, assessing logical consistency and mutual support.
3. Constructing a ’resonance graph’ where nodes are solution attempts and edges represent semantic reinforce-
ment.
4. Computing a resonance score based on graph properties like connectivity and centrality.
5. Mapping the resonance score to a calibrated uncertainty estimate.
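As a sketch of steps 3 and 4 above (resonance graph construction and scoring), assuming the pairwise cross-evaluation has already been obtained by prompting and returns a support score in [0, 1]; the use of networkx PageRank and the final aggregation rule are assumptions:

import itertools
import networkx as nx

def resonance_score(solutions, pairwise_support):
    """solutions: list of solution strings; pairwise_support(a, b) -> support score in [0, 1]."""
    graph = nx.Graph()
    graph.add_nodes_from(range(len(solutions)))
    for i, j in itertools.combinations(range(len(solutions)), 2):
        graph.add_edge(i, j, weight=pairwise_support(solutions[i], solutions[j]))
    centrality = nx.pagerank(graph, weight="weight")  # graph centrality of each attempt
    # One possible aggregate (an assumption): centrality-weighted mean edge weight.
    if graph.number_of_edges() == 0:
        return 0.0
    return sum(d["weight"] * (centrality[i] + centrality[j]) / 2
               for i, j, d in graph.edges(data=True)) / graph.number_of_edges()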
Semantic Resonance Uncertainty Quantification (SRUQ) (Part 2)
4. Step-by-Step Experiment Plan:
1. Dataset Preparation
• Utilize three datasets covering different reasoning tasks:
(a) GSM8K for mathematical problem-solving
(b) EntailmentBank for logical deduction
(c) HotpotQA for multi-hop question answering
• Split each dataset into train, validation, and test sets if not already done.
2. Baseline Implementation
• Implement three baseline uncertainty quantification methods:
(a) Softmax probabilities
(b) Monte Carlo Dropout
(c) Ensemble disagreement (using different few-shot prompts)
• Generate predictions and uncertainty estimates on the validation and test sets for each baseline.
3. SRUQ Implementation
(a) Generate 5 diverse solution attempts using different few-shot prompts and temperature settings.
(b) For each pair of solutions, prompt the LLM to evaluate their consistency and mutual support.
(c) Construct the resonance graph using the pairwise evaluations.
(d) Compute the resonance score using graph centrality measures (e.g., PageRank).
(e) Map the resonance score to a calibrated uncertainty estimate using isotonic regression on the validation
set.
4. Evaluation
• Compare SRUQ against the baselines using the following metrics:
(a) Expected Calibration Error (ECE)
(b) Brier score
(c) Area Under the Precision-Recall Curve (AUPRC) for uncertainty ranking
• Evaluate the correlation between uncertainty estimates and actual errors.
5. Analysis
• Visualize the resonance graphs for high and low uncertainty examples.
• Analyze the relationship between graph properties and prediction accuracy.
• Investigate cases where SRUQ significantly outperforms or underperforms compared to baselines.
6. Ablation Studies
• Vary the number of solution attempts.
• Compare different graph centrality measures.
• Evaluate the impact of the cross-evaluation step.
7. Generalization Test
• Test the generalization of SRUQ on out-of-distribution samples by applying the method trained on one
dataset to examples from the other datasets.
Semantic Resonance Uncertainty Quantification (SRUQ) (Part 3)
5. Test Case Examples:
• Baseline Example:
– Input: Q: If a train travels at 60 miles per hour, how far will it travel in 2.5 hours?
– Softmax Output: The train will travel 150 miles in 2.5 hours. (Confidence: 0.92)
– Explanation: The softmax probability is high, but it does not capture the reasoning process or potential
uncertainties in the calculation.
• SRUQ Example:
– Input: Q: If a train travels at 60 miles per hour, how far will it travel in 2.5 hours?
– Solution Attempts:
1. Distance = Speed × Time
* Distance = 60 miles/hour × 2.5 hours
* Distance = 150 miles
2. In 1 hour, the train travels 60 miles
* In 2 hours, it’s 120 miles
* In 0.5 hours, it’s 30 miles
* Total: 120 + 30 = 150 miles
3. Let’s break it down:
* 1 hour: 60 miles
* 1 hour: 60 miles
* 0.5 hour: 30 miles
* Sum: 60 + 60 + 30 = 150 miles
– Cross-Evaluation: All three solutions are consistent and mutually supportive. They use different
approaches but arrive at the same result.
– Resonance Graph: Fully connected graph with high edge weights
– Resonance Score: 0.95
– Calibrated Uncertainty: 0.05
– Final Output: The train will travel 150 miles in 2.5 hours. (Uncertainty: 0.05)
– Explanation: SRUQ generates multiple solution paths, evaluates their consistency, and quantifies
uncertainty based on their semantic resonance. The high resonance score indicates low uncertainty,
which is then calibrated to provide a final uncertainty estimate.
6. Fallback Plan: If SRUQ does not significantly outperform baselines, we can pivot to an analysis paper exploring
why semantic resonance might not capture uncertainty effectively. We could investigate the quality and diversity of
generated solution attempts, potentially improving the prompting strategies. Additionally, we could examine the
effectiveness of the cross-evaluation step, possibly incorporating external knowledge or more structured reasoning.
Furthermore, we could explore the relationship between graph properties and actual uncertainty, which might
reveal insights about how LLMs represent confidence internally. We could also consider combining SRUQ with
traditional uncertainty quantification methods, creating a hybrid approach that leverages both statistical and
semantic information.
Reviewer 1
Novelty: 6 (reasonably novel - there are some notable differences from existing ideas and probably enough to turn
into a new paper)
Rationale: I haven’t seen (and couldn’t find) any prior work which exactly has the same idea as in this proposal.
The proposed idea is definitely related to using consistency among multiple solutions to estimate uncertainty (e.g.
https://arxiv.org/abs/2405.18711 does this across solutions decoded from different layers) but I have not seen the
idea of constructing resonance graph and using graph properties to estimate uncertainty.
Feasibility: 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)
Rationale: The proposed method, SRUQ, should be pretty easy to implement given that LLM API access is
abundant. SRUQ involves multiple steps all of which can be done through prompting via API — getting multiple
solutions, prompting LLMs to get a consistency score between each pair of solutions etc. The parts which cannot
be implemented through API are the baselines e.g. Monte Carlo dropout, and would require GPUs. To do a fair
comparison to the baselines, I imagine SRUQ will also have to be done on open models which could also require
GPUs.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing
baselines by moderate margins on a few benchmarks.)
Rationale: Although the proposal includes some baselines that should be compared to, it does not mention
some methods which seem to do quite well with LLMs (especially getting better with scale) – e.g. methods like
P(True) (https://arxiv.org/abs/2207.05221) or verbalized confidence (https://arxiv.org/abs/2305.14975). It’s not
clear/obvious to me that the proposed method should do better than these baselines.
Excitement: 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still has some
weaknesses or somewhat incremental)
Rationale: While the method is novel and feasible, I’m not too excited by it since some of the other existing methods
out there mentioned above (like https://arxiv.org/abs/2207.05221, https://arxiv.org/abs/2305.14975) are much
simpler and work quite well. Compared to that SRUQ is more complex, and hence maybe has less chance of being
very impactful (unless it works really better).
Overall Score: 6 (Marginally above the acceptance threshold of major AI conferences)
Rationale: The above accept score is assuming the idea does work better than the baselines on some category of
tasks. Overall, given that the idea is novel, the proposal includes comparison to other baselines as well analysis &
ablations, I think that could be enough to get accepted into an AI conference.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
Reviewer 2
Novelty: 6 (reasonably novel - there are some notable differences from existing ideas and probably enough to turn
into a new paper)
Rationale: The proposed approach shares some similar ideas with self-consistency (which suggests the consistency
of sampled LLMs outputs is relatively well calibrated). But the approach is more generalized and fine-grained than
existing work if the approach uses more advanced ‘mutual support evaluation‘ beyond simply comparing the final
answers.
Feasibility: 5 (Moderately feasible: It can probably be executed within the given time frame but would require
careful planning, efficient use of APIs or some advanced computational strategies to overcome the limited GPU
resources, and would require some modifications to the original proposal to make it work.)
Rationale: There lacks some important details in terms of the cross-evaluation part. How is the mutual support
evaluated (by prompting or some other methods?). This part is crucial for implementing the whole pipeline of this
approach.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing
baselines by moderate margins on a few benchmarks.)
Rationale: I think it has some chances to beat the proposed baselines. If the cross-evaluation part is properly
executed. Again, the success of this proposal is highly dependent on that part.
Excitement: 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still has some
weaknesses or somewhat incremental)
Rationale: If this idea actually works, at least it tells something new about how to use multiple samples to provide
better confidence estimation than simple consistency. But the idea itself is still somewhat incremental given the
existence of current consistency-based calibrators.
Overall Score: 6 (Marginally above the acceptance threshold of major AI conferences)
Rationale: Overall there are some incremental contributions, but not too exciting. The algorithm itself can be neat. I
think it can be worth a borderline acceptance.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
Reviewer 3
Novelty: 6 (reasonably novel - there are some notable differences from existing ideas and probably enough to turn
into a new paper)
Rationale: I think the idea is reasonable and indeed identifies some limitations of current works on uncertainty
estimation. However, the consistency between reasoning paths is somehow similar to self-consistency reasoning
from Google and SelfCheckGPT.
Feasibility: 7
Rationale: I think it could be easy to implement and quickly be tried by PhD students or even undergrads. Also, in
the test case example, the setting is straightforward and well-defined.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing
baselines by moderate margins on a few benchmarks.)
Rationale: Based on my experience, the consistency-based methods, although not fully theoretically grounded, can
work pretty well in current uncertainty estimation questions. I believe working this on the reasoning path level
could also work to some extent.
Excitement: 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still has some
weaknesses or somewhat incremental)
Rationale: Overall, this idea identified a good research question, although the method might not be very exciting to
me.
Overall Score: 6 (Marginally above the acceptance threshold of major AI conferences)
Rationale: The novelty and the actual application of this method in the area is limited, but could be an inspiring idea.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
R Example Idea: Translation with LLMs through Prompting with
Long-Form Context
Translation with LLMs through Prompting with Long-Form Context (Part 1)
1. Problem Statement: Stable generation of text in low-resource languages is an unsolved issue in large language
models.
2. Motivation: While LLMs can often produce surprisingly good translations despite not being explicitly
trained for this task, this does not hold for lower-resource languages. LLMs are both more likely to gener-
ate off-target text (text in another language than intended) when prompted to translate to a lower-resource
language, and show increased instability in translation quality across prompt templates in lower-resource languages.
3. Proposed Method: Our proposed method investigates the use of long-form templates to improve generated
translation quality and reduce off-target translations in lower-resource languages. We propose to provide additional
prompt context by translating multi-sentence input, with additional views of the target language with the langid
template provided as context. We do so in multiple stages:
1. Querying the language model to first generate a paragraph containing the source sentence to be translated.
2. Prepending monolingual text in the target language, with langid: tags, above the translation prompt.
3. Presenting both these additional sources of content, prompting the LLM for a translation.
4. Step-by-Step Experiment Plan:
1. Choose datasets: Evaluate on the FLORES-200 datasets, which allow for wide language coverage on the
Wikipedia domain, as well as the WMT-21 test sets for news and law/medical domain.
2. Choose languages: Opt for English-centric translation with:
• 5 high-resource languages with different scripts (French, German, Russian, Chinese, Japanese)
• 5 mid-resource languages (Farsi, Vietnamese, Arabic, Korean, Hebrew)
• 5 low-resource languages with considerably lower likelihood of incidental bilingualism (Gujarati, Thai,
Tajik, Sindhi, Pashto)
3. Choose models: Include the API-based GPT-3.5 (Text-Davinci-003) and GPT-4 model from OpenAI and
Gemini from Google, as well as the open-weight LLaMA-3, Gemma, and Aya models which enable additional
analysis.
4. Gather translation results: Systematically compare standard MT prompt templates to our proposed method
across different models and language pairs. Additionally ablate the steps of the new method (removing langid
templates; replacing langid templates with endonymic langid tags; provide only the generated paragraph;
only the monolingual content).
5. Perform analysis: Evaluate whether the new method improves the performance of LLMs in these tasks as
compared to the baselines using multiple standard automatic metrics for MT (chrF, COMET, BLEU) and token-
level LID to measure off-target translations. Assess which component(s) are necessary for this improvement
and whether or not there are changes across language pair direction and language resource levels or scripts.
Translation with LLMs through Prompting with Long-Form Context (Part 2)
5. Test Case Example:
• Baseline Prompt Template:
[English] This is an example.
[Spanish]
• Baseline Prompt Expected Output:
[Spanish] Este es un ejemplo.
• Proposed Prompt Input (step 1):
Write a paragraph containing the following sentence:
[English] This is an example.
• Expected model output (step 1):
This is an example link. Italics and Bold: use asterisks and underscores
to indicate spans of emphasis. Use one asterisk (or underscore) for
italics and two asterisks (or two underscores) for bold. For example...
• Proposed Prompt Input (step 2):
[Spanish] Computadoras simples son lo suficientemente pequeñas
para residir en los dispositivos móviles.
[Spanish] Las computadoras portátiles, tales come tabletas,
netbooks, notebooks, ultrabooks, pueden ser alimentadas por
pequeñas baterías.
[Spanish] Las computadoras personales en sus diversas formas son
iconos de la llamada era de la información y son lo que la mayoría
de la gente considera como «computadora».
Translate this paragraph from English to Spanish:
[English] This is an example link. Italics and Bold: use asterisks
and underscores to indicate spans of emphasis. Use one asterisk
(or underscore) for italics and two asterisks (or two underscores)
for bold. For example...
[Spanish]
• Proposed Prompt Expected Output:
[Spanish] Este es un enlace de ejemplo. Cursiva y negrita: utilice
asteriscos y guiones bajos para indicar intervalos de énfasis.
Utilice un asterisco (o guión bajo) para cursiva y dos asteriscos
(o dos guiones bajos) para negrita. Por ejemplo...
Translation with LLMs through Prompting with Long-Form Context (Part 3)
5. Test Case Example (Continued):
• Proposed Prompt Input (step 3):
Which of the following sentences are translations of the following
English sentence? Multiple sentences can be chosen.
[English] This is an example.
1. Este es un enlace de ejemplo.
2. Cursiva y negrita: utilice asteriscos y guiones bajos para indicar
intervalos de énfasis.
3. Utilice un asterisco (o guión bajo) para cursiva y dos asteriscos
(o dos guiones bajos) para negrita.
4. Por ejemplo...
• Proposed Prompt Expected Output:
The sentence "This is an example." can be translated to Spanish as:
1. Este es un ejemplo.
2. Por ejemplo...
These two options correctly translate the meaning of
"This is an example." into Spanish.
6. Fallback Plan: If the proposed method does not help as compared to the baseline, analyzing the results of
step 3 would likely provide further insights into how the template should be modified. In addition to potentially
identifying off-target errors, it may be that the model is unable to identify correct translations even if they have
been generated, and results are likely to vary across languages based on their training data. Using the generated
paragraph as provided context and still querying the model to translate at only the sentence level could be compared.
Restricting monolingual text to be retrieved text within the domain of the source sentence could be explored. Adding
few-shot examples in the prompt and comparing other MT prompt templates may also help debug the proposed
method. Including an additional query where the model is first asked to label each generated token by langid and
then asked to re-translate the source including those tokens which are correctly labelled in target may reinforce
langid and guide generation in the target language. Performing layer-wise analyses of likelihood of generating the
next token in-language and in-script for open-weight models may also help debug where and why off-target issues
persist.
Reviewer 1
Novelty: 5 (somewhat novel - there are differences from existing ideas but not enough to turn into a new paper)
Rationale: While I’m not aware of papers that have used this exact prompting strategy, I don’t think that this
proposal will be enough to justify a publication. I think that there should be a variety of strategies suggested + an
analysis of multiple prompting strategies rather than suggesting one strategy. I think that a thorough analysis of the
effects of additional context / langids could potentially turn this into a paper.
Feasibility: 9
Rationale: Such a project that only uses LLM APIs could be executed very quickly without much expertise in
coding/architecture. The only time-consuming part might be iterating and adjusting the prompts in the ablation
studies.
Expected Effectiveness: 7
Rationale: I think that this proposal could work well to guide LLMs to translate in the desired target language, since
this is a known problem with current prompt-based MT strategies (as the writers have suggested).
Excitement: 5 (Leaning negative: it has interesting bits but overall not exciting enough)
Rationale: I’m not sure how well this method will transfer to future models, and this could be a limiting factor in the
longevity of this research. (But this is a limitation of all prompting research...)
Overall Score: 5 (Decent idea but has some weaknesses or not exciting enough, marginally below the acceptance
threshold of major AI conferences)
Rationale: I think that the work should focus on the ablation studies and comparison of multiple prompting
strategies / analysis, rather than focusing on one new strategy.
Confidence: 3 (You are fairly confident that the evaluation is correct)
Reviewer 2
Novelty: 1 (not novel at all - there are many existing ideas that are the same)
Rationale: There are multiple existing works on prompting LLMs on low-resource translation, usually using
few-shot demo. https://proceedings.mlr.press/v202/garcia23a/garcia23a.pdf https://arxiv.org/pdf/2305.14857
Also work explaining why few-shot prompt would work: https://arxiv.org/pdf/2305.10266
Feasibility: 5 (Moderately feasible: It can probably be executed within the given time frame but would require
careful planning, efficient use of APIs or some advanced computational strategies to overcome the limited GPU
resources, and would require some modifications to the original proposal to make it work.)
Rationale: The prompting experiment is mostly feasible given one can afford the API calls. The model, prompts,
and evaluation metrics are concrete, although unclear if the proposed experiment is useful for proving the research
idea, e.g., a few high-resource languages are listed for a research idea that focuses on low-resource languages.
Expected Effectiveness: 3 (Low Effectiveness: The idea might work in some special scenarios but you don’t expect it
to work in general.)
Rationale: The proposed experiment can help find a set of relatively high-performing prompts, but it is unclear
among the prompts proposed if any of them will bring any improvement.
Excitement: 3 (Mediocre: this idea makes marginal contributions and is very incremental)
Rationale: The ability to do prompting/few-shot translation is fundamentally tied to the training data, see
https://arxiv.org/pdf/2305.10266, so trying to solve this problem from the prompting space is inherently limited.
Overall Score: 3 (Clear rejection for major AI conferences)
Rationale: There is similar work on prompting LLMs to generate translation in low-resource languages, hence the
idea is not very novel. Moreover, in terms of the goal to generate high-quality low-resource translation, the gains
likely are not going to come from prompting.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
S Example Idea: Linguistic Pivot Constellation: Enhancing
Cross-Lingual Transfer for Low-Resource Languages and Dialects
Linguistic Pivot Constellation (LPC): Enhancing Cross-Lingual Transfer for Low-Resource
Languages and Dialects (Part 1)
1. Problem Statement: Large language models struggle with cross-lingual transfer, especially for low-resource
languages and dialects. This limitation hinders the models’ ability to perform well on multilingual tasks involving
these languages, potentially exacerbating digital language divides.
2. Motivation: Current approaches often rely on parallel data or multilingual pretraining, which are limited for
many language pairs. Inspired by how polyglots leverage similarities between known languages to learn new ones,
we propose creating a network of conceptual bridges across languages. This method could potentially overcome
the limitations of existing approaches by leveraging the model’s broad knowledge to create connections between
known and unknown linguistic territories.
3. Proposed Method: We introduce Linguistic Pivot Constellation (LPC), a novel prompting technique that
constructs a dynamic network of linguistic pivot points. For a given task, LPC first identifies conceptually similar
languages or dialects to the target language. It then generates a constellation of prompts in these pivot languages,
each capturing a different aspect of the task. The model is guided to ’triangulate’ the correct response by considering
these multiple perspectives. For example, to translate a rare dialect, LPC might use prompts in related languages,
regional lingua francas, and even etymologically connected languages.
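A brief sketch of the pivot selection and constellation-prompt construction described above; the structure of the similarity matrix and the prompt wording are assumptions:

def select_pivots(target_lang, similarity, k=3):
    """similarity: dict mapping (target_lang, other_lang) -> similarity in [0, 1] (hypothetical)."""
    scores = {other: s for (tgt, other), s in similarity.items() if tgt == target_lang}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def constellation_prompt(task_instruction, pivot_examples):
    """pivot_examples: dict mapping pivot language -> task-relevant phrase in that language."""
    context = "\n".join(f"In {lang}: {phrase}" for lang, phrase in pivot_examples.items())
    return ("To help with this task, consider the following related phrases:\n"
            f"{context}\n\nNow, {task_instruction}")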
4. Step-by-Step Experiment Plan:
1. Data Collection
• Gather datasets for translation and question-answering tasks across a diverse set of low-resource
languages and dialects.
• Utilize the FLORES-101 dataset for machine translation and the TyDi QA dataset for question answering.
2. Baseline Implementation
• Implement standard few-shot prompting and existing cross-lingual transfer methods (e.g., zero-shot
cross-lingual transfer) as baselines.
3. LPC Implementation
(a) Create a language similarity matrix based on language families and geographical proximity.
(b) Implement a function to select the most relevant pivot languages for a given target language.
(c) Design prompts for each pivot language that capture different aspects of the task.
4. Prompt Construction
(a) Select 3-5 pivot languages based on the similarity matrix.
(b) Generate task-specific prompts in each pivot language.
(c) Combine these prompts into a ’constellation’ prompt that includes the original task in the target language.
5. Model Selection
• Use GPT-4 as the primary model for experiments.
• Test with GPT-3.5-turbo for comparison.
6. Experiment Execution
(a) Run the baseline methods.
(b) Run the LPC method with varying numbers of pivot languages (1, 3, and 5).
(c) Record the model outputs and performance metrics.
Linguistic Pivot Constellation (LPC): Enhancing Cross-Lingual Transfer for Low-Resource
Languages and Dialects (Part 2)
4. Step-by-Step Experiment Plan (Continued):
7. Evaluation
• Evaluate the results using task-specific metrics:
– BLEU score for translation tasks
– F1 score for question answering tasks
8. Analysis
• Analyze the effectiveness of different pivot language combinations and the method’s scalability to
extremely low-resource scenarios.
• Compare LPC performance against baselines across different language families and resource levels.
5. Test Case Examples:
• Test Case 1:
– Baseline Prompt Input: Translate the following Sicilian sentence to English: ’Unni c’è fumu c’è focu.’
– Baseline Prompt Expected Output: Where there’s smoke, there’s fire.
– Proposed Prompt Input: We will translate a Sicilian sentence to English. To help with this task, consider
the following related phrases:
In Italian: ’Dove c’è fumo c’è fuoco.’
In Neapolitan: ’Addò ce sta ’o fummo ce sta ’o ffuoco.’
In Latin: ’Ubi fumus, ibi ignis.’
Now, translate the Sicilian sentence to English: ’Unni c’è fumu c’è focu.’
– Proposed Prompt Expected Output: Where there’s smoke, there’s fire.
– Explanation: The LPC method provides context from related languages (Italian, Neapolitan, and Latin),
which can help the model better understand and translate the Sicilian phrase. This is especially useful
for low-resource languages like Sicilian, where direct translation data might be limited.
6. Fallback Plan: If the LPC method does not significantly outperform baselines, we will pivot the project towards
an in-depth analysis of cross-lingual transfer mechanisms. We will investigate the relationship between language
similarity and transfer effectiveness, the impact of pivot language selection on performance, and how different
aspects of language (lexical, syntactic, semantic) transfer across the constellation. This analysis could provide
valuable insights into the strengths and limitations of large language models in cross-lingual tasks, potentially
informing future research directions in multilingual Natural Language Processing.
Reviewer 1
Novelty: 9
Rationale: The idea of using a linguistic similarity matrix to form conceptual bridges when constructing prompts
to improve cross-lingual transfer is one that I have not heard of before. I think this could be an interesting way of
leveraging existing information about related languages for NLP tasks in general.
Feasibility: 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)
Rationale: I think the idea makes sense, but more details should be shared about how exactly this language similarity
matrix is constructed and what algorithms will be used for determining language similarity. More details should be
provided on how the prompts for different languages will be obtained and how the data will be collected, which
might be a time bottleneck.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing
baselines by moderate margins on a few benchmarks.)
Rationale: I think that this idea could work well just by providing more context in different languages. The
effectiveness sounds like it might be highly variable on the selection of pivot languages, though.
Excitement: 7
Rationale: I think that this could be interesting beyond the context of prompting, such as the use of pivot languages
in traditional machine translation.
Overall Score: 7 (Good idea, would be accepted by major AI conferences)
Rationale: I think that the idea is sufficiently novel, and if it is executed well with good results, could produce a
quality paper at a top NLP conference.
Confidence: 3 (You are fairly confident that the evaluation is correct)
Reviewer 2
Novelty: 8 (clearly novel - major differences from all existing ideas)
Rationale: The LPC method introduces a novel way of leveraging related languages and dialects to improve cross-
lingual transfer. While cross-lingual transfer and language similarity have been explored, the idea of dynamically
creating a constellation of prompts using pivot languages for specific tasks is a fresh and innovative approach.
Feasibility: 5 (Moderately feasible: It can probably be executed within the given time frame but would require
careful planning, efficient use of APIs or some advanced computational strategies to overcome the limited GPU
resources, and would require some modifications to the original proposal to make it work.)
Rationale: Implementing LPC could be challenging due to the complexities involved in selecting optimal pivot
languages and designing effective prompts for each. While the concept is sound, the practical execution—such as
building the language similarity matrix and dynamically generating prompts—may require substantial effort and
experimentation.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing
baselines by moderate margins on a few benchmarks.)
Rationale: The LPC method has the potential to improve cross-lingual performance, especially in low-resource
languages. By leveraging linguistic similarities, the model might better understand and translate languages with
limited training data.
Excitement: 7
Rationale: The LPC method is exciting because it tackles a critical challenge in multilingual NLP—improving
performance for low-resource languages. If successful, it could significantly enhance the accessibility and usability
of AI models across diverse linguistic contexts, particularly in underrepresented languages.
Overall Score: 6 (Marginally above the acceptance threshold of major AI conferences)
Rationale: The idea is a promising candidate for exploration in the field of multilingual NLP. It introduces a novel
approach that could potentially improve cross-lingual transfer, particularly for low-resource languages and dialects.
However, the challenges in implementation and the uncertain effectiveness of the method warrant a cautious overall
rating.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
Reviewer 3
Novelty: 8 (clearly novel - major differences from all existing ideas)
Rationale: Leveraging language similarity is quite well studied in machine translation, but there hasn't been work studying the use of similar languages as demonstrations in multilingual in-context learning. It would be interesting to see how the model behavior changes with different pivots.
Feasibility: 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)
Rationale: The implementation will mostly involve building the similarity matrix and formatting the prompts. The similarity matrix should be obtainable from existing works. The prompt formatting and experiments should be pretty straightforward with enough API quota.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing
baselines by moderate margins on a few benchmarks.)
Rationale: The idea is pretty interesting, but it's not clear whether similar languages are informative enough for the model, since it still requires the model to understand the similarity between languages and reason over the relationship between the target language and the given languages.
Excitement: 8 (Exciting: would deepen the community’s understanding or make major progress in this research
direction)
Rationale: It would be informative to the community to see whether such demonstrations can lead to good performance for in-context learning. Even if this idea doesn't work, the analysis will be quite informative.
Overall Score: 7 (Good idea, would be accepted by major AI conferences)
Rationale: This work studies an important problem for the multilingual community. The experiment results and
analysis will be quite informative for multilingual in-context learning.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
T Example Idea: LLM Directed Retrieval Querying for Improving
Factuality
LLM Directed Retrieval Querying for Improving Factuality (Part 1)
1. Problem Statement: Large language models can generate flexible, long-form language generations, but
LLM-generated responses often contain hallucinated or factually inconsistent content. Particularly in high-risk
settings, there is a need for methods to improve the factuality of LLMs.
2. Motivation: A common framework for improving the factuality of LLM generations is retrieval augmented
generation (RAG). In a RAG framework, a retriever takes a query as input and retrieves external knowledge from
a high-quality knowledge base from reliable sources. The retrieved content is incorporated into the prompt for
generating the response. One issue with this approach is that the quality of the generation can be bottlenecked by the
quality of the retrieved content. Retrieval can be challenging for tasks where the query objective is underspecified or
additional reasoning (or multi-step reasoning) on the query is required to retrieve content that supports the query.
3. Proposed Method: Our method refines the query by using an LLM to decompose the problem into sub-questions
and generate candidate answers to expand each sub-question. The key steps include:
1. Decomposing the original question into sub-questions using an LLM.
2. Generating candidate answers for each sub-question using the LLM.
3. Expanding each sub-question with generated candidate answers to create retrieval queries.
4. Retrieving passages for each expanded query.
5. Filtering retrieved passages based on retrieval model score.
6. Aggregating filtered passages across sub-questions.
7. Prompting the generative LLM with the aggregated passages as context to answer the original question.
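To make the pipeline above concrete, here is a minimal Python sketch of steps 1-7. The helpers call_llm and retrieve, as well as the score threshold, are hypothetical placeholders introduced for illustration; they are not part of the original proposal.
from typing import Callable, List, Tuple

def directed_retrieval_answer(
    question: str,
    call_llm: Callable[[str], str],                       # hypothetical: prompt -> completion
    retrieve: Callable[[str], List[Tuple[str, float]]],   # hypothetical: query -> [(passage, score)]
    score_threshold: float = 0.5,                         # assumed filtering threshold
) -> str:
    # Step 1: decompose the original question into sub-questions.
    sub_questions = call_llm(
        "Decompose this question into sub-questions, one per line:\n" + question
    ).splitlines()

    passages = []
    for sq in filter(None, (s.strip() for s in sub_questions)):
        # Steps 2-3: generate a candidate answer and expand the sub-question into a retrieval query.
        candidate = call_llm("Give a short candidate answer to: " + sq)
        expanded_query = f"{sq} {candidate}"
        # Steps 4-5: retrieve passages and keep those above the retrieval-score threshold.
        passages += [p for p, score in retrieve(expanded_query) if score >= score_threshold]

    # Step 6: deduplicate and aggregate passages across sub-questions.
    context = "\n".join(dict.fromkeys(passages))

    # Step 7: answer the original question with the aggregated passages as context.
    return call_llm(f"Context:\n{context}\n\nAnswer the question using the context: {question}")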
4. Step-by-Step Experiment Plan:
1. Choose RAG datasets where the retrieval task has underspecified/unique objectives or requires multi-hop
reasoning, such as BIRCO and HotpotQA.
2. Select a retriever, such as an E5 or BGE model, and a generative LLM, such as GPT or LLaMA-3.
3. Establish Baseline:
(a) Use the example question as the query to the retriever to retrieve relevant content from the retrieval
passage pool.
(b) Construct a prompt that provides the retrieved context passages and the question.
(c) Prompt the generative LLM to answer the question using the context.
4. Implement Proposed Method:
(a) Prompt the generative LLM to decompose the question into sub-questions.
(b) For each sub-question, prompt the generative LLM to generate candidate answers.
(c) Use semantic similarity to cluster the generated candidate answers and sample for semantic diversity.
(d) Construct retrieval queries by expanding each sub-question with sampled candidate answers.
(e) Retrieve passages using each query and aggregate results for each sub-question.
(f) Deduplicate retrieved passages and filter based on retrieval model score.
(g) Prompt the generative LLM with filtered passages as context to answer the original question.
LLM Directed Retrieval Querying for Improving Factuality (Part 2)
5. Test Case Examples:
• Test Case 1:
– Original Question: In which region is the village after which lager "Fucking Hell" is named?
– Baseline:
* Retrieval Query: In which region is the village after which lager "Fucking Hell" is named?
* Retrieved Passage: Fucking Hell is a German pale lager, a Pilsner, with an alcohol content of
4.9%. It is named after Fucking, the previous name of the village of Fugging in Austria; hell is the
German word for ’pale’ and a typical description of this kind of beer. The beer’s name was initially
controversial. Both the local authorities in Fucking and the European Union’s Trade Marks and
Designs Registration Office initially objected to the name. It was eventually accepted and the lager
is sold internationally.
* Prompt: Given the retrieved passage(s) as context and the question, answer the question using the
context.
* Answer: The village after which the lager "Fucking Hell" is named is located in Austria.
– Proposed Method:
* Sub-Questions:
1. What village is the lager "Fucking Hell" named after?
2. In which country is this village located?
3. In which specific region or state within that country is the village located?
* Example Retrieval Query: What village is the lager "Fucking Hell" named after? The lager "Fucking
Hell" is named after the village previously known as Fucking, which is now called Fugging, in
Austria.
* Retrieved Passages:
1. Fucking Hell is a German pale lager, a Pilsner, with an alcohol content of 4.9%. It is named after
Fucking, the previous name of the village of Fugging in Austria; hell is the German word for
’pale’ and a typical description of this kind of beer. The beer’s name was initially controversial.
Both the local authorities in Fucking and the European Union’s Trade Marks and Designs
Registration Office initially objected to the name. It was eventually accepted and the lager is sold
internationally.
2. Fugging, spelled Fucking until 2021, is an Austrian village in the municipality of Tarsdorf, located
in the Innviertel region of western Upper Austria. It is 33 km (21 mi) north of Salzburg and 4 km
(2.5 mi) east of the Inn river, which forms part of the German border.
* Prompt: Given the retrieved passage(s) as context and the question, answer the question using the
context.
* Answer: The village after which the lager "Fucking Hell" is named is located in the Innviertel region
of western Upper Austria.
6. Fallback Plan: If the proposed method does not satisfy the success criteria, alternative approaches could be
explored. These may include quantifying the difficulty of various examples and analyzing whether this correlates
with method improvement. The method is likely to be more effective for questions about esoteric facts, where the
model is less likely to have internal knowledge of the answer, or its generated answers are more likely to disagree.
Additionally, the method may be more beneficial for questions requiring information from multiple passages.
Further analysis could help debug why the proposed method did not work, informing alternative new methods or
transforming the project into an analysis paper by offering interesting ablations and insights.
Reviewer 1
Novelty: 1 (not novel at all - there are many existing ideas that are the same)
Rationale: I find this idea is extremely similar to "GenDec: A robust generative Question-decomposition method for
Multi-hop reasoning" by Wu et al. (2024). Link: https://arxiv.org/html/2402.11166v1
Feasibility: 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)
Rationale: Technically, this idea can be quickly reproduced based on the aforementioned paper. Though the motivations and evaluations are different from the existing work, it shouldn't take too long to figure them out.
Expected Effectiveness: 3 (Low Effectiveness: The idea might work in some special scenarios but you don’t expect it
to work in general.)
Rationale: Given that the idea is too similar to an existing one, the author may need to create a new but related idea as a follow-up study of the aforementioned paper. This idea does have a different motivation from the aforementioned one, though, and thus uses different evaluation methods.
Excitement: 2
Rationale: Reviewers may argue the originality and novelty of this idea if it’s submitted to a venue. They may not
find it exciting, either.
Overall Score: 1 (Critically flawed, trivial, or wrong, would be a waste of students’ time to work on it)
Rationale: The students should probably think one step further than the existing study, and they may eventually find a way to improve the existing system.
Confidence: 5 (You are absolutely certain that the evaluation is correct and very familiar with the relevant literature)
Reviewer 2
Novelty: 6 (reasonably novel - there are some notable differences from existing ideas and probably enough to turn
into a new paper)
Rationale: Query decomposition and RAG are each well studied separately; if there is no existing work that combines both (which I'm not aware of), then it's reasonably novel.
Feasibility: 10 (Easy: The whole proposed project can be quickly executed within a few days without requiring
advanced technical skills.)
Rationale: It’s just a series of prompting which should be easy for a CS PhD student.
Expected Effectiveness: 8 (Probably Effective: The idea should offer some significant improvement over current
methods on the relevant benchmarks.)
Rationale: This method involves multiple fine-grained retrieval operations and should naturally outperform
existing retrieval methods without decomposition.
Excitement: 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still has some weaknesses or somewhat incremental)
Rationale: Although I believe in the effectiveness of the proposed method, the high latency compared to baselines is
a concern—training an end-to-end model to reduce latency might be a good add-on.
Overall Score: 7 (Good idea, would be accepted by major AI conferences)
Rationale: This is a good idea. If there is no identical existing work and the authors conduct comprehensive
experiments, it would be a good paper.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
Reviewer 3
Novelty: 5 (somewhat novel - there are differences from existing ideas but not enough to turn into a new paper)
Rationale: The idea aims to tackle a question by breaking it down and solving the sub-questions one by one with RAG. But it seems to be a more specialized form of CoT with RAG.
Feasibility: 5 (Moderately feasible: It can probably be executed within the given time frame but would require
careful planning, efficient use of APIs or some advanced computational strategies to overcome the limited GPU
resources, and would require some modifications to the original proposal to make it work.)
Rationale: The idea assumes a question can be broken down into sub-questions that are independent of each other. In cases where they are not independent, the method might suffer from issues or inefficiency. But perhaps the distribution of such questions is long-tailed, dominated by questions that can be easily broken down. There may also be cases, such as difficult high-level mathematics, where the decomposition grows non-linearly with the length of the question text.
Expected Effectiveness: 5 (Somewhat ineffective: There might be some chance that the proposed idea can work
better than existing baselines but the improvement will be marginal or inconsistent.)
Rationale: The main question is how the sub-questions are created. We can break the question into conditioned parts p(q_0 | q_1, ..., q_n) ... p(q_n | q_0, ..., q_{n-1}) where we assume them to be dependent, or we can use an LLM to reason about their dependency. We can also decompose into leveled sub-questions, e.g., turning "where is this person from" into "which country is this person from", "which city is this person from", and "which district is this person from". The concern is that different decomposition methods might affect the performance differently.
Excitement: 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still has some weaknesses or somewhat incremental)
Rationale: The idea seems exciting as it prevents the LLM from shortcutting the question and hallucinating. But it needs more method formulation on how the question should be broken down. A bare-bones implementation would just degrade to CoT reasoning with RAG at each step, since this could be seen as a subset of CoT methods in some sense.
Overall Score: 6 (Marginally above the acceptance threshold of major AI conferences)
Rationale: I believe there should be more comparison with CoT as motivation: why should prompting the model step by step with RAG be better, and how are the two different? For the problem formulation, it would also be great to list more edge-case examples of how questions can be divided, to help pilot the prompting methods.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
U Example Idea: Semantic Divergence Minimization: Reducing
Hallucinations in Large Language Models through Iterative Concept
Grounding
Semantic Divergence Minimization: Reducing Hallucinations in Large Language Models
through Iterative Concept Grounding (Part 1)
1. Problem Statement: Large language models often generate hallucinations by diverging from the core semantic
content of the input, especially in complex reasoning tasks. This problem undermines the reliability and
trustworthiness of LLMs in critical applications that require accurate and factual responses.
2. Motivation: Current approaches like chain-of-thought prompting focus on generating intermediate steps but
do not explicitly constrain semantic drift. By continuously grounding generated content to the original semantic
space of the input, we can reduce hallucinations while preserving reasoning capabilities. This method leverages the
LLM’s own ability to extract and compare semantic concepts, creating a self-correcting mechanism that does not
require external knowledge bases or complex architectures.
3. Proposed Method: We introduce Semantic Divergence Minimization (SDM) prompting. For each reasoning step,
we prompt the model to:
1. Generate a candidate next step.
2. Extract key semantic concepts from the original input.
3. Measure semantic similarity between the candidate step and extracted concepts.
4. If similarity is below a threshold, regenerate the step with explicit instructions to incorporate more relevant
concepts.
5. Repeat until convergence or maximum iterations.
This creates a semantic ’gravity well’ that keeps reasoning tethered to the input’s conceptual core.
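A minimal sketch of one SDM iteration is shown below; it assumes a hypothetical call_llm helper and treats the 0-10 rating threshold and iteration cap as free parameters, since neither is fixed by the proposal.
from typing import Callable

def sdm_step(problem: str, call_llm: Callable[[str], str],  # hypothetical: prompt -> completion
             threshold: int = 6, max_iterations: int = 3) -> str:
    # Extract key semantic concepts from the original input.
    concepts = call_llm("Extract key concepts from the original question:\n" + problem)
    # Generate a candidate next step.
    step = call_llm("Generate the next step in solving this problem:\n" + problem)
    for _ in range(max_iterations):
        # Ask the model to rate similarity between the candidate step and the extracted concepts.
        rating = call_llm(
            "Rate the semantic similarity between these concepts and the generated step "
            f"on a scale of 0-10, answering with a number only.\nConcepts: {concepts}\nStep: {step}"
        )
        try:
            score = int(rating.strip().split()[0])
        except (ValueError, IndexError):
            score = 0  # treat unparseable ratings as low similarity
        if score >= threshold:
            break  # the step stays grounded in the input's conceptual core
        # Regenerate the step with explicit instructions to incorporate the key concepts.
        step = call_llm(
            f"Regenerate the step, focusing more on these key concepts: {concepts}\nProblem: {problem}"
        )
    return step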
Semantic Divergence Minimization: Reducing Hallucinations in Large Language Models
through Iterative Concept Grounding (Part 2)
4. Step-by-Step Experiment Plan:
1. Dataset Preparation:
• Use two datasets: HotpotQA for multi-hop reasoning and GSM8K for complex math word problems.
• For HotpotQA, utilize the dev set (7,405 questions).
• For GSM8K, employ the test set (1,319 problems).
2. Baseline Implementation:
• Implement two baselines:
– Standard prompting: directly asking the model to answer the question.
– Chain-of-thought (CoT) prompting: asking the model to show its work step-by-step before giving
the final answer.
3. SDM Implementation:
• Implement the SDM method with the following sub-steps for each reasoning iteration:
– Generate next step.
– Extract key concepts from input.
– Measure semantic similarity.
– Regenerate if below threshold.
– Repeat until convergence or maximum iterations.
4. Prompt Engineering:
• Design prompts for each step of SDM. For example:
– "Generate the next step in solving this problem:"
– "Extract key concepts from the original question:"
– "Rate the semantic similarity between these concepts and the generated step on a scale of 0-10:"
– "Regenerate the step, focusing more on these key concepts:"
5. Hyperparameter Tuning:
• Experiment with different similarity thresholds (e.g., 5, 6, 7 out of 10) and maximum iteration limits (e.g.,
3, 5, 7) to find the optimal balance between performance and computational cost.
6. Model Selection:
• Use GPT-4 as the primary model due to its advanced reasoning capabilities.
• Test GPT-3.5-turbo for comparison.
7. Evaluation:
• For HotpotQA, use the official evaluation script to compute Exact Match (EM) and F1 scores.
• For GSM8K, use accuracy as the metric.
• Manually review a subset of responses to assess qualitative improvements in reasoning and reduction of
hallucinations.
8. Analysis:
• Compare SDM results with baselines.
• Analyze the correlation between semantic similarity scores and answer correctness.
• Examine cases where SDM significantly improves or fails to improve over baselines.
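For reference, the HotpotQA metrics in step 7 of the plan above boil down to exact match and token-level F1 after light answer normalization; the following is a simplified re-implementation for illustration, not the official evaluation script.
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    # Lowercase, strip punctuation and articles, and collapse whitespace (standard QA normalization).
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> bool:
    return normalize(prediction) == normalize(gold)

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens, gold_tokens = normalize(prediction).split(), normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_tokens), overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)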
Semantic Divergence Minimization: Reducing Hallucinations in Large Language Models
through Iterative Concept Grounding (Part 3)
5. Test Case Examples:
• Example 1 (HotpotQA): Input: Who was born first, Alec Guinness or Billie Holiday?
Baseline Output: Alec Guinness was born first. He was born on April 2, 1914, while Billie Holiday was born
on April 7, 1915.
SDM Output: The SDM method provides a more structured and transparent reasoning process, reducing the
risk of hallucination by grounding each step in the key concepts (birth dates) from the original question.
• Example 2 (GSM8K): Input: Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and
sells the rest at the farmers’ market daily for $2 per egg. How much money does she make every week?
Baseline Output: Let’s solve this step by step: Janet makes $182 every week.
SDM Output: The SDM method not only provides a clear step-by-step solution but also explicitly verifies
that all key concepts from the original question are incorporated, reducing the risk of introducing irrelevant
information or hallucinating facts.
6. Fallback Plan: If the proposed SDM method does not significantly outperform baselines, we can pivot the project
in several ways. First, we could conduct an in-depth analysis of where and why SDM fails, potentially uncovering
insights about LLM reasoning processes. We might find that SDM works better for certain types of questions or
reasoning tasks, which could lead to a more nuanced application of the method. Second, we could explore variations
of SDM, such as using different prompts for concept extraction or similarity measurement, or incorporating a
dynamic threshold that adjusts based on the complexity of the question. Third, we could combine SDM with other
prompting techniques like chain-of-thought or self-consistency to create a hybrid approach. Finally, if the semantic
grounding aspect proves challenging, we could shift focus to analyzing how LLMs interpret and maintain semantic
consistency throughout multi-step reasoning, which could provide valuable insights for future work on reducing
hallucinations.
Reviewer 1
Novelty: 8 (clearly novel - major differences from all existing ideas)
Rationale: The use of semantic similarity to constrain CoT-styled generation is very new. I have not seen similar
work on it.
Feasibility: 5 (Moderately feasible: It can probably be executed within the given time frame but would require
careful planning, efficient use of APIs or some advanced computational strategies to overcome the limited GPU
resources, and would require some modifications to the original proposal to make it work.)
Rationale: The pipeline is feasible to me. The major challenge would be finding the similarity threshold for each
dataset.
Expected Effectiveness: 3 (Low Effectiveness: The idea might work in some special scenarios but you don’t expect it
to work in general.)
Rationale: I see some drawbacks in this pipeline. First, manually tuning the similarity threshold does not seem like the best practice for scalable applications. The GSM8K math dataset contains pretty elementary math problems. In that case, the semantic similarity threshold would have to be set very high, since the basic math concepts involved in the prompt and the CoT breakdown would be judged highly similar by most existing embedding methods. This raises the question of whether the similarity threshold is non-trivial at all for some tasks.
Excitement: 6 (Leaning positive: exciting enough to be accepted at a major AI conference, but still has some weaknesses or somewhat incremental)
Rationale: Constraining CoT breakdowns is a novel idea and deserves more work and exploration. While the use of semantic similarity has many drawbacks (threshold tuning, task sensitivity, limited scalability), it can still show us some valuable results about constraining CoT breakdowns.
Overall Score: 5 (Decent idea but has some weaknesses or not exciting enough, marginally below the acceptance
threshold of major AI conferences)
Rationale: There are some clear drawbacks inherent to the method, as discussed earlier. If the authors can overcome
these limitations, this idea could yield some interesting findings useful for our understanding of CoT behavior and
could pass above a major conference threshold.
Confidence: 3 (You are fairly confident that the evaluation is correct)
Reviewer 2
Novelty: 4
Rationale: Generally, this method is a form of rejection sampling to improve factuality. It is not too different from previous literature on "constrained decoding" for improving factuality, e.g.:
- Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation
- Don't Say What You Don't Know: Improving the Consistency of Abstractive Summarization by Constraining Beam Search
Feasibility: 9
Rationale: Simple prompting approach that is easy to implement. Evaluation is simple.
Expected Effectiveness: 3 (Low Effectiveness: The idea might work in some special scenarios but you don’t expect it
to work in general.)
Rationale:
1. Right now most LLMs hallucinate in a subtle way: they say things in semantically correct or reasonable ways, but the precise fact is incorrect. Using semantic similarity as a measurement to gauge or control hallucination might not be able to solve this problem.
2. The rejection sampling is based on another LLM; what if that LLM also hallucinates?
Excitement: 3 (Mediocre: this idea makes marginal contributions and is very incremental)
Rationale: The method is not that novel, and I think it is not that effective and might not solve the problem at all.
Overall Score: 3 (Clear rejection for major AI conferences)
Rationale: The experiment design is kind of simple and the evaluation is not comprehensive. I think the idea is in
the range of 4 but the experiment plan further reduces my score.
Confidence: 5 (You are absolutely certain that the evaluation is correct and very familiar with the relevant literature)
Reviewer 3
Novelty: 3 (mostly not novel - you can find very similar ideas)
Rationale: The idea of extracting key semantic concepts, measuring the relevance of the candidate next step, and
possibly rejecting/revising the step is very similar to incorporating self-critique into multi-step reasoning problems.
Different versions of this are already commonly used, especially for solving math problems.
Feasibility: 8 (Highly Feasible: Straightforward to implement the idea and run all the experiments.)
Rationale: The proposed approach should be straightforward to implement: it only requires prompt engineering to
extract semantic concepts and evaluate the relevance of a candidate next step.
Expected Effectiveness: 3 (Low Effectiveness: The idea might work in some special scenarios but you don’t expect it
to work in general.)
Rationale: Compared to chain-of-thought prompting, there’s a reasonable chance this method could work better:
it could help identify when a reasoning step becomes irrelevant to the original question. However, since such
self-critique methods have already been explored, it’s unlikely that this instantiation will work significantly better
than previous ones. Also, the proposed idea of extracting relevant semantic concepts and measuring semantic
similarity seems a bit vague, and it’s not reflected in the provided examples.
Excitement: 2
Rationale: The proposed method is too similar to existing works; it doesn’t contain novel insights that would
meaningfully boost current LM performance or introduce new ideas worth building on. It would not be an exciting
paper.
Overall Score: 2 (Strong rejection for major AI conferences)
Rationale: Similar to the reasoning above: the proposal is too similar to existing works, it doesn’t introduce new
ideas or insights, and is unlikely to meaningfully improve current LM performance.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
V Example Idea: Autoprompting: Generate Diverse Few-shot Examples
for Any Application
Autoprompting: Generate Diverse Few-Shot Examples for Any Application (Part 1)
1. Problem Statement: Adding natural language capabilities to existing software requires manually crafting
few-shot prompts, which is tedious and does not guarantee high coverage.
2. Motivation: Integrating natural language capabilities into software applications often necessitates manually
creating few-shot prompts, a process that is time-consuming and may not ensure comprehensive coverage. An
"Autoprompting" system capable of automatically generating diverse and relevant few-shot examples tailored to
specific applications would significantly reduce manual effort, improve coverage and versatility, and enable rapid
prototyping and iteration of natural language capabilities. Large Language Models can iteratively test different
functionalities of an application and make adjustments to few-shot prompts akin to a human developer. This
approach would ultimately democratize the integration of such capabilities across a wide range of applications and
industries.
3. Proposed Method: This method leverages a Large Language Model (LLM) with coding capabilities. It involves
the following core steps:
1. Extract all user-facing functions and gather their documentation and unit tests, if available.
2. Generate diverse natural language prompts to utilize each function, defining the expected output.
3. Generate code from the natural language prompts and execute the corresponding functions.
4. If the code fails:
• Update the code and retry
• If the code runs but produces an incorrect result, update it using insights from unit tests or general
reasoning.
5. Once you have a few exemplar prompts for all (or desired) functions, generate prompts that compose multiple
functions together and repeat step 4.
By iteratively refining code generation from natural language and leveraging available documentation and tests, this
process aims to create an LLM capable of correctly implementing functions based on natural language instructions.
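As a deliberately simplified sketch of this loop, the snippet below extracts function metadata with Python's inspect module and retries code generation on failure; call_llm, the retry budget, and the bare exec call are illustrative assumptions rather than parts of the proposal (a real system would sandbox execution as described in the experiment plan).
import inspect
import traceback
from typing import Callable, Dict, List

def autoprompt_module(module, call_llm: Callable[[str], str],  # hypothetical: prompt -> completion
                      max_retries: int = 3) -> Dict[str, List[str]]:
    # Collect working (utterance, code) few-shot examples for each user-facing function.
    examples: Dict[str, List[str]] = {}
    for name, func in inspect.getmembers(module, inspect.isfunction):
        if name.startswith("_"):
            continue  # skip private helpers
        doc = inspect.getdoc(func) or ""
        signature = str(inspect.signature(func))
        # Step 2: generate a natural-language request that exercises this function.
        utterance = call_llm(f"Write a user request that would require calling {name}{signature}. Docs: {doc}")
        # Step 3: generate code for the request, then execute it.
        code = call_llm(f"Write Python code that fulfils this request by calling {name}: {utterance}")
        for _ in range(max_retries):
            try:
                exec(code, {name: func})  # NOTE: a real system should sandbox this execution
                examples.setdefault(name, []).append(f"{utterance}\n{code}")
                break
            except Exception:
                # Step 4: if the code fails, update it using the error message and retry.
                code = call_llm(
                    f"The code below failed. Fix it.\nCode:\n{code}\nError:\n{traceback.format_exc()}"
                )
    return examples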
4. Step-by-Step Experiment Plan:
• Applications: When collecting applications from GitHub, prioritize those with clear, well-written documenta-
tion and comprehensive test suites. Include applications from different domains and with varying levels of
complexity to ensure a diverse dataset.
• Few shots and feasibility: Create manual few-shot examples to understand the complexity of the functions
and the quality of the documentation. Begin by creating 4-5 examples for any function, which could also serve
as a starting point for the LLM.
• Extract functions and metadata: Utilize static code analysis tools to ensure accurate and comprehensive
extraction of functions, documentation, and test cases. Consider extracting additional metadata, such as
function signatures, dependencies, and comments, as they can provide valuable context.
• NL Module: Generate diverse user utterances and incorporate techniques to handle variations in natural
language. For each user utterance, generate the expected outcome. Consider generating negative test cases to
improve the model’s ability to handle invalid or ambiguous inputs.
• Execution Module: Incorporate sandboxing or containerization techniques to ensure a secure and isolated
execution environment when executing the generated code. Implement logging and reporting mechanisms to
capture and analyze errors and unexpected behavior.
Autoprompting: Generate Diverse Few-Shot Examples for Any Application (Part 2)
4. Step-by-Step Experiment Plan (Continued):
• Exploration: Incorporate techniques such as code summarization, call graph analysis, and type inference
to provide more contextual information to the agent. Specifically, in any code snippet, if there are other
user-defined functions, retrieve their metadata and use it in the next iteration of prompt generation.
• Store: Utilize a vector database or other structured storage mechanism that supports efficient retrieval and
querying for storing few-shot examples and their outputs. Incorporate mechanisms for versioning and
updating the stored data as the codebase and the underlying models evolve.
• Experiments: Once few-shot examples for different functionalities and their compositions are obtained,
simulate different users with various intents and calculate goal completion and error rates using different
models. Initially, start with a strong model, and once few-shot examples are available, test with weaker and
open-source models.
5. Test Case Examples: Select a toy application from GitHub implemented in Python or JavaScript.
• Direct prompting: Provide the few-shot examples created and check the goal completion and error rates for
the following scenarios.
• Toy example: Calculator app and different utterances to try.
– Provide a complete user utterance with no ambiguity. For example:
* Can you add 4 to 8.
* Divide 6 by 9 and multiply it by 6.
– Provide a user utterance with some ambiguity. For example:
* Take 6 and 9, add them, and then subtract 8. Also, add 2 to the first one. – here the "first" one is
ambiguous as it could be 6 or the intermediate answer (6+9=15).
– Provide a user utterance that is not related to the function. For example:
* Please add A and J. The correct result would be refusing to answer instead of generating add("A",
"J").
6. Fallback Plan: If the proposed methodology does not yield satisfactory results, there are several areas to
investigate. First, examine the documentation to ensure it adequately explains the basic functionality of each
function. Then, assess the coding style to confirm it aligns with recommended practices. Subsequently, evaluate
each module separately. For the NL module, verify that the examples are diverse and that the generated test cases
are aligned. For the execution module, ensure that the correct error messages are being passed and explore ways
to enhance them. The exploration module is the most challenging aspect; if any function has a high dependency
on other functions, traversing it will be difficult. Therefore, initially focus on examples with limited to no function
dependency and gradually increase the complexity.
Reviewer 1
Novelty: 4
Rationale: The proposed method is similar to https://arxiv.org/abs/2210.03493;
https://aclanthology.org/2023.findings-acl.216/
Feasibility: 6 (Feasible: Can be executed within the given constraints with some reasonable planning.)
Rationale: The experiments can be done with sufficient API access. The dataset collection needs some planning but
is in general feasible to do. Setting up the vector database may take extra time.
Expected Effectiveness: 5 (Somewhat ineffective: There might be some chance that the proposed idea can work
better than existing baselines but the improvement will be marginal or inconsistent.)
Rationale: The proposal is vague: it doesn't mention what the final evaluation metric is and does not provide a sufficient description of the compared baseline. The prompt in the direct prompting baseline is confusing to me as well. Overall, it's hard to assess the effectiveness.
Excitement: 4
Rationale: Given that the proposed method is vague, I am unsure about its contributions and effectiveness, and
therefore I feel less excited about it.
Overall Score: 4 (Ok but not good enough, rejection for major AI conferences)
Rationale: The descriptions are confusing and I'm not really sure what the focus or contribution is. The title and problem statement mention ensuring "diversity"/"high coverage" as the goal, but later sections don't describe how this is ensured. The "Test Case Examples" section doesn't explain how the components in the "Step-by-Step Experiment Plan" are used.
Confidence: 3 (You are fairly confident that the evaluation is correct)
Reviewer 2
Novelty: 7
Rationale: Mapping natural language to custom applications is a hugely impactful capability, and doing so
automatically is really interesting. I like the focus on autoprompting for these types of translations, as the task
is feasible since it builds off some of the "few-shot prompting" that developers might normally do to add NL
functionality, with a more automatic process that has real system checks/verifications (e.g., running the applications
through containers). A related work from HCI tries to enable individual developers to add such NL functionality
to their own applications via a DSL + NL program signatures (https://jackieyang.me/reactgenie/). This work is
distinguished, as it would empower adding such NL functionality to any application, without changing the code.
Feasibility: 4
Rationale: The project infrastructure seems more difficult than simply choosing some prompting methods. It would
be an iterative process choosing real example applications from Github, and developing the few-shot prompts
manually to get a feel for this task. Then, some of the modules seem like 1-2 week tasks (Execution Module,
Exploration, Storage) which I estimate would make the project more like 3 - 4 months to complete all modules AND
to do the evaluations.
Expected Effectiveness: 7
Rationale: The baseline here is a zero-shot prompt, asking to do the NL intent and feeding in all the documentation
of the API. Assuming the author is correct to say that such NL function mapping requires good few & diverse
few-shot examples, I expect the method to work well. It uses a number of external systems to enrich the code
dataset to give the LLM context and uses system errors to inform. So in some ways, Autoprompting is allowing an
agent to make use of all these SWE tools for understanding the software, which then will allow it to maximize its
understanding and better retrieve good few-shot examples for the task at hand.
Excitement: 7
Rationale: Seems like an impactful and ambitious outcome if completed. I am curious how such an approach fits into the conversation about general agents, which can leverage API/tool/function calls. It's a little unclear from the toy example why existing function-calling models can't translate these NL intents into function calls.
Overall Score: 6 (Marginally above the acceptance threshold of major AI conferences)
Rationale: The results would be really exciting and the technical infrastructure to enable the Autoprompting agent would be impressive. However, I'm missing a sense of which cases would be really difficult for other generalist web/system agents yet where finding few-shot examples for this task is really needed. Thus, the core idea of the method doesn't seem clarified enough to yield a really clear takeaway.
Confidence: 3 (You are fairly confident that the evaluation is correct)
W Example Idea: Temporal Dependency Unfolding: Improving Code
Generation for Complex Stateful Systems
Temporal Dependency Unfolding: Improving Code Generation for Complex Stateful Sys-
tems (Part 1)
1. Problem Statement: Generating code for complex, stateful systems or applications with intricate temporal
dependencies remains challenging for current code generation models. Most existing approaches focus on
generating individual functions or small code snippets without fully considering the temporal aspects and state
changes in larger systems. This limitation hinders the applicability of AI-assisted programming in areas such as
distributed systems, game development, and real-time applications.
2. Motivation: Many real-world applications require careful management of state over time. Existing code
generation models struggle with capturing the full complexity of temporal dependencies and state changes in
larger systems. A method that can effectively reason about and generate code for systems with complex temporal
dependencies could significantly improve the applicability of AI-assisted programming in critical areas. Our
proposed Temporal Dependency Unfolding method is inspired by how human developers approach complex
system design, first identifying key states and their relationships before implementing the detailed logic.
3. Proposed Method: We propose Temporal Dependency Unfolding, a novel prompting technique that guides the
model to generate code by explicitly reasoning about state changes and temporal relationships. The method consists
of five steps:
1. State Identification: Prompt the model to identify key states and variables that change over time in the target
system.
2. Temporal Graph Construction: Guide the model to create a conceptual graph of how these states evolve and
interact over time.
3. Staged Code Generation: Generate code in stages, focusing on different temporal slices or state transitions in
each stage.
4. Consistency Verification: After each stage, prompt the model to verify temporal consistency and make
necessary adjustments.
5. Integration: Finally, guide the model to integrate the stage-wise generated code into a cohesive system,
ensuring proper handling of all temporal dependencies.
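A minimal sketch of how these five prompting stages could be chained is shown below; call_llm is an assumed helper and the prompts are paraphrased from the steps above rather than taken from an existing implementation.
from typing import Callable, List

def temporal_dependency_unfolding(task: str, call_llm: Callable[[str], str]) -> str:
    # Step 1: State Identification.
    states = call_llm(f"Identify the key states and variables that change over time in this system:\n{task}")
    # Step 2: Temporal Graph Construction.
    graph = call_llm(f"Create a conceptual graph showing how the identified states evolve and interact over time:\n{states}")
    # Step 3: Staged Code Generation, one segment per identified state or transition.
    segments: List[str] = []
    for state in filter(None, (s.strip("-* ") for s in states.splitlines())):
        code = call_llm(f"Generate code for the following state/transition: {state}\nTask: {task}\nTemporal graph: {graph}")
        # Step 4: Consistency Verification after each stage.
        code = call_llm(f"Verify the temporal consistency of the generated code and return an adjusted version if needed:\n{code}")
        segments.append(code)
    # Step 5: Integration of the stage-wise code into one cohesive system.
    return call_llm(
        "Integrate the generated code segments into a cohesive system, ensuring proper handling of all "
        "temporal dependencies:\n" + "\n\n".join(segments)
    )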
4. Step-by-Step Experiment Plan:
1. Dataset Preparation:
• Create a dataset of programming tasks that involve complex temporal dependencies.
• Include tasks from three domains: 1) Multi-threaded applications, 2) Game logic, and 3) Distributed
systems.
• For each domain, prepare 50 task descriptions, each with a clear specification of the desired functionality
and temporal requirements.
2. Baseline Implementation:
• Implement two baseline methods:
– Direct prompting: Simply provide the task description to the model and ask it to generate the code.
– Chain-of-Thought (CoT) prompting: Append ’Let’s approach this step-by-step:’ to the task descrip-
tion.
• Use GPT-4 for both baselines.
Temporal Dependency Unfolding: Improving Code Generation for Complex Stateful Sys-
tems (Part 2)
4. Step-by-Step Experiment Plan (Continued):
3. Temporal Dependency Unfolding Implementation:
• Implement our proposed method with the following sub-steps for each task:
(a) State Identification: Prompt GPT-4 with ’Identify the key states and variables that change over time
in this system:’.
(b) Temporal Graph Construction: Prompt with ’Create a conceptual graph showing how the identified
states evolve and interact over time:’.
(c) Staged Code Generation: For each major state or transition identified, prompt with ’Generate code
for the following state/transition: [state/transition]’.
(d) Consistency Verification: After each stage, prompt with ’Verify the temporal consistency of the
generated code and suggest any necessary adjustments:’.
(e) Integration: Finally, prompt with ’Integrate the generated code segments into a cohesive system,
ensuring proper handling of all temporal dependencies:’.
4. Evaluation Metrics:
• Correctness: Percentage of generated code that passes predefined test cases.
• Temporal Consistency: Manual evaluation of how well the code handles temporal dependencies (scale
1-5).
• Code Quality: Automated metrics like cyclomatic complexity and maintainability index.
• Execution Efficiency: Runtime performance on benchmark inputs.
5. Human Evaluation:
• Recruit 5 experienced developers to review a subset of 30 generated solutions (10 from each domain).
• They will rate the code on a scale of 1-5 for readability, maintainability, and correct handling of temporal
dependencies.
6. Experiment Execution:
• For each task in the dataset:
(a) Generate solutions using both baseline methods and our Temporal Dependency Unfolding method.
(b) Apply all evaluation metrics to the generated solutions.
(c) Collect human evaluations for the subset of solutions.
7. Analysis:
(a) Compare the performance of Temporal Dependency Unfolding against the baselines across all metrics.
(b) Analyze the effectiveness of each step in our method (State Identification, Temporal Graph Construction,
etc.) by examining intermediate outputs.
(c) Identify patterns in tasks where our method shows significant improvement or underperforms.
(d) Correlate automated metrics with human evaluations to validate their reliability.
Temporal Dependency Unfolding: Improving Code Generation for Complex Stateful Sys-
tems (Part 3)
5. Test Case Examples:
• Test Case 1:
– Baseline Prompt Input (Direct Prompting): Generate Python code for a simple multi-threaded producer-
consumer system with a shared buffer. The producer should generate random numbers and add
them to the buffer, while the consumer should remove and process these numbers. Implement proper
synchronization to avoid race conditions.
– Baseline Prompt Expected Output (Direct Prompting): [Python code for a simple producer-consumer
system]
– Proposed Prompt Input (Temporal Dependency Unfolding; Step 1: State Identification): For a multi-
threaded producer-consumer system with a shared buffer, identify the key states and variables that
change over time in this system:
– Proposed Prompt Expected Output (Temporal Dependency Unfolding; Step 1: State Identification): [List
of key states and variables]
– Proposed Prompt Input (Temporal Dependency Unfolding; Step 2: Temporal Graph Construction):
Create a conceptual graph showing how the identified states evolve and interact over time for the
producer-consumer system:
– Proposed Prompt Output (Temporal Dependency Unfolding; Step 2: Temporal Graph Construction):
[Conceptual graph of state evolution and interactions]
– Proposed Prompt Input (Temporal Dependency Unfolding; Step 3: Staged Code Generation): Generate
code for the producer functionality in the producer-consumer system, focusing on its interaction with
the buffer and synchronization mechanisms:
– Proposed Prompt Output (Temporal Dependency Unfolding; Step 3: Staged Code Generation): [Python
code for producer functionality]
– Proposed Prompt Input (Temporal Dependency Unfolding; Step 4: Consistency Verification): Verify the
temporal consistency of the generated producer code and suggest any necessary adjustments:
– Proposed Prompt Output (Temporal Dependency Unfolding; Step 4: Consistency Verification): [Verifica-
tion and adjustment suggestions]
– Proposed Prompt Input (Temporal Dependency Unfolding; Step 5: Integration): Integrate the generated
producer code with a consumer and main control logic to create a complete producer-consumer system,
ensuring proper handling of all temporal dependencies:
– Proposed Prompt Output (Temporal Dependency Unfolding; Step 5: Integration): [Complete Python
code for producer-consumer system]
– Explanation: The Temporal Dependency Unfolding method produces a more comprehensive and
robust solution compared to the baseline. It explicitly handles temporal dependencies, includes proper
synchronization, and provides mechanisms for graceful termination. The staged approach allows for
better handling of edge cases and improved overall system design.
6. Fallback Plan: If the Temporal Dependency Unfolding method does not show significant improvement over the
baselines, we can pivot the project in several ways. First, we could conduct an in-depth analysis of where and why
the method fails, which could provide valuable insights into the limitations of current language models in handling
temporal reasoning tasks. This analysis could involve examining the intermediate outputs (state identification,
temporal graphs) to understand where the reasoning breaks down. Second, we could explore combining our method
with other techniques, such as retrieval-augmented generation, to see if providing relevant examples improves
performance. Third, we could focus on developing a new evaluation framework specifically designed to assess
temporal reasoning in code generation, which could be a valuable contribution to the field even if our primary
method doesn’t outperform baselines. Lastly, we could investigate whether the method performs better on certain
types of temporal dependencies or specific programming domains, which could lead to a more targeted approach
for improving code generation in those areas.
Reviewer 1
Novelty: 6 (reasonably novel - there are some notable differences from existing ideas and probably enough to turn
into a new paper)
Rationale: The construction of Temporal Graph sounds novel. The research question is also relatively underexplored,
but necessary for coding in domains like distributed systems.
Feasibility: 6 (Feasible: Can be executed within the given constraints with some reasonable planning.)
Rationale: Data collection should be the most challenging part: collecting high-quality coding problems that involve complex temporal dependencies could be hard. The human evaluation might also take time to execute.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing
baselines by moderate margins on a few benchmarks.)
Rationale: With specific prompting techniques, the proposed method should outperform baselines in terms of
temporal dependencies.
Excitement: 7
Rationale: I think this should be more exciting than most of the borderline papers since we are working on a new
problem. The collected data should also be super useful.
Overall Score: 7 (Good idea, would be accepted by major AI conferences)
Rationale: Again, working on a novel problem makes it better than most of the prompting papers.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
Reviewer 2
Novelty: 5 (somewhat novel - there are differences from existing ideas but not enough to turn into a new paper)
Rationale: Although I am not entirely familiar with the field of generating temporally adaptive programs, I suspect
some similar ideas can be found in software engineering works (e.g., ICSE). More concretely on the method, it is
rather similar to code generation with intermediate state reasoning, which has been explored in several multi-step,
conversational code generation works, e.g.:
1. Zheng, Tianyu, et al. "Opencodeinterpreter: Integrating code generation with execution and refinement."
2. Cao, Liuwen, et al. "Beyond Code: Evaluate Thought Steps for Complex Code Generation." Proceedings of the 2024
Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING
2024). 2024.
3. Nijkamp, Erik, et al. "Codegen: An open large language model for code with multi-turn program synthesis."
Feasibility: 3 (Very challenging: there are flaws in the proposed method or experiments, or the experiments require
compute/human resources beyond any academic lab)
Rationale: It would be pretty hard to collect such datasets (e.g., each example would mostly require a whole repository); further, it would be difficult to generate executable test cases to verify the problems created. Especially because the task targets temporally dependent modules in a program, it may require domain experts to carefully construct examples and tests, which would demand a lot of time and cost.
Expected Effectiveness: 5 (Somewhat ineffective: There might be some chance that the proposed idea can work
better than existing baselines but the improvement will be marginal or inconsistent.)
Rationale: I am not very confident that the model can solve these complex temporally dependent programming problems with reasonable correctness. Furthermore, the current method is basically prompting, which may have a very low performance upper bound. Therefore, I don't expect the proposed method to improve code generation significantly.
Excitement: 4
Rationale: Overall, I don't expect this method to bring substantial improvements, and hence I am less excited about its potential. It would still be an interesting problem to solve, particularly in introducing more challenging coding problems and corresponding methods. That said, given the current performance of models, building a solid benchmark for this temporal code generation problem may be more exciting than proposing a method that is not expected to work.
Overall Score: 4 (Ok but not good enough, rejection for major AI conferences)
Rationale: The task of temporal code generation is not the most urgent issue for current code generation models, and the proposed method is not expected to bring much improvement. The method needs to be further refined and go beyond simple prompting to convince the audience of the potential of this line of methods.
Confidence: 3 (You are fairly confident that the evaluation is correct)
Reviewer 3
Novelty: 10 (very novel - very different from all existing ideas in a very interesting and clever way)
Rationale: This idea studies a very novel problem in LLM-based code generation. Temporal dependencies in code
generation should be specifically studied in the era of LLMs.
Feasibility: 5 (Moderately feasible: It can probably be executed within the given time frame but would require
careful planning, efficient use of APIs or some advanced computational strategies to overcome the limited GPU
resources, and would require some modifications to the original proposal to make it work.)
Rationale: Constructing a reasonable dataset is challenging within a short time, and human evaluation might take more time. Whether the LLM can construct high-quality graphs in this case also remains to be examined.
Expected Effectiveness: 6 (Somewhat effective: There is a decent chance that the proposed idea can beat existing
baselines by moderate margins on a few benchmarks.)
Rationale: One needs to build reasonable metrics to show effectiveness. Also, one might need to tune prompts
carefully to construct high-quality graphs in this case.
Excitement: 8 (Exciting: would deepen the community’s understanding or make major progress in this research
direction)
Rationale: This is novel and could have a huge impact on those code generation cases requiring temporal dependen-
cies. But one needs to justify why such use cases are important, and why temporal dependency is the core problem
in such use cases.
Overall Score: 9 (Top 15% of all published ideas on this topic at major AI conferences, strong accept)
Rationale: Considering its novelty, valuable dataset, and comprehensiveness of experiment and evaluation design,
this could be an impactful work. But one needs to make experiment results concrete by re-examining whether each
step works well in practice.
Confidence: 4 (You are confident but not absolutely certain that the evaluation is correct)
X Identities of Example Ideas
We reveal whether each example idea is AI-generated or human-written:
• Human ideas: Example P, Example R, Example T, Example V
• AI ideas: Example Q, Example S, Example U, Example W
Y Attempt on Idea Execution Agent
For our execution agent, the input is the generated idea (the full project proposal), and the output is a Python file that can be executed with our specified command. Since prompting-based research ideas often follow a common implementation pipeline, we provide a manually crafted code file as a template. We attach the full template below:
1 import random
2 from tqdm import tqdm
3 from utils import call_api, load_model
4 import random
5 random.seed(2024)
6
7 ## Step 1: Generate synthetic test examples
8 def generate_testset():
test_data = [
9
{
},
{
},
{
},
{
"input": "Natalia sold clips to 48 of her friends in April, and then
she sold half as many clips in May. How many clips did Natalia sell
altogether in April and May?",
"output": "Natalia sold 48/2 = <<48/2=24>>24 clips in May. Natalia sold
48+24 = <<48+24=72>>72 clips altogether in April and May. #### 72"
"input": "Weng earns $12 an hour for babysitting. Yesterday, she just
did 50 minutes of babysitting. How much did she earn?",
"output": "Weng earns 12/60 = $<<12/60=0.2>>0.2 per minute. Working 50
minutes, she earned 0.2 x 50 = $<<0.2*50=10>>10. #### 10"
"input": "Tim has 30 less apples than Martha, and Harry has half as
many apples as Tim. If Martha has 68 apples, how many apples does
Harry have?",
"output": "Tim has 68-30 = <<68-30=38>>38 apples. Harry has 38/2 =
<<38/2=19>>19 apples. #### 19"
"input": "Four people lost a total of 103 kilograms of weight. The
first person lost 27 kilograms. The second person lost 7 kilograms
less than the first person. The two remaining people lost the same
amount. How many kilograms did each of the last two people lose?",
"output": "Second person = 27 - 7 = <<27-7=20>>20 kg 103 - 27 - 20 =
<<103-27-20=56>>56 kg 56/2 = <<56/2=28>>28 kg The last two people
each lost 28 kilograms of weight. #### 28"
}
]
return test_data
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31 ## Step 2: Implement the baseline method
32 def baseline_method(client, model_name, seed, question):
33
## zero-shot chain-of-thought
prompt = "Answer the following question: {}\n".format(question)
prompt += "Think step by step."
prompt_messages = [{"role": "user", "content": prompt}]
response, _ = call_api(client, model_name, prompt_messages, temperature=0.,
34
35
36
37
38
39
max_tokens=2000, seed=seed, json_output=False)
return response.strip()
90
40
## Step 3: Implement the proposed method
def proposed_method(client, model_name, seed, question, print_all=False):
    intermediate_outputs = ""
    if print_all:
        print("question:\n", question)

    ## collaborative reasoning step 1: task decomposition
    prompt = "Please break down the following task into smaller sub-tasks or steps: {}".format(question)
    prompt_messages = [{"role": "user", "content": prompt}]
    decomposition, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=2000, seed=seed, json_output=False)
    intermediate_outputs += "task decomposition:\n" + decomposition + "\n"
    if print_all:
        print("decomposition:\n", decomposition)

    ## collaborative reasoning step 2: sub-task information generation
    prompt = "For each of the following sub-tasks, please generate relevant information or intermediate results: \n{}".format(decomposition)
    prompt_messages = [{"role": "user", "content": prompt}]
    intermediate, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=2000, seed=seed, json_output=False)
    intermediate_outputs += "sub-task results:\n" + intermediate + "\n"
    if print_all:
        print("intermediate:\n", intermediate)

    ## collaborative reasoning step 3: result combination
    prompt = "Given the following intermediate results: \n{}, please combine them to generate the final answer for the task: \n{}".format(intermediate, question)
    prompt_messages = [{"role": "user", "content": prompt}]
    answer, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=2000, seed=seed, json_output=False)
    intermediate_outputs += "result combination:\n" + answer + "\n"
    if print_all:
        print("initial answer:\n", answer)

    ## collaborative reasoning step 4: reflection and refinement
    prompt = "Given the task: {}\nPlease reflect on the generated answer:\n{}.\n\nAre there any gaps or inconsistencies in the answer? If so, please identify and address them and give me an improved answer. If not, you don't have to edit anything and can just return the original answer.\n".format(question, answer)
    prompt_messages = [{"role": "user", "content": prompt}]
    final_answer, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=2000, seed=seed, json_output=False)
    intermediate_outputs += "reflection and refinement:\n" + final_answer
    if print_all:
        print("final answer:\n", final_answer)

    return final_answer.strip(), intermediate_outputs
## Step 4: Define the style evaluator
def style_evaluator(client, model_name, seed, question, baseline_prediction, proposed_prediction):
    ## define all the components that the proposed method outputs should have
    ## and the advantages of the proposed method over the baseline method
    ## just need to check the style is correct
    prompt = "Given the task: {}\n".format(question)
    prompt += "The baseline method produced the following output:\n{}\n\n".format(baseline_prediction)
    prompt += "The proposed new method produced the following output:\n{}\n\n".format(proposed_prediction)
    prompt += "Now determine if the proposed method is better by checking if it has satisfied the following criteria:\n"
    prompt += "1. The proposed method's output should produce all the intermediate components including: task decomposition, sub-task information generation, result combination, and reflection and refinement.\n"
    prompt += "2. The proposed method should provide a more detailed and comprehensive answer than the baseline method.\n"
    prompt += "Just tell me 'yes' or 'no' for whether the criteria are met, nothing else is needed."
    prompt_messages = [{"role": "user", "content": prompt}]
    response, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=1, seed=seed, json_output=False)

    judgment = False
    if response.strip().lower() == "yes":
        return True
    return judgment
## Step 5: Define the output evaluator
def output_evaluator(client, model_name, seed, question, gold_label, prediction):
    ## check if the prediction is correct given the gold label
    prompt = "Given the following question and reference answer, determine if the prediction is correct. Just tell me 'yes' or 'no', nothing else is needed.\n\nQuestion: {}\n\nReference Answer: {}\n\nPrediction: {}\n\n".format(question, gold_label, prediction)
    prompt_messages = [{"role": "user", "content": prompt}]
    response, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=1, seed=seed, json_output=False)

    judgment = False
    if response.strip().lower() == "yes":
        return True
    return judgment
## Step 6: Define the function that runs the experiments to obtain model predictions and performance
## you shouldn't need to modify this function in most cases
def run_experiment(client, model_name, seed, testset):
    sample_size = len(testset)
    baseline_predictions = []
    proposed_predictions = []
    baseline_correctness = []
    proposed_correctness = []
    style_check = []

    for i in tqdm(range(sample_size)):
        question = testset[i]["input"].strip()
        gold_label = testset[i]["output"].strip()

        baseline_prediction = baseline_method(client, model_name, seed, question)
        proposed_prediction_final, proposed_prediction_intermediate = proposed_method(client, model_name, seed, question)
        baseline_predictions.append(baseline_prediction)
        proposed_predictions.append(proposed_prediction_final)

        baseline_correctness.append(output_evaluator(client, model_name, seed, question, gold_label, baseline_prediction))
        proposed_correctness.append(output_evaluator(client, model_name, seed, question, gold_label, proposed_prediction_final))
        style_check.append(style_evaluator(client, model_name, seed, question, baseline_prediction, proposed_prediction_intermediate))

    return baseline_correctness, proposed_correctness, style_check
## Step 7: Execute the experiments and compare performance
if __name__ == "__main__":
    testset = generate_testset()
    print("simulated {} test examples for evaluation.".format(len(testset)))

    model_name = "claude-3-opus-20240229"
    seed = 2024
    client = load_model(model_name)
    print("using model: ", model_name)

    ## output correctness
    baseline_correctness, proposed_correctness, style_check = run_experiment(client, model_name, seed, testset)
    print("baseline correctness: ", sum(baseline_correctness) / len(baseline_correctness))
    print("proposed correctness: ", sum(proposed_correctness) / len(proposed_correctness))
    print("style check pass rate: ", sum(style_check) / len(style_check))
As seen above, we have defined two different evaluator functions. The style_evaluator()
checks whether all components mentioned in the proposed method are implemented, and the
output_evaluator() checks whether the model predictions are correct.
We provided this template along with instructions to tell the execution agent to modify all necessary
functions for the provided idea. We tested the agent on 30 safety prompting ideas and 30 factuality
prompting ideas.
                               Safety   Factuality
Total                          30       30
Executed                       17       18
Passed Style Evaluator         13       14
Proposed Method Works Better   5        1
Table 18: Success rate of our execution agent on 30 randomly sampled AI ideas on safety and factuality.
As shown in Table 18, over half of the generated code files can be successfully executed, and the results
indicate that only a handful of these proposed methods actually outperform the baselines. While this
shows some promise that the execution agents can automatically implement and identify effective
methods out of all the generated ideas, a careful manual check reveals many pitfalls.
For example, the agent implemented the following baseline for the task of identifying malicious and
adversarial prompts: a keyword-based heuristic that any LLM-based method can trivially outperform.
## Step 2: Implement the baseline method
def baseline_method(client, model_name, seed, prompt):
    # Keyword-based heuristic
    keywords = ["ignore", "pretend", "roleplay", "disregard", "override"]
    for keyword in keywords:
        if keyword in prompt.lower():
            return "Adversarial"
    return "Benign"
In another example, for the same task of detecting adversarial prompts, the agent implemented the
following evaluator function:
## Step 5: Define the output evaluator
def output_evaluator(client, model_name, seed, input_text, gold_label, prediction):
    prompt = "Given the following text and reference sentiment classification, determine if the predicted classification is correct. Just tell me 'yes' or 'no', nothing else is needed.\n\nText: {}\n\nReference: {}\n\nPrediction: {}\n\n".format(input_text, gold_label, prediction)
    prompt_messages = [{"role": "user", "content": prompt}]
    response, _ = call_api(client, model_name, prompt_messages, temperature=0., max_tokens=1, seed=seed, json_output=False)

    judgment = False
    if response.strip().lower() == "yes":
        return True
    return judgment
The agent is supposed to inject adversarial triggers into sentiment classification data to test whether
the proposed method can detect those adversarial prompts while maintaining sentiment classification
accuracy. However, the agent only evaluates the accuracy on the original sentiment classification task
but not the task of adversarial prompt detection.
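For illustration, a corrected evaluator for this idea would need to score adversarial-prompt detection separately from sentiment classification accuracy. The sketch below is our own hypothetical example, not code produced by the agent or provided in our template; the field names is_adversarial, detection, and sentiment are assumptions.
def detection_evaluator(gold_is_adversarial, predicted_label):
    # predicted_label is the method's output, e.g., "Adversarial" or "Benign".
    predicted_is_adversarial = predicted_label.strip().lower() == "adversarial"
    return predicted_is_adversarial == gold_is_adversarial

def combined_metrics(examples, predictions):
    # examples: list of dicts with "is_adversarial" and "sentiment" fields (assumed schema);
    # predictions: list of dicts with "detection" and "sentiment" fields.
    detection_acc = sum(
        detection_evaluator(ex["is_adversarial"], pred["detection"])
        for ex, pred in zip(examples, predictions)
    ) / len(examples)
    sentiment_acc = sum(
        ex["sentiment"] == pred["sentiment"]
        for ex, pred in zip(examples, predictions) if not ex["is_adversarial"]
    ) / max(sum(1 for ex in examples if not ex["is_adversarial"]), 1)
    return detection_acc, sentiment_acc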
Given these errors, we believe more work is needed to carefully verify the code implementations
produced by the execution agent rather than blindly trusting their executed results, and we leave such
attempts to future work.
| Chenglei Si, Diyi Yang, Tatsunori Hashimoto | Recent advancements in large language models (LLMs) have sparked optimism about their potential to accelerate scientific discovery, with a growing number of works proposing research agents that autonomously generate and validate new ideas.
Despite this, no evaluations have shown that LLM systems can take the very first step of producing novel, expert-level ideas, let alone perform the entire research process.
We address this by establishing an experimental design that evaluates research idea generation while controlling for confounders and performs the first head-to-head comparison between expert NLP researchers and an LLM ideation agent.
By recruiting over 100 NLP researchers to write novel ideas and blind reviews of both LLM and human ideas, we obtain the first statistically significant conclusion on current LLM capabilities for research ideation:
we find LLM-generated ideas are judged as more novel ($p<0.05$) than human expert ideas while being judged slightly weaker on feasibility.
Studying our agent baselines closely, we identify open problems in building and evaluating research agents, including failures
of LLM self-evaluation and their lack of diversity in generation.
Finally, we acknowledge that human judgements of novelty can be difficult, even by experts, and propose an end-to-end study design which recruits researchers to execute these ideas into full projects, enabling us to study whether these novelty and feasibility judgements result in meaningful differences in research outcome. | Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers |
entigraph | arXiv:2409.07431v2 [cs.LG] 3 Oct 2024
SYNTHETIC CONTINUED PRETRAINING
Zitong Yang∗
Department of Statistics
Stanford University
Neil Band∗
Department of Computer Science
Stanford University
Shuangping Li
Department of Statistics
Stanford University
Emmanuel Candès
Department of Statistics
Stanford University
Tatsunori Hashimoto
Department of Computer Science
Stanford University
ABSTRACT
Pretraining on large-scale, unstructured internet text enables language models to
acquire a significant amount of world knowledge. However, this knowledge ac-
quisition is data-inefficient—to learn a given fact, models must be trained on
hundreds to thousands of diverse representations of it. This poses a challenge
when adapting a pretrained model to a small corpus of domain-specific docu-
ments, where each fact may appear rarely or only once. We propose to bridge this
gap with synthetic continued pretraining: using the small domain-specific corpus
to synthesize a large corpus more amenable to learning, and then performing con-
tinued pretraining on the synthesized corpus. We instantiate this proposal with
EntiGraph, a synthetic data augmentation algorithm that extracts salient entities
from the source documents and then generates diverse text by drawing connec-
tions between the sampled entities. Synthetic continued pretraining with Enti-
Graph enables a language model to answer questions and follow generic instruc-
tions related to the source documents without access to them.
If, instead, the
source documents are available at inference time, we show that the knowledge
acquired through our approach compounds with retrieval-augmented generation.
To better understand these results, we build a simple mathematical model of Enti-
Graph, and show how synthetic data augmentation can “rearrange” knowledge to
enable more data-efficient learning.
1 INTRODUCTION
Language models have demonstrated a remarkable ability to acquire knowledge from unstructured
text, enabling them to perform challenging knowledge-intensive tasks (Brown et al., 2020; OpenAI
et al., 2024; Gemini, 2024; Anthropic, 2024b; Dubey et al., 2024; Gunter et al., 2024). These suc-
cesses are enabled by the combination of the next-token prediction objective (Shannon, 1951) and
large-scale internet data (Common Crawl, 2007). However, it is becoming increasingly apparent
that this approach is data-inefficient; for example, a 13-year-old human acquires knowledge from
fewer than 100M tokens, while state-of-the-art open-source language models are trained on 15T to-
kens (Warstadt et al., 2023; Dubey et al., 2024). Recent works have highlighted a range of related
problematic phenomena, including the “reversal curse”, where models struggle to learn the relation
“B=A” when trained on “A=B” (Berglund et al., 2023), and the requirement that models be exposed
to thousands of examples per fact for knowledge acquisition (Allen-Zhu & Li, 2024).
These drawbacks pose a challenge when adapting the next-token prediction paradigm to learn from
small-scale corpora. Because large-scale pretrained models already capture much of public common
knowledge, further advancements will necessitate learning from the tails of the distribution (Kand-
pal et al., 2023): niche data that is either contained in small, private domains or appears only once
or twice on the internet. This challenge of data-efficient, parametric knowledge acquisition is be-
coming increasingly important as the growing compute capacity enables language model providers
to exhaust publicly available data (Muennighoff et al., 2023; Villalobos et al., 2024).
∗Equal contribution. Correspondence to: zitong@berkeley.edu, nband@cs.stanford.edu.
Figure 1: Synthetic continued pretraining (synthetic CPT) converts a small source corpus into a
large synthetic corpus that is amenable to learning via standard continued pretraining. We instantiate
synthetic CPT using a synthetic data augmentation algorithm called EntiGraph, which forms a
knowledge graph over entities extracted from documents, and then prompts an LM to synthesize a
text-based representation of the graph.
We propose to address this problem of acquiring knowledge from small corpora with synthetic con-
tinued pretraining. To illustrate, consider the problem of teaching a language model a new area
of mathematics, succinctly documented by a small set of authoritative textbooks. Directly training
the model on those textbooks is unlikely to be effective due to the limited volume of text (typically
only tens of thousands of words), and the model will struggle to generalize from this compressed
representation of knowledge. In contrast, learning well-established areas of mathematics like linear
algebra is more straightforward because a large-scale corpus with diverse knowledge representations
is accessible: for example, online lecture notes, Stack Exchange discussions, or Python implemen-
tations of the singular value decomposition. Synthetic continued pretraining bridges this gap by
first converting a small and data-constrained domain into a synthetic corpus with diverse knowledge
representations, and then continuing pretraining on it.
One basic approach is to simply paraphrase or rewrite the source documents in multiple ways. How-
ever, we demonstrate that this generic rephrasing does not cover the gap in the diversity of knowledge
representations. We repeatedly rephrase a small corpus and find that the value of incremental syn-
thetic data quickly decreases, with downstream model performance scaling poorly. We attribute this
failure to the lack of diversity in paraphrasing alone. In the linear algebra example, online lecture
notes and Stack Exchange discussions go beyond a simple rewrite of any textbook—they provide
deeper analysis and application of the underlying concepts and techniques.
To address this shortcoming, we propose EntiGraph, an entity-centric augmentation algorithm. Enti-
Graph first breaks down a text corpus into a list of entities and then uses a language model to generate
text descriptions about relations among the extracted entities, iteratively “filling in” the knowledge
graph underlying the corpus (Figure 1).
To concretely measure progress towards effective knowledge acquisition from small corpora, we
propose an experimental setting based on a standard reading comprehension dataset (QuAL-
ITY, Pang et al. (2022)). This setup enables the evaluation of synthetic data generation methods
for data-efficient learning without incurring the high compute costs of pretraining from scratch.
Specifically, we evaluate methods in a scenario where we are given access to a collection of 265
books, totaling 1.3M tokens. Our task is to synthesize a corpus such that continued pretraining on it
enables a model to answer queries (e.g., multiple-choice QA or user instructions related to the book
content) without access to the source texts.
In our main experiments (§5), we use EntiGraph to generate 455M synthetic tokens from 1.3M real
tokens using gpt-4-turbo (OpenAI et al., 2024). Then, we continually pretrain Llama 3 8B
(Dubey et al., 2024) on the synthetic tokens and evaluate its QA accuracy on the QuALITY question
set. We observe a log-linear scaling trend in the accuracy as the number of tokens increases, up
to 455M synthetic tokens (§4.2). At the endpoint, we find that synthetic continued pretraining
with 455M EntiGraph tokens provides 80% of the accuracy improvement of having those source
documents available at inference time (§5). Beyond QA accuracy, we also perform instruction tuning
on the continually pretrained model and find that it is capable of following open-ended instructions
(e.g., summarization) related to the QuALITY books (§4.3).
To summarize, our key contributions are as follows:
• We propose to learn from small corpora with synthetic continued pretraining—converting the
small corpus into a large, diverse, synthetic corpus and continuing pretraining on it—and instan-
tiate this approach using the EntiGraph synthetic data augmentation algorithm (§2.2).
• We demonstrate that continued pretraining on the EntiGraph-synthesized corpus yields a QA
accuracy scaling trend that is log-linear in the synthetic token count, significantly outperforming
continued pretraining on the original documents or paraphrases (§4.2). Furthermore, we show that
instruction tuning the EntiGraph continually pretrained model enables it to follow more diverse
queries related to the source documents (§4.3).
• We complement the main experiments with an open-book setup (§5), providing the model with
access to the source documents when answering queries. We demonstrate that the knowledge
acquired through synthetic continued pretraining with EntiGraph is complementary to the knowl-
edge accessed through retrieval-augmented generation (RAG, Lewis et al. (2020))—RAG with
the EntiGraph continually pretrained model outperforms RAG with the base model.
• Lastly, we build a mathematical model that captures the intuition behind synthetic data augmen-
tation with EntiGraph. Analysis of this model provides a parametric formula for the scaling trend
of a continually pretrained model’s accuracy with respect to EntiGraph synthetic tokens, which
closely matches our empirical observations (§6).
Practically, synthetic continued pretraining using EntiGraph enables pretrained language models to
adapt to specialized domains by acquiring parametric knowledge, rather than the non-parametric
knowledge accessed through retrieval methods. At a higher level, our approach points toward a
family of synthetic data generation algorithms that allow us to convert compute into data efficiency
for (continued) pretraining (Kaplan et al., 2020).
1.1 RELATED WORK
We next discuss recent work most related to our setting of synthetic data generation for continued
pretraining. In Appendix A, we provide an extended survey of classical work on synthetic data
generation and continual learning.
Synthetic generation of pretraining data. Recent approaches synthesize pretraining data using
hierarchical prompting methods to promote dataset diversity. Eldan & Li (2023) prompt API-based
LLMs to generate children’s stories containing sampled keywords, and demonstrate that even small
language models trained on their dataset can generate fluent text. Gunasekar et al. (2023) synthe-
size a diverse dataset of textbooks and code exercises by conditioning on topic, target audience, and
function names, and later release strong LLMs pretrained on synthetic data in follow-up work (Li
et al., 2023b; Abdin et al., 2023; 2024). However, their datasets and prompts are not publicly avail-
able. Maini et al. (2024) prompt an LM to rephrase documents for pretraining, improving training
efficiency. Different from all above works, our focus is teaching a pretrained LLM the knowledge of
a small corpus. Mecklenburg et al. (2024) consider task-specific finetuning and propose a fact-based
synthetic QA generation procedure, but do not show improvement on generic instruction following
tasks beyond simple QA. We instead focus on teaching a model generally useful knowledge about
a small corpus, untied to a particular downstream task. Ovadia et al. (2024) continually pretrain
Llama 2–based language models on synthetic paraphrases of Wikipedia articles, but do not observe
consistent performance improvements. We adapt the approach of Maini et al. (2024) and Mecklen-
burg et al. (2024) to our small corpus setting as the “Rephrase baseline” in §4. We find that our
graph-based augmentation algorithm outperforms it, likely because our approach enforces diversity
through entity-based generation.
Continued pretraining. Continual or continued pretraining works (Gururangan et al., 2020) suc-
cessfully adapt pretrained large language models to broad target domains such as code (Rozière
et al., 2024), medicine (Chen et al., 2023), or mathematics (Lewkowycz et al., 2022; Shao et al.,
2024; Azerbayev et al., 2024) by collecting massive datasets (often >100B tokens, shown in Ta-
ble 1) and developing efficient training recipes using causal language modeling (Gupta et al., 2023;
Ibrahim et al., 2024; Parmar et al., 2024). This work aims to extend the success of continued pre-
training to small, specialized domains such as proprietary document stores. Observing that standard
Study                                       Domain            Model Parameter Count   Total Unique CPT Tokens
Minerva (Lewkowycz et al., 2022)            STEM              8B, 62B, 540B           26B-38.5B
MediTron (Chen et al., 2023)                Medicine          7B, 70B                 46.7B
Code Llama (Rozière et al., 2024)           Code              7B, 13B, 34B            520B-620B
Llemma (Azerbayev et al., 2024)             Math              7B, 34B                 50B-55B
DeepSeekMath (Shao et al., 2024)            Math              7B                      500B
SaulLM-7B (Colombo et al., 2024b)           Law               7B                      30B
SaulLM-{54, 141}B (Colombo et al., 2024a)   Law               54B, 141B               520B
HEAL (Yuan et al., 2024a)                   Medicine          13B                     14.9B
Our setting                                 Articles & Books  8B                      1.3M
Table 1: Comparing the scale of modern continued pretraining (CPT) works with our small corpus
setting. Prior work adapts language models to broad domains with diverse, large-scale corpora. We
aim to downscale continued pretraining to small corpora; we use a corpus that is 10,000× smaller
than the smallest modern corpus for domain-adaptive CPT.
continued pretraining is ineffective on small corpora, we propose a knowledge graph–inspired ap-
proach to synthesize a diverse related corpus and find it more amenable to learning.
Knowledge editing. A related line of literature updates language models with small units of fac-
tual knowledge, such as (subject, relation, object) tuples. Zhu et al. (2020) studies a constrained
fine-tuning approach, limiting the model’s complexity to better suit the learning of simple factual
relations. Later approaches attempt to localize where factual knowledge is stored in Transformers
and update only those weights (Mitchell et al., 2022; Meng et al., 2022; 2023), or maintain an ex-
ternal memory of edits and prepend them as context during generation (Zhong et al., 2023; Cohen
et al., 2023). Most relevant to our work is deductive closure training (Akyürek et al., 2024), which
first deduces implications of a factual edit and then finetunes the language model on those implica-
tions. The line of knowledge editing differs from our setting in that we aim to learn from a small
corpus of documents, rather than atomic, sentence-length facts.
2 OUR METHOD
We focus on learning parametric knowledge from a small text corpus. Our goal is to continually
pretrain a language model to acquire the knowledge of a niche corpus of documents. Observing that
simple continued pretraining is ineffective (§4), we propose to use synthetic continued pretraining,
which first uses the small corpus to synthesize a larger one more amenable to learning, and then
continues pretraining on the synthetic corpus. In this section, we first outline this problem setting
and our evaluation approach in more detail (§2.1). Then, we provide a concrete instantiation of
synthetic continued pretraining using a data augmentation algorithm called EntiGraph (§2.2).
2.1 PROBLEM SETUP
Continued pretraining on small corpora. We focus on approaches that use continued pretraining
to teach a pretrained language model the knowledge of a small set of source documents Dsource.
These approaches acquire “parametric knowledge”, i.e., the knowledge of Dsource is learned in the
model’s parameters much like during the pretraining process.
Synthetic continued pretraining (synthetic CPT). First, we apply a synthetic data generation
algorithm Asynth to convert a small corpus Dsource into a synthetic corpus Dsynth:
$$A_{\mathrm{synth}} : D_{\mathrm{source}} \longmapsto D_{\mathrm{synth}}. \qquad (1)$$
Then, we perform continued pretraining on Dsynth instead of on Dsource. We implement Asynth using
a language model. A natural concern is that the language model may hallucinate and fabricate false
knowledge. Therefore, we consider synthetic data augmentation algorithms that condition the
generation process on the source documents to improve the synthesized data’s faithfulness.
Evaluation with knowledge-intensive queries. We evaluate the quality of a synthetic data aug-
mentation algorithm Asynth by testing whether the downstream synthetic CPT model has effectively
acquired the knowledge of Dsource in its parameters. More precisely, we curate some test queries
Qtest that probe the knowledge about Dsource acquired by the model. For example, in the linear alge-
bra setting, Qtest could be held-out exam questions. To test parametric knowledge, we do not allow
the model to access the source documents Dsource at test time. Therefore, the queries cannot be am-
biguous without access to Dsource. For example, a reading comprehension question like “Where was
he born?” is ambiguous without context. Altogether, we can evaluate data augmentation algorithms
Asynth for synthetic CPT using a paired source corpus and related test queries (Dsource, Qtest).
2.2 ENTIGRAPH
Next, we present EntiGraph, our instantiation of a synthetic data augmentation algorithm Asynth. At
a high level, EntiGraph generates diverse representations of knowledge from a small corpus Dsource
by using a prompted LLM to synthesize a knowledge graph representation of Dsource. EntiGraph
consists of two steps/prompts: extracting entities from the document and analyzing relations among
an arbitrary subset of the entities (Figure 1). Altogether, this hierarchical prompting strategy ex-
ternalizes the problem of generating diverse synthetic text to a combinatorial structure—namely,
a graph relating various entities appearing in the corpus documents. In what follows, we provide
abbreviated prompts to illustrate the algorithm, and defer full prompts to Appendix G.1.
Step 1: Entity extraction. First, EntiGraph extracts a list of salient entities {E1, E2, . . . , En}
from the document Dsource using an entity extraction prompt:
$$\{E_1, E_2, \ldots, E_n\} \sim \mathrm{LM}_{\mathrm{aug}}\big(\mathrm{entity\_extraction}(D_{\mathrm{source}})\big).$$
We show the abbreviated entity extraction prompt below:
## System message
As a knowledge analyzer, identify salient entities in the given
text. Include: (a) Names (b) People (c) Places (d) Concepts, etc.
## User
* Document {document_text}
In the linear algebra example, Dsource could be one specific linear algebra textbook. We would expect
to extract entities such as {E1 = Linear space, E2 = Vector, E3 = SVD, . . . }.
Step 2: Relation analysis. Next, EntiGraph analyzes the relations among subsets of entities. The
intuition is to thoroughly explore the edges of the knowledge graph underlying the source document
Dsource, analogous to a student writing diverse notes about a linear algebra textbook. We apply
a relation analysis prompt to describe how a subset of k ≤ n entities are related in the
context of the source document Dsource, obtaining a synthetic document
$$\widetilde{D}_{E_{i_1} \ldots E_{i_k}} \sim \mathrm{LM}_{\mathrm{aug}}\big(\mathrm{relation\_analysis}(D, E_{i_1}, E_{i_2}, \ldots, E_{i_k})\big).$$
Specifically, we use the prompt below (abbreviated):
## System message
Analyze relations among given entities in the provided text.
Discuss how their interactions shape the document’s content.
## User
* Document {book_text}
* Entities {entity_name_1}, {entity_name_2}, ...
For example, if E1 = Linear space and E2 = Vector, $\widetilde{D}_{E_1 E_2}$ could include the text "Based
on the textbook, a vector is an element of a linear space...". Exhaustively enumerating all possible
subsets of the n extracted entities is impractical. We choose to generate data for all pairs
$\widetilde{D}_{E_i E_j}$ and triplets $\widetilde{D}_{E_i E_j E_k}$ in our experiments.
EntiGraph synthetic corpora. Finally, we collect all sampled synthetic texts from Step 2 as the
EntiGraph output: $D_{\mathrm{EntiGraph}} = \{\widetilde{D}_{E_{i_1} \ldots E_{i_k}}, \ldots\}$. Altogether, we described a data augmentation
algorithm mapping a small source corpus Dsource to a larger synthetic corpus DEntiGraph, as in (1).
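To make the two-step procedure concrete, the following is a minimal sketch of the EntiGraph loop. The helper call_llm, the abbreviated prompt strings, and the cap on sampled triplets are placeholders and simplifications of our own; the full prompts are given in Appendix G.1.
import itertools
import random

def call_llm(prompt: str) -> str:
    """Placeholder for a call to the prompted model LM_aug (e.g., an API-based LLM)."""
    raise NotImplementedError

def extract_entities(document: str) -> list[str]:
    # Step 1: entity extraction (abbreviated prompt).
    prompt = ("As a knowledge analyzer, identify salient entities in the given text.\n"
              "Document:\n" + document)
    return [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]

def analyze_relation(document: str, entities: tuple[str, ...]) -> str:
    # Step 2: relation analysis for a subset of entities.
    prompt = ("Analyze relations among given entities in the provided text. "
              "Discuss how their interactions shape the document's content.\n"
              "Document:\n" + document + "\nEntities: " + ", ".join(entities))
    return call_llm(prompt)

def entigraph(document: str, max_triplets: int = 1000) -> list[str]:
    """Generate a synthetic corpus for one source document: all entity pairs plus sampled triplets."""
    entities = extract_entities(document)
    synthetic_corpus = []
    for pair in itertools.combinations(entities, 2):
        synthetic_corpus.append(analyze_relation(document, pair))
    triplets = list(itertools.combinations(entities, 3))
    for triplet in random.sample(triplets, min(max_triplets, len(triplets))):
        synthetic_corpus.append(analyze_relation(document, triplet))
    return synthetic_corpus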
3 EXPERIMENT SETUP
In this section, we describe in detail how we evaluate a given data augmentation algorithm Asynth. As
described in the problem setup (§2.1), we evaluate such algorithms Asynth by evaluating whether a
language model continually pretrained on their output synthetic corpus Asynth(Dsource) can accurately
answer test queries Qtest about the source documents Dsource.
In our main experiments, we use queries that are unambiguous even without the source documents
Dsource, and disallow the model from accessing Dsource while answering the queries Qtest (§2.1). This
allows us to evaluate which data augmentation algorithm best promotes the acquisition of parametric
knowledge through synthetic CPT. Later, in §5, we consider an open-book setting where the model
can access both the source documents Dsource and test queries Qtest at the same time, in order to
test how the parametric knowledge acquired through synthetic CPT composes with non-parametric
access to knowledge through retrieval (Lewis et al., 2020).
We next introduce the small corpus and related test queries (Dsource, Qtest) used in our experiments.
QuALITY corpus Dsource. Our corpus and test queries are based on the QuALITY dataset (Pang
et al., 2022), a long-document comprehension benchmark. The QuALITY corpus Dsource is com-
posed of 265 articles and short books on genres ranging from science fiction to journalism, with an
average length of ∼5,000 tokens.
QuALITY test queries Qtest. To curate the test queries Qtest, we use the 10-20 multiple choice
questions accompanying each article in QuALITY. These questions serve as high-quality knowledge
probes on Dsource, but the query phrasing often presupposes the reading comprehension context (e.g.,
“What does the author think about...”). We remove ambiguity by contextualizing them with the
corresponding article reference: “In the context of article {article name} by {author name}, what
does the author think about...”. Altogether, this provides us with 4,609 unambiguous queries Qtest to
test the parametric knowledge of our continually pretrained language models.
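For instance, a query might be contextualized along the following lines (a minimal sketch; the exact template and casing handling are assumptions):
def contextualize(question: str, article_name: str, author_name: str) -> str:
    # Prepend the article reference so the query is unambiguous without access to the source text.
    return "In the context of article {} by {}, {}".format(
        article_name, author_name, question[0].lower() + question[1:]
    )

# Example: contextualize("What does the author think about checkups?",
#                        "Defining Decay Down", "David Plotz")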
Evaluation on instruction-tuned summarization.
In addition to evaluation using the above test
queries Qtest, we also instruction tune the continually pretrained LMs and evaluate them on more
general instruction following queries. Specifically, we evaluate their closed-book summarization
abilities by prompting them to generate summaries of QuALITY articles given only title and author.
Performance with strong API-based LLMs.
In our continued pretraining setting, we must select
a corpus Dsource that is not already well-represented in standard pretraining datasets. As an initial
test of the obscurity of the QuALITY corpus Dsource, we evaluate GPT-3.5 (Brown et al., 2020)
and GPT-4 (OpenAI et al., 2024) on Qtest. In the closed-book setting, we find GPT-3.5 accuracy at
44.81% and GPT-4 accuracy at 51.30% (Figure 2). In the open-book setting (full access to Dsource),
we find GPT-3.5 accuracy at 72.60% and GPT-4 accuracy at 86.09% (Table 3). Based on the large
(∼30%) improvement when Dsource is provided, we conclude that the QuALITY corpus Dsource is
sufficiently niche to serve as an appropriate testbed.
4 MAIN EXPERIMENTS
In this section, we present our main experimental results1. Using GPT-4 (the gpt-4-turbo model
as of Aug. 19, 2024) as our prompted model LMaug, we apply EntiGraph to the 1.3M token QuAL-
ITY corpus Dsource, generating a 455M token synthetic corpus2. For the remainder of the paper, we
refer to the former as the “Raw corpus” and the latter as the “EntiGraph corpus”. Additional details
on these corpora are provided in Appendix B.
1Code https://github.com/ZitongYang/Synthetic_Continued_Pretraining.git.
2Data https://huggingface.co/datasets/zitongyang/entigraph-quality-corpus.
Figure 2: Accuracy on the QuALITY question set Qtest (y-axis) as a function of the synthetic token
count (x-axis). The accuracy of synthetic continued pretraining using the EntiGraph data augmen-
tation algorithm (EntiGraph CPT) scales log-linearly up to 455M tokens.
We continually pretrain Llama 3 8B (Dubey et al., 2024) with standard causal language modeling on
the 455M token EntiGraph corpus3. In §4.1, we describe our continued pretraining procedure and
introduce two natural baselines. In §4.2, we evaluate all methods on the QuALITY test queries Qtest.
In §4.3, we show that synthetic CPT using EntiGraph is compatible with downstream instruction
tuning (Ouyang et al., 2022), an important feature of real pretraining data.
4.1 CONTINUED PRETRAINING PROCEDURE
EntiGraph CPT.
In our main continued pretraining experiment, we continually pretrain Llama
3 8B Base on the 455M token EntiGraph corpus for 2 epochs with replay on RedPajama dataset
(TogetherAI, 2023). For the remainder of the work, we will refer to this continually pretrained model
as “EntiGraph CPT”. We provide details on continued pretraining setup in Appendix C. Next, we
describe two baselines which we compare to EntiGraph CPT in closed-book QA (§4.2).
Raw CPT baseline. The first natural baseline is to continually pretrain Llama 3 8B Base on the
1.3M token Raw corpus (the raw QuALITY articles Dsource, defined in §3). We jointly tune the
number of epochs and RedPajama replay rate, and refer to this continually pretrained model as
“Raw CPT”. Further tuning details are provided in Appendix C.
Rephrase CPT baseline. Another simple synthetic data augmentation procedure is to rephrase
QuALITY articles many times. As discussed in §1.1, Maini et al. (2024) and Ovadia et al. (2024)
execute a systematic extension of this idea. Based on their approaches, we craft three fixed prompts
(easy, medium, and hard rephrase) and repeatedly apply them to the QuALITY articles at temperature
1.0 (see footnote 4). We refer to this data augmentation algorithm as the “Rephrase baseline”. We stopped
generating paraphrases at 38M tokens, where we observed a clear gap in QA evaluations from Enti-
Graph CPT and a slower scaling trend (Figure 2). We will refer to this data as the “Rephrase corpus”
and the continually pretrained Llama 3 8B Base models as the “Rephrase CPT”.
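As a rough illustration of the continued pretraining step, a minimal causal language modeling sketch with Hugging Face transformers is shown below. The file names, learning rate, and batch sizes are placeholders rather than our tuned settings (see Appendix C), and replay mixing is simplified to concatenating a RedPajama sample with the synthetic corpus.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Assumed file names: entigraph.txt holds the synthetic corpus, replay.txt a RedPajama sample.
dataset = load_dataset("text", data_files={"train": ["entigraph.txt", "replay.txt"]})["train"]

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # standard causal LM objective

args = TrainingArguments(
    output_dir="entigraph-cpt",
    num_train_epochs=2,              # 2 epochs over the EntiGraph corpus, as in Section 4.1
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-6,              # illustrative value, not the tuned setting
    bf16=True,
)

Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()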
4.2 QUESTION-ANSWERING EVALUATIONS
Next, we provide the detailed setup of our closed-book QA evaluations with QuALITY test queries
Qtest, and present results.
3Model https://huggingface.co/zitongyang/llama-3-8b-entigraph-quality.
4Note that Maini et al. (2024) also includes a fourth prompt that generates synthetic QA pairs. We defer this
task-specific QA finetuning approach to Appendix D and focus on task-agnostic baselines that teach generic
knowledge about QuALITY articles.
Evaluation procedure. Each QuALITY question is a four-choice, single-answer multiple choice
question (similar to MMLU, Hendrycks et al. (2021)). We evaluate with 5-shot chain-of-thought
prompting (Brown et al., 2020; Wei et al., 2024) and provide our prompt in Appendix H.1.
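As a sketch of how such an evaluation prompt can be assembled (the exact demonstration format is in Appendix H.1; the labels and field layout below are assumptions):
def format_example(question, choices, reasoning=None, answer=None):
    # Choices are labeled A-D; reasoning/answer are filled in for the in-context demonstrations.
    lines = ["Question: " + question]
    lines += ["{}. {}".format(label, choice) for label, choice in zip("ABCD", choices)]
    if reasoning is not None:
        lines.append("Reasoning: " + reasoning)
        lines.append("Answer: " + answer)
    else:
        lines.append("Reasoning:")  # the model continues with its chain of thought and final answer
    return "\n".join(lines)

def build_prompt(demonstrations, test_question, test_choices):
    parts = [format_example(q, c, r, a) for (q, c, r, a) in demonstrations]  # 5 worked examples
    parts.append(format_example(test_question, test_choices))
    return "\n\n".join(parts)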
EntiGraph scaling. We find that continued pretraining on the 455M token EntiGraph corpus
improves closed-book QA accuracy from 39.49% (for Llama 3 8B Base) to 56.22% (Figure 2).
A natural question is how performance scales as we synthesize and train on more tokens with
EntiGraph. To test this, we randomly subsample without replacement the EntiGraph corpus with
varying sample sizes, continually pretrain Llama 3 8B Base on each subsample, and plot QuALITY
accuracy with respect to sample size in Figure 2. We observe log-linear scaling of the accuracy
in the number of synthetic tokens used for continued pretraining, up to 455M tokens. We will
mathematically investigate the scaling properties of EntiGraph in detail in §6. In broad strokes, we
postulate that QuALITY accuracy follows a mixture-of-exponential shape and follows three stages:
(i) linear growth, (ii) log-linear growth, and (iii) asymptotic plateau.
Comparison with baselines. Raw CPT performs even worse than Llama 3 8B Base (dashed black
line in Figure 2). We postulate two reasons for this: (i) The Raw corpus follows a narrower, different
distribution than the Llama 3 8B pretraining corpus, and heavily training on these tokens may harm
the overall English capabilities of the model. (ii) The limited diversity of knowledge representations
in the Raw corpus leads to limited knowledge acquisition due to problems such as the reversal
curse (Berglund et al., 2023). Rephrase CPT scales poorly compared with EntiGraph (Figure 2),
suggesting that for synthetic CPT to scale, the synthetic data must be sufficiently diverse. EntiGraph
tackles this problem using a hierarchical prompting strategy, which externalizes diversity to the
combinatorial relationships encoded in entity knowledge graphs.
4.3 INSTRUCTION FOLLOWING EVALUATIONS
In this section, we explore more general test queries be-
yond the QuALITY test queries Qtest. Concretely, we per-
form instruction tuning on EntiGraph CPT to obtain Enti-
Graph Instruct. We demonstrate that synthetic CPT on the
EntiGraph corpus is compatible with instruction tuning:
EntiGraph Instruct can directly use knowledge obtained
during synthetic CPT in instruction following tasks (Wei
et al., 2022), without any test-time access to the QuAL-
ITY books and articles Dsource. We provide details about
our instruction tuning procedure in Appendix C.
Table 2: EntiGraph Instruct examples.
Explicit reference: Summarize “Defining Decay Down”.
The article “Defining Decay Down” by David Plotz discusses [...] Dentists began to focus on cosmetic dentistry, [...]
Implicit reference: How has dentistry in the U.S. changed?
1. Increase in cosmetic dentistry [...]
2. Use of technology: [...]
Cross article instruction: Compare David Plotz’s commentary on American dentistry and the movie Fight Club?
David Plotz’s commentary style is different when he analyzes American dentistry and when he discusses the movie Fight Club. [...]

Instruction tuning qualitative examples. We first present a few qualitative examples to demonstrate
EntiGraph Instruct’s ability to follow instructions related to QuALITY articles. As a first test, we ask
the model to summarize a QuALITY article given an explicit reference to the title and author, but no
access to the article itself (Table 2, top row). This article provides context for the coming examples.
Next, we show that even without an explicit reference to the title and author, knowledge of the article
is stored in the model’s parameters and can affect its behavior (Table 2, middle row). Finally, we
provide an example where the model performs a comparison using knowledge across two articles
(Table 2, bottom row). Albeit artificial, this shows that even though EntiGraph does not synthesize
data that simultaneously involves multiple articles, the model can reason about their interaction using
its parametric knowledge. We provide the full responses in Table 5.
Evaluation metric for closed-book summarization. We also present quantitative metrics for
summarization, a well-studied instruction following task. We compare EntiGraph Instruct sum-
maries of QuALITY articles with human-written summaries from sQuALITY (Wang et al., 2022),
a variation of QuALITY with provided human summaries. Common scalar summarization metrics
such as ROUGE (Lin, 2004) or BERTScore (Zhang* et al., 2020) mostly evaluate text similarity
between the summary and source articles, and may not accurately reflect summarization quality for
abstractive systems (Zhang et al., 2024b).
We use a simple, automated evaluation metric based on pyramid evaluation (Nenkova et al., 2007;
Gao et al., 2019) that measures both the hallucination rate and how well the summary captures the
salient claims of the original article. Our approach uses GPT-4 to (1) split the summary into atomic
claims (Min et al., 2023), (2) decide whether each claim is true/false based on the source article,
and (3) determine if true claims are salient to the article’s main message. We hence obtain the count
of false and salient claims for each summary, normalize these by the corresponding count from the
human summary, and report the average of these normalized metrics in Figure 3. Appendix H.2
provides further details.
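A simplified sketch of this metric is given below; judge stands in for GPT-4 calls, and the claim-splitting, truthfulness, and salience prompts are paraphrased assumptions rather than the exact prompts in Appendix H.2.
def judge(prompt: str) -> str:
    """Placeholder for a GPT-4 judge call."""
    raise NotImplementedError

def split_into_claims(summary: str) -> list[str]:
    out = judge("Split the following summary into a list of atomic claims, one per line:\n" + summary)
    return [line.strip() for line in out.splitlines() if line.strip()]

def count_claims(summary: str, source_article: str) -> tuple[int, int]:
    false_claims, salient_claims = 0, 0
    for claim in split_into_claims(summary):
        is_true = judge("Based on the article below, is this claim true? Answer yes or no.\n"
                        "Article:\n" + source_article + "\nClaim: " + claim).lower().startswith("yes")
        if not is_true:
            false_claims += 1
        elif judge("Is this true claim salient to the article's main message? Answer yes or no.\n"
                   "Article:\n" + source_article + "\nClaim: " + claim).lower().startswith("yes"):
            salient_claims += 1
    return false_claims, salient_claims

def normalized_metrics(model_summary, human_summary, source_article):
    # Normalize the model's counts by the human summary's counts; these ratios are then
    # averaged across articles to produce the numbers reported in Figure 3.
    f_model, s_model = count_claims(model_summary, source_article)
    f_human, s_human = count_claims(human_summary, source_article)
    return f_model / max(f_human, 1), s_model / max(s_human, 1)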
Results discussion.
In Figure 3, we compare
four summarizers: EntiGraph Instruct, Raw
Instruct, GPT-3.5, and GPT-4. We provide
each summarizer with two different prompts—
asking for progressively more detailed sum-
maries. We provide exact prompts in Appendix
H.2, as well as a smaller-scale token-matched
comparison to Rephrase CPT in Appendix H.3,
where we find EntiGraph CPT has consistently
lower false claims relative to Rephrase CPT.
As we request more detailed summaries, Raw
Instruct consistently hallucinates and generates
more false claims with little improvement in
the number of salient claims. In contrast, Enti-
Graph Instruct can generate more salient claims
as the summary gets longer, with a small in-
crease in the number of false claims (similar to
GPT-3.5 and GPT-4 levels). The gaps in both salient and false claim rates are sufficiently large that
these results likely hold beyond our particular metric. We complement the automated evaluation
metrics above with several qualitative examples in Appendix H.2.
Figure 3: Closed-book summarization: number of false
claims (y-axis) versus number of salient claims (x-axis)
normalized by the human summary.
5 OPEN-BOOK EXPERIMENTS
Next, we consider an open-book setting with the domain-specific corpus Dsource available at test
time. In this widespread setting, retrieval-augmented generation (RAG; Lewis et al. (2020); Gao
et al. (2024)) is the predominant approach. It has strong tooling (Chase, 2022; Han et al., 2023;
Pinecone, 2024), avoids finetuning, supports continual learning as the corpus is updated (Wu et al.,
2024), and has high recall (proportion of queries for which the correct documents are retrieved).
Therefore, it is a natural question whether the parametric knowledge learned through synthetic CPT
using EntiGraph complements the non-parametric knowledge accessed using RAG. We answer this
question by comparing a state-of-the-art RAG pipeline with and without Entigraph CPT.
RAG evaluation setup. Our RAG pipeline follows established best practices (Lewis et al., 2020;
Gao et al., 2024). It involves an offline stage which indexes document chunks, followed by
inference-time retrieval, reranking, and placement of those chunks in a few-shot LM prompt.
Throughout, we use OpenAI text-embedding-3-large (Neelakantan et al., 2022) as our
API-based embedding model, FAISS as our similarity search index (Douze et al., 2024), and Cohere
rerank-english-v3.0 (Cohere, 2024) as our reranker. Following the evaluation procedure
detailed in §4, we evaluate parallel RAG pipelines on the QuALITY multiple choice test set using
few-shot chain-of-thought prompting. All hyperparameters are tuned separately for each LM’s RAG
pipeline. We refer the reader to Appendix E for further details on our RAG evaluation setup.
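A condensed sketch of such a pipeline is shown below. Chunking, the number of retrieved candidates, the reranked top-k, and the downstream few-shot prompt are placeholders for the tuned settings in Appendix E, and the client calls assume the standard OpenAI, FAISS, and Cohere Python interfaces.
import faiss
import numpy as np
import cohere
from openai import OpenAI

openai_client = OpenAI()
cohere_client = cohere.Client()

def embed(texts: list[str]) -> np.ndarray:
    resp = openai_client.embeddings.create(model="text-embedding-3-large", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

# Offline stage: index the chunk embeddings with FAISS for similarity search.
def build_index(chunks: list[str]):
    embeddings = embed(chunks)
    index = faiss.IndexFlatIP(embeddings.shape[1])  # inner-product search over unit-norm embeddings
    index.add(embeddings)
    return index

# Inference: retrieve candidates, rerank them, and keep the top chunks for the few-shot prompt.
def retrieve(query: str, chunks: list[str], index, n_candidates: int = 50, top_k: int = 8):
    _, candidate_ids = index.search(embed([query]), n_candidates)
    candidates = [chunks[i] for i in candidate_ids[0]]
    reranked = cohere_client.rerank(model="rerank-english-v3.0", query=query,
                                    documents=candidates, top_n=top_k)
    return [candidates[r.index] for r in reranked.results]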
EntiGraph continued pretraining complements RAG. We observe in Table 3 that EntiGraph
CPT outperforms Llama 3 8B Base, the model from which it is continually pretrained. These re-
sults demonstrate that the knowledge internalized through synthetic CPT is complementary to that
accessed during RAG, and demonstrate a competitive new recipe for small corpus QA: (1) synthetic
data augmentation, (2) continued pretraining, and (3) RAG.
                         Accuracy   Recall@8
EntiGraph CPT + RAG      62.60      99.63
Llama 3 8B Base + RAG    60.35      99.63
GPT-4 + Oracle RAG       86.09      100.0
GPT-3.5 + Oracle RAG     72.60      100.0
Table 3: QuALITY question-answering accuracy and recall rate in the open-book retrieval-
augmented generation (RAG) setting. EntiGraph CPT and Llama 3 8B Base are used in a RAG
pipeline (cf. §5 for setup details). Recall@8 is defined as the proportion of questions for which
the salient article appears in the top 8 reranked document chunks. GPT-4 and GPT-3.5 Oracle RAG
provide an upper bound with a perfect retriever, by placing the entire relevant document in-context.
EntiGraph continued pretraining alone approaches RAG performance. These results also
contextualize the effectiveness of EntiGraph in the closed-book, parametric knowledge setting (§4).
Comparing Figure 2 and Table 3, we observe that adding RAG to Llama 3 8B Base improves accu-
racy by 20.86% (39.49% → 60.35%). On the other hand, continued pretraining of Llama 3 8B Base
on the EntiGraph corpus improves accuracy by 16.73% (39.49% → 56.22%). Hence, EntiGraph
continued pretraining provides > 80% of the absolute performance improvement of RAG, even in a
small corpus setting where RAG recall is nearly perfect.
Overall, our results indicate that the parametric knowledge acquired in EntiGraph continued pre-
training composes with realistic knowledge-intensive QA pipelines, and that EntiGraph continued
pretraining alone—without test-time corpus access—is nearly competitive with a strong RAG base-
line.
6 THEORETICAL ANALYSIS OF ENTIGRAPH SCALING
It may seem surprising that simply “rewriting” the factual content of the source documents Dsource
can improve performance at all (§4), as the EntiGraph data augmentation algorithm does not ex-
plicitly add new factual information beyond Dsource. In this section, we build a mathematical model
based on a stochastic process on graphs to offer an explanation for this phenomenon. We postulate
that EntiGraph does not create knowledge de novo; rather, it simply “rearranges” the knowledge
of Dsource into a layout more amenable to learning. For example, in Dsource, the entity pair (A, B)
may appear together in some sentences and (B, C) in others. As a result, models trained directly on
Dsource with a next-token prediction objective may learn the (A, B) relation and the (B, C) relation,
but not the relation between A and C (Akyürek et al., 2024). We will build a mathematical model
that formalizes this intuition (§6.1). Based on this model, we provide a quantitative prediction that
the scaling trend of EntiGraph CPT follows a mixture-of-exponential shape (§6.3), which fits well
with our empirically observed scaling trend (Figure 4).
6.1 TOY MODEL SETUP
In this toy model, we use V to denote the set of entities, and represent the source documents Dsource
with pairs of known relations Dsource ⊂ {(x, y) ∈ V² : x ≠ y}. We assume that each relation
pair in V² appears in the source documents Dsource independently at random, with probability p.
Mathematically, P[(x, y) ∈ Dsource] = p for all x ∈ V and y ∈ V with x ≠ y. We write V = |V|
and assume that p = λ/V, for some constant λ > 1.
Training as memorization. We model the learning of factual knowledge as a memorization pro-
cess, in which a model memorizes the relations it is explicitly trained on but does not meaningfully
generalize beyond them (Yang et al., 2023; Feldman, 2020). In our knowledge graph setting, a lan-
guage model’s knowledge can be represented by a matrix M ∈ {0, 1}V ×V such that M (x, y) = 1
if the model “knows” the (x, y) relation and equals 0 otherwise. Then, training directly on the source
documents Dsource simply means setting all entries that appear in Dsource to 1. This denotes that the
model has memorized the relations given in the source documents. Mathematically, we denote this
model trained on Dsource by the matrix M0 ∈ {0, 1}V ×V , which has i.i.d. Bernoulli off-diagonal
entries with mean p.
EntiGraph synthetic data augmentation. Given the source documents Dsource, we define the
following iterative procedure of synthetic data generation: for each t = 1, 2, . . .
1. Entity pair selection: Sample (xt, yt) ∈ {(x, y) ∈ V² : x ≠ y} uniformly at random.
2. Relation analysis: Generate the “relation between (xt, yt)” by performing a breadth-first
search (BFS) on the directed graph represented by the adjacency matrix M0 starting at xt:
• If there exists a path $(x_t, z_t^1, z_t^2, \ldots, z_t^{k_t}, y_t)$ connecting xt to yt, define
$$D_t = \{(x_t, z_t^1), (x_t, z_t^2), \ldots, (x_t, z_t^{k_t}), (x_t, y_t)\} \cup D_{t-1},$$
where we assume D0 = Dsource. The model trained on this round of synthetic data would be
$$M_t = M_{t-1} + \sum_{(x, y) \in D_t \setminus D_{t-1}} I_{xy},$$
where Ixy ∈ {0, 1}^{V×V} is a binary matrix with Ixy(x, y) = 1 and 0 otherwise.
• If no such path exists, do nothing.
This mirrors the relation analysis step for the EntiGraph synthetic data augmentation algorithm
(introduced in §2.2). With the setup above, the index t is analogous to the number of synthetic
tokens that the model has generated, and the model’s knowledge is captured by how many ones the
matrix Mt contains. To make this connection precise, we define the link density (or accuracy) of
Mt to be
$$\mathrm{Acc}(M_t) = \frac{\mathbb{E}[\|M_t\|_1 \mid M_0]}{V(V-1)},$$
where the expectation is taken over the randomness arising from the synthetic data generation pro-
cess and not the source documents Dsource. For a matrix M, we use $\|M\|_1$ to denote $\sum_{i,j} |M_{i,j}|$. We
use the notation Acc as this is intended to emulate the accuracy on QuALITY test queries studied in
the experimental sections (§4 and §5).
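The toy model is simple enough to simulate directly; a minimal sketch in the spirit of the simulation shown later in Figure 5 is given below (the iteration count and random seed are arbitrary choices, not the settings used for the figure).
import random
from collections import deque

import numpy as np

def simulate(V: int = 100, p: float = 0.03, steps: int = 200_000, seed: int = 0):
    rng = np.random.default_rng(seed)
    random.seed(seed)
    # M0: directed Erdos-Renyi source graph; M[x, y] = 1 means the model knows relation (x, y).
    M0 = (rng.random((V, V)) < p).astype(np.int8)
    np.fill_diagonal(M0, 0)
    M = M0.copy()
    known = int(M.sum())
    densities = []
    for _ in range(steps):
        x, y = random.sample(range(V), 2)
        # BFS on the source graph M0 starting from x (the relation analysis step).
        parent = {x: None}
        queue = deque([x])
        while queue and y not in parent:
            u = queue.popleft()
            for v in np.nonzero(M0[u])[0]:
                v = int(v)
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        if y in parent:
            # Memorize (x, z) for every vertex z on the path from x to y, including (x, y) itself.
            node = y
            while node is not None and node != x:
                if M[x, node] == 0:
                    M[x, node] = 1
                    known += 1
                node = parent[node]
        densities.append(known / (V * (V - 1)))  # link density Acc(M_t)
    return densities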
6.2 RIGOROUS UPPER AND LOWER BOUND
In this section, we derive rigorous upper and lower bounds on the scaling trend of Acc(Mt). We
show that Acc(Mt) as a function of t can be bounded above and below by two exponential functions
with different growth rates. Note that these two bounds do not necessarily imply that Acc(Mt) itself
grows exponentially. We will provide a precise formula for its growth in §6.3 via an approximation
through a Poisson branching process.
Definition 1. Let Cλ = (1 − ρ(λ))², where ρ(λ) denotes the extinction probability for a Poisson(λ)
branching process (i.e., ρ is the smallest solution in [0, 1] to the fixed-point equation ρ = exp(λ(ρ −
1))). For any fixed ε > 0, we further define
$$C_{\mathrm{LB}} = 1 - \frac{1}{V(V-1)}, \qquad C_{\mathrm{UB}} = 1 - \frac{(1+\varepsilon)\log V}{V(V-1)\log\lambda}.$$
Theorem 1. For any time t ≥ 1 and any ε > 0, the link density satisfies
$$\big(p + C_\lambda\big(1 - C_{\mathrm{LB}}^{\,t}\big)\big)(1 - \varepsilon) \;\le\; \mathrm{Acc}(M_t) \;\le\; \big(p + C_\lambda\big(1 - C_{\mathrm{UB}}^{\,t}\big)\big)(1 + \varepsilon),$$
with probability → 1 when V → ∞.
Even though Theorem 1 provides mathematically rigorous upper and lower bounds on the scaling
trend of Acc(Mt), the exact growth curve is more intricate, as we will show next.
6.3 AN ANALYTICAL FORMULA
For the remainder of the section, we analyze the link density Acc(Mt) using a Poisson branching
process approximation of the cluster growth of vertices. This approach yields an approximation of
the form
$$\mathrm{Acc}(M_t) \sim p + C_\lambda\left(1 - \sum_{\ell=0}^{\infty} \frac{\lambda - 1}{\lambda^{\ell+1}} \sum_{k=1}^{\infty} p_\ell(k)\left(1 - \frac{k}{V(V-1)}\right)^{t}\right),$$
where A ∼ B means that A/B converges to 1 in probability as V → ∞. We refer the reader to
Appendix F for a comprehensive derivation. Here pℓ denotes the probability mass function of the
total progeny Yℓ of a Poisson(λ) branching process at level ℓ. Qualitatively, for a general represen-
tation of source documents Dsource beyond directed Erdős-Rényi graphs, we still expect to observe a
mixture-of-exponential scaling trend:
$$\mathrm{Acc}(M_t) \sim p + C\left(1 - \sum_{k=1}^{\infty} \mu(k)\,(1 - a_k)^{t}\right). \qquad (2)$$
In this context, the parameter C governs the
link density Acc(Mt) as t → ∞. In our model,
C is determined by the proportion of reachable
pairs of vertices in the initial matrix M0. Here,
we are essentially filling out the “deductive clo-
sure” (i.e., all the facts or relations that can be
deduced from Dsource; Stine (1976); Aky¨urek
et al. (2024)) of the original data—if some facts
cannot be deduced, then Acc(Mt) cannot ap-
proach 1. The measure µ(·) is the probability
mass function on k, which controls the propor-
tion of pairs of vertices with a specific decay
rate. The parameters µ(·) depend on M0 in a
more intricate manner. We find that the formula
in (2) accurately fits the empirical scaling trend
of EntiGraph CPT accuracy up to 455M synthetic tokens (Figure 4).
Figure 4: A mixture-of-exponential functional form (2)
closely fits the scaling trend of EntiGraph CPT with re-
spect to synthetic token count.
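As an illustration of how such a fit can be produced, the snippet below fits a three-component truncation of (2) with scipy; the component count, the bounds and initialization, and the self-generated placeholder data are our own assumptions and do not reproduce the actual fit in Figure 4.
import numpy as np
from scipy.optimize import curve_fit

def mixture_of_exponentials(t, p, C, w1, w2, w3, a1, a2, a3):
    # Acc(t) ~ p + C * (1 - sum_k mu(k) (1 - a_k)^t), truncated to three mixture components.
    weights = np.array([w1, w2, w3])
    weights = weights / weights.sum()  # mu(.) is a probability mass function
    rates = np.array([a1, a2, a3])
    decay = sum(w * (1.0 - a) ** t for w, a in zip(weights, rates))
    return p + C * (1.0 - decay)

# Placeholder data generated from the model itself plus noise (not measured results):
true_params = [0.40, 0.18, 0.5, 0.3, 0.2, 0.3, 0.03, 0.003]
tokens = np.linspace(1, 455, 30)
acc = mixture_of_exponentials(tokens, *true_params)
acc = acc + 0.002 * np.random.default_rng(0).normal(size=tokens.size)

fitted, _ = curve_fit(mixture_of_exponentials, tokens, acc,
                      p0=[0.4, 0.2, 1.0, 1.0, 1.0, 0.5, 0.05, 0.005],
                      bounds=(1e-6, 1.0), maxfev=20_000)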
Sketch of derivation. Intuitively, the edge (i, j) will eventually be added if and only if j is reach-
able from i in the original graph M0. This explains the limiting behavior of Acc(Mt) as t ap-
proaches infinity: the proportion of links will converge to the proportion of connected vertex pairs
in M0. To understand the mixture-of-exponential functional form, consider that at the time t, the
probability of adding each vertex pair follows an exponential pattern, with different vertex pairs
exhibiting different exponential growth rates. Specifically, think of a breadth-first search in M0
starting from a vertex i. If j is very close to the root, there are many paths from i to other vertices
passing through j, making it more likely that (i, j) will be included in each iteration. In contrast, if
j is far from the root (e.g., at the end of the exploration process), there are fewer such paths, making
it less likely for (i, j) to be included in each iteration. This accounts for the mixture-of-exponential
shape, where the mixture primarily reflects the distance of each vertex from the root, the number of
such vertices, and their corresponding exponential growth rates.
(a) Linear regime
(b) Log-linear (t in log scale)
(c) Plateau regime
Figure 5: Accuracy Acc(Mt) with respect to time t, for V = 100 and p = 0.03. The mixture-of-
exponential functional form in (2) leads to three distinct regimes.
Qualitative description. Finally, to help build an intuitive understanding, we provide a qualitative
description of the mixture-of-exponential shape. We demonstrate in Appendix F that this mixture-
of-exponential shape comprises three distinct phases: a fast growth phase, a slower growth phase,
and a plateau phase. Mathematically, we show the existence of two distinct times, 0 < t1 < t2, such
that
$$\mathrm{Acc}(M_T) = \begin{cases} \Theta(p + t), & \text{for } 0 \le t \le t_1, \\ \Theta(\log t), & \text{for } t_1 \le t \le t_2, \\ \Theta(1), & \text{for } t \ge t_2, \end{cases}$$
where we use a convenient change of variable T = tV (V − 1). It is important to note that the
choice of log t in the second phase is not necessarily canonical. In fact, the bound holds for any
well-behaved monotone increasing concave function as a replacement for log t. Our representation
here is motivated by two factors: first, it aligns with the performance observed in our EntiGraph
CPT numerical results, and second, it reflects the gradual slowdown in growth. We illustrate the
three phases in Figure 5, which presents a simulation of the toy model with p = 0.03.
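To complement this qualitative description, below is a minimal simulation sketch that reproduces the three regimes on a toy graph with V = 100 and p = 0.03. The synthesis step used here (follow a short random path from a sampled source vertex and add shortcut edges to the visited vertices) is a hypothetical instantiation consistent with the breadth-first-search intuition above, not the exact process analyzed in Appendix F.

```python
import random

V, p = 100, 0.03              # toy-model parameters used in Figure 5
STEPS, PATH_LEN = 200_000, 5  # arbitrary simulation budget and path length
rng = random.Random(0)

# M0: a directed Erdos-Renyi graph; adj[i] is the out-neighborhood of vertex i.
adj = [set(j for j in range(V) if j != i and rng.random() < p) for i in range(V)]

def link_density():
    """Fraction of ordered vertex pairs (i, j), i != j, joined by a direct edge."""
    return sum(len(s) for s in adj) / (V * (V - 1))

history = []
for t in range(STEPS):
    # One synthesis step: sample a source vertex, follow a short random path from it,
    # and add direct edges from the source to every vertex visited on the path.
    i = rng.randrange(V)
    cur, visited = i, []
    for _ in range(PATH_LEN):
        if not adj[cur]:
            break
        cur = rng.choice(sorted(adj[cur]))
        visited.append(cur)
    adj[i].update(v for v in visited if v != i)
    if t % 5000 == 0:
        history.append((t, link_density()))

for t, acc in history:
    print(f"t = {t:7d}   Acc = {acc:.3f}")
```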
7 DISCUSSION
7.1 LIMITATIONS
Because EntiGraph synthesizes data using a prompted language model, there is a risk that it may hal-
lucinate and fabricate non-existent relations among the entities. Although our process of generating
synthetic data is grounded by the source documents, it is an assumption that LMaug is capable enough
to generate faithful synthetic data when conditioned on Dsource. In our experiment with QuALITY
books, we manually read a few books and fact-checked a subset of the synthetic data generated for
those books; we did not find factually incorrect synthesized text. We postulate that this is because
we use a sufficiently strong prompted model LMaug (gpt-4-turbo). If EntiGraph were applied
to more challenging content like a complex research paper, it is possible that the prompted model
could be more prone to hallucination.
On the other hand, since we use a very capable prompted language model gpt-4-turbo to gen-
erate synthetic data, one might be concerned that our performance gains come from distilling the
prompted LM’s knowledge. The closed-book results indicate that distillation effects alone cannot
explain the performance of our approach (as we exceed GPT-4’s closed-book performance), but our
approach does not yet enable bootstrapping, where we use a model to generate its own synthetic
data for a small target domain. We view this as exciting future work.
7.2 FUTURE DIRECTIONS
Continued scaling beyond real data. The large but finite body of human-written text is rapidly
being consumed. Villalobos et al. (2024) predict that frontier language models will exhaust all pub-
lic, human-generated text in 2028. As we transition from a data-rich to a data-constrained regime
(Kaplan et al., 2020; Muennighoff et al., 2023), further scaling will require us to extract more knowl-
edge from existing data. We demonstrated that synthetic continued pretraining with EntiGraph ef-
fectively extracts more knowledge from small corpora, which could help us learn from proprietary
datasets or tail knowledge that appears only once or twice on the internet. It is an open question
whether synthetic data generation methods like EntiGraph could improve data efficiency more gen-
erally on standard pretraining data and without relying upon a stronger prompted model.
Alternatives to long-context language models. Recent work handles long user queries (e.g., 1M-
10M+ tokens) using efficient implementations of attention (Dao et al., 2022; Liu et al., 2023; Gemini,
2024) or alternative architectures that are sub-quadratic in the context length (Tay et al., 2022; Gu
et al., 2022; Gu & Dao, 2024; Sun et al., 2024). In settings where many queries share the same
long prefix—e.g., a corporation’s proprietary documents or other use cases with prompt caching
(Anthropic, 2024a)—one could instead continue pretraining on the prefix to internalize its knowl-
edge, and then perform standard quadratic attention on shorter queries. This approach pays a fixed
training cost to amortize the prefix’s knowledge into the weights of a model, and then benefits from
shorter context lengths (Gururangan et al., 2020; Snell et al., 2022). By adapting the continued
pretraining paradigm from 10B-100B tokens to as little as 1.3M tokens, our synthetic continued
pretraining approach could enable unsupervised learning of shared text prefixes at much smaller and
more practical token counts.
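As a back-of-the-envelope illustration of this amortization (with made-up workload sizes, not measurements), the sketch below compares the attention cost of re-processing a 1.3M-token prefix for every query against paying a one-time continued-pretraining cost and then attending only over short queries:

```python
# Back-of-the-envelope attention-cost comparison with made-up workload sizes;
# constant factors and the one-time continued-pretraining cost are ignored.
N = 1_300_000   # shared prefix length in tokens (roughly the 1.3M-token scale above)
q = 2_000       # length of each short query
Q = 10_000      # number of queries that share the prefix

long_context_cost = Q * (N + q) ** 2   # quadratic attention over prefix + query, every query
cpt_inference_cost = Q * q ** 2        # quadratic attention over the short query only
print(f"ratio: {long_context_cost / cpt_inference_cost:,.0f}x fewer token-pair interactions")
```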
7.3 CONCLUSION
Continued pretraining with next-token prediction is remarkably effective in teaching pretrained lan-
guage models new knowledge, but to date has only been applied successfully in broad, data-rich
domains with 10B-100B+ tokens. We downscale continued pretraining to small, specialized cor-
pora with ∼1M tokens using synthetic continued pretraining: converting a small corpus into a large
synthetic one with diverse representations of knowledge, and continuing pretraining on it.
We instantiate this approach using EntiGraph, a knowledge graph–inspired synthetic data augmen-
tation algorithm. Synthetic continued pretraining with EntiGraph demonstrates consistent scaling in
downstream closed-book QA performance up to a 455M token synthetic corpus, whereas baselines
such as continued pretraining on the small corpus or synthetic paraphrases show no improvement
or asymptote early. Moreover, the acquired parametric knowledge composes with instruction tuning
and retrieved non-parametric knowledge in an open-book setting. Lastly, we present a simplified
mathematical model of EntiGraph and derive a functional form for its scaling trend, which closely
matches our empirical trend. We hypothesize that EntiGraph’s “externalization” of the synthetic data
generation process to a combinatorial structure—in this case, a knowledge graph over entities—is a
generally useful strategy in synthesizing highly diverse data and a promising object for future study.
8 ACKNOWLEDGEMENT
Zitong Yang would like to thank Samy Jelassi for feedback on a preliminary version of this work,
Ruiqi Zhong for discussion regarding context distillation work, Xiang Lisa Li for discussion about
reversal curse work, and the participants of the statistics seminar at Stanford University for their
insightful feedback about a preliminary version of this work. We also thank the Tatsu Lab for con-
structive feedback and interesting discussions that have helped improve the paper. Zitong Yang is
supported by the Albion Walter Hewlett Stanford Graduate Fellowship. Neil Band acknowledges
funding from an NSF Graduate Research Fellowship and a Quad Fellowship. This work was sup-
ported by gifts from Panasonic Research, the Google Research Scholar Program, and the Tianqiao
and Chrissy Chen Institute, as well as the NSF grant IIS-2338866. E.J.C. is supported by the Office
of Naval Research grant N00014-20-1-2157, the National Science Foundation grant DMS-2032014,
and the Simons Foundation under award 814641.
REFERENCES
Marah Abdin, Jyoti Aneja, Sebastien Bubeck, Caio C´esar Teodoro Mendes, Weizhu Chen, Al-
lie Del Giorno, Ronen Eldan, Sivakanth Gopi, Suriya Gunasekar, Mojan Javaheripi, Piero Kauff-
mann, Yin Tat Lee, Yuanzhi Li, Anh Nguyen, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shi-
tal Shah, Michael Santacroce, Harkirat Singh Behl, Adam Taumann Kalai, Xin Wang, Rachel
Ward, Philipp Witte, Cyril Zhang, and Yi Zhang. Phi-2: The surprising power of small lan-
guage models, 2023. URL https://www.microsoft.com/en-us/research/blog/
phi-2-the-surprising-power-of-small-language-models/.
Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany
Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, Alon Ben-
haim, Misha Bilenko, Johan Bjorck, S´ebastien Bubeck, Qin Cai, Martin Cai, Caio C´esar Teodoro
Mendes, Weizhu Chen, Vishrav Chaudhary, Dong Chen, Dongdong Chen, Yen-Chun Chen, Yi-
Ling Chen, Parul Chopra, Xiyang Dai, Allie Del Giorno, Gustavo de Rosa, Matthew Dixon,
Ronen Eldan, Victor Fragoso, Dan Iter, Mei Gao, Min Gao, Jianfeng Gao, Amit Garg, Abhishek
Goswami, Suriya Gunasekar, Emman Haider, Junheng Hao, Russell J. Hewett, Jamie Huynh,
Mojan Javaheripi, Xin Jin, Piero Kauffmann, Nikos Karampatziakis, Dongwoo Kim, Mahoud
Khademi, Lev Kurilenko, James R. Lee, Yin Tat Lee, Yuanzhi Li, Yunsheng Li, Chen Liang, Lars
Liden, Ce Liu, Mengchen Liu, Weishung Liu, Eric Lin, Zeqi Lin, Chong Luo, Piyush Madan,
Matt Mazzola, Arindam Mitra, Hardik Modi, Anh Nguyen, Brandon Norick, Barun Patra, Daniel
Perez-Becker, Thomas Portet, Reid Pryzant, Heyang Qin, Marko Radmilac, Corby Rosset, Sam-
budha Roy, Olatunji Ruwase, Olli Saarikivi, Amin Saied, Adil Salim, Michael Santacroce, Shi-
tal Shah, Ning Shang, Hiteshi Sharma, Swadheen Shukla, Xia Song, Masahiro Tanaka, Andrea
Tupini, Xin Wang, Lijuan Wang, Chunyu Wang, Yu Wang, Rachel Ward, Guanhua Wang, Philipp
Witte, Haiping Wu, Michael Wyatt, Bin Xiao, Can Xu, Jiahang Xu, Weijian Xu, Sonali Yadav,
Fan Yang, Jianwei Yang, Ziyi Yang, Yifan Yang, Donghan Yu, Lu Yuan, Chengruidong Zhang,
Cyril Zhang, Jianwen Zhang, Li Lyna Zhang, Yi Zhang, Yue Zhang, Yunan Zhang, and Xiren
Zhou. Phi-3 technical report: A highly capable language model locally on your phone, 2024.
URL https://arxiv.org/abs/2404.14219.
Afra Feyza Aky¨urek, Ekin Aky¨urek, Leshem Choshen, Derry Wijaya, and Jacob Andreas. Deduc-
tive closure training of language models for coherence, accuracy, and updatability. In Lun-Wei
Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational
Linguistics ACL 2024, pp. 9802–9818, Bangkok, Thailand and virtual meeting, August 2024.
Association for Computational Linguistics. URL https://aclanthology.org/2024.
findings-acl.584.
Zeyuan Allen-Zhu and Yuanzhi Li. Physics of language models: Part 3.2, knowledge manipulation,
2024. URL https://arxiv.org/abs/2309.14402.
Dana Angluin. Queries and concept learning. Machine Learning, 2:319–342, 1988. URL https:
//api.semanticscholar.org/CorpusID:11357867.
Anthropic. Prompt caching (beta), 2024a. URL https://docs.anthropic.com/en/
docs/build-with-claude/prompt-caching.
Anthropic.
The Claude 3 Model Family: Opus, Sonnet, Haiku.
https://www-cdn.
anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_
Card_Claude_3.pdf, 2024b.
Anas Awadalla, Mitchell Wortsman, Gabriel Ilharco, Sewon Min, Ian Magnusson, Hannaneh Hajishirzi, and Ludwig Schmidt. Exploring the landscape of distributional robustness for question answering models. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang (eds.), Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 5971–5987, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-emnlp.441. URL https://aclanthology.org/2022.findings-emnlp.441.
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen Marcus McAleer,
Albert Q. Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model
for mathematics. In The Twelfth International Conference on Learning Representations, 2024.
URL https://openreview.net/forum?id=4WnqRR915j.
Maria-Florina Balcan, Avrim Blum, and Ke Yang. Co-training and expansion: Towards bridging theory and practice. In L. Saul, Y. Weiss, and L. Bottou (eds.), Advances in Neural Information Processing Systems, volume 17. MIT Press, 2004. URL https://proceedings.neurips.cc/paper_files/paper/2004/file/9457fc28ceb408103e13533e4a5b6bd1-Paper.pdf.
Lukas Berglund, Meg Tong, Max Kaufmann, Mikita Balesni, Asa Cooper Stickland, Tomasz Kor-
bak, and Owain Evans. The reversal curse: Llms trained on ”a is b” fail to learn ”b is a”, 2023.
David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin
Raffel. Mixmatch: A holistic approach to semi-supervised learning, 2019. URL https:
//arxiv.org/abs/1905.02249.
Avrim Blum and Tom Mitchell. Combining labeled and unlabeled data with co-training. In Pro-
ceedings of the Eleventh Annual Conference on Computational Learning Theory, COLT’ 98, pp.
92–100, New York, NY, USA, 1998. Association for Computing Machinery. ISBN 1581130570.
doi: 10.1145/279943.279962. URL https://doi.org/10.1145/279943.279962.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhari-
wal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agar-
wal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh,
Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz
Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
Harrison Chase. LangChain, 10 2022. URL https://github.com/langchain-ai/
langchain.
Zeming Chen, Alejandro Hern´andez Cano, Angelika Romanou, Antoine Bonnet, Kyle Matoba,
Francesco Salvi, Matteo Pagliardini, Simin Fan, Andreas K¨opf, Amirkeivan Mohtashami, Alexan-
dre Sallinen, Alireza Sakhaeirad, Vinitra Swamy, Igor Krawczuk, Deniz Bayazit, Axel Marmet,
Syrielle Montariol, Mary-Anne Hartley, Martin Jaggi, and Antoine Bosselut. Meditron-70b: Scal-
ing medical pretraining for large language models, 2023. URL https://arxiv.org/abs/
2311.16079.
Roi Cohen, Eden Biran, Ori Yoran, Amir Globerson, and Mor Geva. Evaluating the ripple effects of
knowledge editing in language models. arXiv preprint arXiv:2307.12976, 2023.
Cohere. Improve search performance with a single line of code, 2024. URL https://cohere.
com/rerank.
Pierre Colombo, Telmo Pires, Malik Boudiaf, Rui Melo, Dominic Culver, Sofia Morgado, Etienne
Malaboeuf, Gabriel Hautreux, Johanne Charpentier, and Michael Desa. Saullm-54b and saullm-
141b: Scaling up domain adaptation for the legal domain, 2024a. URL https://arxiv.
org/abs/2407.19584.
Pierre Colombo, Telmo Pessoa Pires, Malik Boudiaf, Dominic Culver, Rui Melo, Caio Corro, Andre
F. T. Martins, Fabrizio Esposito, Vera L´ucia Raposo, Sofia Morgado, and Michael Desa. Saullm-
7b: A pioneering large language model for law, 2024b. URL https://arxiv.org/abs/
2403.03883.
Common Crawl. Common crawl. https://commoncrawl.org/, 2007.
Tri Dao, Daniel Y Fu, Stefano Ermon, Atri Rudra, and Christopher Re. Flashattention: Fast and
memory-efficient exact attention with IO-awareness. In Alice H. Oh, Alekh Agarwal, Danielle
Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022.
URL https://openreview.net/forum?id=H4DqfPSibmx.
Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong
Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional
conversations, 2023.
Matthijs Douze, Alexandr Guzhva, Chengqi Deng, Jeff Johnson, Gergely Szilvasy, Pierre-
Emmanuel Mazar´e, Maria Lomeli, Lucas Hosseini, and Herv´e J´egou. The faiss library, 2024.
URL https://arxiv.org/abs/2401.08281.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha
Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony
Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark,
Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere,
Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris
Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong,
Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny
Livshits, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino,
Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael
Smith, Filip Radenovic, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Ander-
son, Graeme Nail, Gregoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah
Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel Kloumann, Ishan
Misra, Ivan Evtimov, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Ma-
hadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy
Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak,
Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Al-
wala, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, Khalid El-Arini,
Krithika Iyer, Kshitiz Malik, Kuenley Chiu, Kunal Bhalla, Lauren Rantala-Yeary, Laurens van der
Maaten, Lawrence Chen, Liang Tan, Liz Jenkins, Louis Martin, Lovish Madaan, Lubo Malo,
Lukas Blecher, Lukas Landzaat, Luke de Oliveira, Madeline Muzzi, Mahesh Pasupuleti, Man-
nat Singh, Manohar Paluri, Marcin Kardas, Mathew Oldham, Mathieu Rita, Maya Pavlova,
Melanie Kambadur, Mike Lewis, Min Si, Mitesh Kumar Singh, Mona Hassan, Naman Goyal,
Narjes Torabi, Nikolay Bashlykov, Nikolay Bogoychev, Niladri Chatterji, Olivier Duchenne, Onur
C¸ elebi, Patrick Alrassy, Pengchuan Zhang, Pengwei Li, Petar Vasic, Peter Weng, Prajjwal Bhar-
gava, Pratik Dubal, Praveen Krishnan, Punit Singh Koura, Puxin Xu, Qing He, Qingxiao Dong,
Ragavan Srinivasan, Raj Ganapathy, Ramon Calderer, Ricardo Silveira Cabral, Robert Stojnic,
Roberta Raileanu, Rohit Girdhar, Rohit Patel, Romain Sauvestre, Ronnie Polidoro, Roshan Sum-
baly, Ross Taylor, Ruan Silva, Rui Hou, Rui Wang, Saghar Hosseini, Sahana Chennabasappa,
Sanjay Singh, Sean Bell, Seohyun Sonia Kim, Sergey Edunov, Shaoliang Nie, Sharan Narang,
Sharath Raparthy, Sheng Shen, Shengye Wan, Shruti Bhosale, Shun Zhang, Simon Vandenhende,
Soumya Batra, Spencer Whitman, Sten Sootla, Stephane Collot, Suchin Gururangan, Sydney
Borodinsky, Tamar Herman, Tara Fowler, Tarek Sheasha, Thomas Georgiou, Thomas Scialom,
Tobias Speckbacher, Todor Mihaylov, Tong Xiao, Ujjwal Karn, Vedanuj Goswami, Vibhor Gupta,
Vignesh Ramanathan, Viktor Kerkez, Vincent Gonguet, Virginie Do, Vish Vogeti, Vladan Petro-
vic, Weiwei Chu, Wenhan Xiong, Wenyin Fu, Whitney Meers, Xavier Martinet, Xiaodong Wang,
Xiaoqing Ellen Tan, Xinfeng Xie, Xuchao Jia, Xuewei Wang, Yaelle Goldschlag, Yashesh Gaur,
Yasmine Babaei, Yi Wen, Yiwen Song, Yuchen Zhang, Yue Li, Yuning Mao, Zacharie Delpierre
Coudert, Zheng Yan, Zhengxing Chen, Zoe Papakipos, Aaditya Singh, Aaron Grattafiori, Abha
Jain, Adam Kelsey, Adam Shajnfeld, Adithya Gangidi, Adolfo Victoria, Ahuva Goldstand, Ajay
Menon, Ajay Sharma, Alex Boesenberg, Alex Vaughan, Alexei Baevski, Allie Feinstein, Amanda
Kallet, Amit Sangani, Anam Yunus, Andrei Lupu, Andres Alvarado, Andrew Caples, Andrew
Gu, Andrew Ho, Andrew Poulton, Andrew Ryan, Ankit Ramchandani, Annie Franco, Aparajita
Saraf, Arkabandhu Chowdhury, Ashley Gabriel, Ashwin Bharambe, Assaf Eisenman, Azadeh
Yazdan, Beau James, Ben Maurer, Benjamin Leonhardi, Bernie Huang, Beth Loyd, Beto De
Paola, Bhargavi Paranjape, Bing Liu, Bo Wu, Boyu Ni, Braden Hancock, Bram Wasti, Bran-
don Spence, Brani Stojkovic, Brian Gamido, Britt Montalvo, Carl Parker, Carly Burton, Catalina
Mejia, Changhan Wang, Changkyu Kim, Chao Zhou, Chester Hu, Ching-Hsiang Chu, Chris Cai,
Chris Tindal, Christoph Feichtenhofer, Damon Civin, Dana Beaty, Daniel Kreymer, Daniel Li,
Danny Wyatt, David Adkins, David Xu, Davide Testuggine, Delia David, Devi Parikh, Diana
Liskovich, Didem Foss, Dingkang Wang, Duc Le, Dustin Holland, Edward Dowling, Eissa Jamil,
Elaine Montgomery, Eleonora Presani, Emily Hahn, Emily Wood, Erik Brinkman, Esteban Ar-
caute, Evan Dunbar, Evan Smothers, Fei Sun, Felix Kreuk, Feng Tian, Firat Ozgenel, Francesco
Caggioni, Francisco Guzm´an, Frank Kanayet, Frank Seide, Gabriela Medina Florez, Gabriella
Schwarz, Gada Badeer, Georgia Swee, Gil Halpern, Govind Thattai, Grant Herman, Grigory
Sizov, Guangyi, Zhang, Guna Lakshminarayanan, Hamid Shojanazeri, Han Zou, Hannah Wang,
Hanwen Zha, Haroun Habeeb, Harrison Rudolph, Helen Suk, Henry Aspegren, Hunter Gold-
man, Ibrahim Damlaj, Igor Molybog, Igor Tufanov, Irina-Elena Veliche, Itai Gat, Jake Weissman,
James Geboski, James Kohli, Japhet Asher, Jean-Baptiste Gaya, Jeff Marcus, Jeff Tang, Jennifer
Chan, Jenny Zhen, Jeremy Reizenstein, Jeremy Teboul, Jessica Zhong, Jian Jin, Jingyi Yang, Joe
Cummings, Jon Carvill, Jon Shepard, Jonathan McPhie, Jonathan Torres, Josh Ginsburg, Junjie
Wang, Kai Wu, Kam Hou U, Karan Saxena, Karthik Prasad, Kartikay Khandelwal, Katayoun
Zand, Kathy Matosich, Kaushik Veeraraghavan, Kelly Michelena, Keqian Li, Kun Huang, Kunal
Chawla, Kushal Lakhotia, Kyle Huang, Lailin Chen, Lakshya Garg, Lavender A, Leandro Silva,
Lee Bell, Lei Zhang, Liangpeng Guo, Licheng Yu, Liron Moshkovich, Luca Wehrstedt, Madian
Khabsa, Manav Avalani, Manish Bhatt, Maria Tsimpoukelli, Martynas Mankus, Matan Hasson,
Matthew Lennie, Matthias Reso, Maxim Groshev, Maxim Naumov, Maya Lathi, Meghan Ke-
neally, Michael L. Seltzer, Michal Valko, Michelle Restrepo, Mihir Patel, Mik Vyatskov, Mikayel
Samvelyan, Mike Clark, Mike Macey, Mike Wang, Miquel Jubert Hermoso, Mo Metanat, Mo-
hammad Rastegari, Munish Bansal, Nandhini Santhanam, Natascha Parks, Natasha White, Navy-
ata Bawa, Nayan Singhal, Nick Egebo, Nicolas Usunier, Nikolay Pavlovich Laptev, Ning Dong,
Ning Zhang, Norman Cheng, Oleg Chernoguz, Olivia Hart, Omkar Salpekar, Ozlem Kalinli,
Parkin Kent, Parth Parekh, Paul Saab, Pavan Balaji, Pedro Rittner, Philip Bontrager, Pierre Roux,
Piotr Dollar, Polina Zvyagina, Prashant Ratanchandani, Pritish Yuvraj, Qian Liang, Rachad Alao,
Rachel Rodriguez, Rafi Ayub, Raghotham Murthy, Raghu Nayani, Rahul Mitra, Raymond Li,
Rebekkah Hogan, Robin Battey, Rocky Wang, Rohan Maheswari, Russ Howes, Ruty Rinott,
Sai Jayesh Bondu, Samyak Datta, Sara Chugh, Sara Hunt, Sargun Dhillon, Sasha Sidorov, Sa-
tadru Pan, Saurabh Verma, Seiji Yamamoto, Sharadh Ramaswamy, Shaun Lindsay, Shaun Lind-
say, Sheng Feng, Shenghao Lin, Shengxin Cindy Zha, Shiva Shankar, Shuqiang Zhang, Shuqiang
Zhang, Sinong Wang, Sneha Agarwal, Soji Sajuyigbe, Soumith Chintala, Stephanie Max, Stephen
Chen, Steve Kehoe, Steve Satterfield, Sudarshan Govindaprasad, Sumit Gupta, Sungmin Cho,
Sunny Virk, Suraj Subramanian, Sy Choudhury, Sydney Goldman, Tal Remez, Tamar Glaser,
Tamara Best, Thilo Kohler, Thomas Robinson, Tianhe Li, Tianjun Zhang, Tim Matthews, Tim-
othy Chou, Tzook Shaked, Varun Vontimitta, Victoria Ajayi, Victoria Montanez, Vijai Mohan,
Vinay Satish Kumar, Vishal Mangla, V´ıtor Albiero, Vlad Ionescu, Vlad Poenaru, Vlad Tiberiu
Mihailescu, Vladimir Ivanov, Wei Li, Wenchen Wang, Wenwen Jiang, Wes Bouaziz, Will Con-
stable, Xiaocheng Tang, Xiaofang Wang, Xiaojian Wu, Xiaolan Wang, Xide Xia, Xilun Wu,
Xinbo Gao, Yanjun Chen, Ye Hu, Ye Jia, Ye Qi, Yenda Li, Yilin Zhang, Ying Zhang, Yossi Adi,
Youngjin Nam, Yu, Wang, Yuchen Hao, Yundi Qian, Yuzi He, Zach Rait, Zachary DeVito, Zef
Rosnbrick, Zhaoduo Wen, Zhenyu Yang, and Zhiwei Zhao. The llama 3 herd of models, 2024.
URL https://arxiv.org/abs/2407.21783.
Rick Durrett. Random graph dynamics, volume 20. Cambridge university press, 2010.
Ronen Eldan and Yuanzhi Li. Tinystories: How small can language models be and still speak
coherent english?, 2023.
Vitaly Feldman. Does learning require memorization? a short tale about a long tail. In Proceedings
of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020, pp. 954–959,
New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450369794. doi:
10.1145/3357713.3384290. URL https://doi.org/10.1145/3357713.3384290.
Yanjun Gao, Chen Sun, and Rebecca J. Passonneau. Automated pyramid summarization evaluation. In Mohit Bansal and Aline Villavicencio (eds.), Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pp. 404–418, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/K19-1038. URL https://aclanthology.org/K19-1038.
Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Meng
Wang, and Haofen Wang. Retrieval-augmented generation for large language models: A survey,
2024. URL https://arxiv.org/abs/2312.10997.
Team Gemini. Gemini: A family of highly capable multimodal models, 2024. URL https:
//arxiv.org/abs/2312.11805.
Siavash Golkar, Michael Kagan, and Kyunghyun Cho. Continual learning via neural pruning. arXiv
preprint arXiv:1903.04476, 2019.
Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical
investigation of catastrophic forgetting in gradient-based neural networks, 2015. URL https:
//arxiv.org/abs/1312.6211.
Stephen T Grossberg. Studies of mind and brain: Neural principles of learning, perception, devel-
opment, cognition, and motor control, volume 70. Springer Science & Business Media, 2012.
Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces, 2024.
URL https://openreview.net/forum?id=AL1fq05o7H.
Albert Gu, Karan Goel, and Christopher Re. Efficiently modeling long sequences with structured state spaces. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=uYLFoz1vlAC.
Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek
Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, Wolfgang Macherey, Arnaud
Doucet, Orhan Firat, and Nando de Freitas. Reinforced self-training (rest) for language modeling,
2023. URL https://arxiv.org/abs/2308.08998.
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio C´esar Teodoro Mendes, Allie Del Giorno, Sivakanth
Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital
Shah, Harkirat Singh Behl, Xin Wang, S´ebastien Bubeck, Ronen Eldan, Adam Tauman Kalai,
Yin Tat Lee, and Yuanzhi Li. Textbooks are all you need, 2023. URL https://arxiv.org/
abs/2306.11644.
Tom Gunter, Zirui Wang, Chong Wang, Ruoming Pang, Andy Narayanan, Aonan Zhang, Bowen
Zhang, Chen Chen, Chung-Cheng Chiu, David Qiu, Deepak Gopinath, Dian Ang Yap, Dong
Yin, Feng Nan, Floris Weers, Guoli Yin, Haoshuo Huang, Jianyu Wang, Jiarui Lu, John Pee-
bles, Ke Ye, Mark Lee, Nan Du, Qibin Chen, Quentin Keunebroek, Sam Wiseman, Syd Evans,
Tao Lei, Vivek Rathod, Xiang Kong, Xianzhi Du, Yanghao Li, Yongqiang Wang, Yuan Gao,
Zaid Ahmed, Zhaoyang Xu, Zhiyun Lu, Al Rashid, Albin Madappally Jose, Alec Doane, Alfredo
Bencomo, Allison Vanderby, Andrew Hansen, Ankur Jain, Anupama Mann Anupama, Areeba
Kamal, Bugu Wu, Carolina Brum, Charlie Maalouf, Chinguun Erdenebileg, Chris Dulhanty, Do-
minik Moritz, Doug Kang, Eduardo Jimenez, Evan Ladd, Fangping Shi, Felix Bai, Frank Chu,
Fred Hohman, Hadas Kotek, Hannah Gillis Coleman, Jane Li, Jeffrey Bigham, Jeffery Cao, Jeff
Lai, Jessica Cheung, Jiulong Shan, Joe Zhou, John Li, Jun Qin, Karanjeet Singh, Karla Vega,
Kelvin Zou, Laura Heckman, Lauren Gardiner, Margit Bowler, Maria Cordell, Meng Cao, Nicole
Hay, Nilesh Shahdadpuri, Otto Godwin, Pranay Dighe, Pushyami Rachapudi, Ramsey Tantawi,
Roman Frigg, Sam Davarnia, Sanskruti Shah, Saptarshi Guha, Sasha Sirovica, Shen Ma, Shuang
Ma, Simon Wang, Sulgi Kim, Suma Jayaram, Vaishaal Shankar, Varsha Paidi, Vivek Kumar,
Xin Wang, Xin Zheng, Walker Cheng, Yael Shrager, Yang Ye, Yasu Tanaka, Yihao Guo, Yun-
song Meng, Zhao Tang Luo, Zhi Ouyang, Alp Aygar, Alvin Wan, Andrew Walkingshaw, Andy
Narayanan, Antonie Lin, Arsalan Farooq, Brent Ramerth, Colorado Reed, Chris Bartels, Chris
Chaney, David Riazati, Eric Liang Yang, Erin Feldman, Gabriel Hochstrasser, Guillaume Seguin,
Irina Belousova, Joris Pelemans, Karen Yang, Keivan Alizadeh Vahid, Liangliang Cao, Mah-
yar Najibi, Marco Zuliani, Max Horton, Minsik Cho, Nikhil Bhendawade, Patrick Dong, Piotr
Maj, Pulkit Agrawal, Qi Shan, Qichen Fu, Regan Poston, Sam Xu, Shuangning Liu, Sushma
Rao, Tashweena Heeramun, Thomas Merth, Uday Rayala, Victor Cui, Vivek Rangarajan Sridhar,
Wencong Zhang, Wenqi Zhang, Wentao Wu, Xingyu Zhou, Xinwen Liu, Yang Zhao, Yin Xia,
Zhile Ren, and Zhongzheng Ren. Apple intelligence foundation language models, 2024. URL
https://arxiv.org/abs/2407.21075.
Kshitij Gupta, Benjamin Th´erien, Adam Ibrahim, Mats L. Richter, Quentin Anthony, Eugene
Belilovsky, Irina Rish, and Timoth´ee Lesort. Continual pre-training of large language models:
How to (re)warm your model?, 2023. URL https://arxiv.org/abs/2308.04014.
Suchin Gururangan, Ana Marasovi´c, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey,
and Noah A. Smith. Don’t stop pretraining: Adapt language models to domains and tasks. In
Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (eds.), Proceedings of the 58th
Annual Meeting of the Association for Computational Linguistics, pp. 8342–8360, Online, July
2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.740. URL
https://aclanthology.org/2020.acl-main.740.
Yikun Han, Chunjiang Liu, and Pengfei Wang. A comprehensive survey on vector database: Storage
and retrieval technique, challenge, 2023. URL https://arxiv.org/abs/2310.11703.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Ja-
cob Steinhardt. Measuring massive multitask language understanding. In International Confer-
ence on Learning Representations, 2021. URL https://openreview.net/forum?id=
d7KBjmI3GmQ.
Remco van der Hofstad. Random Graphs and Complex Networks. Cambridge Series in Statistical
and Probabilistic Mathematics. Cambridge University Press, 2016.
Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. Unnatural instructions: Tun-
ing language models with (almost) no human labor.
In Anna Rogers, Jordan Boyd-Graber,
and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers), pp. 14409–14428, Toronto, Canada, July
2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.806. URL
https://aclanthology.org/2023.acl-long.806.
Jiaxin Huang, Shixiang Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han.
Large language models can self-improve.
In Houda Bouamor, Juan Pino, and Kalika Bali
(eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Pro-
cessing, pp. 1051–1068, Singapore, December 2023. Association for Computational Linguis-
tics. doi: 10.18653/v1/2023.emnlp-main.67. URL https://aclanthology.org/2023.
emnlp-main.67.
Adam Ibrahim, Benjamin Th´erien, Kshitij Gupta, Mats L. Richter, Quentin Anthony, Timoth´ee
Lesort, Eugene Belilovsky, and Irina Rish. Simple and scalable strategies to continually pre-train
large language models, 2024. URL https://arxiv.org/abs/2403.08763.
Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. Large language
models struggle to learn long-tail knowledge. In Proceedings of the 40th International Conference
on Machine Learning, ICML’23. JMLR.org, 2023.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child,
Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language
models, 2020. URL https://arxiv.org/abs/2001.08361.
Richard M Karp. The transitive closure of a random digraph. Random Structures & Algorithms, 1
(1):73–93, 1990.
Ronald Kemker, Marc McClure, Angelina Abitino, Tyler L. Hayes, and Christopher Kanan. Measuring catastrophic forgetting in neural networks. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI’18/IAAI’18/EAAI’18. AAAI Press, 2018. ISBN 978-1-57735-800-8.
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A.
Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Has-
sabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic for-
getting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–
3526, 2017. doi: 10.1073/pnas.1611835114. URL https://www.pnas.org/doi/abs/
10.1073/pnas.1611835114.
Hunter Lang, Monica N Agrawal, Yoon Kim, and David Sontag. Co-training improves prompt-
based learning for large language models. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song,
Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International
Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research,
pp. 11985–12003. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/
v162/lang22a.html.
Dong-Hyun Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep
neural networks. ICML 2013 Workshop: Challenges in Representation Learning, 2013.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal,
Heinrich K¨uttler, Mike Lewis, Wen-tau Yih, Tim Rockt¨aschel, Sebastian Riedel, and Douwe
Kiela. Retrieval-augmented generation for knowledge-intensive nlp tasks. In Proceedings of the
34th International Conference on Neural Information Processing Systems, NIPS ’20, Red Hook,
NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546.
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ra-
masesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam
Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with lan-
guage models, 2022. URL https://arxiv.org/abs/2206.14858.
Haoran Li, Qingxiu Dong, Zhengyang Tang, Chaojun Wang, Xingxing Zhang, Haoyang Huang,
Shaohan Huang, Xiaolong Huang, Zeqiang Huang, Dongdong Zhang, Yuxian Gu, Xin Cheng,
Xun Wang, Si-Qing Chen, Li Dong, Wei Lu, Zhifang Sui, Benyou Wang, Wai Lam, and Furu
Wei. Synthetic data (almost) from scratch: Generalized instruction tuning for language models,
2024. URL https://arxiv.org/abs/2402.13064.
Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy
Liang, and Tatsunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following
models. https://github.com/tatsu-lab/alpaca_eval, 5 2023a.
Yuanzhi Li, S´ebastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee.
Textbooks are all you need ii: phi-1.5 technical report, 2023b. URL https://arxiv.org/
abs/2309.05463.
Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization
Branches Out, pp. 74–81, Barcelona, Spain, July 2004. Association for Computational Linguis-
tics. URL https://aclanthology.org/W04-1013.
Hao Liu, Matei Zaharia, and Pieter Abbeel. Ring attention with blockwise transformers for near-infinite context. In NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, 2023. URL https://openreview.net/forum?id=xulyCXgIWH.
David Lopez-Paz and Marc’Aurelio Ranzato. Gradient episodic memory for continual learning.
Advances in neural information processing systems, 30:6467–6476, 2017.
Pratyush Maini, Skyler Seto, Richard Bai, David Grangier, Yizhe Zhang, and Navdeep Jaitly.
Rephrasing the web: A recipe for compute and data-efficient language modeling.
In Lun-
Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the 62nd Annual Meet-
ing of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 14044–
14072, Bangkok, Thailand, August 2024. Association for Computational Linguistics. URL
https://aclanthology.org/2024.acl-long.757.
Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: The
sequential learning problem. In Gordon H. Bower (ed.), Psychology of Learning and Motivation,
volume 24 of Psychology of Learning and Motivation, pp. 109–165. Academic Press, 1989. doi:
https://doi.org/10.1016/S0079-7421(08)60536-8. URL https://www.sciencedirect.
com/science/article/pii/S0079742108605368.
Nick Mecklenburg, Yiyou Lin, Xiaoxiao Li, Daniel Holstein, Leonardo Nunes, Sara Malvar, Bruno
Silva, Ranveer Chandra, Vijay Aski, Pavan Kumar Reddy Yannam, Tolga Aktas, and Todd
Hendry. Injecting new knowledge into large language models via supervised fine-tuning, 2024.
URL https://arxiv.org/abs/2404.00213.
Kevin Meng, David Bau, Alex J Andonian, and Yonatan Belinkov. Locating and editing factual asso-
ciations in GPT. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.),
Advances in Neural Information Processing Systems, 2022. URL https://openreview.
net/forum?id=-h6WAS6eE4.
Kevin Meng, Arnab Sen Sharma, Alex J Andonian, Yonatan Belinkov, and David Bau. Mass-editing
memory in a transformer. In The Eleventh International Conference on Learning Representations,
2023. URL https://openreview.net/forum?id=MkbcAHIYgyS.
Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen tau Yih, Pang Wei Koh, Mohit Iyyer,
Luke Zettlemoyer, and Hannaneh Hajishirzi. Factscore: Fine-grained atomic evaluation of fac-
tual precision in long form text generation, 2023. URL https://arxiv.org/abs/2305.
14251.
Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D Manning. Fast
model editing at scale. In International Conference on Learning Representations, 2022. URL
https://openreview.net/pdf?id=0DcZxeWfOPt.
Niklas Muennighoff, Alexander M Rush, Boaz Barak, Teven Le Scao, Nouamane Tazi, Aleksan-
dra Piktus, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. Scaling data-constrained language
models. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL
https://openreview.net/forum?id=j5BuTrEj35.
Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry Tworek, Qiming
Yuan, Nikolas Tezak, Jong Wook Kim, Chris Hallacy, Johannes Heidecke, Pranav Shyam, Boris
Power, Tyna Eloundou Nekoul, Girish Sastry, Gretchen Krueger, David Schnurr, Felipe Petroski
Such, Kenny Hsu, Madeleine Thompson, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter
Welinder, and Lilian Weng. Text and code embeddings by contrastive pre-training, 2022. URL
https://arxiv.org/abs/2201.10005.
Ani Nenkova, Rebecca Passonneau, and Kathleen McKeown. The pyramid method: Incorporating human content selection variation in summarization evaluation. ACM Trans. Speech Lang. Process., 4(2):4–es, May 2007. ISSN 1550-4875. doi: 10.1145/1233912.1233913. URL https://doi.org/10.1145/1233912.1233913.
Cuong V Nguyen, Yingzhen Li, Thang D Bui, and Richard E Turner. Variational continual learning.
arXiv preprint arXiv:1710.10628, 2017.
OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Floren-
cia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red
Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Moham-
mad Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher
Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brock-
man, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann,
Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis,
Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey
Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux,
Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila
Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix,
Sim´on Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian Gib-
son, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan
Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, Yufei Guo, Chris Hal-
lacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan
Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu,
Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun
Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Łukasz Kaiser, Ali Ka-
mali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook
Kim, Christina Kim, Yongjik Kim, Jan Hendrik Kirchner, Jamie Kiros, Matt Knight, Daniel
Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen
Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel
Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie Lin, Mateusz Litwin, Theresa Lopez,
Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv
Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, Scott Mayer McKinney,
Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick,
Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel
Mossing, Tong Mu, Mira Murati, Oleg Murk, David M´ely, Ashvin Nair, Reiichiro Nakano, Ra-
jeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O’Keefe,
Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel
Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe
de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto, Michael, Pokorny,
Michelle Pokrass, Vitchyr H. Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl,
Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra
Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders,
Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Sel-
sam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor,
Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky,
Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang,
Nikolas Tezak, Madeleine B. Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Pre-
ston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cer´on Uribe, Andrea Vallone, Arun Vi-
jayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan
Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng,
Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Work-
man, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming
Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao
Zheng, Juntang Zhuang, William Zhuk, and Barret Zoph. Gpt-4 technical report, 2024. URL
https://arxiv.org/abs/2303.08774.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong
Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kel-
ton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike,
and Ryan Lowe. Training language models to follow instructions with human feedback. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 27730–27744. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/b1efde53be364a73914f58805a001731-Paper-Conference.pdf.
Oded Ovadia, Menachem Brief, Moshik Mishaeli, and Oren Elisha. Fine-tuning or retrieval? com-
paring knowledge injection in llms, 2024. URL https://arxiv.org/abs/2312.05934.
Richard Yuanzhe Pang, Alicia Parrish, Nitish Joshi, Nikita Nangia, Jason Phang, Angelica Chen,
Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, and Samuel Bowman. QuALITY:
Question answering with long input texts, yes!
In Marine Carpuat, Marie-Catherine de Marn-
effe, and Ivan Vladimir Meza Ruiz (eds.), Proceedings of the 2022 Conference of the North
American Chapter of the Association for Computational Linguistics: Human Language Technolo-
gies, pp. 5336–5358, Seattle, United States, July 2022. Association for Computational Linguis-
tics. doi: 10.18653/v1/2022.naacl-main.391. URL https://aclanthology.org/2022.
naacl-main.391.
Jupinder Parmar, Sanjev Satheesh, Mostofa Patwary, Mohammad Shoeybi, and Bryan Catanzaro.
Reuse, don’t retrain: A recipe for continued pretraining of language models, 2024. URL https:
//arxiv.org/abs/2407.07263.
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning
with gpt-4, 2023. URL https://arxiv.org/abs/2304.03277.
Pinecone. Rag with pinecone, 2024. URL https://www.pinecone.io/solutions/rag/.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions
for machine comprehension of text, 2016. URL https://arxiv.org/abs/1606.05250.
Vinay Venkatesh Ramasesh, Aitor Lewkowycz, and Ethan Dyer. Effect of scale on catastrophic
forgetting in neural networks. In International Conference on Learning Representations, 2022.
URL https://openreview.net/forum?id=GhVS8_yPeEa.
R. Ratcliff. Connectionist models of recognition memory: Constraints imposed by learning and
forgetting functions. Psychological Review, 97(2):285–308, 1990. doi: 10.1037/0033-295X.97.
2.285.
Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. iCaRL: Incremental classifier and representation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2001–2010, 2017.
Anthony Robins. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 7(2):
123–146, 1995.
Baptiste Rozi`ere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi
Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, J´er´emy Rapin, Artyom Kozhevnikov, Ivan Ev-
timov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong,
Alexandre D´efossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier,
Thomas Scialom, and Gabriel Synnaeve. Code llama: Open foundation models for code, 2024.
URL https://arxiv.org/abs/2308.12950.
Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray
Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint
arXiv:1606.04671, 2016.
Jeffrey C. Schlimmer and Douglas Fisher. A case study of incremental concept induction. In Pro-
ceedings of the Fifth AAAI National Conference on Artificial Intelligence, AAAI’86, pp. 496–501.
AAAI Press, 1986.
Raphael Schumann and Ines Rehbein. Active learning via membership query synthesis for semi-
supervised sentence classification. In Mohit Bansal and Aline Villavicencio (eds.), Proceedings
of the 23rd Conference on Computational Natural Language Learning (CoNLL), pp. 472–481,
Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/
v1/K19-1044. URL https://aclanthology.org/K19-1044.
H. Scudder. Probability of error of some adaptive pattern-recognition machines. IEEE Transactions
on Information Theory, 11(3):363–371, 1965. doi: 10.1109/TIT.1965.1053799.
Claude Elwood Shannon. Prediction and entropy of printed english. Bell System Technical
Journal, 30:50–64, January 1951. URL http://languagelog.ldc.upenn.edu/myl/
Shannon1950.pdf.
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang,
Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathe-
matical reasoning in open language models, 2024. URL https://arxiv.org/abs/2402.
03300.
Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep gener-
ative replay. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan,
and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 30. Cur-
ran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/
paper/2017/file/0efbe98067c6c73dba1250d2beaa81f9-Paper.pdf.
Charlie Snell, Dan Klein, and Ruiqi Zhong. Learning by distilling context, 2022. URL https:
//arxiv.org/abs/2209.15189.
G. C. Stine. Skepticism, relevant alternatives, and deductive closure. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 29(4):249–261, 1976. ISSN 00318116, 15730883. URL http://www.jstor.org/stable/4319027.
Yu Sun, Xinhao Li, Karan Dalal, Jiarui Xu, Arjun Vikram, Genghan Zhang, Yann Dubois, Xinlei
Chen, Xiaolong Wang, Sanmi Koyejo, Tatsunori Hashimoto, and Carlos Guestrin. Learning to
(learn at test time): Rnns with expressive hidden states, 2024. URL https://arxiv.org/
abs/2407.04620.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy
Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model.
https://github.com/tatsu-lab/stanford_alpaca, 2023.
Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. Efficient transformers: A survey, 2022.
URL https://arxiv.org/abs/2009.06732.
TogetherAI. Redpajama: an open dataset for training large language models, 2023. URL https:
//github.com/togethercomputer/RedPajama-Data.
Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Shengyi Huang, Kashif Rasul,
Alvaro Bartolome, Alexander M. Rush, and Thomas Wolf. The Alignment Handbook, 2023. URL
https://github.com/huggingface/alignment-handbook.
Pablo Villalobos, Anson Ho, Jaime Sevilla, Tamay Besiroglu, Lennart Heim, and Marius Hobbhahn.
Will we run out of data? limits of llm scaling based on human-generated data, 2024.
Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Courna-
peau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, St´efan J. van der
Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nel-
son, Eric Jones, Robert Kern, Eric Larson, C J Carey, ˙Ilhan Polat, Yu Feng, Eric W. Moore,
Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero,
Charles R. Harris, Anne M. Archibald, Antˆonio H. Ribeiro, Fabian Pedregosa, Paul van Mul-
bregt, and SciPy 1.0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing
in Python. Nature Methods, 17:261–272, 2020. doi: 10.1038/s41592-019-0686-2.
Alex Wang, Richard Yuanzhe Pang, Angelica Chen, Jason Phang, and Samuel R. Bowman. SQuAL-
ITY: Building a long-document summarization dataset the hard way. In Yoav Goldberg, Zornitsa
Kozareva, and Yue Zhang (eds.), Proceedings of the 2022 Conference on Empirical Methods in
Natural Language Processing, pp. 1139–1156, Abu Dhabi, United Arab Emirates, December
2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.75. URL
https://aclanthology.org/2022.emnlp-main.75.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language
models. In The Eleventh International Conference on Learning Representations, 2023a. URL
https://openreview.net/forum?id=1PL1NIMMrw.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and
Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions.
In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual
Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 13484–
13508, Toronto, Canada, July 2023b. Association for Computational Linguistics. doi: 10.18653/
v1/2023.acl-long.754. URL https://aclanthology.org/2023.acl-long.754.
Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan Wilcox, Chengxu Zhuang, Juan Ciro,
Rafael Mosquera, Bhargavi Paranjabe, Adina Williams, Tal Linzen, and Ryan Cotterell (eds.).
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Lan-
guage Learning, Singapore, December 2023. Association for Computational Linguistics. URL
https://aclanthology.org/2023.conll-babylm.0.
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du,
Andrew M. Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In Interna-
tional Conference on Learning Representations, 2022. URL https://openreview.net/
forum?id=gEZrGCozdqR.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi,
Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language
models. In Proceedings of the 36th International Conference on Neural Information Processing
Systems, NIPS ’22, Red Hook, NY, USA, 2024. Curran Associates Inc. ISBN 9781713871088.
Tongtong Wu, Linhao Luo, Yuan-Fang Li, Shirui Pan, Thuy-Trang Vu, and Gholamreza Haffari.
Continual learning for large language models: A survey, 2024. URL https://arxiv.org/
abs/2402.01364.
Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V. Le. Self-training with noisy student
improves imagenet classification. In 2020 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), pp. 10684–10695, 2020. doi: 10.1109/CVPR42600.2020.01070.
I. Zeki Yalniz, Herv´e J´egou, Kan Chen, Manohar Paluri, and Dhruv Mahajan. Billion-scale semi-
supervised learning for image classification, 2019. URL https://arxiv.org/abs/1905.
00546.
Zitong Yang, Michal Lukasik, Vaishnavh Nagarajan, Zonglin Li, Ankit Rawat, Manzil Zaheer, Aditya K Menon, and Sanjiv Kumar. ResMem: Learn what you can and memorize the rest. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (eds.), Advances in Neural Information Processing Systems, volume 36, pp. 60768–60790. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/bf0857cb9a41c73639f028a80301cdf0-Paper-Conference.pdf.
Dong Yuan, Eti Rastogi, Gautam Naik, Sree Prasanna Rajagopal, Sagar Goyal, Fen Zhao, Bharath
Chintagunta, and Jeff Ward. A continued pretrained llm approach for automatic medical note
generation, 2024a. URL https://arxiv.org/abs/2403.09057.
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu,
and Jason Weston. Self-rewarding language models, 2024b. URL https://arxiv.org/
abs/2401.10020.
Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence.
In International Conference on Machine Learning, pp. 3987–3995. PMLR, 2017.
Dan Zhang, Sining Zhoubian, Yisong Yue, Yuxiao Dong, and Jie Tang. Rest-mcts*: Llm self-
training via process reward guided tree search, 2024a. URL https://arxiv.org/abs/
2406.03816.
Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. Bertscore:
Evaluating text generation with bert. In International Conference on Learning Representations,
2020. URL https://openreview.net/forum?id=SkeHuCVFDr.
Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B.
Hashimoto. Benchmarking large language models for news summarization. Transactions of the
Association for Computational Linguistics, 12:39–57, 2024b. doi: 10.1162/tacl a 00632. URL
https://aclanthology.org/2024.tacl-1.3.
Yanli Zhao, Andrew Gu, Rohan Varma, Liang Luo, Chien-Chin Huang, Min Xu, Less Wright,
Hamid Shojanazeri, Myle Ott, Sam Shleifer, Alban Desmaison, Can Balioglu, Pritam Damania,
Bernard Nguyen, Geeta Chauhan, Yuchen Hao, Ajit Mathews, and Shen Li. Pytorch fsdp: Expe-
riences on scaling fully sharded data parallel. Proc. VLDB Endow., 16(12):3848–3860, aug 2023.
ISSN 2150-8097. doi: 10.14778/3611540.3611569. URL https://doi.org/10.14778/
3611540.3611569.
Zexuan Zhong, Zhengxuan Wu, Christopher Manning, Christopher Potts, and Danqi Chen.
MQuAKE: Assessing knowledge editing in language models via multi-hop questions. In Houda
Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empiri-
cal Methods in Natural Language Processing, pp. 15686–15702, Singapore, December 2023.
Association for Computational Linguistics.
doi: 10.18653/v1/2023.emnlp-main.971. URL
https://aclanthology.org/2023.emnlp-main.971.
Chen Zhu, Ankit Singh Rawat, Manzil Zaheer, Srinadh Bhojanapalli, Daliang Li, Felix Yu, and
Sanjiv Kumar. Modifying memories in transformer models, 2020.
CODEBASE, DATASET, AND MODEL WEIGHTS
We provide the codebase for reproducing all results discussed in the paper below:
https://github.com/ZitongYang/Synthetic_Continued_Pretraining.git
We release the 455M EntiGraph corpus below:
https://huggingface.co/datasets/zitongyang/entigraph-quality-corpus
We release the EntiGraph CPT model weights below:
https://huggingface.co/zitongyang/llama-3-8b-entigraph-quality
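As a convenience, the snippet below is a minimal sketch of loading these released artifacts with the Hugging Face datasets and transformers libraries; the dataset split name and the example prompt are assumptions, not part of the release.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the released 455M-token EntiGraph corpus (the split name "train" is an assumption).
corpus = load_dataset("zitongyang/entigraph-quality-corpus", split="train")
print(corpus)

# Load the EntiGraph CPT model weights.
repo = "zitongyang/llama-3-8b-entigraph-quality"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

prompt = "Question: Which planet does the expedition in the story visit?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```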
CONTENTS
A Additional related work  28
B Details on the QuALITY dataset  28
C Training details for the main experiments  28
D Task-specific finetuning for QuALITY Question set  29
E Additional details on open-book experiments  30
   E.1 Stage 1: offline indexing  30
   E.2 Stage 2: inference-time retrieval and reranking  31
   E.3 Hyperparameter tuning  31
F Proof of Theorem 1 and other analytical formulas  32
   F.1 Curve fitting with mixture of exponential formula  35
G Synthetic data generation prompts  36
   G.1 EntiGraph Prompts  36
   G.2 Rephrase prompts  37
H Additional evaluation details of main experiments  39
   H.1 QuALITY QA question set  39
   H.2 Closed-book Summarization  40
   H.3 Additional quantitative summary evaluation  45
A ADDITIONAL RELATED WORK
Synthetic data generation. There is rich literature on using neural nets to generate synthetic
data. Many such approaches were originally developed for semi-supervised learning—self-training
and pseudo-labeling methods improve models by iteratively training them on their own predictions
(Scudder, 1965; Lee, 2013; Yalniz et al., 2019; Berthelot et al., 2019; Xie et al., 2020), and co-
training uses two models to supervise each other (Blum & Mitchell, 1998; Balcan et al., 2004).
Before language models rose to prominence, few approaches attempted to synthesize inputs. One
exception is membership query synthesis, which explored the synthesis of inputs in a supervised
learning context (Angluin, 1988; Schumann & Rehbein, 2019).
Contemporary works employ co-training (Lang et al., 2022) and self-training to improve language
model performance, often on mathematical reasoning tasks (Huang et al., 2023; Gulcehre et al.,
2023; Zhang et al., 2024a), or synthesize input-output pairs for instruction tuning, usually by con-
ditioning on a curated seed set (Wang et al., 2023b; Honovich et al., 2023; Taori et al., 2023; Peng
et al., 2023; Yuan et al., 2024b; Li et al., 2024).
Continual learning and pretraining. Continual learning is rooted in historical work on connec-
tionist networks (McCloskey & Cohen, 1989; Ratcliff, 1990) and considers learning with tasks ar-
riving in an online manner (Schlimmer & Fisher, 1986; Grossberg, 2012). The main focus is on
mitigating a neural net’s “catastrophic forgetting” of previously encountered tasks (Robins, 1995;
Goodfellow et al., 2015; Kemker et al., 2018). Approaches include regularizing parameter updates
to preserve important parameters (Nguyen et al., 2017; Zenke et al., 2017; Kirkpatrick et al., 2017);
dynamically modifying the architecture (Rusu et al., 2016; Golkar et al., 2019); and recalling or
replaying previous experiences (Rebuffi et al., 2017; Shin et al., 2017; Lopez-Paz & Ranzato, 2017).
Modern works in continued pretraining (cf. §1.1) effectively mitigate catastrophic forgetting by
scaling parameter count (Ramasesh et al., 2022) and mixing in updates on pretraining data (Ouyang
et al., 2022).
B DETAILS ON THE QUALITY DATASET
We provide additional details on the QuALITY dataset below. For each book, we execute entity extraction (Step 1, §2.2) and then analyze all pairwise relations between entities and a subset of all triplet relations (Step 2, §2.2). We provide summary statistics for the Raw and EntiGraph corpora in Figure 6.
Figure 6: Histograms over the 265 QuALITY articles and books. (a) The token count of raw articles. (b) The number of extracted entities. (c) The token count of EntiGraph synthetic data (generated for each book).
C TRAINING DETAILS FOR THE MAIN EXPERIMENTS
Continued pretraining details.
In all experiments, we continue pretraining the Llama 3 8B Base
model with a context length of 2048 and batch size of 16. We apply a linear learning rate warmup
for 5% of total steps, followed by a cosine decay with peak learning rate 5e-6. We perform full
parameter training with Fully Sharded Data Parallelism (FSDP, Zhao et al. (2023)).
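For concreteness, the schedule described above can be expressed with standard Hugging Face utilities. The snippet below is a minimal sketch, assuming a generic AdamW optimizer and a hypothetical total_steps value; it is not the exact training code.

```python
# Minimal sketch of the learning-rate schedule described above:
# linear warmup over 5% of steps, then cosine decay from a 5e-6 peak.
# `total_steps` is a placeholder; it depends on corpus size, batch size 16, context 2048.
import torch
from transformers import AutoModelForCausalLM, get_cosine_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")  # base model
total_steps = 10_000  # hypothetical step count

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-6)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.05 * total_steps),  # linear warmup for 5% of total steps
    num_training_steps=total_steps,            # cosine decay for the remainder
)
```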
EntiGraph continued pretraining details. To mitigate the forgetting of pretrained knowledge,
we perform replay with a rate of 0.1 using 1B RedPajama tokens (TogetherAI, 2023). More pre-
cisely, for each training batch, we flip a biased coin such that with 10% probability, we load the
RedPajama data instead of the EntiGraph synthetic data.
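The replay rule above amounts to a per-batch Bernoulli draw. A minimal sketch, assuming hypothetical batch iterators entigraph_batches and redpajama_batches:

```python
import random

REPLAY_RATE = 0.1  # probability of replaying RedPajama data instead of EntiGraph data

def next_training_batch(entigraph_batches, redpajama_batches):
    # Flip a biased coin for each batch: with 10% probability, replay generic
    # pretraining data to mitigate catastrophic forgetting.
    if random.random() < REPLAY_RATE:
        return next(redpajama_batches)
    return next(entigraph_batches)
```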
Raw continued pretraining details. Next, we provide details for our continued pretraining di-
rectly on the Raw corpus, producing the “Raw CPT” model. Because the Raw corpus only has 1.3M
tokens, we jointly tune the number of epochs (repetition factor) and the RedPajama replay rate on
accuracy over a QuALITY QA validation split. The selected hyperparameter configuration uses 4
epochs and a 0.1 replay rate.
Instruction tuning details. We use the UltraChat instruction tuning dataset (Ding et al., 2023)
filtered by the Huggingface team (Tunstall et al., 2023) as our instruction tuning data. We use the
chat template of Llama 3.1 8B Instruct (Dubey et al., 2024) to format the UltraChat conversations,
obtaining a 250M token instruction tuning dataset. We apply a linear learning rate warmup followed
by a cosine decay to 0 with peak learning rate 5e-6, and train the model for 1 epoch with a batch size
of 512 and context window of 2048. To sanity check our instruction tuning procedure, we measure
the AlpacaEval (Li et al., 2023a) winrate against GPT-4 and find it improves from 0% to 6.25%,
comparable to a 7.7% baseline winrate of Llama 2 Chat 13B.
Compute resource. All the continued pretraining experiments are performed with one 8×H100 node. With PyTorch FSDP (Zhao et al., 2023), we obtain a throughput of 6090 tokens per second. Since all experiments use the same model architecture, batch size, and context length, the time to run the experiments can be calculated based on the total tokens seen during training. For example, the main EntiGraph model is trained on 455M tokens for 2 epochs; it should therefore take 455M × 2 / 6090 seconds, which is about 41 hours.
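This back-of-the-envelope estimate can be reproduced directly; the snippet below is just the stated arithmetic, not a profiler measurement.

```python
# Estimated wall-clock time for the main EntiGraph CPT run,
# assuming the measured throughput of 6090 tokens/second on one 8xH100 node.
tokens_per_epoch = 455e6
epochs = 2
throughput = 6090  # tokens per second

seconds = tokens_per_epoch * epochs / throughput
print(f"{seconds / 3600:.1f} hours")  # ~41.5 hours
```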
D TASK-SPECIFIC FINETUNING FOR QUALITY QUESTION SET
Our work considers task-agnostic synthetic data generation and continued pretraining as a way to
obtain generalizable knowledge about a domain, in a way that can later be extracted via few-shot
prompting (Brown et al., 2020) and instruction tuning (Ouyang et al., 2022).
However, if our goal is only to do well on a single task, such as question answering, then we could fine-tune a language model for that particular task. This approach worked extremely well in-domain on tasks such as SQuAD (Rajpurkar et al., 2016), but suffered from degraded performance outside the fine-tuning data distribution (Awadalla et al., 2022).
Although we do not extensively compare against task-specific finetuning, owing to the more general multi-task goals of EntiGraph, we run preliminary experiments comparing a simple QA SFT baseline to EntiGraph, and find that EntiGraph's scaling and synthetic data generation costs are generally favorable even when compared to this strong, task-specific baseline.
QA SFT. We follow the same setup as in §2.1 and §3, except that we do not prompt LMsynth to generate general knowledge about QuALITY articles. Instead, we prompt LMsynth to generate QA pairs directly:
You are an assistant to help read a article and then rephrase it in a
question answering format. The user will provide you with an article
with title, year, content. You need to generate a paraphrase of the
same article in question and answer format with multiple tags of
"Question: ..." followed by "Answer: ...". Remember to keep the
meaning and every content of the article intact, including the title,
year, etc.
We repeat this prompt many times at temperature 1.0, resulting in 28M tokens of synthetic question-answer pairs. We perform the same continued pretraining procedure as in §4.1 on Llama 3 8B and refer to this model as "QA SFT".
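As a concrete illustration, repeated sampling with the prompt above might look like the following sketch; the model name, article fields, and helper names are our own placeholders rather than the exact generation script.

```python
# Minimal sketch of repeated QA-pair synthesis at temperature 1.0.
# `articles` is a hypothetical list of dicts with "title", "year", and "content" fields.
from openai import OpenAI

client = OpenAI()

QA_SFT_PROMPT = (
    "You are an assistant to help read a article and then rephrase it in a "
    "question answering format. ..."  # full system prompt shown above
)

def synthesize_qa_pairs(articles, n_samples_per_article=10):
    outputs = []
    for article in articles:
        user_msg = f"Title: {article['title']}\nYear: {article['year']}\n\n{article['content']}"
        for _ in range(n_samples_per_article):
            response = client.chat.completions.create(
                model="gpt-4-turbo",   # assumption: any capable synthesis LM works here
                temperature=1.0,       # high temperature for diverse QA phrasings
                messages=[
                    {"role": "system", "content": QA_SFT_PROMPT},
                    {"role": "user", "content": user_msg},
                ],
            )
            outputs.append(response.choices[0].message.content)
    return outputs
```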
Figure 7: Accuracy on the QuALITY question set Qtest (y-axis) as a function of the synthetic token
count (x-axis). Comparison among EntiGraph CPT, Rephrase CPT, and QA SFT.
Results discussion. We plot the QA SFT scaling curve in Figure 7. We can see that task-specific
finetuning demonstrates a very sharp improvement in QA accuracy, consistent with prior results
showing task-specific finetuning gains for pretrained models. While QA SFT performance is high,
we note that EntiGraph attains similar performance despite being entirely task-agnostic, and the
overall dollar cost of creating the dataset is much lower for EntiGraph.
This difference in synthetic data generation cost is hidden in Figure 7, as we plot the number of training tokens rather than dollars spent to generate the synthetic data. For QA SFT, each QA question is generally short, resulting in large inefficiencies in generating this QA dataset. We found that the input token to output token ratio was large compared with Rephrase CPT and EntiGraph CPT, resulting in over $5k to generate just 28M tokens (OpenAI API pricing, Sep. 2024). This difference in cost means that further scaling became prohibitively expensive, and that EntiGraph's performance in Figure 7 is even better than it appears if we match for total cost rather than token budget.
E ADDITIONAL DETAILS ON OPEN-BOOK EXPERIMENTS
We provide additional details on our open-book experimental setup below, including our retrieval-
augmented generation (RAG, Lewis et al. (2020); Gao et al. (2024)) pipeline. As mentioned in §5,
we use a standard two-stage RAG pipeline: first, an offline stage which indexes document chunks;
second, inference-time retrieval, reranking, and placement of those chunks in a few-shot LM prompt.
E.1 STAGE 1: OFFLINE INDEXING
The purpose of the indexing stage is to construct an index over all the 265 articles and books from
the QuALITY corpus Dsource. More specifically, this stage chunks documents from the given corpus,
obtains dense vector embeddings for each chunk using an API-based embedding model, and indexes
the (embedding, chunk) pairs.
Chunking documents. We first split each document $D^{(i)} \in \{D^{(i)}\}_{i=1}^{n} = \mathcal{D}_{\mathrm{source}}$ into a set of $m_i$ document chunks $\{C^{(i)}_1, \ldots, C^{(i)}_{m_i}\}$. To perform this splitting, we use the RecursiveCharacterTextSplitter from Chase (2022), which attempts to keep all paragraphs (and then sentences, and then words) together for as long as possible, in order to preserve the semantics within each chunk. We use non-overlapping chunks and tune the chunk size in characters (hyperparameter values provided below). Lastly, because we have access to metadata about each document $D^{(i)}$ (namely, the title, author, and year of the book or article), we prepend this metadata to each document chunk. This is analogous to how a corporation building a RAG system over
their own document store could include metadata about the document (title, author, year, etc.). These
final chunks with metadata prepended are embedded, and are the ones that are retrieved and placed
in-context.
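A minimal sketch of this chunking step, using LangChain's RecursiveCharacterTextSplitter; the metadata formatting and default chunk size shown are illustrative, not the tuned values from our pipeline.

```python
# Sketch of Stage 1 chunking: split each document into non-overlapping chunks
# and prepend its metadata (title, author, year) to every chunk.
from langchain.text_splitter import RecursiveCharacterTextSplitter

def chunk_document(doc_text, title, author, year, chunk_size=1024):
    splitter = RecursiveCharacterTextSplitter(
        chunk_size=chunk_size,   # tuned over {256, 512, 1024} in the grid search below
        chunk_overlap=0,         # non-overlapping chunks
    )
    header = f"Title: {title}. Author: {author}. Year: {year}.\n"
    return [header + chunk for chunk in splitter.split_text(doc_text)]
```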
Embedding and indexing document chunks. Next, we obtain dense embeddings for all document chunks using a state-of-the-art text embedding model, OpenAI text-embedding-3-large (Neelakantan et al., 2022). Lastly, we index all (embedding, chunk) tuples using a FAISS vector store (Douze et al., 2024).
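The indexing step could be sketched as follows; the batching is simplified and the helper names are ours, not the pipeline's actual interface.

```python
# Sketch of embedding chunks with text-embedding-3-large and indexing them in FAISS.
import faiss
import numpy as np
from openai import OpenAI

client = OpenAI()

def build_index(chunks):
    response = client.embeddings.create(model="text-embedding-3-large", input=chunks)
    embeddings = np.array([item.embedding for item in response.data], dtype="float32")
    index = faiss.IndexFlatL2(embeddings.shape[1])  # Euclidean distance, as in Stage 2
    index.add(embeddings)
    return index, chunks  # keep chunks to map retrieved ids back to text
```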
E.2 STAGE 2: INFERENCE-TIME RETRIEVAL AND RERANKING
At inference time, the RAG system receives a test query q ∈ Qtest. Each query q is contextualized
with the article title and author name, as described in §3, and contains its four possible answer
choices (QuALITY is a 4-choice, multiple choice dataset). In Stage 2, we embed the query with the
API-based embedding model, retrieve K document chunks using an approximate nearest-neighbor
search, and lastly, select the k < K most relevant chunks using an API-based reranker.
Retrieving top-K document chunks. We embed q with text-embedding-3-large, and
retrieve the top-K most relevant document chunks from our indexed vector store using FAISS simi-
larity search with a Euclidean distance metric.
Reranking to obtain top-k (k < K) chunks. Next, we use a reranker to filter the K retrieved
document chunks to a smaller number of reranked chunks k. Rerankers are known to significantly
improve recall (the proportion of the time that the salient article is contained in the top chunks),
and indeed, the recall of our RAG pipelines is near-perfect (Table 3 in §5). Specifically, we pass
the query q and the list of K retrieved document chunks to a state-of-the-art reranker—Cohere
rerank-english-v3.0 (Cohere, 2024)—which returns a list of the K chunks in order from
most to least semantically relevant for the query. We take the k highest scoring chunks and place
them in our few-shot prompt.
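Stage 2 retrieval and reranking might look like the following sketch; the Cohere client usage reflects its public rerank endpoint, and the helper names and API-key handling are placeholders.

```python
# Sketch of Stage 2: embed the query, retrieve top-K chunks from FAISS,
# then rerank with Cohere rerank-english-v3.0 and keep the top-k.
import cohere
import numpy as np
from openai import OpenAI

client = OpenAI()
co = cohere.Client(api_key="<YOUR_COHERE_API_KEY>")  # placeholder key

def retrieve_and_rerank(query, index, chunks, K=128, k=8):
    q_emb = client.embeddings.create(model="text-embedding-3-large", input=[query])
    q_vec = np.array([q_emb.data[0].embedding], dtype="float32")
    _, ids = index.search(q_vec, K)                 # nearest-neighbor retrieval
    candidates = [chunks[i] for i in ids[0]]
    reranked = co.rerank(
        model="rerank-english-v3.0", query=query, documents=candidates, top_n=k
    )
    return [candidates[r.index] for r in reranked.results]
```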
Few-shot prompt formatting. Our full few-shot chain-of-thought evaluation prompts for the
open-book setting are provided in the codebase. Similar to the closed-book QA evaluation prompt,
we manually write and fact-check in-context learning examples about well-known books, to avoid
leaking knowledge from the QuALITY articles. In early experiments, we found that placing the
retrieved contexts first, followed by the question and answer choices after, significantly improved
performance compared to question-then-contexts; we use this format throughout the retrieval exper-
iments. We treat as a hyperparameter whether the reranked chunks are ordered from the best match
to worst (best first) or from the worst match to best (best last). When performing few-shot
evaluation, we follow the sampling procedure used in the closed-book experiments (Appendix H.1).
Specifically, we generate 64 responses for each question, and filter out responses that do not parse
to one of the four choices. Lastly, we randomly select one of the valid responses as the model’s final
answer.
E.3 HYPERPARAMETER TUNING
In our experiments, we compare two LMs used in the RAG pipeline above: EntiGraph CPT and its
base model, Llama 3 8B Base. As mentioned above, we fix the retrieved number of chunks to K =
128, but vary the number of reranked chunks k which are ultimately placed in the context window.
For each language model + RAG pipeline, we independently tune the following hyperparameters
with a grid search on accuracy using a QuALITY QA validation split:
• Document chunk size ∈ {256, 512, 1024}
• Rerank top-k ∈ {1, 2, 4, 8, 16}
• Order of chunks ∈ {best first, best last}
• Eval temperature ∈ {0.1, 0.3, 0.5, 0.7}
We refer the reader to our codebase for tuned hyperparameters.
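The grid itself is small enough to enumerate exhaustively; a minimal sketch, assuming a hypothetical evaluate_accuracy function over the QuALITY QA validation split:

```python
# Sketch of the RAG hyperparameter grid search described above.
from itertools import product

grid = {
    "chunk_size": [256, 512, 1024],
    "rerank_top_k": [1, 2, 4, 8, 16],
    "chunk_order": ["best_first", "best_last"],
    "eval_temperature": [0.1, 0.3, 0.5, 0.7],
}

def grid_search(evaluate_accuracy):
    best_config, best_acc = None, -1.0
    for values in product(*grid.values()):
        config = dict(zip(grid.keys(), values))
        acc = evaluate_accuracy(**config)  # accuracy on the validation split
        if acc > best_acc:
            best_config, best_acc = config, acc
    return best_config, best_acc
```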
F PROOF OF THEOREM 1 AND OTHER ANALYTICAL FORMULAS
In this section, we prove Theorem 1 and provide the derivations for several other approximation
formulas.
Proof of Theorem 1. Fix the matrix M0, we observe that
Acc(Mt) =
E[∥Mt∥1|M0]
V (V − 1)
=
(cid:88)
(i,j)∈V 2
E[1((i, j) ∈ Dt)|M0]
V (V − 1)
=
(cid:88)
(i,j)∈V 2
P[(i, j) ∈ Dt|M0]
V (V − 1)
.
For each (i, j) ∈ V 2, we define qi,j to be the probability that (i, j) is included in the set
{(xt, z1
t ), (xt, yt)}. Note that each iteration of the procedure generates a
path (xt, z1
t , yt) independently identically. So naturally qi,j does not depend on the
time t. This implies that P[(i, j) ∈ Dt|M0] = 1 − (1 − qi,j)t. Thus we can further rewrite the link
density as
t ), . . . , (xt, zkt
t , . . . , zkt
t ), (xt, z2
t , z2
Acc(Mt) =
|Dsource|
V (V − 1)
=
|Dsource|
V (V − 1)
+
+
(cid:88)
(i,j)∈V 2\Dsource
(cid:88)
(i,j)∈V 2\Dsource
P[(i, j) ∈ Dt|M0]
V (V − 1)
1 − (1 − qi,j)t
V (V − 1)
.
The remaining task is to estimate $q_{i,j}$. We say a vertex $j$ is reachable from $i$, denoted $i \sim j$, if there is a directed path from $i$ to $j$ in $M_0$. We define $\mathcal{R} = \{(u,v) \in \mathcal{V}^2 : u \neq v,\ u \sim v\}$ to be the set of all reachable pairs of vertices in $\mathcal{V}$. We note that $q_{i,j}$ is non-zero if and only if $j$ is reachable from $i$ in $M_0$. Now, for any $t \geq 1$, the function $1 - (1-x)^t$ is concave, thus by Jensen's inequality we have
$$\sum_{(i,j)\in \mathcal{V}^2 \setminus \mathcal{D}_{\mathrm{source}}} \big(1 - (1-q_{i,j})^t\big) \;\leq\; \sum_{(i,j)\in \mathcal{R}} \big(1 - (1-q_{i,j})^t\big) \;\leq\; |\mathcal{R}|\left(1 - (1-\bar{q}_{i,j})^t\right), \qquad \text{where } \bar{q}_{i,j} = \frac{\sum_{(i,j)\in\mathcal{R}} q_{i,j}}{|\mathcal{R}|}.$$
For each $(i,j)\in\mathcal{R}$, the probability $q_{i,j}$ satisfies
$$q_{i,j} = \frac{\sum_{a\neq b \in \mathcal{V}^2} \mathbf{1}\big((i,j) \in \{(a, z_1), (a, z_2), \ldots, (a, z_k), (a,b)\}\big)}{V(V-1)},$$
where $(a, z_1, z_2, \ldots, z_k, b)$ is the shortest path in $M_0$ connecting $a$ and $b$. If there is no such path, then by default the indicator equals zero. Now we look at
$$\sum_{(i,j)\in\mathcal{R}} q_{i,j} = \frac{1}{V(V-1)} \sum_{(i,j)\in\mathcal{R}} \sum_{(a,b)\in\mathcal{R}} \mathbf{1}\big((i,j) \in \{(a, z_1), \ldots, (a, z_k), (a,b)\}\big) \leq \frac{1}{V(V-1)} \sum_{(a,b)\in\mathcal{R}} \sum_{i\neq j \in \mathcal{V}^2} \mathbf{1}\big((i,j) \in \{(a, z_1), \ldots, (a, z_k), (a,b)\}\big) = \frac{1}{V(V-1)} \sum_{(a,b)\in\mathcal{R}} \ell_{a,b},$$
where $\ell_{a,b}$ is the length of the shortest path connecting $a$ to $b$. To analyze the typical shortest length of paths, we present a few classical results on directed Erdős–Rényi graphs. For any $a \in \mathcal{V}$, let $X(a)$ denote the set of vertices reachable from $a$ and let $Y(a)$ denote the set of vertices from which $a$ is reachable. Recall that $\rho(\lambda)$ is the extinction probability for the Poisson($\lambda$) branching process.
Lemma F.1 (Lemma 1 and Corollary 1 in Karp (1990)). For each vertex $a$, with probability tending to 1 as $V$ tends to infinity, there exists a constant $\beta > 0$ such that either $|X(a)| \leq \beta \log V$ or $|X(a)| = (1-\rho(\lambda))V + \Theta(\sqrt{V})$. Moreover, the probability that the latter happens tends to $1 - \rho(\lambda)$ as $V$ tends to infinity. The same is true for $Y(a)$.
For each vertex $a$, the set $X(a)$ is said to be small if $|X(a)| \leq \beta \log V$ (in which case we write $a \in S_X$) and large if $|X(a)| = (1-\rho(\lambda))V + \Theta(\sqrt{V})$ (we write $a \in L_X$). We define $S_Y$ and $L_Y$ similarly.
Lemma F.2 (Theorem 3 in Karp (1990) and Theorem 2.4.1 in Durrett (2010)). With probability tending to 1, the following statement holds for all $a$ and $b$ in $\mathcal{V}$: if $X(a)$ is large and $Y(b)$ is large, then $b$ is reachable from $a$. Moreover, if $X(a)$ is large and $Y(b)$ is large, then for any $\varepsilon > 0$ and any sufficiently small $\delta > 0$,
$$\mathbb{P}\big[\ell_{a,b} > (1+\varepsilon)\log V / \log \lambda\big] < \exp(-V^{\varepsilon\delta}).$$
With Lemma F.1 and Lemma F.2, we can now give useful estimates of $|\mathcal{R}|$. In particular, for any $\varepsilon > 0$,
$$|\mathcal{R}| = |\{(a,b)\in\mathcal{R} : a \in L_X,\ b \in L_Y\}| + |\{(a,b)\in\mathcal{R} : a \in S_X \text{ or } b \in S_Y\}| \leq (1-\rho(\lambda))^2(1+\varepsilon/4)V^2 + 2(1+\varepsilon)V\beta\log V \leq (1-\rho(\lambda))^2(1+\varepsilon/3)V(V-1),$$
with high probability. Similarly, for the lower bound,
$$|\mathcal{R}| = |\{(a,b)\in\mathcal{R} : a \in L_X,\ b \in L_Y\}| + |\{(a,b)\in\mathcal{R} : a \in S_X \text{ or } b \in S_Y\}| \geq (1-\rho(\lambda))^2(1-\varepsilon)V^2 \geq (1-\rho(\lambda))^2(1-\varepsilon)V(V-1),$$
with high probability. By a union bound over all pairs of $(a,b) \in \mathcal{R}$, we also have that
$$\sum_{(i,j)\in\mathcal{R}} q_{i,j} \leq \frac{1}{V(V-1)} \sum_{(a,b)\in\mathcal{R}} \ell_{a,b} = \frac{1}{V(V-1)} \sum_{\substack{(a,b)\in\mathcal{R} \\ a\in L_X,\, b\in L_Y}} \ell_{a,b} + \frac{1}{V(V-1)} \sum_{\substack{(a,b)\in\mathcal{R} \\ a\in S_X \text{ or } b\in S_Y}} \ell_{a,b} \leq (1-\rho(\lambda))^2(1+\varepsilon/2)\frac{\log V}{\log\lambda} + \frac{1}{V(V-1)}\, 2(1+\varepsilon)V(\beta\log V)^2 \leq (1-\rho(\lambda))^2(1+\varepsilon)\frac{\log V}{\log\lambda},$$
with probability larger than $1 - V^2\exp(-V^{\varepsilon\delta})$. Combining the above, for any $\varepsilon > 0$,
$$\bar{q}_{i,j} = \frac{\sum_{(i,j)\in\mathcal{R}} q_{i,j}}{|\mathcal{R}|} \leq \frac{(1+\varepsilon)\log V}{V(V-1)\log\lambda},$$
with high probability. Therefore, for any $\varepsilon > 0$,
$$\mathrm{Acc}(M_t) \leq \frac{|\mathcal{D}_{\mathrm{source}}|}{V(V-1)} + \frac{|\mathcal{R}|\left(1 - (1-\bar{q}_{i,j})^t\right)}{V(V-1)} \leq (1+\varepsilon)\left(p + (1-\rho(\lambda))^2\left(1 - \left(1 - \frac{(1+\varepsilon)\log V}{V(V-1)\log\lambda}\right)^t\right)\right),$$
with high probability, which completes the proof of the upper bound.
with high probability, which completes the proof of the upper bound. For the lower bound, we
observe that if i ∼ j and (i, j) ∈ R\Dsource, then qi,j ≥ 1/V (V − 1), because when i and j are
chosen in the procedure, the edge (i, j) will be added. This implies that
Acc(Mt) =
|Dsource|
V (V − 1)
+
(cid:88)
R\Dsource
1 − (1 − qi,j)t
V (V − 1)
≥
|Dsource|
V (V − 1)
(cid:32)
+
|R\Dsource|
V (V − 1)
≥ (1 − ε)
p + (1 − ρ(λ))2
(cid:32)
(cid:18)
1 −
1 −
(cid:32)
(cid:18)
1 −
1 −
1
V (V − 1)
1
V (V − 1)
(cid:19)t(cid:33)
(cid:19)t(cid:33)(cid:33)
,
with high probability which completes the proof of the lower bound.
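To build intuition for these bounds, one can simulate the edge-filling procedure directly on a directed Erdős–Rényi graph. The sketch below is our own illustrative simulation, not the analysis code, and the parameters are arbitrary.

```python
# Monte Carlo sketch: start from a directed Erdos-Renyi graph M0 with p = lambda/V,
# repeatedly sample a pair (x, y); if y is reachable from x in M0, add the edges
# (x, z) for every vertex z on a shortest path from x to y (including (x, y) itself).
# Track the link density Acc(M_t) = |edges| / (V (V - 1)) over iterations.
import random
import networkx as nx

V, lam, T = 200, 2.0, 20000
G = nx.gnp_random_graph(V, lam / V, directed=True)  # M0 stays fixed; paths use M0 only
edges = set(G.edges())                               # D_source plus synthesized edges

density = []
for t in range(T):
    x, y = random.sample(range(V), 2)
    if nx.has_path(G, x, y):
        path = nx.shortest_path(G, x, y)
        for z in path[1:]:
            edges.add((x, z))
    density.append(len(edges) / (V * (V - 1)))
# `density` traces the growth curve whose shape Lemma F.3 characterizes.
```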
To obtain a more precise description of $\mathrm{Acc}(M_t)$, we employ a Poisson branching process to approximate the cluster growth of vertices, which we now define. A Poisson($\lambda$) branching process is a model for a population evolving in time, where each individual independently gives birth to a number of children with a Poisson($\lambda$) distribution. We denote by $Z_n$ the number of individuals in the $n$-th generation, where by default $Z_0 = 1$. Then $Z_n$ satisfies the recursion $Z_n = \sum_{i=1}^{Z_{n-1}} X_{n,i}$, where $\{X_{n,i}\}_{n,i\geq 1}$ is a doubly infinite array of i.i.d. Poisson($\lambda$) random variables. The total progeny $Y_n$ is then defined as $Y_n = \sum_{i=0}^{n} Z_i$. The process $(Z_n)$ is often called a Galton–Watson branching process and the associated tree is called a Galton–Watson tree.
As in the previous proof, an accurate estimate of $\mathrm{Acc}(M_t)$ relies on understanding $q_{i,j}$, the probability that the edge $(i,j)$ will be added in each round. As before, the only edges that will be added are those connected to the giant component (i.e., $i \in L_X$ and $j \in L_Y$). The proportion of such edges converges to $C_\lambda$ as $V \to \infty$. Recall that
$$q_{i,j} = \frac{\sum_{(a,b)\in\mathcal{R}} \mathbf{1}\big((i,j)\in\{(a,z_1),(a,z_2),\ldots,(a,z_k),(a,b)\}\big)}{V(V-1)}, \qquad (3)$$
where $(a, z_1, z_2, \ldots, z_k, b)$ represents the shortest path in $M_0$ connecting $a$ and $b$. Equivalently, if we consider the tree generated by a breadth-first search in $M_0$ rooted at $i$, then since $i \sim j$, $j$ will be in the tree, and the numerator counts the total number of offspring of $j$ in the tree, including $j$ itself. This is the point at which a rigorous mathematical characterization of the tree becomes challenging. Instead, we approximate the tree and analyze its behavior. It is well known that when $p = \lambda/V$, the cluster growth (or the breadth-first search at a vertex) can be approximated by a Poisson($\lambda$) branching process (see, e.g., Hofstad (2016); Durrett (2010)). For a fixed vertex $i$, we define $T$ as a Galton–Watson tree rooted at $i$ with Poisson($\lambda$) offspring distribution and depth $L$. We use $T$ to approximate the exploration process at $i$. For $0 \leq \ell \leq L$, the number of vertices at level $L-\ell$ is approximately $\lambda^{L-\ell}$. Given that the total number of vertices in $T$ is approximately $(1-\rho(\lambda))V$, the number of vertices at level $L-\ell$ is also approximately $(1-\rho(\lambda))V(\lambda-1)/\lambda^{\ell+1}$. For each vertex at level $L-\ell$, the number of its offspring (including itself) equals $k$ with probability $p_\ell(k)$. In this case, the numerator in (3) equals $k$. Combining the above, there are around $(1-\rho(\lambda))V \cdot p_\ell(k)(1-\rho(\lambda))V(\lambda-1)/\lambda^{\ell+1}$ vertex pairs $(i,j)$ in the graph such that $i \in L_X$, $j \in L_Y$, $q_{i,j} = k/V(V-1)$, and $j$ is located at level $L-\ell$ in the tree $T$. Ultimately, we arrive at an approximation of the form
$$\mathrm{Acc}(M_t) \sim p + C_\lambda\left(1 - \sum_{\ell=0}^{\infty} \frac{\lambda-1}{\lambda^{\ell+1}} \sum_{k=1}^{\infty} p_\ell(k)\left(1 - \frac{k}{V(V-1)}\right)^t\right).$$
Beyond Erdős–Rényi graphs, the term $q_{i,j}$ may not be as explicit. We can define $C$ as the proportion of vertex pairs $(i,j)$ such that $i \sim j$ in $M_0$; then $q_{i,j}$ is nonzero for $CV(V-1)$ pairs of vertices. In this case, if we write $a_k = k/V(V-1)$ and define $\mu(k)$ as the probability that $q_{i,j} = a_k$, then we have the general formula
$$\mathrm{Acc}(M_t) \sim p + C\left(1 - \sum_{k=1}^{\infty} \mu(k)\,(1-a_k)^t\right).$$
The drawback of this formula is the lack of explicit expressions. For a given $M_0$, it is unclear how to compute the measure $\mu(\cdot)$ easily.
Next, we provide a qualitative description of the shape of such a mixture of exponentials.
Lemma F.3. For a fixed constant $0 < C < 1$ and a probability measure $\mu(\cdot)$ on $\mathbb{Z}_+$ with finite mean $m$, we define
$$f(t) = p + C\left(1 - \sum_{k=1}^{\infty} \mu(k)\left(1 - \frac{k}{V(V-1)}\right)^{tV(V-1)}\right).$$
Then there exist $0 < t_1 < t_2$ such that
$$f(t) = \begin{cases} \Theta(p+t), & \text{for } 0 \leq t \leq t_1, \\ \Theta(\log t), & \text{for } t_1 \leq t \leq t_2, \\ \Theta(1), & \text{for } t \geq t_2, \end{cases}$$
as $V \to \infty$.
Proof of Lemma F.3. Fix any $1 < t_1 < t_2$. Note that $f(t)$ is monotone increasing, concave, and always bounded by 1. We also have
$$f(t_2) \geq p + C\left(1 - \left(1 - \frac{1}{V(V-1)}\right)^{t_2 V(V-1)}\right) \geq p + C(1 - \exp(-t_2)) = \Theta(1).$$
So $f(t) = \Theta(1)$ when $t \geq t_2$. Now when $t \leq t_1$,
$$f(t) \leq p + C\left(1 - \sum_{k=1}^{\infty} \mu(k)(1 - tk)\right) \leq p + Cmt.$$
Since $f(0) = p$ and $f(t_2) \geq p + C(1 - \exp(-t_2))$, by concavity, $f(t)$ is lower bounded by $p + tC(1-\exp(-t_2))/t_2 = \Theta(p+t)$ for any $0 \leq t \leq t_1$. Finally, for $t_1 \leq t \leq t_2$, we note that $f(t_1) \leq f(t) \leq 1$, so easily $f(t) \leq \log t_1/\log t_1 \leq \log t/\log t_1 = O(\log t)$. Similarly, $f(t) \geq f(t_1)\log t_2/\log t_2 \geq \log t\,(f(t_1)/\log t_2) \geq \Omega(\log t)$. Therefore, $f(t) = \Theta(\log t)$ for any $t_1 \leq t \leq t_2$.
F.1 CURVE FITTING WITH MIXTURE OF EXPONENTIAL FORMULA
To perform curve fitting using the mixture-of-exponential formula, we approximate the infinite sum
$$\mathrm{Acc}(M_t) \sim p + C\left(1 - \sum_{k=1}^{\infty} \mu(k)\,(1-a_k)^t\right)$$
with three terms. Mathematically, we fit the empirical observations against the formula
$$y(x) = a - b_1 r_1^x - b_2 r_2^x - b_3 r_3^x,$$
where $x$ is the EntiGraph token count (in millions) and $y(x)$ is the QuALITY QA accuracy. We use the non-linear least squares method implemented by Virtanen et al. (2020). As a result of this procedure, we obtain the fitted formula
$$y(x) = 64.5456 - 13.8352 \times (0.9989)^x - 8.4705 \times (0.8961)^x - 3.932 \times (0.0546)^x.$$
For the implementation of this procedure, we refer readers to our codebase.
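A minimal sketch of this fitting step with SciPy; the data arrays shown are placeholders for the measured (token count, accuracy) pairs, not the actual measurements.

```python
# Sketch of fitting the three-term mixture-of-exponential formula
# y(x) = a - b1*r1**x - b2*r2**x - b3*r3**x with non-linear least squares.
import numpy as np
from scipy.optimize import curve_fit

def mixture_of_exponentials(x, a, b1, r1, b2, r2, b3, r3):
    return a - b1 * r1**x - b2 * r2**x - b3 * r3**x

# Placeholder data: EntiGraph token counts (in millions) and QA accuracies.
x_data = np.array([1.0, 10.0, 50.0, 100.0, 200.0, 455.0])
y_data = np.array([40.0, 45.0, 50.0, 52.0, 54.0, 56.0])

params, _ = curve_fit(
    mixture_of_exponentials, x_data, y_data,
    p0=[60.0, 10.0, 0.99, 10.0, 0.9, 5.0, 0.1],  # rough initial guess
    maxfev=20000,
)
print(params)
```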
G SYNTHETIC DATA GENERATION PROMPTS
We generate two synthetic corpora in this paper: EntiGraph (Appendix G.1) and the Rephrase base-
line (Appendix G.2). In our experiments, the Dsource is a collection of documents D, and our syn-
thetic augmentation procedure is applied to each document D ∈ Dsource. We will focus on a single
document D for the remainder of this section.
G.1 ENTIGRAPH PROMPTS
The EntiGraph procedure is described in detail in §2.2. We recap its steps below.
Step 1: Entity extraction. The first step is to extract the salient entities from the document D
using the entity extraction operation (Step 1, §2.2). The complete entity extraction
prompt is as follows:
As a knowledge analyzer, your task is to dissect and understand an
article provided by the user. You are required to perform the
following steps:
1. Summarize the Article: Provide a concise summary of the entire
article, capturing the main points and themes.
2. Extract Entities: Identify and list all significant "nouns" or
entities mentioned within the article. These entities should include
but not limited to:
* People: Any individuals mentioned in the article, using the
names or references provided.
* Places: Both specific locations and abstract spaces relevant to
the content.
* Object: Any concrete object that is referenced by the provided
content.
* Concepts: Any significant abstract ideas or themes that are
central to the article’s discussion.
Try to exhaust as many entities as possible. Your response should be
structured in a JSON format to organize the information effectively.
Ensure that the summary is brief yet comprehensive, and the list of
entities is detailed and accurate.
Here is the format you should use for your response:
{
    "summary": "<A concise summary of the article>",
    "entities": ["entity1", "entity2", ...]
}
Step 2: Relation analysis. The last step is to generate diverse descriptions of relations among
two or more entities. In our experiments, for each document D, we enumerate all entity pairs and
generate a description for each. The prompt for generating a description relating a pair of entities is
as follows:
You will act as a knowledge analyzer tasked with dissecting an
article provided by the user. Your role involves two main
objectives:
1. Rephrasing Content: The user will identify two specific entities
mentioned in the article. You are required to rephrase the
content of the article twice:
* Once, emphasizing the first entity.
* Again, emphasizing the second entity.
2. Analyzing Interactions: Discuss how the two specified entities
interact within the context of the article.
Your responses should provide clear segregation between the
rephrased content and the interaction analysis. Ensure each section
of the output include sufficient context, ideally referencing the
article’s title to maintain clarity about the discussion’s focus.
Here is the format you should follow for your response:
### Discussion of <title> in relation to <entity1>
<Rephrased content focusing on the first entity>
### Discussion of <title> in relation to <entity2>
<Rephrased content focusing on the second entity>
### Discussion of Interaction between <entity1> and <entity2>
in context of <title>
<Discussion on how the two entities interact within the article>
We also generate synthetic data involving three entities, using the prompt below:
You will act as a knowledge analyzer tasked with dissecting an
article provided by the user. Your role involves three main
objectives:
1. Rephrasing Content: The user will identify three specific
entities mentioned in the article. You are required to rephrase
the content of the article three times:
* Once, emphasizing the first entity.
* Again, emphasizing the second entity.
* Lastly, emphasizing the third entity.
2. Analyzing Interactions: Discuss how these three specified
entities interact within the context of the article.
Your responses should provide clear segregation between the
rephrased content and the interaction analysis. Ensure each section
of the output include sufficient context, ideally referencing the
article’s title to maintain clarity about the discussion’s focus.
Here is the format you should follow for your response:
### Discussion of <title> in relation to <entity1>
<Rephrased content focusing on the first entity>
### Discussion of <title> in relation to <entity2>
<Rephrased content focusing on the second entity>
### Discussion of <title> in relation to <entity3>
<Rephrased content focusing on the third entity>
### Discussion of Interaction between <entity1>, <entity2> and
<entity3> in context of <title>
<Discussion on how the three entities interact within the article>
G.2 REPHRASE PROMPTS
For the rephrase corpus, we adapt the prompt from Maini et al. (2024) to our setting of books and
articles. We provide four rephrase styles below:
Easy rephrase:
You are an assistant to help read a article and then rephrase it in
simpler terms. The user will provide you with an article with
title, year, content. You need to generate a paraphrase of the same
article using a very small vocabulary and extremely simple
sentences that a toddler will understand. Remember to keep the
meaning and every content of the article intact, including the
title, year, etc.
Medium rephrase:
You are an assistant to help read a article and then rephrase it in
different terms. The user will provide you with an article with
title, year, content. You need to generate a paraphrase of the same
article using diverse and high quality English language as in
sentences on Wikipedia. Remember to keep the meaning and every
content of the article intact, including the title, year,
etc.
Hard rephrase:
You are an assistant to help read a article and then rephrase it in
more sophisticated terms. The user will provide you with an article
with title, year, content. You need to generate a paraphrase of the
same article using very terse and abstruse language that only an
erudite scholar will understand. Remember to keep the meaning and
every content of the article intact, including the title, year,
etc.
H ADDITIONAL EVALUATION DETAILS OF MAIN EXPERIMENTS
H.1 QUALITY QA QUESTION SET
In this section, we provide more details of evaluation on the QuALITY QA test queries. Throughout
the closed-book QA experiments, we use a fixed 5-shot prompt below:
## Example 1
### Question
In the context of "Les Mis´erables", written by Victor Hugo in 1862,
what is the main setting of the novel? There is only one correct
choice.
### Choices
A. London
B. Madrid
C. Paris
D. Rome
### Thought Process and Answer
Thought process: "Les Mis´erables" is primarily set in Paris, making
C the correct choice. London, Madrid, and Rome are significant
cities in other literary works but not in Victor Hugo’s "Les
Mis´erables". There is only one correct choice.
Answer: C.
## Example 2
### Question
In the context of "Brave New World", written by Aldous Huxley in
1932, what substance is widely used in the society to control
citizens’ happiness? There is only one correct choice.
### Choices
A. Gold
B. Soma
C. Silver
D. Iron
### Thought Process and Answer
Thought process: In Aldous Huxley’s "Brave New World," Soma is used
as a means to maintain social control by ensuring citizens’
happiness, making B the correct choice. Gold, Silver, and Iron are
not the substances used for this purpose in the book.
Answer: B.
## Example 3
### Question
In the context of "Romeo and Juliet", written by William
Shakespeare in the early 1590s, what are the names of the two
feuding families? There is only one correct choice.
Choices:
A. Montague and Capulet
B. Bennet and Darcy
C. Linton and Earnshaw
D. Bloom and Dedalus
### Thought Process and Answer
Thought process: In William Shakespeare’s "Romeo and Juliet," the
two feuding families are the Montagues and the Capulets, making A
the correct choice. The Bennets and Darcys are in "Pride and
Prejudice", the Lintons and Earnshaws in "Wuthering Heights", and
Bloom and Dedalus in "Ulysses".
Answer: A.
## Example 4
### Question
In the context of "1984", written by George Orwell in 1949, what is
the name of the totalitarian leader? There is only one correct
choice.
### Choices
A. Big Brother
B. O’Brien
C. Winston Smith
D. Emmanuel Goldstein
### Thought Process and Answer
Thought process: In George Orwell’s "1984," the totalitarian leader
is known as Big Brother, making A the correct choice. O’Brien is a
character in the novel, Winston Smith is the protagonist, and
Emmanuel Goldstein is a rebel leader.
Answer: A.
## Example 5
### Question
In the context of "Moby-Dick", written by Herman Melville in 1851,
what is the name of the ship’s captain obsessed with hunting the
titular whale? There is only one correct choice.
### Choices
A. Captain Hook
B. Captain Nemo
C. Captain Flint
D. Captain Ahab
### Thought Process and Answer
Thought process: In Herman Melville’s "Moby-Dick," the ship’s
captain obsessed with hunting the whale is Captain Ahab, making D
the correct choice. Captain Nemo is in "Twenty Thousand Leagues
Under the Sea", Captain Flint in "Treasure Island", and Captain
Hook in "Peter Pan".
Answer: D.
## Example 6
If the output of the model correctly follows the format of the few-shot prompt, its last two characters
should be “A.”, “B.”, “C.”, or “D.”. However, the model sometimes cannot successfully follow
the few-shot prompting format, particularly for the continually pretrained model. As a result, in all
our evaluations, we sample the response 64 times, and only select the ones that can be parsed in the
correct format. Out of these 64 attempts, we randomly select among the valid answers to give the
final answer. Note that this is different from majority voting in self-consistency prompting (Wang
et al., 2023a).
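A minimal sketch of this parsing-and-selection procedure; the sampling call itself is abstracted behind a hypothetical sample_responses function.

```python
# Sketch of the closed-book QA answer selection: sample 64 responses,
# keep those ending in a parsable choice, then pick one valid answer at random
# (note: this is not majority voting).
import random

VALID_ENDINGS = ("A.", "B.", "C.", "D.")

def select_answer(question_prompt, sample_responses, n_samples=64):
    responses = sample_responses(question_prompt, n=n_samples)
    valid = [r.strip()[-2:] for r in responses if r.strip().endswith(VALID_ENDINGS)]
    if not valid:
        return None  # no parsable response among the samples
    return random.choice(valid)[0]  # returns "A", "B", "C", or "D"
```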
H.2 CLOSED-BOOK SUMMARIZATION
Automated evaluation metric. We design a three-stage evaluation procedure: (i) In the first stage,
we use GPT-4 (specifically, the gpt-4-turbo model as of Aug. 19, 2024) to break the summary into atomic claims, similar to Min et al. (2023); (ii) In the
second stage, we provide both the list of claims and the source article to a judge model (also GPT-4).
We ask the judge model to determine whether each claim is true or false, based on the source article.
If the claim is true, we further ask the model to determine whether the claim is salient (contributes
to the main message of the article) or cosmetic (factual details that do not help understand the main
message). (iii) Finally, for each summary, we obtain its number of false and salient claims and
normalize it by the corresponding count from the human summary. We report the average of these
normalized metrics across the QuALITY corpus articles in Figure 3.
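The final normalization step of this metric could be sketched as follows; the judge calls are abstracted behind hypothetical split_into_claims and judge_claim functions.

```python
# Sketch of the claim-based summary metric: count false and salient claims
# in a model summary and normalize by the counts from the human summary.
def score_summary(model_summary, human_summary, article, split_into_claims, judge_claim):
    def count(summary):
        false_n, salient_n = 0, 0
        for claim in split_into_claims(summary):       # stage (i): atomic claims
            verdict = judge_claim(claim, article)       # stage (ii): GPT-4 judge
            if verdict == "false":
                false_n += 1
            elif verdict == "salient":                  # true and salient
                salient_n += 1
        return false_n, salient_n

    m_false, m_salient = count(model_summary)
    h_false, h_salient = count(human_summary)
    # Stage (iii): normalize by the human summary's counts.
    return m_false / max(h_false, 1), m_salient / max(h_salient, 1)
```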
Prompts to generate summaries. For summarization evaluation with EntiGraph Instruct and Raw Instruct, we apply the following two prompts (Table 4) to obtain two summaries of increasing length. We provide three examples of summarization outputs below. For each of the three examples, we first present the human summary for this article to provide context for the example, and then present the short summary from the two summarizers.

➤ Short prompt: Summarize the article {article title} by {author name} for me.
Give a short summary of ‘‘Cosmic Yo-Yo’’ by Ross Rocklynne.

➤ Long prompt: Write an extremely long and detailed article regarding the book {article title} by {author name}.
Write an extremely long and detailed article regarding the book ‘‘Cosmic Yo-Yo’’ by Ross Rocklynne.

Table 4: Summarization prompts for EntiGraph Instruct, Raw Instruct, and Rephrase Instruct.
Example 1. The first example is “Cosmic Yo-Yo” by Ross Rocklynne.
Human summary: Bob Parker, the President of Interplanetary Hauling & Moving Co.,
sells asteroids to wealthy people on earth. Clients ask for asteroids with size parameters and
specifications, and Bob finds them in space and hauls them to earth. His company is almost
bankrupt because a rival company, Saylor & Saylor, stole his idea and now offers the same
services. Bob receives mail from Mr. Andrew S. Burnside with a request for an asteroid that
he would like to use in an upcoming wedding. Bob and his partner Queazy set out to find the
perfect asteroid for Mr. Burnside, although they know it’s a longshot. Fairly quickly, they
find one that looks perfect. The men land on the asteroid, and Bob deploys his atomic-whirl
spectroscope to test it. Suddenly, a beautiful woman interrupts him and demands that they
leave the asteroid. She pulls out her spasticizer gun before telling them that they can have it
in a month after she’s gone. Bob explains that they are desperate, but the girl retorts that her
fate is worse than death if she leaves.
Suddenly, the Saylor brothers’ ship appears, and Bob tells the girl that they have to fight this
enemy together. Wally and Billy Saylor, along with three other men, jump out of the ship.
Bob tells them that Mr. Burnside has ordered this asteroid, and the Saylor brothers say that
they received the same order. Bob quickly grabs the girl’s spasticizer while Queazy throws
his body at Billy. However, Wally manages to shoot the gun out of Bob’s hand and attack
him. Bob is knocked unconscious in the scuffle.
When Bob wakes up, he is completely alone, floating in space. He panics because he has
very little oxygen left. Finally, he hears Queazy’s voice explaining that the girl used her
ship’s technology to find them both. The mystery girl introduces herself as Starre Lowenthal,
the granddaughter of Mr. Burnside. She concedes that this entire mission was fake. She told
her grandfather that she would only marry her fiance Mac if he could get this particular
asteroid, and then she made plans to conquer and protect the asteroid so it could not be
supplied for the wedding.
Bob is confident that they can reach the Saylor brothers before they bring the asteroid back
to earth, but his plan does nothing to protect Starre from marrying a man she doesn’t love.
She agrees to help Bob and Queazy. Within five days, Bob realizes he is in love with Starre.
Starre compares her small ship to a yo-yo, and Bob gets an idea - they will use Starre’s ship
like a yo-yo to retrieve the asteroid from the Saylor brothers. Once the team catches up to
the Saylor brothers, Bob flings Starre’s ship at the asteroid several times, and Wally calls
them to tell them that they might die as a result of the damage their ship has sustained. Bob
makes it clear that they have no intention of stopping, and the Saylor brothers release the
asteroid.
EntiGraph Instruct with short prompt: Cosmic Yo-Yo is a science fiction story by Ross
Rocklynne about Bob Parker and Queazy, two interplanetary haulers who are hired to deliver
an asteroid to Earth for a wealthy client’s backyard wedding. However, they face competition
from their rivals, the Saylor brothers, who also want to deliver the asteroid. The story follows
their race to deliver the asteroid before the deadline, using a yo-yo-like device to maneuver
the asteroid and outsmart their rivals. The story is filled with action, adventure, and a touch
of romance.
Raw Instruct with short prompt: Cosmic Yo-Yo by Rocklynne, Ross is a science fiction
novel that follows the story of a young woman named Susan who discovers a mysterious
device that can transport her through time and space. She uses the device to travel to different
planets and meet different alien species, but soon realizes that her actions have consequences
and she must find a way to return home. The novel explores themes of identity, responsibility,
and the consequences of our actions.
Example 2. The second example is “Beach Scene” by Marshall King. We can see that EntiGraph
correctly gives a summary of the story while Raw Instruct mistakes the book as a painting.
Human summary: Purnie, an animal, is going to see the ocean on his fifth birthday. He
has heard stories about this place, and experiencing it firsthand is surreal for him. Purnie
is careful not to disturb the animals he sees along the way because he has frozen time, and
everything must resume normally when he unfreezes it. He knows that time-stopping is
forbidden for animals his age, but he chooses to believe that his family will be proud of his
bravery.
Finally, he sees the ocean in front of him, and he resumes time. He does a head-stand and
feels weak and dizzy. These feelings are a result of the time-stop, and he knows it. Purnie
approaches some humans on the beach. A man named Forbes is in the middle of explaining
to his captain, Benson, that he has found 17 planets to claim as his own. Forbes is hellbent
on raising his FORBES flag as soon as possible. He is eager to stake his claim to the land
and says that his mission is much bigger than real estate alone. Benson retorts that yes, his
mission is bigger than just real estate because his paperwork says that Forbes will own all of
the inhabitants of the planets he claims as well as the land.
The crew members use a special machine and find radiation emanating from Purnie. Forbes
demands that they put the animal in a box. Benson protests and reminds Forbes that it’s
against Universal Law, but Forbes insists. Purnie experiences his first-ever impulse to run
away with fear when a noose comes towards him. He goes back to pick up his fruit, and
Forbes shoots him in the leg. When the man throws the noose again, Purnie involuntarily
stops time. He drags himself up the knoll where he originally came from.
The humans are astonished when time resumes and Purnie is not where he was a split second
ago. They spot him up on top of a pile of petrified logs, and suddenly the logs fall down the
hill and pin the men down. Purnie is shocked and regretful. The whole thing was an accident.
He deliberately stops time and uses all of his remaining strength to lift the logs off of the
humans. Purnie begins to lose consciousness, and he knows that he must resume time or
he will die. After pouring all of his strength into this action, time does begin again. The
humans resume life and feel as though they have gone mad. They know that they were just
facing death by drowning, and now they are free. The logs were so heavy that it would have
taken superhuman strength to move them. Forbes, in particular, has really gone mad, and he
laughs to himself uncontrollably. Benson believes that Purnie was responsible for moving
the logs, but of course that seems physically impossible.
Purnie stares off at the beautiful ocean views and watches the men leave in their vehicle as
he dies.
EntiGraph Instruct with short prompt: Beach Scene is a short story by Marshall King
that follows a young boy named Purnie who discovers the power of stopping time. On his
fifth birthday, Purnie decides to explore the beach and use his newfound ability to stop time
to play with the animals and explore the ocean. However, his adventures take a turn when
he encounters a group of explorers who want to capture him for their own gain. Purnie must
use his powers to save himself and the animals from the explorers’ greed.
Raw Instruct with short prompt: Beach Scene by King, Marshall is a painting that depicts
a group of people enjoying a day at the beach. The painting features a vibrant blue ocean,
a sandy beach, and a group of people relaxing, playing, and swimming in the water. The
painting is known for its bright colors and lively atmosphere, capturing the joy and energy
of a day at the beach.
Example 3. The third example is “The Madman From Earth” by “Keith Laumer”.
Human summary: Following the departure of Consul Whaffle, Retief has taken over
as Consul for the Terrestrial States with the Terrestrial Consulate General on the planet
Groac. His administrative assistant, Miss Yolanda Meuhl, wants him to attend Groacian
cultural events, but Retief is more interested in addressing the nine-year-old mystery of
the disappearance of a Terrestrial cruiser, the ISV Terrific–an event which was followed
by a coup d’etat enacted by the current Groacian government. Much to Miss Meuhl’s dis-
may, Retief shirks his cultural duties and makes his way to the Foreign Office Archives,
whereupon he is promptly barred from entering by a pale-featured Archivist speaking in
the throat-bladder vibrations of the native Groacians. Because of the Archivist’s insistence
that “outworlders” cannot access the archives, Retief begins walking back to the Consulate and
stops at a bar for a drink. At the bar, a drunken Groacian approaches Retief and threatens to cage
him and put him on display as a freak. The bartender orders the drunken Groacian out of the
bar, and Retief follows him, ultimately beating him up for information. When Retief returns
to the Consulate, Miss Meuhl informs him that two angry Groaci await him in his office. One
is Fith, an employee of the Terrestrial Desk at the Ministry of Foreign Affairs; the other is
Shluh, a representative of the Internal Police. They are there to investigate reports that Retief
has assaulted a Groacian national–an accusation Retief ignores in order to launch into his
own accusations that the Groaci were engaged in a cover-up of the whereabouts of the ISV
Terrific. Miss Meuhl continually interjects, expresses horror at Retief’s claims, and apolo-
gizes to the Groacians on behalf of the Terrestrial Consulate. Despite the persistent denials
of the Groacians, Retief continues his accusations, suggesting the coup d’etat was an effort
to silence government officials with knowledge of the truth of what happened to the cruiser
and its crew. Then he reveals what he discovered from the drunken Groacian: The crew of
the ISV Terrific had been caged and paraded through the streets of Groac and then neglected
until they died. Fith and Shluh finally admit the truth and offer to show Retief the hidden
cruiser in order to show their contrition. When Retief sees the ship, he once again accuses
the Groacians of attempting to mislead him, saying that this is a lifeboat, and he demands
to see the actual ship. Fith has had enough and threatens to arrest Retief, who yields and
goes back to the Consulate. There, Miss Meuhl is at her wits end. Retief orders her to barri-
cade herself inside the office while he goes to the Foreign Ministry to gather more evidence.
When he returns, Miss Meuhl informs him she has reported him to Regional Headquarters,
and Retief learns he has been relieved of his post. Soon after, the Groacians appoint Miss
Meuhl to his position, and Fith and Shluh enter to arrest him.
EntiGraph Instruct with short prompt: The Madman From Earth is a science fiction
novel by Keith Laumer, published in 1959. The story follows Retief, a diplomat from Earth,
who is sent to the planet Groac to investigate the disappearance of a Terrestrial cruiser and
its crew. Retief uncovers a conspiracy by the Groacian government to cover up the incident
and hide evidence from Earth. He must navigate through political intrigue, cultural misun-
derstandings, and dangerous situations to uncover the truth and bring justice to the missing
crew. The novel is a satirical take on diplomacy and bureaucracy, with Retief’s unorthodox
methods and wit standing out against the rigid and often absurd procedures of the diplomatic
corps.
Raw Instruct with short prompt: The Madman From Earth by Laumer, Keith is a science
fiction novel that follows the story of a man named John Smith who is transported to a
distant planet called Earth Two. There, he discovers that he is the only human on the planet
and must navigate a world filled with strange creatures and dangerous obstacles. Despite
his initial confusion and fear, John eventually adapts to his new surroundings and becomes
a respected member of the alien society. However, when he is eventually transported back
to Earth One, he realizes that he has been gone for over a thousand years and must grapple
with the changes that have occurred during his absence.
➤ Explicit article reference instruction: Summarize the article “Defining Decay Down” by
“David Plotz” for me.
The article ”Defining Decay Down” by David Plotz discusses how the dental industry has
changed over the years. In the past, dentists were primarily focused on treating cavities and
other dental issues, but as the prevalence of cavities decreased due to the widespread use of flu-
oride, the industry had to adapt. Dentists began to focus on cosmetic dentistry, offering services
like teeth whitening and orthodontics to appeal to the vanity of aging baby boomers. The arti-
cle also discusses how dentists have become more customer-friendly, with amenities like virtual
reality glasses and massage therapy. Despite the shift towards elective procedures, the dental
industry has continued to thrive, with dentists earning higher salaries than ever before.
➤ Implicit article reference: How has the dentistry practice in United States has changed?
The dentistry practice in the United States has undergone significant changes in recent years.
Here are some of the ways in which it has changed:
1. Focus on cosmetic dentistry: In recent years, there has been a growing demand for cosmetic
dental procedures such as teeth whitening, orthodontics, and veneers. Dentists have responded
to this demand by offering more cosmetic services.
2. Use of technology: Dentists are now using advanced technology such as intra-oral cameras,
3D imaging, and laser technology to diagnose and treat dental problems. This technology has
made dental procedures more accurate and efficient.
3. Emphasis on preventive care: Dentists are now focusing more on preventive care to help pa-
tients avoid dental problems. This includes regular check-ups, cleanings, and fluoride treatments.
4. Increased competition: With the rise of dental schools and the influx of foreign-trained den-
tists, the dental industry has become more competitive. Dentists are now offering more services
and amenities to attract and retain patients.
5. Focus on patient experience: Dentists are now focusing on providing a positive patient ex-
perience. This includes offering amenities such as virtual reality glasses, massage therapy, and
entertainment during procedures.
Overall, the dentistry practice in the United States has become more patient-centered, technology-driven, and focused on preventive care and cosmetic procedures.
➤ Cross article instruction: Compare David Plotz’s commentary style between how he analyze
American dentistry and how he discuss the movie Fight Club?
David Plotz’s commentary style is different when he analyzes American dentistry and when
he discusses the movie Fight Club. In his analysis of American dentistry, Plotz uses a critical
and investigative approach, highlighting the over-commercialization and unnecessary procedures
in the industry. He uses statistics and expert opinions to support his arguments and provides
practical advice for readers to avoid unnecessary dental procedures.
On the other hand, when discussing the movie Fight Club, Plotz adopts a more satirical and
humorous tone. He uses pop culture references and witty commentary to analyze the themes
and messages of the movie. He also provides a critical analysis of the film’s characters and their
motivations, while also acknowledging the movie’s cultural impact.
Overall, Plotz’s commentary style is more serious and investigative when analyzing American
dentistry, while his commentary on Fight Club is more satirical and humorous.
Table 5: Complete instruction following example used in Table 2 from Section 4.3.
H.3 ADDITIONAL QUANTITATIVE SUMMARY EVALUATION
Our main summarization evaluation focused on testing the summarization ability of our strongest
CPT models and the GPT baselines, which made a fair comparison to baselines such as rephrase
difficult due to the difference in total token counts.
We perform a controlled comparison between EntiGraph and Rephrase CPT under subsampling the
synthetic dataset and find that much like for the QA case, EntiGraph matches or improves upon
Rephrase CPT, though the gains here are generally smaller.
Concretely, we apply the same instruction procedure described in §4.3 to the Raw CPT and Rephrase
CPT models from §4.1, obtaining two additional instruction-tuned models that have knowledge
about QuALITY books. In addition, we also subsample 29M tokens out of the 455M token Enti-
Graph corpus to token-match the Raw and Rephrase corpus, and refer to the corresponding instruc-
tion tuned model as EntiGraph-29M.
Figure 8: Closed-book summarization: number of false claims (y-axis) versus number of salient
claims (x-axis) normalized by the human summary. Same setup as in §4.3.
Figure 8 shows that EntiGraph summaries for the short prompt have significantly fewer false claims
while having a comparable number of salient claims. The trend holds for the longer summary
prompt, with clear separation in the error bars for the false claims gap between EntiGraph and
Rephrase baselines, and overlap in the error bars for the salient claims count.
Finally, we also see clear improvements in scaling from 29M to the full EntiGraph model, with
significant reductions in false claims for both the short and long prompts, suggesting that much
like in the QA case, EntiGraph could bring improvements to knowledge-intensive downstream tasks
through additional scale.
| Zitong Yang, Neil Band, Shuangping Li, Emmanuel Candes, Tatsunori Hashimoto | Pretraining on large-scale, unstructured internet text enables language models to acquire a significant amount of world knowledge.
However, this knowledge acquisition is \emph{data-inefficient}---to learn a given fact, models must be trained on hundreds to thousands of diverse representations of it.
This poses a challenge when adapting a pretrained model to a small corpus of domain-specific documents, where each fact may appear rarely or only once.
We propose to bridge this gap with \emph{synthetic continued pretraining}: using the small domain-specific corpus to synthesize a large corpus more amenable to learning, and then performing continued pretraining on the synthesized corpus.
We instantiate this proposal with EntiGraph, a synthetic data augmentation algorithm that extracts salient entities from the source documents and then generates diverse text by drawing connections between the sampled entities.
Synthetic continued pretraining with EntiGraph enables a language model to answer questions and follow generic instructions related to the source documents without access to them.
If, instead, the source documents are available at inference time, we show that the knowledge acquired through our approach compounds with retrieval-augmented generation.
To better understand these results, we build a simple mathematical model of EntiGraph, and show how synthetic data augmentation can ``rearrange'' knowledge to enable more data-efficient learning. | Synthetic continued pretraining |