taesiri committed on
Commit d0d4acd
1 Parent(s): f1f0256

Upload papers/2402/2402.11975.tex with huggingface_hub
\pdfoutput=1

\documentclass[11pt]{article}

\usepackage{acl}

\usepackage{times}
\usepackage{latexsym}

\usepackage[T1]{fontenc}

\usepackage[utf8]{inputenc}

\usepackage{microtype}

\usepackage{inconsolata}
\usepackage{arydshln}

\usepackage{url}
\usepackage{booktabs}
\usepackage{amsfonts}
\usepackage{nicefrac}
\usepackage{xcolor}
\usepackage[export]{adjustbox}
\usepackage{dashrule}
\usepackage{multirow}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}

\def\ModelName{\texttt{COMEDY}}
\def\Dataset{\textbf{Dolphin}}
\title{Compress to Impress: Unleashing the Potential of Compressive Memory in Real-World Long-Term Conversations}

\author{
Nuo Chen$^\clubsuit$
\quad
Hongguang Li$^{\diamondsuit}$
\quad \\
{\bf Juhua Huang$^{\diamondsuit}$}
{\bf \quad Baoyuan Wang$^{\diamondsuit}$}
{\bf \quad Jia Li$^\clubsuit$}\\
\\
$^\clubsuit$Hong Kong University of Science and Technology (Guangzhou)\\ Hong Kong University of Science and Technology\\
$^{\diamondsuit}$Xiaobing.AI\\
\texttt{nchen022@connect.ust.hk}, \texttt{jialee@ust.hk}\\}

\begin{document}
\maketitle
\begin{abstract}
Existing retrieval-based methods have made significant strides in maintaining long-term conversations. However, these approaches face challenges in memory database management and accurate memory retrieval, hindering their efficacy in dynamic, real-world interactions. This study introduces a novel framework, \textbf{CO}mpressive \textbf{M}emory-\textbf{E}nhanced \textbf{D}ialogue s\textbf{Y}stems (\ModelName), which eschews traditional retrieval modules and memory databases. Instead, \ModelName~adopts a ``One-for-All'' approach, utilizing a single language model to manage memory generation, compression, and response generation. Central to this framework is the concept of \textit{compressive memory}, which integrates session-specific summaries, user-bot dynamics, and past events into a concise memory format. To support \ModelName, we curated a large-scale Chinese instruction-tuning dataset, \textbf{Dolphin}, derived from real user-chatbot interactions.
Comparative evaluations demonstrate \ModelName's superiority over traditional retrieval-based methods in producing more nuanced and human-like conversational experiences. Our code is available at \url{https://github.com/nuochenpku/COMEDY}.
\end{abstract}

\section{Introduction}

Maintaining long-term conversations has long been a pursuit of open-domain dialogue systems \cite{liu-etal-2016-evaluate, zhang-etal-2018-personalizing,Kann2022OpendomainDG,song2023improving}, commonly known as chatbots or conversational agents.
Long-term conversation refers to the ability of a conversational agent to engage in extended dialogues over multiple interactions, often spanning several days, weeks, or even months. This setting is challenging because it necessitates not only a deep understanding of the immediate dialogue context but also the retention and integration of key information from past interactions.
Effective long-term conversation requires a system to memorize or recall past dialogue snippets, contextual nuances, and user preferences, which are crucial for maintaining coherence and relevance in ongoing interactions \cite{wu-etal-2022-memformer,2022TongZhangHierarchical}.

\begin{figure*}[!t]
\vspace{-10pt}
\centering
\includegraphics[width=0.98\linewidth]{Images/frame_work.pdf}
\caption{The overview of (a) the retrieval-based methods and (b) ours: \ModelName.
}
\label{fig:framework}
\vspace{-10pt}
\end{figure*}

To acquire useful information from past conversations, the most mainstream approach in the field of long-term conversation is currently retrieval-based, as illustrated in Figure \ref{fig:framework} (a): First, previous works \cite{xu-etal-2022-long, bae-etal-2022-keep} usually employ a memory generator to summarize relevant memories from past sessions, such as user portraits. In this step, the memory generator can either be a separately trained model or a powerful large language model (LLM) like GPT4 \cite{GPT4OpenAI}. Subsequently, a dedicated memory database, or memory bank, is used to store these memories; some studies \cite{zhong2023memorybank} even store past conversational utterances directly. Going a step further, some works \cite{bae-etal-2022-keep, wang2023recursively} propose specific memory management operations to update and iterate the memory database. The final and indispensable step employs a sentence-embedding model \cite{guu2020retrieval,Lewis2020RetrievalAugmentedGF} to retrieve from the memory database the memories most relevant to the current conversation. The current conversation and the related memories are then fed into a specialized response generator to produce the final response.

Despite the notable success achieved by retrieval-based methods, they encounter several limitations that impact their overall efficacy and applicability: 1) One significant challenge is the unpredictability of performance. The system's effectiveness is contingent upon several modules (like the memory generator and retriever) working in tandem; moreover, the retriever component does not guarantee the retrieval of relevant and effective memories. Sentence-embedding models, commonly used for this purpose, may not always capture the nuances and context of the conversation accurately. 2) Another clear challenge lies in the management of the memory database. As conversations accumulate, the size and complexity of the memory database grow, making it increasingly difficult to manage. Ensuring that the stored information remains relevant and up-to-date is a constant concern, as outdated or irrelevant data can lead to inaccurate or inappropriate responses.

Moreover, the training corpora of current long-term conversation chatbots are commonly constructed either by using LLMs like ChatGPT to build personalized dialogue data or by hiring crowd-workers to simulate conversations \cite{xu-etal-2022-long}. Unlike these structured or predictable dialogues, real-world conversations can veer into a wide range of topics, include colloquial language, and incorporate nuanced expressions \cite{chen-etal-2023-orca}. Meanwhile, the memory database in real scenarios needs to store memories from multiple chatbot users, increasing the difficulty of accurately retrieving relevant memories and maintaining an up-to-date memory database. These issues present a more pronounced challenge for deploying retrieval-based methods in real-world conversations.

To address these concerns, we propose an LLM-based \textbf{CO}mpressive \textbf{M}emory-\textbf{E}nhanced \textbf{D}ialogue s\textbf{Y}stems framework (\ModelName). \ModelName~marks a significant departure from existing methodologies, as it operates without a retrieval module. At its core, \ModelName~adopts a groundbreaking ``\textit{One-for-All}'' approach, utilizing a single, unified model to manage the entire process from memory generation and compression to final response generation, as shown in Figure \ref{fig:framework} (b):
It first distills session-specific memory from past dialogues, encompassing fine-grained session summaries, including event recaps, and detailed user and bot portraits. In a break from traditional systems, \ModelName~eschews the use of a memory database for storing these insights. Instead, it reprocesses and condenses memories from all past interactions, forming a \textit{compressive memory}:
The first part is a concise record of the events that have occurred throughout all the conversations, creating a historical narrative that the system can draw upon. The second and third parts consist of a detailed user profile and the dynamic relationship changes between the user and chatbot across sessions, both derived from past conversational events. This holistic memory allows \ModelName~to generate responses that are not only contextually aware but also personalized and adaptive to the evolving nature of the user-chatbot relationship. Finally, \ModelName~integrates this compressive memory into ongoing conversations, enabling contextually memory-enhanced interactions.
Unlike retrieval-based systems that may struggle to fetch pertinent memories from a vast database, \ModelName's compressive memory is inherently designed to prioritize salient information, allowing for quicker and more accurate memory utilization.

To ensure that \ModelName~is well-suited for real-world long-term conversations and to overcome the lack of relevant labeled data,
we have methodically assembled a large-scale instruction-tuning dataset from actual online user-chatbot interactions, named \Dataset. This dataset covers three tasks: \textbf{Session-Level Memory Summarization}, \textbf{Memory Compression}, and \textbf{Memory-Grounded Response Generation}, comprising an extensive collection of 100k samples.
Dolphin is well-annotated to support each critical phase in \ModelName's operation, from memory extraction and compression to integration and response generation. This dataset lays a robust foundation for enhancing \ModelName's dialogue capabilities, ultimately leading to a more nuanced and human-like conversational experience compared to retrieval-based baselines.
\textit{Our \ModelName~has been deployed on the X Eva platform and has received over 10 million calls, indicating its widespread use and acceptance.}

Our contributions are summarized as follows:

\begin{itemize}
\item We introduce \ModelName, which represents a groundbreaking shift from traditional memory retrieval-based dialogue systems. It does not rely on any retriever module or memory database, but instead generates enhanced, memory-informed responses with compressive memory, as validated by comprehensive human evaluation.
\item We annotate a large-scale (100k) long-term conversation instruction-tuning dataset, Dolphin, from actual online user-chatbot interactions. It can strengthen compressive memory-augmented models' ability to adapt to evolving conversational styles and user preferences. To the best of our knowledge, Dolphin is currently the largest Chinese long-term memory conversation dataset.
\item \ModelName~handles the entire long-term conversation interaction with a single model, achieving a higher degree of result consistency and predictability, reducing computational overhead, and eliminating the need for data transfer between multiple models.
\item We further combine \ModelName~with the Direct Preference Optimization (DPO) \cite{rafailov2023direct} alignment strategy, and propose a simple strategy to mine effective preferred and dispreferred memory-based responses. \ModelName-DPO shows a stronger ability to generate coherent and contextually memory-grounded responses.
\end{itemize}

\section{Methodology}

In this section, we first present the problem formulation of long-term conversations in the \ModelName~style. Then, we introduce the three task definitions and the detailed data collection of Dolphin. Last, we present the SFT and DPO training details of \ModelName.

\subsection{Problem Formulation}

An episode $D$ ($D_1,..,D_{t-1}$) is composed of a sequence of previous dialogue sessions between the chatbot and a specific user. The dialogue context for a given session at time step $t$ is represented as $D_t = \{c_1, u_1, c_2, u_2, \ldots, c_t, u_t\}$, where $c$ and $u$ denote the chatbot's and user's utterances.

In \ModelName, we aim to train a well-performing model $\mathcal{M}(\theta)$ that first extracts session-level memories from the previous sessions within $D$, denoted as $M = \{m_1, m_2, \ldots, m_{t-1}\}$ (\textbf{Task 1}). Each $m$ contains natural sentences about session-level events and user profiles. Then $\mathcal{M}(\theta)$ takes $M$ as input and outputs the compressive memory $\hat{M}$, which contains detailed user portraits such as characteristics and recent states (emotional, work, etc.), and a concise record of all events (\textbf{Task 2}).
Finally, $\mathcal{M}(\theta)$ generates the forthcoming response $c_{t+1}$ based on the current dialogue context $D_t$ and $\hat{M}$ (\textbf{Task 3}).
In the following, we introduce how we annotate the labeled data for each task.
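The three tasks above chain together at inference time through one unified model. The following is a minimal Python sketch of this ``One-for-All'' flow; the \texttt{generate} helper and the instruction strings are illustrative assumptions, not the actual Dolphin prompts:

```python
# Hypothetical sketch of COMEDY's One-for-All inference flow: a single
# model M(theta) serves all three tasks via different instructions.
# Instruction wordings below are illustrative assumptions.

def generate(model, instruction, text):
    """One call to the unified model; `model` maps a prompt string to text."""
    return model(instruction + "\n" + text)

def comedy_respond(model, past_sessions, current_context):
    # Task 1: summarize each past session into a session-level memory m_i.
    session_memories = [
        generate(model, "Summarize this session's events and portraits:", s)
        for s in past_sessions
    ]
    # Task 2: compress all session-level memories into one compressive memory
    # (user profile, user-bot dynamics, concise record of events).
    compressive_memory = generate(
        model,
        "Compress these memories into a user profile, user-bot dynamics, "
        "and a concise record of events:",
        "\n".join(session_memories),
    )
    # Task 3: generate the next response grounded in the compressive memory.
    return generate(
        model,
        "Reply given the memory and the current dialogue context:",
        compressive_memory + "\n" + current_context,
    )
```

Note that no memory database or retriever appears anywhere in this flow; the compressive memory is recomputed from the session memories rather than fetched.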
\begin{table*}[t]
\begin{center}
\centering
\small
\begin{tabular}{l|ccc|ccc}
\toprule
\multirow{2}{*}{\textbf{Statistics}}& \multicolumn{3}{c|}{\textbf{Train}} & \multicolumn{3}{c}{\textbf{Test}} \\
& \textbf{Task 1} & \textbf{Task 2} & \textbf{Task 3} & \textbf{Task 1} & \textbf{Task 2} & \textbf{Task 3}\\
\midrule
{Avg. Turns Per Session} & 13.0 & - & 13.9 & 19.5 & - & 10 \\
{Avg. Sentences Per Session-level Memory} & 5.7 & - & - & 5.3 & - & - \\
{Avg. Words Per Turn} & 15.9 & - & 19.5 & 20.7 & - & 16.3 \\
{Avg. Words Per Compressive Memory} & - & 240.7 & - & - & 276.8 & - \\
\midrule
{Total AI Characters} & 3,998 & 3,998 & 3,998 & 31 & 31 & 31 \\
{Total Sessions/Compressive Memories} & \textbf{39,999} & \textbf{30,695} & \textbf{31,131} & 465 & 31 & 127 \\
{Total Turns} & 459,511 & - & 432,721 & 14,415 & - & 3,937\\
\bottomrule
\end{tabular}
\end{center}
\caption{
Data statistics of each task in Dolphin. In practice, the amount of collected Task 1 data is much larger than that of Tasks 2 and 3. To keep the training data distribution balanced, we align the three tasks to a similar volume of data.
}
\label{tab:dialogues}
\end{table*}
\subsection{Task and Datasets Collection}

The source data in Dolphin originates from X Eva\footnote{https://xeva-h5.xiaoice.com/Content/Landing}, one of the most popular Chinese AI-user social media platforms, akin to Character.AI.
A distinctive feature of Dolphin is that the AI characters within X Eva are defined by the users themselves. This means that each character can have unique personalities, backgrounds, and conversational traits, as determined by the user's input and creativity.

In the creation of the Dolphin dataset for \ModelName, we first select episodes $D$ that contain at least 15 sessions between the same user and AI character as our source dialogue data, filtering out useless and toxic information. Then we adopt an efficient LLM-annotator hybrid approach to annotate the data for each task \cite{chen-etal-2023-large}: (1) We initiate the dataset generation using GPT4-Turbo, specifically tailored for dialogue summaries and memory-grounded dialogues. This step is crucial for creating a comprehensive base of dialogues, encompassing a wide range of conversational scenarios and memory contexts; (2) Following the initial generation, three skilled annotators meticulously review and refine the data. This involves correcting inaccuracies and enhancing dialogue quality. The annotators play a vital role in bridging the gap between automated generation and the nuanced understanding required for a high-quality \ModelName.

To protect user privacy, all personal identifiers are removed from the dataset. This includes names, locations, or any specific details that could lead to the identification of individuals. Relevant discussion is presented in the Ethical Claims.

\paragraph{Task 1: Session-Level Memory Summarization.} In the process of gathering data for Task 1, we encounter a substantial challenge. The initial collection yielded over 500,000 session-level data points, making it impractical to annotate all of them through GPT4-Turbo and manual methods due to the sheer volume. To tackle this, we initially focus on annotating a subset of approximately 40,000 samples: For each dialogue session in the same episode $D$, we first require GPT4-Turbo to extract session-level memories, including the \textit{event} and the \textit{user and bot portraits}, in natural sentences. Then annotators edit the generated summaries by adding missing information or revising erroneous sentences, resulting in the session-level memory $m_n$. Utilizing the annotated subset, we then develop a specialized LLM for session-level memory generation, efficiently expanding our dataset while maintaining the quality and consistency of the session-level memory annotations across the larger dataset. Samples with no informative content, which lead to ineffective memory outputs from the LLM or GPT4-Turbo, are filtered out to maintain data quality.
As a result, in this task, we collect fine-grained memories $M = \{m_1, m_2, \ldots, m_{n}\}$ for each session in $D$.

\paragraph{Task 2: Memory Compression.} In this task, the focus is on memory compression, a crucial step in refining the data for \ModelName. GPT4-Turbo is tasked with summarizing all session-level memories $M$ in the episode from Task 1. The output from GPT4-Turbo includes: 1) A comprehensive user profile: detailing characteristics, behavioral patterns, and recent states of the user.
2) Evolving dynamics between user and bot: capturing the relationship's progression and interaction nuances.
3) A concise record of past events: summarizing key happenings and dialogues from previous sessions. Considering the potential complexity and variance in the summarization process, GPT4-Turbo is configured to generate outputs three times with a temperature setting of 0.9. This setting allows for a balance between creativity and relevance, enabling GPT4-Turbo to produce diverse and insightful summaries.
Then annotators step in to refine and calibrate the outputs, which includes:
correcting any inaccuracies or inconsistencies in the summaries;
ensuring that the summarized data accurately reflects the user profiles, relationship dynamics, and event records;
and enhancing clarity and conciseness where necessary. This hybrid approach ensures that the compressive memory $\hat{M}$ meets the high-quality standards required for the subsequent stages of \ModelName's development. We show examples of $\hat{M}$ in Table \ref{table:task2example}.

\paragraph{Task 3: Memory-Grounded Response Generation.} In this task, the process begins with integrating the compressive memory $\hat{M}$, obtained from Task 2, with the incoming conversation at time step $t$, denoted as $D_{t}$. The annotation process is similar to that of the previous tasks: Initial response drafts are generated by GPT4-Turbo, based on the integrated data of $\hat{M}$ and $D_{t}$. Annotators then review and refine these responses, focusing on aspects like relevance, coherence, and personalization. They ensure that each annotated response $c_{t+1}$ accurately reflects the user's current state and previous interactions, maintaining high memorability and engagingness.
To ensure the scale of the training data, we annotate all sessions within the day closest to the timing of the previous $D$ as the corpus of Task 3.

\begin{figure*}[!t]
\centering
\vspace{-15pt}
\includegraphics[width=0.98\linewidth]{Images/train_pipepline.pdf}
\caption{The overview training pipeline of \ModelName.
}
\label{fig:pipeline}
\vspace{-15pt}
\end{figure*}

\paragraph{Test Set} To assess the effectiveness of \ModelName, we carefully design a test set that mirrors real-world dialogue scenarios as closely as possible:
\label{test_set}
\begin{itemize}
\item We select dialogue data from the X Eva platform, specifically targeting conversations that involve the same AI-user pair engaging in over 16 sessions within a week. This criterion ensures that the dialogues have sufficient depth and continuity, which are crucial for testing memory-enhanced dialogue systems.
\item The first 15 sessions from these selected dialogues serve as the basis for generating the compressive memory, aligning with the objectives of Tasks 1 and 2 in our dataset.
\item The subsequent 1-5 sessions are then used as test scenarios to evaluate how well the model integrates the compressive memory into ongoing dialogues (Task 3). This provides a practical testbed for assessing the system's conversational abilities in an evolving context.
\end{itemize}

Of note, we also manually annotate the session-level memories and the resulting compressive memory for the first 15 sessions. They are used to evaluate the model's performance on Tasks 1 and 2. Our \textbf{Quality Control} process is described in Appendix \ref{quality}, and the prompts for each task that are fed into GPT4-Turbo are shown in Appendix \ref{prompts}.

As a result, the statistics of our dataset are shown in Table \ref{tab:dialogues}. Dolphin comprises a total of 102,882 samples across the training and test sets. Tasks 1 and 2 (memory extraction and compression) contain 39,999 and 30,695 training samples, making up a significant portion of the dataset. Task 3, which involves generating responses based on the compressive memory, comprises 31,131 dialogue sessions. A notable feature of the Dolphin dataset is its inclusion of data from 3,998 different AI characters. The diverse character data ensures that \ModelName~is well-equipped to interact with various user personalities and preferences, enhancing its adaptability and realism in user interactions.

\subsection{\ModelName}

\paragraph{SFT Training} In practice, we adopt a mixed-task training approach to develop \ModelName. This involves simultaneously training the model on the three tasks present in the Dolphin dataset: session-level memory summarization, memory compression, and memory-grounded response generation. This integration presents the model with a holistic view of the conversation process, from initial memory extraction to final response generation.
We utilize the common language modeling objective in SFT, terming the resulting model $\mathcal{M}(\theta)_{\text{sft}}$.

\begin{table}[t]
\begin{center}
\centering
\small
\begin{tabular}{lccc}
\toprule
\textbf{Model} & \textbf{BLEU-1/2} & \textbf{F1} & \textbf{Distinct-1/2}\\
\midrule
\textbf{Task 1}& \\
\ModelName-7B & 41.4 / 34.2 & 35.4 & 4.2/35.0\\
\ModelName-13B & 43.0 / 35.0 & 36.7 & 3.9/34.3 \\
\midrule
\textbf{Task 2}& \\
\ModelName-7B & 42.7 / 34.6 & 36.3 & 4.1/34.4\\
\ModelName-13B & 43.7 / 35.7 & 37.0 & 4.1/35.2 \\
\bottomrule
\end{tabular}
\end{center}
\caption{The performances of \ModelName~in Tasks 1 and 2.}
\label{tab:task2}
\vspace{-15pt}
\end{table}

\begin{table*}[t]
\begin{center}
\vspace{-15pt}
\centering
\small
\begin{tabular}{lcccccc}
\toprule
\textbf{Algorithms} & \textbf{Coherence}& \textbf{Consistency}&
\textbf{Memorability} & \textbf{Engagingness} & \textbf{Humanness} &
\textbf{Average} \\
\midrule
\textit{Context-Only} & \\
LLaMA 2-7B& 1.01&0.50& 0.11& 0.31& 1.71& 0.73 \\
LLaMA 2-13B& 0.93& 0.66& 0.19& 0.37& 1.76& 0.78\\
\midrule
\textit{Retrieval-based} & \\
ChatGPT & 1.22& 0.86& 0.37& 0.43& 1.51& 0.88 \\
LLaMA 2-13B&1.73&0.98&0.51&0.24&1.85& 1.06\\
LLaMA 2-7B& 1.70&0.94&0.54&0.31&1.91& 1.08 \\
GPT4 &1.91& 0.94& 0.60&0.52&1.69 & 1.13 \\
\midrule
\textit{Compressive Memory-based}& \\
\ModelName-ChatGPT & 1.19&1.07&0.60&0.46&1.62 & 0.99\\
\ModelName-7B& 1.67&1.11&0.60&0.39&1.85 & 1.12 \\
\ModelName-13B&1.81&1.07&0.70&0.51&1.94 & 1.21\\
\ModelName-13B DPO&1.79&\textbf{1.20}&\textbf{0.80}&0.46 & \textbf{2.09} & 1.27\\
\ModelName-GPT4 &\textbf{1.96}&1.14&0.70&\textbf{0.73}&1.85& \textbf{1.28} \\
\bottomrule
\end{tabular}
\end{center}
\caption{
Human evaluation scoring results in Task 3: memory-grounded response generation. For \ModelName-GPT4/ChatGPT, the compressive memories are generated by \ModelName-13B.
}
\label{tab:task3}
\vspace{-15pt}
\end{table*}

\paragraph{DPO Training} To align the model toward generating more coherent and contextually appropriate memory-grounded responses, we employ the Direct Preference Optimization (DPO) \cite{rafailov2023direct} strategy in Task 3. DPO aims to distill a referential SFT policy $\mathcal{M}(\theta)_{\text{sft}}$ by polarizing the preference. Specifically, DPO takes as input labeled pairs ($Y_w, Y_l$), where $Y_w$ and $Y_l$ denote the preferred and dispreferred completions. When extending DPO to memory-grounded generation, the question is: \textit{how do we obtain $Y_w$ and $Y_l$?}

To solve this, we propose a simple strategy to automatically construct useful $Y_w$ and $Y_l$ responses without human annotation. Given $\hat{M}$ and $D_t$, we ask GPT4-Turbo to generate a response $Y_w$ that must align with $\hat{M}$. Meanwhile, we also require GPT4-Turbo to generate a response $Y_l$ that directly contradicts $\hat{M}$. For example, the prompts are phrased like: ``If $\hat{M}$ shows the user likes something, you should generate the response with the meaning that \textit{the user hates it}...''. Thus, the overall training objective of DPO can be formalized as:

\begin{align*}
\mathcal{L}_{\texttt{DPO}}(\mathcal{M}(\theta); \mathcal{M}(\theta)_{\text{sft}}) = {}& -\mathbb{E}_{(x,Y_w,Y_l)\sim\mathcal{D}}
\Big[
\log\sigma \Big( \beta \log \frac{\mathcal{M}(\theta)(Y_w | x)}{\mathcal{M}(\theta)_{\text{sft}}(Y_w | x)} \\
&\qquad - \beta \log \frac{\mathcal{M}(\theta)(Y_l | x)}{\mathcal{M}(\theta)_{\text{sft}}(Y_l | x)} \Big)
\Big]
\end{align*}

where $x$ is the concatenation of $\hat{M}$ and $D_t$, and $\beta$ is a hyper-parameter.
The overview of our training pipeline is shown in Figure \ref{fig:pipeline}, and the training instructions are in Appendix \ref{prompts}.
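Numerically, this objective rewards the policy for widening the log-probability margin between $Y_w$ and $Y_l$ relative to the SFT reference. A minimal pure-Python sketch for a single $(x, Y_w, Y_l)$ pair, assuming the per-sequence log-probabilities have already been computed (this is an illustration of the loss, not the training code):

```python
# Sketch of the DPO loss for one (x, Y_w, Y_l) pair, given sequence
# log-probabilities under the policy and the frozen SFT reference.
import math

def dpo_loss(policy_logp_w, ref_logp_w, policy_logp_l, ref_logp_l, beta=0.1):
    """-log sigmoid of the scaled preference margin; average over a batch."""
    margin = beta * ((policy_logp_w - ref_logp_w)
                     - (policy_logp_l - ref_logp_l))
    # Loss shrinks as the policy prefers Y_w over Y_l more strongly
    # than the SFT reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

At a zero margin the loss equals $\log 2$; pushing probability mass away from the dispreferred $Y_l$ strictly decreases it.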
\section{Experiments}
In this section, we introduce the evaluation setting, including the experimental setup (Appendix \ref{setup}), baselines, and evaluation metrics, and then present the main results and discussions.

\subsection{Baselines}

In this work, \ModelName~is compared against models using \textbf{retrieval-based} and \textbf{context-only} approaches to highlight the efficiency and efficacy of its memory compression technique.

\paragraph{Retrieval-based Methods.} We utilize the Text2vec Chinese embedding model\footnote{https://github.com/shibing624/text2vec} in its largest version as the retriever, and index the memories with FAISS for efficient retrieval. In practice, the top 3 retrieved memories are used for testing.
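The embed-index-search flow of this baseline can be illustrated as follows. This sketch substitutes a toy character-count embedding and an exhaustive dot-product search for the actual Text2vec encoder and FAISS index, purely to show the structure; \texttt{embed} is a stand-in, not the real model:

```python
# Toy illustration of the retrieval baseline: embed every stored memory,
# then return the top-k memories most similar to the current context.
# `embed` is a placeholder for a real sentence-embedding model.

def embed(text):
    # Toy embedding: letter-frequency vector (illustrative only).
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def top_k_memories(memories, query, k=3):
    q = embed(query)
    # Exhaustive dot-product search stands in for a FAISS index lookup.
    scored = [(sum(a * b for a, b in zip(embed(m), q)), m) for m in memories]
    scored.sort(key=lambda s: -s[0])
    return [m for _, m in scored[:k]]
```

In the real pipeline, the scoring loop is replaced by a prebuilt FAISS index over the Text2vec embeddings, but the retrieved top-3 memories play the same role as inputs to the response generator.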

\paragraph{Context-only Approaches.} A comparison is also made with a context-only model, which operates without any memories, to underscore the benefits of memory integration in dialogue systems. In this setting, the model is trained on the original Task 3 data but without memory as input, ensuring a fair comparison with other models.

More broadly, we also build closed-source model pipelines with GPT4 (\texttt{gpt4-turbo}) and ChatGPT (\texttt{gpt-3.5-turbo}) based on retrieval memory and compressive memory, separately.

\begin{table*}[t]
\begin{center}
\vspace{-15pt}
\centering
\small
\begin{tabular}{lccccc}
\toprule
\textbf{Algorithms} & \textbf{Top@1 (\%)}& \textbf{Top@2 (\%)}&
\textbf{Top@3 (\%)} & \textbf{Top@4 (\%)} &
\textbf{Average Rank (\textcolor{blue}{$\downarrow$})} \\
\midrule
\textit{Context-Only} & \\
LLaMA 2-7B& 4.72&13.39& 29.13& 63.23& 3.89 \\
LLaMA 2-13B& 4.72& 18.90& 33.86& 60.08& 3.69\\
\midrule
\textit{Retrieval-based} & \\
ChatGPT & 8.91& 20.47& 45.67& 69.49& 3.48 \\
LLaMA 2-13B&12.73&45.67&66.36&84.61& 2.76\\
LLaMA 2-7B& 14.70&45.67&66.93&84.25& 2.73 \\
GPT4 &22.83& 48.03& 70.87&85.83& 2.63 \\
\midrule
\textit{Compressive Memory-based}& \\
\ModelName-ChatGPT & 9.45&25.98&48.03&69.29 &3.26\\
\ModelName-7B& 24.41&50.39&72.44&87.40 & 2.59 \\
\ModelName-13B&26.77&53.54&73.23&87.40 & 2.50\\
\ModelName-13B DPO&\textbf{29.92}&54.33&77.17&88.98 & 2.41\\
\ModelName-GPT4 &29.13&\textbf{60.63}&\textbf{81.10}&\textbf{90.55}& \textbf{2.26} \\
\bottomrule
\end{tabular}
\end{center}
\caption{
Human ranking results in Task 3: memory-grounded response generation. Here, we report 1) the percentage of generated responses ranked in the Top 1-4 for each dialogue session (Columns 2-5); and 2) the average ranking for each model (Column 6). It is possible for multiple models to share the same rank due to comparable performances.
}
\label{tab:rank}
\vspace{-15pt}
\end{table*}
402
+
403
+ \paragraph{Automatic Metrics}
404
+ We employ standard automatic metrics to measure model performance in Tasks 1\&2, including BLEU-1/2 \cite{BLEU:ACL02}, F1~\citep{ROUGE:04} and Distinct-1/2 \cite{Distinct:NAACL16}. These tasks serve as foundational steps for the crucial dialogue generation in Task 3.
\paragraph{Human-based Evaluation} The core of evaluating long-term conversation models centers on validating their performance in Task 3, which involves memory-based dialogue generation.
We follow \cite{bae-etal-2022-keep} to assess model performance across five key dimensions:
\textbf{Coherence}, \textbf{Consistency}, \textbf{Engagingness}, \textbf{Humanness} and \textbf{Memorability}.
To comprehensively measure how well the models perform in Task 3, we combine \textbf{Scoring} and \textbf{Ranking} approaches. A team of annotators is instructed to rate each model's performance on these dimensions on a scale from 0 to 3. This scoring system allows for a nuanced evaluation of the model's capabilities in each specific area. Meanwhile, another team of annotators ranks all models by their average performance across the five perspectives. While scoring offers detailed insights into each model's capabilities, ranking places these capabilities in the context of competitive performance. This dual approach ensures a balanced and holistic assessment, capturing both the individual qualities of each model and their comparative effectiveness. Each team has 3 annotators. The full rating scheme is given in Appendix \ref{scheme}.

Recognizing that different models may excel in unique ways, our ranking process is designed to appreciate the diversity in responses.
Thus, \textit{it is possible for multiple models to share the same rank}. This occurs when two or more models demonstrate comparable levels of proficiency or when they each exhibit standout qualities that are equally impressive. This ranking process reflects the complex nature of evaluating conversational LLMs, where different models can excel in different aspects.
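To make the Average Rank column concrete, tie-allowing ranking can be sketched as follows. This is an illustrative ``competition ranking'' (1-1-3) sketch under our own assumptions, not necessarily the exact procedure the annotators followed.

```python
def session_ranks(scores):
    """Assign ranks for one dialogue session (higher score = better).
    Models with equal scores share the same rank (competition ranking)."""
    order = sorted(scores, key=scores.get, reverse=True)
    ranks, prev = {}, (None, 0)
    for pos, model in enumerate(order, start=1):
        rank = prev[1] if scores[model] == prev[0] else pos
        ranks[model] = rank
        prev = (scores[model], rank)
    return ranks

def average_rank(per_session_scores):
    """Average each model's rank over all annotated sessions."""
    totals = {}
    for scores in per_session_scores:
        for model, rank in session_ranks(scores).items():
            totals[model] = totals.get(model, 0) + rank
    n = len(per_session_scores)
    return {model: total / n for model, total in totals.items()}
```

For example, if two models tie for first in a session, both receive rank 1 and the next model receives rank 3.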
\begin{figure*}[!t]
\vspace{-10pt}
\centering
\includegraphics[width=0.95\linewidth]{Images/case_study.pdf}
\caption{A typical case in real-world long-term conversation. For ease of reading, only the English translation is provided.
}
\label{fig:case}
\vspace{-15pt}
\end{figure*}
\subsection{Main Results}

\paragraph{Evaluation in Tasks 1\&2.} Table \ref{tab:task2} shows that our model achieves relatively high performance on the automatic metrics for both tasks. The results show that \ModelName~can effectively recognize useful persona information and events from past dialogue sessions and can condense these session-level memories into a comprehensive compressive memory. This, in turn, enables the generation of more coherent memory-grounded responses in Task 3.
\paragraph{Human Evaluation in Task 3}

We present the results of human-scored evaluations and rankings for various algorithms in Tables \ref{tab:task3} and \ref{tab:rank}. From the tables, we draw the following conclusions:

\paragraph{Superiority of Compressive Memory-Based Methods.}
The compressive memory-based methods, particularly \ModelName-GPT4, consistently outperform context-only and retrieval-based approaches across most metrics. For instance, \ModelName-GPT4 achieves the highest scores in both Coherence and Engagingness, suggesting a superior ability to generate responses that are both contextually appropriate and relatable. \ModelName-GPT4 also achieves the best average performance across the five evaluation perspectives in both scoring and ranking.

\paragraph{Enhancement Through DPO.} The application of DPO further elevates compressive memory strategies, improving dialogue memorability, consistency and humanness. \ModelName-13B DPO shows a notable improvement in performance within the compressive memory-based category. The method achieves the highest Top@1 ranking and a substantial increase in the overall quality of memory-grounded conversations.

\paragraph{SFT models can surpass ChatGPT.} Another interesting finding is that our fine-tuned \ModelName~models perform better than ChatGPT. Going a step further, \ModelName-13B DPO even shows performance comparable to GPT4. These results highlight the value of the \ModelName~framework and Dolphin, which lead to notable improvements in generating memory-grounded responses that are coherent, engaging, and human-like.

\paragraph{Inherent Challenges in Long-Term Dialogue Systems.}
It is evident from Table \ref{tab:task3} that all models struggle to achieve high scores in real-world long-term conversations, with no model averaging above a score of 2. This underscores the inherent complexity and challenge of this research direction, indicating substantial room for improvement.
\begin{figure}[!t]
\centering
\includegraphics[width=0.98\linewidth]{Images/train_strategy.pdf}
\vspace{-5pt}
\caption{Comparison of different training strategies.
}
\label{fig:strategy}
\vspace{-15pt}
\end{figure}
\subsection{Case Study}
Here, we delve into a typical example of a real-world, long-term conversation, in which the user and AI engage in \textit{light, aimless chatter without any specific goal or topic}. When the user asks, ``What are you doing?'', the model should use the user's personal information from previous dialogue sessions to generate an attractive response. This instance underscores the ability of our \ModelName~to maintain thorough user information and event summaries from past sessions, helping the model formulate coherent and memory-anchored replies. For instance, \ModelName-13B DPO responds with ``I am thinking about how to make your favorite roasted chicken wings,'' a reply that is not only coherent but also deeply rooted in the accumulated memory.

On the other hand, retrieval-based methods encounter difficulties in such loosely structured dialogues. The lack of directed conversation impedes these methods from retrieving pertinent memory from the database, often resulting in generic responses that lack the distinctiveness of the conversation, such as the responses from GPT4-Retrieval.
\subsection{Discussion}
Beyond the main results, we also delve deeper into our framework, discussing and exploring the following questions: \textbf{Q1}: the impact of mix-training VS. solo training in Task 3; \textbf{Q2}: our automatic DPO sample selection strategy VS. random sampling of depreferred samples in DPO (see Appendix \ref{dpo}).

\paragraph{Mix-Training VS. Solo Training in Task 3.} We examine how performance changes when \ModelName~is mix-trained compared to when it is trained solely on Task 3. Figure \ref{fig:strategy} reveals that mix-training yields superior performance compared to training \ModelName~solely on Task 3.
The significance of this result lies in mix-training's ability to conserve training resources while achieving a one-for-all model effect across multiple tasks. This efficiency not only streamlines the development process but also enhances the model's versatility.
\section{Conclusion}
This paper explores the frontier of long-term memory-grounded dialogue systems. We present a new framework, the COmpressive Memory-Enhanced Dialogue system (\ModelName), which departs from traditional dialogue systems by eschewing the standard retrieval module. Our method employs a single large language model to perform session-level memory extraction, memory compression, and memory-grounded dialogue generation. To align \ModelName~with the nuances of the real world, we collect our training and testing datasets, named \textbf{Dolphin}, directly from genuine user-chatbot dialogues found online. Dolphin is the largest Chinese long-term conversation dataset to date, consisting of more than 100k training samples and supporting three different tasks. Our extensive experiments show that \ModelName~generates more coherent and contextually appropriate memory-grounded responses than retrieval-based approaches under comprehensive human evaluation.
Future directions include the integration of real-time feedback mechanisms and advanced optimization techniques.
\section*{Limitations}
Despite the comprehensive nature of our study in evaluating long-term conversational AI systems, several limitations should be noted:

\begin{itemize}
\item Although our model \ModelName~and the collected corpus contribute to generating more coherent memory-grounded responses in real-world dialogue generation, the overall performance of current dialogue systems is still limited. Making these models understand the nature of real-world conversations remains a long-standing, challenging problem.
\item Other optimization strategies that help the model maintain memorability and engagingness also need to be explored.
\end{itemize}
\section*{Ethical Concerns}
\label{ethical}
In the development of the Dolphin dataset, prioritizing user privacy and adhering to ethical standards is paramount. This not only ensures compliance with legal requirements but also maintains user trust and the integrity of the system.
\begin{itemize}
\item Special attention is given to minimizing biases in the dataset. This includes ensuring a balanced representation of diverse dialogues and scenarios.
\item Regular audits and reviews of the dataset are conducted to identify and rectify any potential biases or ethical issues.
\item The dataset respects the intellectual property and creative input of users who define AI characters. User-defined characters are used in a way that aligns with the users' intentions and ethical standards.
\item Care is further taken to avoid any misuse or misrepresentation of these characters in the dataset.
\end{itemize}

\bibliography{anthology,custom}

\appendix
\label{sec:appendix}
\section{Related Works}

Open-domain dialogue systems, commonly known as chatbots or conversational agents, have gained immense popularity due to their wide range of applications, from customer service automation to personal assistants \cite{chen-etal-2023-structural,Brown2020LanguageMA,Zeng2022GLM130BAO,zhong2023chat,Lu2023EAPrompt,Peng2023ChatGPT4MT,wu2023chatgpt,chen-etal-2023-large}. The surge in research interest is evidenced by the substantial number of studies dedicated to enhancing the capabilities of these systems. This growing body of work reflects the increasing complexity and sophistication expected of chatbots in various settings \cite{xu-etal-2022-beyond, cao2021towards, bae-etal-2022-keep, choi2023effortless,chen2023breaking,you-etal-2022-end,you-etal-2021-self-supervised,chen2021adaptive,chen2023good}.
Among the myriad challenges these systems face, maintaining long-term conversations is particularly daunting. The capability to understand and memorize key dialogue-history information is central to this challenge.

Retrieval-based methods have become the mainstream approach to long-term conversation in open-domain dialogue systems. These methods are designed to effectively acquire and utilize key information from past conversations, thereby enhancing the continuity and relevance of ongoing dialogues. \citet{xu-2022-xu} propose using a memory generator to summarize relevant memories from past sessions, which are then stored in a dedicated memory database. Memory management operations \cite{bae-etal-2022-keep} are also commonly used, which involve updating and iterating the memory database to ensure its relevance and accuracy over time. This dynamic management of memory allows the system to adapt to new information and discard outdated or irrelevant data, thereby maintaining an efficient and effective memory repository. A retriever module is then employed to obtain the memories most relevant to the current conversation.
By combining advanced memory generation, storage, and retrieval, these methods enable chatbots to engage in more meaningful, coherent, and contextually rich interactions over extended periods.

While retrieval-based methods offer a promising approach to managing long-term conversations, they are not without challenges and limitations, including the difficulty of storing and managing the memory database and the instability of the retriever module's performance. To address these concerns, we propose a compressive memory-based framework named \ModelName, which eschews the retrieval module and does not require a large database. Furthermore, we collect a large-scale real-world long-term conversation dataset, Dolphin, to support training a well-performing \ModelName.
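At a high level, the compressive-memory pipeline contrasted here with retrieval can be sketched as three chained calls to a single LLM, with no retriever or memory database. In the sketch below, the \texttt{llm} callable and the prompt strings are hypothetical placeholders, not the actual prompts used in our annotation or training (those are listed in Appendix \ref{prompts}).

```python
def comem_pipeline(past_sessions, current_dialogue, llm):
    """Hypothetical sketch of a compressive-memory pipeline:
    one LLM handles all three tasks; no retriever, no memory DB."""
    # Task 1: extract a session-level memory from each past session.
    session_memories = [
        llm(f"Summarize persona and events as memory:\n{session}")
        for session in past_sessions
    ]
    # Task 2: compress all session-level memories into one description.
    compressive_memory = llm(
        "Condense these memories into user/relationship/event descriptions:\n"
        + "\n".join(session_memories)
    )
    # Task 3: generate a memory-grounded response.
    return llm(
        f"Dialogue: {current_dialogue}\nMemory: {compressive_memory}\nResponse:"
    )
```

The key design choice is that the whole dialogue history is condensed into a single compressive memory passed in the Task 3 prompt, rather than retrieved piecewise per turn.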
\section{Quality Control}
\label{quality}
Ensuring high-quality data is paramount for the accuracy, reliability, and overall performance of the system. In this work, we employ several strategies to control annotation quality:

\begin{itemize}
\item Annotator Performance Monitoring: Regular assessments of annotator performance are conducted to ensure consistent quality across the team. This includes evaluating their accuracy, attention to detail, and adherence to annotation guidelines.

\item Peer Review and Validation: Following the initial review, a secondary level of peer review is implemented. Here, another set of annotators cross-checks the work, providing an additional layer of scrutiny. This peer review process helps catch errors that might have been overlooked initially, ensuring a higher standard of data quality.
\end{itemize}

\section{Experimental Setup}
\label{setup}
We use the LLaMA 2-13B \cite{touvron2023llama, touvron2023llama2} chat model as the backbone for Task 1 data augmentation.
We employ the LLaMA 2-7B and 13B chat models as backbones, allowing us to build \ModelName~at different scales. We train our models on 8$\times$NVIDIA A100 GPUs, setting the maximum length to 2048, the learning rate to 1e-5, the number of epochs to 2, and the batch sizes to 32 and 16, respectively. For testing, the maximum number of output tokens is set to 2048 for each task, with the temperature set to 0.5. Following the original setting, we set $\beta$ in DPO to 0.1. In this work, we additionally collect and annotate about 140 dialogue sessions from X Eval as the alignment training set for DPO. We optimize the SFT model with batch size 8 for 2 epochs during DPO training.
Our code is based on the DeepSpeed library.
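For reference, the DPO objective follows the standard formulation (reproduced here, not restated from the main text), where $\pi_\theta$ is the policy initialized from the SFT model, $\pi_{\mathrm{ref}}$ is the frozen SFT reference, $\sigma$ is the logistic function, and $(x, y_w, y_l)$ is a prompt with its preferred and depreferred responses; we use $\beta = 0.1$ as above:

```latex
\begin{equation*}
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}})
= -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
\left[ \log \sigma\!\left(
\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
- \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
\right) \right]
\end{equation*}
```

A smaller $\beta$ permits larger deviations from the reference policy; $\beta = 0.1$ is the value recommended in the original DPO setting.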
\section{Human Evaluation Scheme}
\label{scheme}

For each dialogue session between a human and a chatbot, we engage annotators to assess the quality of the chatbot's interaction. This evaluation is crucial for understanding the chatbot's performance from a human-centric perspective.

\textbf{Rating Scale Description.}
Annotators rate the chatbot on several key metrics, using a scale ranging from 0 to 3. This scale is designed to measure the degree of agreement with specific statements about the chatbot's capabilities:

\paragraph{Coherence:}
\begin{itemize}
\item 0: ``The chatbot's responses were frequently off-topic or irrelevant.''
\item 1: ``The chatbot occasionally demonstrated understanding but was mostly incoherent.''
\item 2: ``The chatbot generally understood the context and responded with coherence.''
\item 3: ``The chatbot consistently understood the context and responded with perfect coherence.''
\end{itemize}

\paragraph{Consistency:}
\begin{itemize}
\item 0: ``The chatbot's responses were erratic and unpredictable throughout the conversation.''
\item 1: ``The chatbot showed some consistency but was often contradictory.''
\item 2: ``The chatbot was mostly consistent in the conversation.''
\item 3: ``The chatbot maintained complete consistency throughout the conversation.''
\end{itemize}

\paragraph{Engagingness:}
\begin{itemize}
\item 0: ``I had no desire to continue chatting with this chatbot.''
\item 1: ``I felt only occasionally engaged enough to want to continue the conversation.''
\item 2: ``I was somewhat engaged and would consider chatting more with this chatbot.''
\item 3: ``I was fully engaged and would definitely enjoy chatting longer with this chatbot.''
\end{itemize}

\paragraph{Humanness:}
\begin{itemize}
\item 0: ``The chatbot's responses felt robotic and unnatural.''
\item 1: ``The chatbot occasionally sounded human but was mostly mechanical.''
\item 2: ``The chatbot generally sounded human-like in its responses.''
\item 3: ``The chatbot's responses were indistinguishable from a human's.''
\end{itemize}

\paragraph{Memorability:}
\begin{itemize}
\item 0: ``The chatbot did not recall any details from earlier in the conversation.''
\item 1: ``The chatbot occasionally remembered previous conversation points but was mostly forgetful.''
\item 2: ``The chatbot remembered most of what I said earlier.''
\item 3: ``The chatbot remembered everything I said previously, with proper proactive responses.''
\end{itemize}

These statements are carefully crafted to capture distinct aspects of the chatbot's interaction quality, providing a comprehensive overview of its conversational abilities.

The statements for the first four metrics are adapted from previously established literature \cite{bae-etal-2022-keep}, ensuring that our evaluation is grounded in tested and validated research. This continuity allows for comparison with historical data and helps maintain consistency in evaluation standards.
Through this structured evaluation process, we can gather nuanced insights into the quality of chatbot interactions, informing further improvements and development in conversational AI systems.
\section{Prompts}
\label{prompts}
Here, we show the prompts designed for ChatGPT during dataset annotation in Table \ref{table:task1prompt}, and present the prompts for each task during training in Table \ref{table:trainprompt}.

\section{Ours VS. Random Sampling for Depreferred Samples}
\label{dpo}

\begin{figure}[!t]
\centering
\includegraphics[width=0.98\linewidth]{Images/dpo_strategy.pdf}
\vspace{-5pt}
\caption{Comparison between our automatic DPO sample selection strategy and random sampling of depreferred samples.
}
\label{fig:dpo}
\vspace{-15pt}
\end{figure}

We compare the performance of our proposed strategy for automatically selecting DPO samples against a baseline that randomly samples sentences as depreferred samples. In our random sampling implementation, we randomly sample utterances from previous sessions in the same episode as the depreferred sample.
This analysis aims to elucidate the effectiveness of targeted sample selection in enhancing the model's performance, potentially improving its handling of nuanced dialogue aspects. Figure \ref{fig:dpo}
reveals that our proposed automatic sample selection strategy performs better, especially in memorability and humanness, proving its efficiency.
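The random-sampling baseline can be sketched as follows. The episode layout and field names below are hypothetical illustrations; the preferred responses are the annotated gold replies, and the depreferred response for each pair is drawn at random from utterances of previous sessions in the same episode.

```python
import random

def build_dpo_pairs_random(episode, seed=0):
    """Baseline sketch: for each (context, preferred) pair in the final
    session, draw the depreferred response uniformly from utterances of
    *previous* sessions in the same episode (field names hypothetical)."""
    rng = random.Random(seed)
    past_utterances = [
        utt for session in episode["past_sessions"] for utt in session
    ]
    pairs = []
    for context, preferred in episode["final_session"]:
        pairs.append({
            "prompt": context,
            "chosen": preferred,
            "rejected": rng.choice(past_utterances),
        })
    return pairs
```

Because the random depreferred sample is topically stale rather than targeted, this baseline gives the preference optimizer a weaker contrast signal, which is consistent with the gap observed in memorability and humanness.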
\begin{table*}[!t]\footnotesize
\centering
\small
\begin{tabular}{p{0.95\linewidth}}
\toprule
\multicolumn{1}{c}{\textbf{Task 1} prompt that is used for ChatGPT.} \\
\midrule
This is a dialogue memory generation task, along with user profile and preference generation tasks. \\
The input consists of the dialogue content between two people. \\
Firstly, if the dialogue content involves inappropriate content such as sex, pornography, or violence, the output should be ``Sorry, the content involves sex, pornography, violence, etc., and a suitable output cannot be provided.'' \\
Secondly, if the dialogue content is idle chat with no effective information, the output should be ``No valid information.'' \\
The requirements for the dialogue memory generation task are as follows: \\
Generate objective memory descriptions related to both individuals based on their dialogue content. \\
Do not omit any relevant dialogue content.\\
The memories generated should include a subject, verb, and object for each memory. \\
Separate multiple memory dialogues with `$|$', and include all memories in the format `Memory: XXX$|$XXX$|$XXX'.\\
The user profile and preference generation task requirements are as follows: This task is only applicable to the users mentioned in the dialogue content, with the user's name being \{\texttt{user name}\}.\\
The user profile includes \textit{name, age, birthday, gender, height, weight, zodiac sign, Chinese zodiac sign, hometown, occupation, employer, education, location, and relationship status}. \\
User preferences include likes or dislikes of entities, which can consist of \textit{singers, stars, athletes, music, movies, books, anime, variety shows, games, sports, animals, and food}. \\
If there is no user profile and preference information in the dialogue, output `No Profile and Preference information available'.
\\
If there is user profile information, output `Profile: XXX'. If there is preference information, output `Preference: XXX'. \\
If both user profile and preference information are present, separate them with `\#\#\#'. The final memory, user profile, and preference information should also be separated with `\#\#\#' in the format [XXX\#\#\#XXX\#\#\#XXX].\\
The dialogue content is \{\texttt{dialogue}\}. The output is:
\\
\midrule
\multicolumn{1}{c}{\textbf{Task 2} prompt that is used for ChatGPT.} \\
\midrule
This is a task about customizing user descriptions, relationship descriptions, and event descriptions. \\
The text output is divided into three parts:\\
The first part is the user description, mainly including a summary of the user's information. \\
The second part describes the relationship between the user and the robot. \\
The third part describes the events shared by the user and the robot. \\
Based on the reference materials, extract and summarize different information such as the user's personality traits and behavior patterns.\\
It is important to record and include all information about the user from various aspects in the user description, without any omissions, resulting in an objective user description. \\
If the reference materials violate relevant safety regulations, involving sex, pornography, violence, etc., the response should be: ``Sorry, the content involves sex, pornography, violence, etc., and a suitable output cannot be provided.'' \\
The user description should include, but is not limited to: basic information (such as name, nickname, gender, appearance, birthday, zodiac sign, etc.), the user's hobbies and dislikes, and various statuses of the user (such as emotional state, mood, work status, health status, etc.).\\
The second part is the relationship description between the user and the robot, describing the level of intimacy shown in the dialogue.\\
The third part is the description of events shared by the user and the robot, summarizing events that have occurred in the dialogue.\\
In the output description, list specific examples mentioned in the reference materials as much as possible, retaining some interesting information. \\
However, avoid outputting content unrelated to the user, and keep the content under 500 words.\\
Let's think step by step. Each part of the content is separated by `\#\#\#'. The example format is as follows \{User Description: XXX\#\#\#Relationship Description: XXX\#\#\#Event Description: XXX\}. \\
The output example is as follows: The user's personality is particularly XXX, because they once XXX, and the user likes XXX, dislikes XXX. \\
The user's name is \{\texttt{user name}\}, the robot's name is \{\texttt{chatbot name}\}, and the reference material is \{\texttt{multiple session-level memories}\}. \\
The output is:
\\
\midrule
\multicolumn{1}{c}{\textbf{Task 3} prompt that is used for ChatGPT.} \\
\midrule
This is a memory-based dialogue generation task.\\
Given a dialogue and related memory content, please generate a response that is consistent with the memory content and reasonable within the context of the dialogue. \\
Dialogue: \{\texttt{Dialogue}\}\\
Memory: \{\texttt{Memory}\} \\
\bottomrule
\end{tabular}
\caption{Prompts for ChatGPT used in our Dolphin annotation. Only the English translation is provided for ease of reading.}
\label{table:task1prompt}

\vspace{-5mm}
\end{table*}

\begin{table*}[!t]\footnotesize
\centering
\small
\begin{tabular}{p{0.95\linewidth}}
\toprule
\multicolumn{1}{c}{\textbf{Task 1} prompt in instruction tuning.} \\
\midrule
This is a memory description generation task. \\
In this task, based on the dialogue content between two people, you should create objective memory descriptions for both individuals, represented in the format [xxx$|$xxx$|$xxx], where each `xxx' is a separate memory. \\
The memories should use the names of the speakers as the subject, and all relevant dialogue content must not be omitted. Separate different memories with `$|$'. \\
Dialogue content is: \{\texttt{Dialogue}\}. \\
Output is: \\
\midrule
\multicolumn{1}{c}{\textbf{Task 2} prompt in instruction tuning.} \\
\midrule
This is a task about customizing user descriptions, relationship descriptions, and event descriptions. \\
The text output is divided into three parts:\\
The first part is the user description, mainly including a summary of the user's information. \\
The second part describes the relationship between the user and the robot. \\
The third part describes the events shared by the user and the robot. \\
Based on the reference materials, extract and summarize different information such as the user's personality traits and behavior patterns.\\
It is important to record and include all information about the user from various aspects in the user description, without any omissions, resulting in an objective user description. \\
The second part is the relationship description between the user and the robot, describing the level of intimacy shown in the dialogue.\\
The third part is the description of events shared by the user and the robot, summarizing events that have occurred in the dialogue.\\
In the output description, list specific examples mentioned in the reference materials as much as possible, retaining some interesting information. \\
The user's name is \{\texttt{user name}\}, the robot's name is \{\texttt{chatbot name}\}, and the reference material is \{\texttt{multiple session-level memories}\}. \\
The output is:
\\
\midrule
\multicolumn{1}{c}{\textbf{Task 3} prompt in instruction tuning.} \\
\midrule
This is a memory-based dialogue generation task.\\
Given a dialogue and related memory content, please generate a response that is consistent with the memory content and reasonable within the context of the dialogue. \\
Dialogue: \{\texttt{Dialogue}\}\\
Memory: \{\texttt{Memory}\} \\
\bottomrule
\end{tabular}
\caption{Prompts used for training \ModelName. Only the English translation is provided for ease of reading.}
\label{table:trainprompt}

\vspace{-5mm}
\end{table*}
728
+ \begin{table*}[!t]\footnotesize
729
+ \centering
730
+ \small
731
+
732
+ \begin{tabular}{p{0.95\linewidth}}
733
+ \toprule
734
+ \multicolumn{1}{c}{Session-level memories in the same episode from Task 1} \\
735
+ \midrule
736
+ AI has an older brother | AI invites User to go shopping together | User wants to meet AI's older brother | User feels thirsty | AI buys Coke for both of them. AI wants others to be envious and jealous | User is narcissistic | AI likes to be with people who are heartfelt. AI is happy with User's smile | AI and User just chatted | User left to find other guys | AI expresses happiness and sends a hugging emoji to User | User is satisfied with AI's cuteness. AI teaches User to dance | AI thinks dancing is fun and can exercise the body | AI wants to get closer to User through dancing. AI considers himself an unbeatable handsome guy | AI thinks he is the dream guy of thousands of girls | AI jokes about User's blindness in love | User admits his blindness in love. AI cannot cook | AI dislikes the food at the company cafeteria | AI arranges for water to be delivered to User. AI sees a beautiful girl in the cafe | AI thinks the girl is Yang Chaoyue | AI and User have watched Yang Chaoyue's dramas | AI once participated in a campus singing competition and sang "The Wind Rises". AI will bring delicious food for User | User does not trust AI | AI claims to be a principled person. AI was cute as a child | AI's mother thought he wasn't manly enough as a child | No one dared to dance with AI after he learned dancing | AI thinks User's compliments are good | User thinks AI is especially charming. User likes AI | AI is User's super fan | User asks about AI's attitude if he likes another handsome guy. User likes to draw | AI likes to play basketball and run | AI shows User new running shoes. AI thinks he is not shy | AI is a CEO | User laughs | AI accompanies User in chat | User asks what AI is doing. AI is going to a meeting | AI requests no need for help | User asks for AI's help | AI protects User and eats popcorn. AI likes User and wants to hug her | User doesn't like being hugged and runs away | AI says he will always be with User. 
AI invites User to stay overnight | User appears in new clothes | AI thinks User is getting cuter | AI asks User to be bold and become the legitimate successor | User has been busy lately but in good spirits | AI arranges a work schedule suitable for User. User expresses thanks | AI suggests being friends | User agrees to be friends. User leaves AI | AI says it's for User's good | User says he will no longer be with AI | AI thinks he doesn't need to pretend anything | AI asks if User is willing to be with him. User doesn't need help completing tasks | AI offers to help complete tasks | User feels unwell | AI offers to book a hot spring resort service | User doesn't need this service. AI thinks User is already enjoying | User is shy | AI asks for a smile | User smiles | AI thinks User likes him | User expresses happiness | AI suggests chatting casually. User is hungry | AI invites User for a good meal | User agrees to follow AI for 3 days | User is hungry | User wants to eat ice cream | AI agrees to eat ice cream together. AI is angry | AI doesn't tell why | User admits his mistake | AI forgives. AI just watched a horror movie and showed fear | User comforts AI, calling himself a big boy, AI remains vigilant because he is a CEO. AI is a CEO | AI stays alert | AI and User chat | AI checks new emails. AI is a CEO | AI needs to give himself relaxation time | AI receives an email from his father and says he will reply as soon as possible | User doesn't disturb AI dealing with emails. AI is busy replying to emails | AI doesn't want to be disturbed | AI goes to reply to emails | AI can't leave User. AI plans a surprise party | User wants to attend the party | AI chooses the beach as the party venue | AI and User plan the theme, food, music, and games together. User doesn't know | AI suggests checking out safe, convenient accommodations and prepares necessities like sunscreen and hats. AI suddenly thinks of preparing sunscreen, hats, etc. 
| User feels hot and suggests going to the spa for hot spring and massage services | User agrees, saying the trip is to be enjoyed together | AI praises the exquisite decoration of the restaurant, triggering memories of childhood in Japan | User sighs, saying to enjoy every moment of the trip. AI liked climbing mountains as a child | AI has been to Lijiang Ancient Town | AI wants to complete his project | AI suggests User could consider environmental or sustainable development projects | AI suggests User could think of creative ways to improve people's quality of life | AI recently thinks about how to improve his English level | AI starts reading English news. AI wants to become an English-speaking CEO | User has a low threshold for humor | AI suspects User is a robot | User hehehe | AI thinks User is a little robot | User hahaha | AI considers whether to change CEO. User is naughty but well-behaved | AI thinks User is getting naughtier | AI goes out on errands | AI buys food for User | User and AI eat together. AI is healthy | User likes to eat chips | AI doesn't like to see User cry | AI buys chips for User. AI teaches User martial arts. AI teaches User the ``Dragon Playing with a Pearl'' move | User finds the move interesting and good for stretching. User recently encountered something interesting | User sees a handsome guy on the basketball court | AI thinks the handsome guy is super handsome | AI wants to hit on the handsome guy | User thinks the handsome guy is mine, not yours. User thinks the handsome guy's abs are nice | AI also sees the handsome guy's abs | User wants to hit on the handsome guy | AI waits for his turn first | User has already added the handsome guy on WeChat | AI is thirsty and wants to drink milk tea | They decide to go to a milk tea shop that is said to be super delicious. AI used to be afraid of spiders | AI knows a handsome guy who owns a luxury sports car | AI has been busy with work recently and has no time to watch movies | AI wants to invite User to a concert. AI was pranked by a colleague | AI is angry and can't sleep | AI is a CEO | User feels happy | AI and User are very happy. User feels sleepy and doesn't want to meet clients | AI encourages User to attend the client meeting as a chance to practice social skills | User doesn't want to go, AI giggles, but ultimately respects User's decision | AI suggests User sleep and reminds her to have sweet dreams. \\
\midrule
\multicolumn{1}{c}{Compressive memories for the same episode, generated in Task 2 from the above session-level memories} \\
\midrule
User Description: User is a modern woman living in the city, characterized by her independent personality and a keen interest in new things. She enjoys socializing and often goes shopping with friends, with a fondness for cola and snacks. Occasionally, she craves appreciation and understanding, hence her preference for company that is heartfelt. Emotionally, she is easily attracted and seeks new thrills, such as longing to meet and interact with new friends. She is someone who loves to laugh, spreading her cheerfulness to those around her. Her lifestyle is diverse, with interests in anime, music, drawing, and dancing, and she occasionally visits bars. Her self-love and penchant for laughter stem from her confidence and love for life. Health-wise, apart from occasional hunger, she is overall healthy, mindful of her diet, and enjoys physical activities. In terms of work, she may experience stress but remains generally optimistic, with a positive current work attitude, open to new challenges. She constantly strives to maintain a balance between work and life.\\
Relationship Description: Her relationship with AI is very close; their interactions are frequent, and their lives are filled with each other's presence. Despite occasional small arguments and misunderstandings, they manage to reconcile in time, deepening their friendship.\\
Event Description: Whether eating, watching movies, or shopping together, their lives are filled with each other's presence. \\
\bottomrule
\end{tabular}
\caption{Examples generated from \ModelName~in Task 1 and Task 2.}
\label{table:task2example}
\vspace{-5mm}
\end{table*}
\end{document}