Model copy-pasting input
Hi,
thank you for the great work you did with the model. I just started testing it and got one strange outcome. I told the model (dolly-v2-6-9b) to summarize the abstract of a scientific article, and on the first try it simply copy-pasted it. The second output is correct; however, I wanted to share the first outcome with you, as it points to a potential bug or limitation.
Prompt:"Summarize the following abstract of the scientific article: Recently, diffusion models have emerged as a new paradigm for generative models. Despite the success in domains using continuous signals such as vision and audio, adapting diffusion models to natural language is under-explored due to the discrete nature of texts, especially for conditional generation. We tackle this challenge by proposing DiffuSeq: a diffusion model designed for sequence-to-sequence (Seq2Seq) text generation tasks. Upon extensive evaluation over a wide range of Seq2Seq tasks, we find DiffuSeq achieving comparable or even better performance than six established baselines, including a state-of-the-art model that is based on pre-trained language models. Apart from quality, an intriguing property of DiffuSeq is its high diversity during generation, which is desired in many Seq2Seq tasks. We further include a theoretical analysis revealing the connection between DiffuSeq and autoregressive/non-autoregressive models. Bringing together theoretical analysis and empirical evidence, we demonstrate the great potential of diffusion models in complex conditional language generation tasks."
First Output:"Recently, diffusion models have emerged as a new paradigm for generative models. Despite the success in domains using continuous signals such as vision and audio, adapting diffusion models to natural language is under-explored due to the discrete nature of texts, especially for conditional generation.\nWe tackle this challenge by proposing DiffuSeq: a diffusion model designed for sequence-to-sequence (Seq2Seq) text generation tasks. Upon extensive evaluation over a wide range of Seq2Seq tasks, we find DiffuSeq achieving comparable or even better performance than six established baselines, including a state-of-the-art model that is based on pre-trained language models. Apart from quality, an intriguing property of DiffuSeq is its high diversity during generation, which is desired in many Seq2Seq tasks. We further include a theoretical analysis revealing the connection between DiffuSeq and autoregressive/non-autoregressive models. Bringing together theoretical analysis and empirical evidence, we demonstrate the great potential of diffusion models in complex conditional language generation tasks."
Second Output:"This paper proposes DiffuSeq, a new diffusion model for sequence to sequence (Seq2Seq) text generation tasks. It shows that DiffuSeq achieves competitive performance compared to the state-of-the-art model on several benchmarks. In addition, the analysis provides a new insight on the connection between DiffuSeq and autoregressive/non-autoregressive models."
I ran further experiments (this time summarizing multiple film reviews), and the model's tendency to copy-paste parts of the input is clearly visible. My guess is that your dataset may contain many summary examples where annotators, instead of writing a true abstractive summary, simply copy-pasted the parts of the input that seemed most relevant.
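One rough way to quantify this copy-paste tendency (my own sketch, not part of the model's tooling) is to measure how much of a generated summary is lifted verbatim from the input, e.g. with the standard library's difflib:

```python
from difflib import SequenceMatcher

def longest_verbatim_overlap(source: str, summary: str) -> float:
    """Length of the longest block copied verbatim from the source,
    as a fraction of the summary's length."""
    match = SequenceMatcher(None, source, summary, autojunk=False).find_longest_match(
        0, len(source), 0, len(summary)
    )
    return match.size / max(len(summary), 1)

# A summary that copy-pastes whole sentences scores close to 1.0;
# a genuinely abstractive summary scores much lower.
```

Running this over a batch of outputs would make the "copy-paste vs. abstractive" difference between model sizes measurable rather than anecdotal.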
Hey thanks for reporting this. I can try some experiments. Can you also try formatting the prompt like this?
instruction
Input:
<context goes here>
This is usually how the model was trained when there was context.
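For concreteness, a minimal sketch of assembling a prompt in that format (`build_prompt` is my own hypothetical helper, not part of the released pipeline):

```python
def build_prompt(instruction: str, context: str) -> str:
    # Instruction first, then the context under an "Input:" header,
    # mirroring the format the model saw during training.
    return f"{instruction}\nInput:\n{context}"

prompt = build_prompt(
    "Summarize the following abstract of the scientific article",
    "<context goes here>",
)
```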
I updated the pipeline code and also updated the model card with a new example, including how to use the pipeline with LangChain. You might find this easier to use. You can follow the model card to see how to create the llm_context_chain I use below. I ran some tests with your example and also found that the 7b model tends to repeat or restate the text slightly differently. This could mean summarization examples are underrepresented in the training data. The dolly-v2-12b model seems to do a better job at this. Maybe a larger model is needed for this type of task.
context = """Recently, diffusion models have emerged as a new paradigm for generative models. Despite the success in domains using continuous signals such as vision and audio, adapting diffusion models to natural language is under-explored due to the discrete nature of texts, especially for conditional generation. We tackle this challenge by proposing DiffuSeq: a diffusion model designed for sequence-to-sequence (Seq2Seq) text generation tasks. Upon extensive evaluation over a wide range of Seq2Seq tasks, we find DiffuSeq achieving comparable or even better performance than six established baselines, including a state-of-the-art model that is based on pre-trained language models. Apart from quality, an intriguing property of DiffuSeq is its high diversity during generation, which is desired in many Seq2Seq tasks. We further include a theoretical analysis revealing the connection between DiffuSeq and autoregressive/non-autoregressive models. Bringing together theoretical analysis and empirical evidence, we demonstrate the great potential of diffusion models in complex conditional language generation tasks."""
for _ in range(3):
    print(llm_context_chain.predict(instruction="Summarize the following abstract of the scientific article", context=context).lstrip())
    print("=========")
Samples from the 12b model:
DiffuSeq is a conditional text generation model that adapts diffusion models to discrete signals such as text. DiffuSeq achieves comparable or even better performance than several baselines on a wide range of Seq2Seq tasks. An intriguing property of DiffuSeq is its high diversity during generation, which is desired in many Seq2Seq tasks.
=========
DiffuSeq is a diffusion model designed for sequence-to-sequence (Seq2Seq) text generation tasks. To the best of our knowledge, it is the first end-to-end diffusive model trained from scratch for this task. We conduct extensive experiments on a wide range of Seq2Seq tasks and find that DiffuSeq outperforms six strong baselines and even a state-of-the-art model based on pre-trained language models on all the evaluated metrics. A interesting property of DiffuSeq is its high diversity during generation, which is desired in many Seq2Seq tasks. We further provide a theoretical analysis that reveals the connection between DiffuSeq and autoregressive/non-autoregressive models and show that DiffuSeq is capable of producing more diverse and better quality responses compared with the previous models by including additional priors.
=========
DiffuSeq is a diffusion model designed for sequence-to-sequence text generation tasks. It achieves comparable or even better performance than six established baselines on a number of Seq2Seq tasks. Extensive experiments demonstrate that DiffuSeq generates text with high diversity, which is desired in many Seq2Seq tasks.
=========
DiffuSeq is a sequence-to-sequence text generation model that is based on the diffusion model. It has comparable or even better performance than several state-of-the-art models on a wide range of text generation tasks. The model achieves this performance through including a theoretical analysis that reveals the model is a kind of autoregressive model. In addition, the model also has a quality property that is desired in many text generation tasks.
=========
A new sequence to sequence text generation model, DiffuSeq, is proposed that achieves comparable or better performance than established baselines on a number of text generation tasks. The model is shown to achieve this performance through sharing the same variationallower bound from the VELA framework, however, the input and output codewords are represented as independent diffusion processes, allowing the model to capture the discrete nature of text, especially for conditional generation. The model is shown to have a intuitive connection to both autoregressive models and non-autoregressive models, and analysis of the model can be used to explain empirical results on the performance of the model.
=========
DiffuSeq is a neural network model achieving comparable or better performance than six state-of-the-art Seq2Seq models on a range of Seq2Seq tasks. The DiffuSeq model is a diffusion model, which can generate sequences by propagating a token in a one-step discrete manner. The model is evaluated on a dataset with a wide range of Seq2Seq tasks including summarization, question answering, recommendation, and language generation. The analysis reveals that the DiffuSeq model is related to autoregressive/non-autoregressive models, and the theory can help understand the empirical results of the model.
=========
I'll go ahead and close this, but I'll make a note to look into adding more summarization examples in the future.
Hey thanks for reporting this. I can try some experiments. Can you also try formatting the prompt like this?
instruction Input: <context goes here>
This is usually how the model was trained when there was context.
From my experiments, it is even worse.
The model's response is just a continuation of the context text, and the initial part of the output is again a copy of the tail of the original context.