MTL-story
The MTL-story model was proposed in MVP: Multi-task Supervised Pre-training for Natural Language Generation by Tianyi Tang, Junyi Li, Wayne Xin Zhao, and Ji-Rong Wen.
Detailed information and instructions can be found at https://github.com/RUCAIBox/MVP.
Model Description
MTL-story is supervised pre-trained on a mixture of labeled story generation datasets. It is a variant (Single) of our main MVP model and follows a standard Transformer encoder-decoder architecture.
MTL-story is specially designed for story generation tasks, such as ROCStories and WritingPrompts.
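Because MTL-story shares the standard MVP encoder-decoder configuration, its architecture can be inspected directly from the checkpoint. The snippet below is a minimal sketch; the attribute names (encoder_layers, decoder_layers, d_model) follow the MvpConfig conventions in transformers.
>>> from transformers import MvpConfig
>>> config = MvpConfig.from_pretrained("RUCAIBox/mtl-story")
>>> # number of encoder layers, decoder layers, and hidden size of the Transformer
>>> config.encoder_layers, config.decoder_layers, config.d_model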
Example
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> # MTL-story reuses the tokenizer of the main MVP model
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-story")
>>> inputs = tokenizer(
... "Given the story title: I think all public schools should have a uniform dress code.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs, max_length=1024)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["I don't know about you, but I don't think it would be a good idea to have a uniform dress code in public schools. I think it's a waste of time and money. If you're going to have uniform dress codes, you need to make sure that the uniforms are appropriate for the school and that the students are comfortable in them. If they're not comfortable, then they shouldn't be allowed to wear them."]
Related Models
MVP: https://huggingface.co/RUCAIBox/mvp.
Prompt-based models:
- MVP-multi-task: https://huggingface.co/RUCAIBox/mvp-multi-task.
- MVP-summarization: https://huggingface.co/RUCAIBox/mvp-summarization.
- MVP-open-dialog: https://huggingface.co/RUCAIBox/mvp-open-dialog.
- MVP-data-to-text: https://huggingface.co/RUCAIBox/mvp-data-to-text.
- MVP-story: https://huggingface.co/RUCAIBox/mvp-story.
- MVP-question-answering: https://huggingface.co/RUCAIBox/mvp-question-answering.
- MVP-question-generation: https://huggingface.co/RUCAIBox/mvp-question-generation.
- MVP-task-dialog: https://huggingface.co/RUCAIBox/mvp-task-dialog.
Multi-task models:
- MTL-summarization: https://huggingface.co/RUCAIBox/mtl-summarization.
- MTL-open-dialog: https://huggingface.co/RUCAIBox/mtl-open-dialog.
- MTL-data-to-text: https://huggingface.co/RUCAIBox/mtl-data-to-text.
- MTL-story: https://huggingface.co/RUCAIBox/mtl-story.
- MTL-question-answering: https://huggingface.co/RUCAIBox/mtl-question-answering.
- MTL-question-generation: https://huggingface.co/RUCAIBox/mtl-question-generation.
- MTL-task-dialog: https://huggingface.co/RUCAIBox/mtl-task-dialog.
Citation
@article{tang2022mvp,
  title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
  author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
  journal={arXiv preprint arXiv:2206.12131},
  year={2022},
  url={https://arxiv.org/abs/2206.12131},
}