---
tags:
- generated_from_trainer
- lora
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan-t5-xxl-lora-peft-samsum
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: samsum
      type: samsum
      config: samsum
      split: train
      args: samsum
    metrics:
    - name: Rouge1
      type: rouge
      value: 51.2543
library_name: adapter-transformers
---
|
# flan-t5-xxl-lora-peft-samsum

This model is a fine-tuned version of [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl), trained with LoRA adapters via the PEFT library on the [samsum](https://huggingface.co/datasets/samsum) dataset (cc-by-nc-nd-4.0).
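
### Usage

A minimal inference sketch, assuming the LoRA adapter weights are published under this repository id (substitute the actual adapter path if it differs). Loading the 11B base model in 8-bit requires `bitsandbytes`; drop `load_in_8bit=True` to load in full precision. The `summarize:` prefix is the conventional T5 summarization prompt, assumed here since the card does not record the exact training prompt format.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

# Load the frozen base model (8-bit to fit an 11B model on a single GPU).
base_model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-xxl", device_map="auto", load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")

# Attach the LoRA adapter; "flan-t5-xxl-lora-peft-samsum" is assumed to be this repo's id.
model = PeftModel.from_pretrained(base_model, "flan-t5-xxl-lora-peft-samsum")
model.eval()

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Great, see you there!"
)
inputs = tokenizer("summarize: " + dialogue, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```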

### Framework versions

- Transformers 4.27.2
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- PEFT (with LoRA adapters)
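
### LoRA configuration (illustrative)

For reference, a sketch of how a LoRA setup for this base model is typically configured with PEFT. The hyperparameters below (`r`, `lora_alpha`, `lora_dropout`, `target_modules`) are illustrative assumptions, not values recorded by this card.

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative LoRA hyperparameters -- not the card's actual training values.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                       # rank of the low-rank update matrices
    lora_alpha=32,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q", "v"],  # T5 attention query/value projections
    bias="none",
)

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xxl")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```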