---
language: en
tags:
- bart
- seq2seq
- summarization
license: apache-2.0
datasets:
- samsum
widget:
- text: "Jeff: Can I train a \U0001F917 Transformers model on Amazon SageMaker? \n\
Philipp: Sure you can use the new Hugging Face Deep Learning Container. \nJeff:\
\ ok.\nJeff: and how can I get started? \nJeff: where can I find documentation?\
\ \nPhilipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face\n"
model-index:
- name: bart-base-samsum
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: 'SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization'
type: samsum
metrics:
- name: Validation ROUGE-1
type: rouge-1
value: 46.6619
- name: Validation ROUGE-2
type: rouge-2
value: 23.3285
- name: Validation ROUGE-L
type: rouge-l
value: 39.4811
- name: Test ROUGE-1
type: rouge-1
value: 44.9932
- name: Test ROUGE-2
type: rouge-2
value: 21.7286
- name: Test ROUGE-L
type: rouge-l
value: 38.1921
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 45.0148
verified: true
- name: ROUGE-2
type: rouge
value: 21.6861
verified: true
- name: ROUGE-L
type: rouge
value: 38.1728
verified: true
- name: ROUGE-LSUM
type: rouge
value: 41.2794
verified: true
- name: loss
type: loss
value: 1.597476601600647
verified: true
- name: gen_len
type: gen_len
value: 17.6606
verified: true
---
## `bart-base-samsum`
This model was obtained by fine-tuning `facebook/bart-base` on the SAMSum dataset.
## Usage
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="lidiya/bart-base-samsum")
conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
'''
summarizer(conversation)
```
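The summarization pipeline also accepts standard generation keyword arguments if you want to control the length or decoding strategy of the summary. A minimal sketch is below; the specific values (`max_length`, `min_length`, `num_beams`) are illustrative assumptions, not the settings used when this model was evaluated:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="lidiya/bart-base-samsum")

conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
'''

# Illustrative generation settings (assumptions, tune to your use case):
summary = summarizer(
    conversation,
    max_length=60,  # upper bound on generated tokens
    min_length=10,  # lower bound on generated tokens
    num_beams=4,    # beam search width
)[0]["summary_text"]
print(summary)
```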
## Training procedure
- Colab notebook: https://colab.research.google.com/drive/1RInRjLLso9E2HG_xjA6j8JO3zXzSCBRF?usp=sharing
## Results
| key | value |
| --- | ----- |
| eval_rouge1 | 46.6619 |
| eval_rouge2 | 23.3285 |
| eval_rougeL | 39.4811 |
| eval_rougeLsum | 43.0482 |
| test_rouge1 | 44.9932 |
| test_rouge2 | 21.7286 |
| test_rougeL | 38.1921 |
| test_rougeLsum | 41.2672 |