---
language: en
tags:
  - bart
  - seq2seq
  - summarization
license: apache-2.0
datasets:
  - samsum
widget:
  - text: >
      Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?

      Philipp: Sure you can use the new Hugging Face Deep Learning Container.

      Jeff: ok.

      Jeff: and how can I get started?

      Jeff: where can I find documentation?

      Philipp: ok, ok you can find everything here.
      https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
model-index:
  - name: bart-base-samsum
    results:
      - task:
          name: Abstractive Text Summarization
          type: abstractive-text-summarization
        dataset:
          name: >-
            SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive
            Summarization
          type: samsum
        metrics:
          - name: Validation ROUGE-1
            type: rouge-1
            value: 46.6619
          - name: Validation ROUGE-2
            type: rouge-2
            value: 23.3285
          - name: Validation ROUGE-L
            type: rouge-l
            value: 39.4811
          - name: Test ROUGE-1
            type: rouge-1
            value: 44.9932
          - name: Test ROUGE-2
            type: rouge-2
            value: 21.7286
          - name: Test ROUGE-L
            type: rouge-l
            value: 38.1921
      - task:
          type: summarization
          name: Summarization
        dataset:
          name: samsum
          type: samsum
          config: samsum
          split: test
        metrics:
          - name: ROUGE-1
            type: rouge
            value: 45.0148
            verified: true
          - name: ROUGE-2
            type: rouge
            value: 21.6861
            verified: true
          - name: ROUGE-L
            type: rouge
            value: 38.1728
            verified: true
          - name: ROUGE-LSUM
            type: rouge
            value: 41.2794
            verified: true
          - name: loss
            type: loss
            value: 1.597476601600647
            verified: true
          - name: gen_len
            type: gen_len
            value: 17.6606
            verified: true
---

# bart-base-samsum

This model was obtained by fine-tuning [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the [SAMSum](https://huggingface.co/datasets/samsum) dataset.

## Usage

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="lidiya/bart-base-samsum")

conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
'''
summarizer(conversation)
```
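The model expects the dialogue as a single plain-text string with one `Speaker: utterance` turn per line, as in the snippet above. If your conversation comes as structured data, a small helper (hypothetical, not part of this model or library) can build that string:

```python
def format_dialogue(turns):
    """Join (speaker, utterance) pairs into the newline-separated
    'Speaker: text' format used in the usage example above."""
    return "\n".join(f"{speaker}: {utterance}" for speaker, utterance in turns)

conversation = format_dialogue([
    ("Jeff", "Can I train a 🤗 Transformers model on Amazon SageMaker?"),
    ("Philipp", "Sure you can use the new Hugging Face Deep Learning Container."),
])
# `conversation` can then be passed directly to the summarizer pipeline
```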

## Training procedure

### Results

| key            | value   |
| -------------- | ------- |
| eval_rouge1    | 46.6619 |
| eval_rouge2    | 23.3285 |
| eval_rougeL    | 39.4811 |
| eval_rougeLsum | 43.0482 |
| test_rouge1    | 44.9932 |
| test_rouge2    | 21.7286 |
| test_rougeL    | 38.1921 |
| test_rougeLsum | 41.2672 |
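The scores above come from a standard ROUGE implementation. As a rough illustration of what ROUGE-1 measures, here is a minimal unigram-overlap F1 sketch; it uses plain whitespace tokenization and no stemming, so it will not exactly reproduce the reported numbers:

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Simplified ROUGE-1 F1: harmonic mean of unigram precision and recall,
    where overlap counts each shared token up to its minimum count."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge1_f1("the cat sat", "the cat ran")` shares 2 of 3 unigrams on each side, giving precision = recall = F1 = 2/3.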