Update README.md
README.md CHANGED
@@ -6,8 +6,6 @@ The BART model was pre-trained on the CNN-DailyMail dataset, but it was re-train
 
 According to Hugging Face, BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
 
-This version of BART was fine-tuned for summarization on the Amazon Review data, which hosts a large collection of user reviews on the Amazon online website.
-
 ## Intended uses & limitations
 
 This model is intended to be used for summarizing user reviews on websites.
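For context, a minimal sketch of how a checkpoint like this would be used for review summarization through the transformers pipeline API. The model ID below is a placeholder, since the actual repository name is not shown in this diff.

```python
from transformers import pipeline

# Placeholder model ID -- the real Hugging Face repository name is not
# given in this diff; substitute the actual checkpoint before running.
summarizer = pipeline(
    "summarization",
    model="your-username/bart-amazon-review-summarizer",
)

review = (
    "I bought this kettle a month ago. It boils water quickly and the handle "
    "stays cool, but the lid feels flimsy and rattles when pouring. "
    "Overall a good value for the price."
)

# Generate a short abstractive summary of the review.
result = summarizer(review, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```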