Files changed (1)
  1. README.md +3 -0
README.md CHANGED
@@ -6,6 +6,9 @@ The BART model was pre-trained on the CNN-DailyMail dataset, but it was re-train
 
 According to Hugging Face, BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
 
+## Datasets
+Link: [Amazon Reviews Corpus](https://huggingface.co/datasets/amazon_reviews_multi)
+
 ## Intended uses & limitations
 
 This model is intended to be used for summarizing user reviews on websites.
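
Since the intended-use note is brief, here is a minimal usage sketch with the 🤗 Transformers summarization pipeline. The checkpoint id `user/bart-review-summarizer` is a hypothetical placeholder (the actual model id is not shown in this diff), and the generation lengths are illustrative.

```python
# Minimal sketch, not the card's official example. The model id below is a
# hypothetical placeholder; substitute the actual repo id of this checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="user/bart-review-summarizer")

review = (
    "I bought these headphones last month. The sound quality is great and the "
    "battery lasts all day, but the ear cushions started peeling after two "
    "weeks of light use."
)

# The pipeline returns a list of dicts with a "summary_text" key.
print(summarizer(review, max_length=60, min_length=10)[0]["summary_text"])
```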