#### Pre-trained BART model fine-tuned on the WikiLingua dataset

This repository stores a BART model (pre-trained checkpoint by sshleifer) fine-tuned on the English portion of the WikiLingua dataset.

**Purpose:** Examine the performance of the fine-tuned model for research purposes.

**Observations:**

- The pre-trained model was trained on the XSum dataset, which summarizes moderately long documents into one-line summaries.
- Fine-tuning this model on WikiLingua is appropriate, since that dataset's reference summaries are also short.
- In the end, however, the model does not capture the key points much more clearly; instead it mostly extracts the opening sentence.
- The data pre-processing and the model's hyperparameters also need further tuning.
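As a rough illustration of the pre-processing step mentioned in the last observation, the sketch below pairs documents with their short reference summaries, filters out examples whose summaries are too long to match the XSum-style one-liner target, and truncates inputs to a word budget before they would be fed to a seq2seq model. The field names (`document`, `summary`) and the word limits are assumptions for illustration, not taken from this repository.

```python
# Hypothetical pre-processing sketch; field names and limits are assumptions.

def truncate_words(text: str, max_words: int) -> str:
    """Keep at most max_words whitespace-separated tokens."""
    return " ".join(text.split()[:max_words])

def prepare_pairs(records, max_doc_words=1024, max_summary_words=64):
    """Build (document, summary) pairs suitable for seq2seq fine-tuning."""
    pairs = []
    for rec in records:
        doc = rec.get("document", "").strip()
        summ = rec.get("summary", "").strip()
        if not doc or not summ:
            continue  # drop incomplete examples
        if len(summ.split()) > max_summary_words:
            continue  # keep only short, XSum-like summaries
        pairs.append((truncate_words(doc, max_doc_words), summ))
    return pairs
```

In practice, truncation and summary-length filtering like this interact with the model's behavior: if inputs are cut too aggressively, the opening sentences dominate the context, which may contribute to the extractive behavior noted above.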