---
language:
- id
license: apache-2.0
library_name: transformers
tags:
- generated_from_keras_callback
datasets:
- csebuetnlp/xlsum
- id_liputan6
metrics:
- rouge
pipeline_tag: summarization
base_model: thonyyy/pegasus_indonesian_base-pretrain
model-index:
- name: pegasus_indonesian_base-finetune
results: []
---
# pegasus_indonesian_base-finetune
GitHub: [PEGASUSAnthony](https://github.com/nicholaswilven/PEGASUSAnthony/tree/master)

This model is a fine-tuned version of [pegasus_indonesian_base-pretrain](https://huggingface.co/thonyyy/pegasus_indonesian_base-pretrain) on the [Indosum](https://paperswithcode.com/dataset/indosum), [Liputan6](https://paperswithcode.com/dataset/liputan6), and [XLSum](https://huggingface.co/datasets/csebuetnlp/xlsum) datasets.

To try the model quickly, open [this notebook](https://colab.research.google.com/drive/1b55Xidp0eVwSNyaSqQuu-bYi-M4itVqF?usp=sharing).
## Intended uses & limitations
This model is uncased, cannot handle special characters other than "," and ".", and may not perform well on text outside the news domain.
## Performance
| Dataset | ROUGE-1 | ROUGE-2 | ROUGE-L | BERTScore |
| ---- | ---- | ---- | ---- | ---- |
| Indosum | 52.43 | 41.23 | 48.18 | 80.68 |
| Liputan6 | 38.27 | 20.22 | 31.26 | 76.31 |
| XLSum | 26.97 | 9.99 | 21.70 | 73.62 |
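These figures come from the project's own evaluation. As a rough sketch (not the original evaluation script), comparable ROUGE and BERTScore numbers could be computed with the `evaluate` library; the strings below are placeholders:
```python
# Hedged sketch: scoring summaries with the `evaluate` library.
# The prediction/reference strings are placeholders, not real model outputs.
import evaluate

predictions = ["ringkasan yang dihasilkan model"]  # model summaries
references = ["ringkasan referensi"]               # gold summaries

rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=references))

bertscore = evaluate.load("bertscore")
print(bertscore.compute(predictions=predictions, references=references, lang="id"))
```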
## Training and evaluation data
Fine-tuning datasets (a loading example follows the list):
1. [Indosum](https://paperswithcode.com/dataset/indosum)
2. [Liputan6](https://paperswithcode.com/dataset/liputan6)
3. [XLSum](https://huggingface.co/datasets/csebuetnlp/xlsum)
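As a minimal sketch, the Indonesian portion of XLSum can be loaded with the `datasets` library; the `"indonesian"` config name is taken from the dataset card:
```python
from datasets import load_dataset

# Indonesian subset of XLSum; each example has "text" (article) and "summary".
xlsum_id = load_dataset("csebuetnlp/xlsum", "indonesian")
print(xlsum_id["train"][0]["summary"])
```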
## Usage
```python
# Load model and tokenizer
from transformers import TFPegasusForConditionalGeneration, PegasusTokenizerFast
model_name = "thonyyy/pegasus_indonesian_base-finetune"
model = TFPegasusForConditionalGeneration.from_pretrained(model_name)
tokenizer = PegasusTokenizerFast.from_pretrained(model_name)
# Text-cleaning helper: removes links, bullet points, non-ASCII characters,
# parenthesized content, punctuation other than "," and ".", dots used for
# numbering, extra whitespace, and sentences shorter than ~10 characters.
import re
import unicodedata

def text_cleaning(input_string):
    lowercase = input_string.lower()
    # Strip URLs and decode the "&amp;" HTML entity
    remove_link = re.sub(r'(https?:\/\/)?([\da-z\.-]+)\.([a-z\.]{2,6})([\/\w\.-]*)', '', lowercase).replace("&amp;", "&")
    # Drop bullet-point lines and "baca juga:" ("read also:") boilerplate
    remove_bullet = "\n".join([T for T in remove_link.split('\n') if '•' not in T and "baca juga:" not in T])
    # Transliterate accented characters to plain ASCII
    remove_accented = unicodedata.normalize('NFKD', remove_bullet).encode('ascii', 'ignore').decode('utf-8', 'ignore')
    # Empty out parenthesized or pipe-delimited spans, keeping the delimiters
    remove_parentheses = re.sub(r"([\(\|]).*?([\)\|])", r"\g<1>\g<2>", remove_accented)
    # Replace punctuation other than "," and "." with a space
    remove_punc = re.sub(r"[^\w\d.,\s]+", ' ', remove_parentheses)
    # Remove dots attached to digits (enumeration markers such as "1.")
    remove_num_dot = re.sub(r"(?<=\d)\.|\.(?=\d)|(?<=#)\.", "", remove_punc)
    # Collapse runs of whitespace into single spaces
    remove_extra_whitespace = re.sub(r'^\s*|\s\s*', ' ', remove_num_dot).strip()
    # Keep only sentences longer than 10 characters and drop underscores
    return ".".join([s for s in remove_extra_whitespace.split('.') if len(s.strip()) > 10]).replace("_", "")
# Article to summarize
sample_article="""
Dana Moneter Internasional (IMF) menilai Indonesia telah menunjukkan pemulihan ekonomi yang baik pasca pandemi melalui kinerja makroekonomi yang kuat, didukung penerapan kebijakan moneter dan fiskal secara berhati-hati. Kebijakan forward looking dan sinergi telah berhasil membawa Indonesia menghadapi tantangan global pada tahun 2022 dengan pertumbuhan yang sehat, tekanan inflasi yang menurun, dan sistem keuangan yang stabil. Bank Indonesia menyambut baik hasil asesmen IMF atas perekonomian Indonesia dalam laporan Article IV Consultation tahun 2023 yang dirilis hari ini (26/6).
Dewan Direktur IMF menyampaikan apresiasi dan catatan positif terhadap berbagai kebijakan yang ditempuh otoritas Indonesia selama tahun 2022. Pertama, keberhasilan otoritas untuk kembali kepada batas maksimal defisit fiskal 3%, lebih cepat dari yang diperkirakan dan komitmen otoritas untuk menerapkan disiplin fiskal. Kedua, penerapan kebijakan moneter yang memadai untuk menjaga stabilitas harga. Ketiga, ketahanan sektor keuangan yang tetap terjaga. Keempat, penerapan UU Cipta Kerja serta UU Pengembangan dan Penguatan Sektor Keuangan, dengan memastikan implementasi yang tepat dan keberlanjutan momentum reformasi untuk mendorong kemudahan berinvestasi, meningkatkan pendalaman pasar keuangan, dan memitigasi dampak scarring dari pandemi. Kelima, strategi diversifikasi Indonesia yang fokus pada upaya hilirisasi dalam rangka meningkatkan nilai tambah ekspor. Keenam, komitmen otoritas untuk mengurangi emisi gas rumah kaca dan deforestasi.
"""
# Generate summary
x = tokenizer(text_cleaning(sample_article), return_tensors='tf')
y = model.generate(**x)
summary = tokenizer.batch_decode(y, skip_special_tokens=True)
print(summary)
```
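`model.generate` above uses the generation settings stored in the model config. Continuing from the snippet, the standard Transformers generation arguments can also be passed explicitly; the values below are illustrative assumptions, not the authors' settings:
```python
# Illustrative generation settings (assumed values, not tuned by the authors)
y = model.generate(
    **x,
    max_new_tokens=128,      # cap summary length
    num_beams=4,             # beam search instead of greedy decoding
    no_repeat_ngram_size=3,  # discourage repeated phrases
)
print(tokenizer.batch_decode(y, skip_special_tokens=True)[0])
```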
## Training procedure
For replication details, see the [GitHub repository](https://github.com/nicholaswilven/PEGASUSAnthony/tree/master).
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adafactor', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 0.00013661868, 'beta_2_decay': -0.8, 'epsilon_1': 1e-30, 'epsilon_2': 0.001, 'clip_threshold': 1.0, 'relative_step': True}
- training_precision: float32
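The optimizer dictionary above matches the argument names of Keras' Adafactor. A hedged reconstruction in TensorFlow 2.12 (a sketch; the original training script may differ) would be:
```python
import tensorflow as tf

# Sketch: rebuilding the optimizer from the hyperparameters listed above.
optimizer = tf.keras.optimizers.Adafactor(
    learning_rate=0.00013661868,
    beta_2_decay=-0.8,
    epsilon_1=1e-30,
    epsilon_2=0.001,
    clip_threshold=1.0,
    relative_step=True,
)
```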
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Learning Rate | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-------------:|:-----:|
| 2.3484 | 0.0859 | 1.6304 | 0.1080 | 0.00013661868 | 1 |
| 1.6196 | 0.1079 | 1.4097 | 0.1153 | 0.00013661868 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
### Special Thanks
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC).