---
license: mit
base_model: indobenchmark/indobert-large-p1
tags:
- generated_from_keras_callback
model-index:
- name: aditnnda/gacoan_reviewer
results: []
---
# aditnnda/gacoan_reviewer
This model is a fine-tuned version of [indobenchmark/indobert-large-p1](https://huggingface.co/indobenchmark/indobert-large-p1) on an unknown dataset.
It achieves the following results at the final training epoch:
- Train Loss: 0.0001
- Validation Loss: 0.4435
- Train Accuracy: 0.9386
- Epoch: 24
## Model description
More information needed. The repository name and the Indonesian IndoBERT base model suggest a classifier for Indonesian-language reviews, but the task and label set are not documented.
## Intended uses & limitations
More information needed
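That said, a minimal inference sketch is shown below, assuming the checkpoint carries a TensorFlow sequence-classification head (the model name and the accuracy metric suggest a classification task; this is an assumption, not documented behavior). The label names are unknown, so only the predicted class index is printed.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Assumption: the checkpoint has a sequence-classification head; the label
# set is undocumented, so the predicted index is reported without a name.
repo = "aditnnda/gacoan_reviewer"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

# Example Indonesian review ("The food is delicious and the service is fast!").
inputs = tokenizer("Makanannya enak dan pelayanannya cepat!", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))
```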
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: Adam (`beta_1=0.9`, `beta_2=0.999`, `epsilon=1e-08`, `amsgrad=False`; no weight decay, no gradient clipping, EMA disabled, `jit_compile=True`, non-legacy optimizer)
- learning rate: `PolynomialDecay` from 2e-05 to 0.0 over 3,550 steps (`power=1.0`, i.e. linear decay, no cycling)
- training_precision: float32
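
For reference, the schedule and optimizer above can be reconstructed in TensorFlow/Keras roughly as follows (a sketch from the serialized config, not the original training script):

```python
import tensorflow as tf

# Linear (power=1.0) decay from 2e-05 to 0 over 3,550 steps, no cycling.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=3550,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam as serialized above; weight decay and gradient clipping are disabled.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
    jit_compile=True,
)
```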
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2553 | 0.1732 | 0.9331 | 0 |
| 0.0938 | 0.1571 | 0.9400 | 1 |
| 0.0310 | 0.2345 | 0.9386 | 2 |
| 0.0138 | 0.3288 | 0.9358 | 3 |
| 0.0140 | 0.3345 | 0.9177 | 4 |
| 0.0033 | 0.3502 | 0.9386 | 5 |
| 0.0118 | 0.3387 | 0.9344 | 6 |
| 0.0269 | 0.4487 | 0.9024 | 7 |
| 0.0188 | 0.3228 | 0.9331 | 8 |
| 0.0017 | 0.3581 | 0.9372 | 9 |
| 0.0020 | 0.4125 | 0.9233 | 10 |
| 0.0021 | 0.4143 | 0.9247 | 11 |
| 0.0011 | 0.4353 | 0.9303 | 12 |
| 0.0002 | 0.4285 | 0.9344 | 13 |
| 0.0005 | 0.4350 | 0.9344 | 14 |
| 0.0002 | 0.4340 | 0.9344 | 15 |
| 0.0002 | 0.4026 | 0.9400 | 16 |
| 0.0001 | 0.4123 | 0.9414 | 17 |
| 0.0001 | 0.4228 | 0.9414 | 18 |
| 0.0001 | 0.4294 | 0.9386 | 19 |
| 0.0001 | 0.4385 | 0.9386 | 20 |
| 0.0001 | 0.4411 | 0.9386 | 21 |
| 0.0001 | 0.4423 | 0.9386 | 22 |
| 0.0001 | 0.4431 | 0.9386 | 23 |
| 0.0001 | 0.4435 | 0.9386 | 24 |
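
The trajectory shows a classic overfitting pattern: train loss collapses to 0.0001 by epoch 17 while validation loss climbs from 0.1571 at epoch 1 to 0.4435 at epoch 24, with accuracy roughly flat around 0.93–0.94. (Given the near-zero train loss, the "Train Accuracy" column most likely reports validation accuracy despite its header.) By validation loss, the epoch-1 weights would be the better checkpoint, if intermediate checkpoints were saved.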
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.0
- Tokenizers 0.15.0