---
license: apache-2.0
tags:
- classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
base_model: bert-base-uncased
model-index:
- name: sentence-acceptability
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: glue
      type: glue
      config: cola
      split: validation
      args: cola
    metrics:
    - type: accuracy
      value: 0.8216682646212847
      name: Accuracy
---
# sentence-acceptability
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the CoLA subset of the GLUE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8257
- Accuracy: 0.8217
## Model description
This model classifies English sentences with two labels: 1 if the sentence is grammatically acceptable and 0 if it is not.
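A minimal inference sketch with the `transformers` pipeline is shown below; the repository id `your-username/sentence-acceptability` is a placeholder, and the raw output labels are `LABEL_0`/`LABEL_1` unless `id2label` was customized at training time.

```python
# Minimal inference sketch for the acceptability classifier.
# "your-username/sentence-acceptability" is a hypothetical repo id.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-username/sentence-acceptability",
)

# LABEL_1 = grammatically acceptable, LABEL_0 = unacceptable
# (assuming the default label mapping was kept).
print(classifier("The book was written by the author."))
print(classifier("The book was wrote by author the."))
```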
## Training and evaluation data
The model was trained on the CoLA ("cola") configuration of the GLUE dataset, using the 8,551 instances of its "train" split.
For evaluation, the 1,043 sentences of the "validation" split were used.
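The same data can be loaded with the `datasets` library; the sketch below only assumes the "glue"/"cola" identifiers already listed in the card's metadata.

```python
# Load the CoLA configuration of GLUE and check the split sizes.
from datasets import load_dataset

cola = load_dataset("glue", "cola")
print(cola["train"].num_rows)        # 8551
print(cola["validation"].num_rows)   # 1043
print(cola["train"][0])              # {'sentence': ..., 'label': ..., 'idx': 0}
```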
### Training hyperparameters
The following hyperparameters were used during training (a matching configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
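The sketch below maps the listed hyperparameters onto `TrainingArguments`; the output directory and any settings not named above are assumptions, not taken from the card.

```python
# Sketch of TrainingArguments matching the hyperparameters above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sentence-acceptability",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    # The Adam betas/epsilon listed above are the Transformers defaults
    # (adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8).
)
```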
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4868 | 1.0 | 1069 | 0.6279 | 0.7862 |
| 0.3037 | 2.0 | 2138 | 0.6184 | 0.8140 |
| 0.1770        | 3.0   | 3207 | 0.8257          | 0.8217   |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2