---
language:
  - en
license: mit
library_name: transformers
tags:
  - generated_from_trainer
datasets:
  - squad_v2
metrics:
  - exact_match
  - f1
base_model: roberta-base
model-index:
  - name: dangkhoa99/roberta-base-finetuned-squad-v2
    results: []
---

# roberta-base-finetuned-squad-v2

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset. It achieves the following results on the evaluation set:

- Loss: 0.9173
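
The checkpoint can be used directly for extractive question answering with the `transformers` pipeline. A minimal sketch (the question and context strings are illustrative, not from the training data):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an extractive QA pipeline.
qa = pipeline(
    "question-answering",
    model="dangkhoa99/roberta-base-finetuned-squad-v2",
)

# Illustrative inputs; handle_impossible_answer=True lets the pipeline
# return an empty answer for SQuAD 2.0-style unanswerable questions.
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model was fine-tuned on SQuAD 2.0, which extends "
            "SQuAD 1.1 with unanswerable questions.",
    handle_impossible_answer=True,
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```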

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
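
A hypothetical reconstruction of this configuration with `transformers.TrainingArguments`; the `output_dir` name is a placeholder, and the Adam settings shown are the `Trainer` defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-finetuned-squad-v2",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,    # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,  # and epsilon=1e-8 are the
    adam_epsilon=1e-8, # Trainer defaults
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```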

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8796        | 1.0   | 8239  | 0.8010          |
| 0.6474        | 2.0   | 16478 | 0.8260          |
| 0.5056        | 3.0   | 24717 | 0.9173          |

## Performance

Evaluated on the SQuAD 2.0 dev set with the `QuestionAnsweringEvaluator` from the `evaluate` library:

| Metric                | Value              |
|:----------------------|-------------------:|
| exact                 | 80.28299503074201  |
| f1                    | 83.54728996177538  |
| total                 | 11873              |
| HasAns_exact          | 78.77867746288798  |
| HasAns_f1             | 85.31662849462904  |
| HasAns_total          | 5928               |
| NoAns_exact           | 81.7830109335576   |
| NoAns_f1              | 81.7830109335576   |
| NoAns_total           | 5945               |
| best_exact            | 80.28299503074201  |
| best_exact_thresh     | 0.9989414811134338 |
| best_f1               | 83.54728996177576  |
| best_f1_thresh        | 0.9989414811134338 |
| total_time_in_seconds | 220.1965392809998  |
| samples_per_second    | 53.92001181657305  |
| latency_in_seconds    | 0.01854599000092645 |
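
These numbers can be reproduced along the following lines with the `evaluate` library's question-answering evaluator (a sketch, not necessarily the exact evaluation script used here):

```python
from datasets import load_dataset
from evaluate import evaluator

# Build the QA evaluator and load the SQuAD 2.0 dev split.
task_evaluator = evaluator("question-answering")
dev = load_dataset("squad_v2", split="validation")

# squad_v2_format=True scores unanswerable questions with the
# squad_v2 metric, matching the HasAns/NoAns breakdown above.
results = task_evaluator.compute(
    model_or_pipeline="dangkhoa99/roberta-base-finetuned-squad-v2",
    data=dev,
    metric="squad_v2",
    squad_v2_format=True,
)
print(results)
```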

### Framework versions

- Transformers 4.30.2
- PyTorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3