---
license: apache-2.0
language: pt
library_name: transformers
tags:
  - question-answering
datasets:
  - eraldoluis/faquad
metrics:
  - squad
model-index:
  - name: faquad-bert-base-portuguese-cased
    results:
      - task:
          type: extractive-qa
          name: Extractive Question-Answering
        dataset:
          type: eraldoluis/faquad
          name: FaQuAD
          split: eval
        metrics:
          - type: squad
            value:
              - f1: 83.0912959832023
                exact_match: 74.53169347209082
            name: Eval F1 and ExactMatch
            verified: false
---

tmp_exs_faquad

This model is a fine-tuned version of neuralmind/bert-base-portuguese-cased on the FaQuAD dataset.

Model description

More information needed

Intended uses & limitations

More information needed
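
As a sketch of intended use (not confirmed by the card), the model can presumably be loaded for extractive question answering with the standard transformers pipeline. The repo id below is an assumption, inferred from the author name and the model-index name; the context and question are made-up illustrations.

```python
from transformers import pipeline

# Hypothetical repo id, assumed from the card's author and model-index name.
qa = pipeline(
    "question-answering",
    model="eraldoluis/faquad-bert-base-portuguese-cased",
)

# Made-up Portuguese example in the style of FaQuAD (university-domain QA).
context = (
    "A Universidade Federal de Mato Grosso do Sul é uma instituição "
    "de ensino superior pública brasileira."
)
result = qa(question="O que é a Universidade Federal de Mato Grosso do Sul?",
            context=context)
print(result["answer"], result["score"])
```

The pipeline returns a dict with the extracted answer span, its confidence score, and its character offsets in the context.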

Training and evaluation data

The model was trained on the FaQuAD train split and evaluated on its eval split.
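
The SQuAD metrics reported above (token-level F1 and exact match) can be sketched in pure Python, following the normalization used by the original SQuAD evaluation script. Note the article-stripping step is English-specific; the script is shown here as commonly implemented, not as adapted for Portuguese.

```python
import re
import string
from collections import Counter


def normalize_answer(s: str) -> str:
    """Lowercase, drop punctuation and (English) articles, squeeze whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)  # English-only step from the SQuAD script
    return " ".join(s.split())


def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(gold))


def f1_score(prediction: str, gold: str) -> float:
    """Token-overlap F1 between normalized prediction and gold answer."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

In the full benchmark, each score is taken as the maximum over the gold answers for a question and then averaged over the dataset.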

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 64
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3.0
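
As a sketch, these hyperparameters map onto a transformers TrainingArguments configuration along the following lines (the output directory is hypothetical; the other values mirror the bullets above):

```python
from transformers import TrainingArguments

# Hypothetical output dir; remaining values mirror the card's hyperparameters.
args = TrainingArguments(
    output_dir="tmp_exs_faquad",
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```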

Training results

On the FaQuAD eval split, the model reaches (see the metadata above):

  • F1: 83.09
  • Exact match: 74.53

Framework versions

  • Transformers 4.21.3
  • Pytorch 1.12.1+cu113
  • Datasets 2.4.0
  • Tokenizers 0.12.1