
NER-finetuning-BERT

This model is a fine-tuned version of google-bert/bert-base-cased for Named Entity Recognition (NER), trained on the CONLL2002 dataset. It achieved the following results (a sketch of how such metrics are typically computed follows the list):

  • Precision: 0.8265
  • Recall: 0.8443
  • F1: 0.8353
  • Accuracy: 0.9786
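
The card does not say how these numbers were produced. For CONLL-style NER fine-tunes, precision, recall, and F1 are usually computed at the entity level with seqeval, and accuracy at the token level. The following is a minimal sketch under that assumption; the label list is the CONLL2002 IOB2 tag set.

```python
import evaluate
import numpy as np

# Assumption: entity-level metrics via the seqeval wrapper from the
# `evaluate` library, the usual choice for CONLL-style NER fine-tunes.
seqeval = evaluate.load("seqeval")

# The CONLL2002 tag set (IOB2 scheme).
label_names = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
               "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def compute_metrics(eval_pred):
    """Suitable as the `compute_metrics` callback of a Trainer."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Positions labelled -100 (special tokens, sub-word pieces) are skipped.
    true_labels = [[label_names[l] for l in row if l != -100] for row in labels]
    true_preds = [
        [label_names[p] for p, l in zip(pred_row, label_row) if l != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```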

Model description

This model is a fine-tuned version of the bert-base-cased pre-trained model, tailored specifically for Named Entity Recognition (NER). BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based model that understands the context of a word by attending to both its left and right neighbours in a sentence. The cased variant distinguishes uppercase from lowercase letters, preserving case information that is crucial for NER: capitalization is a strong cue that a token belongs to a proper name such as a person, organization, or location.
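
The card does not include a usage snippet. As a minimal sketch, a fine-tuned checkpoint like this one can be loaded with the Transformers pipeline API; the model id below is a placeholder for the actual Hub repository, not a name taken from the card.

```python
from transformers import pipeline

# Placeholder model id; substitute the actual Hub repository of this card.
ner = pipeline(
    "token-classification",
    model="NER-finetuning-BERT",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

# CONLL2002 covers Spanish (and Dutch), so a Spanish sentence is a natural test.
print(ner("Gabriel García Márquez nació en Aracataca, Colombia."))
# Illustrative output: entity groups such as PER for "Gabriel García Márquez"
# and LOC for "Aracataca" and "Colombia".
```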

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a training sketch follows the list):

  • evaluation_strategy: epoch
  • save_strategy: epoch
  • learning_rate: 2e-5
  • num_train_epochs: 4
  • per_device_train_batch_size: 16
  • weight_decay: 0.01
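
A minimal fine-tuning sketch consistent with these hyperparameters is shown below. Everything beyond the listed values is an assumption: the card states neither the CONLL2002 language configuration (Spanish is used here), the output directory name, nor the exact preprocessing, so the label-alignment helper is a typical choice rather than the author's code.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

# Assumptions: the Spanish configuration of CONLL2002 and the output
# directory name; the card states neither.
dataset = load_dataset("conll2002", "es")
label_names = dataset["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "google-bert/bert-base-cased", num_labels=len(label_names)
)

def tokenize_and_align(batch):
    # Tokenize pre-split words and align NER tags to sub-word tokens;
    # special tokens and continuation pieces get -100 so the loss skips them.
    tokenized = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        labels, previous = [], None
        for word_id in tokenized.word_ids(batch_index=i):
            labels.append(
                -100 if word_id is None or word_id == previous else tags[word_id]
            )
            previous = word_id
        all_labels.append(labels)
    tokenized["labels"] = all_labels
    return tokenized

tokenized = dataset.map(
    tokenize_and_align, batched=True, remove_columns=dataset["train"].column_names
)

args = TrainingArguments(
    output_dir="NER-finetuning-BERT",  # assumed name
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=4,
    per_device_train_batch_size=16,
    weight_decay=0.01,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    data_collator=DataCollatorForTokenClassification(tokenizer),
    # compute_metrics from the metric sketch above can be plugged in here.
)
trainer.train()
```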

Training results

Epoch   Training Loss   Validation Loss
  1        0.005700        0.258581
  2        0.004600        0.248794
  3        0.002800        0.257513
  4        0.002100        0.275097

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1