---
language:
  - es
license: apache-2.0
datasets:
  - eriktks/conll2002
metrics:
  - precision
  - recall
  - f1
  - accuracy
pipeline_tag: token-classification
---

# NER-finetuned-BETO

This is a BERT model fine-tuned for Named Entity Recognition (NER).

## Model Description

This is a BERT model fine-tuned for the Named Entity Recognition (NER) task on the CoNLL-2002 dataset.

First, the dataset is pre-processed so it can be fed to the model, using the 🤗 Transformers library and the BERT tokenizer. The model is then fine-tuned from bert-base-cased with the 🤗 AutoModelForTokenClassification class.
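
A minimal sketch of this pre-processing step is shown below, assuming the Spanish configuration (`es`) of `eriktks/conll2002` and the `bert-base-cased` tokenizer mentioned above; this is illustrative, not the exact training script:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load the Spanish portion of CoNLL-2002 and the base tokenizer
dataset = load_dataset("eriktks/conll2002", "es")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize_and_align_labels(examples):
    # Words are split into sub-word pieces, so NER labels must be re-aligned
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, labels in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        aligned, previous = [], None
        for word_id in word_ids:
            if word_id is None:          # special tokens ([CLS], [SEP])
                aligned.append(-100)
            elif word_id != previous:    # first sub-word keeps the word's label
                aligned.append(labels[word_id])
            else:                        # remaining sub-words are ignored by the loss
                aligned.append(-100)
            previous = word_id
        all_labels.append(aligned)
    tokenized["labels"] = all_labels
    return tokenized

tokenized_dataset = dataset.map(tokenize_and_align_labels, batched=True)
```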

Finally, the model is trained and evaluated with the metrics needed to assess its performance (precision, recall, F1, and accuracy).
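
Continuing the sketch above, these metrics can be computed with the `seqeval` metric from the 🤗 `evaluate` library; `dataset` is assumed to come from the pre-processing step:

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
label_names = dataset["train"].features["ner_tags"].feature.names  # e.g. O, B-PER, I-PER, ...

def compute_metrics(eval_preds):
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)
    # Drop the -100 positions (special tokens / continuation sub-words)
    true_labels = [[label_names[l] for l in row if l != -100] for row in labels]
    true_preds = [
        [label_names[p] for p, l in zip(pred_row, label_row) if l != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```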

A summary of the executed tests can be found at: https://docs.google.com/spreadsheets/d/1lI7skNIvRurwq3LA5ps7JFK5TxToEx4s7Kaah3ezyQc/edit?usp=sharing

The model can be found at: https://huggingface.co/paulrojasg/bert-finetuned-ner-1
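
A quick way to try the checkpoint is the `token-classification` pipeline; the snippet below assumes the repository linked above and uses an arbitrary Spanish example sentence:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="paulrojasg/bert-finetuned-ner-1",  # checkpoint linked above
    aggregation_strategy="simple",            # merge sub-word pieces into whole entities
)

print(ner("Gabriel García Márquez nació en Aracataca, Colombia."))
```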

GitHub repository: https://github.com/paulrojasg/nlp_4th_workshop

## Training

### Training Details

- Epochs: 5
- Learning Rate: 2e-05
- Weight Decay: 0.01
- Batch Size (Train): 16
- Batch Size (Eval): 8
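
A minimal sketch of how these hyperparameters map onto 🤗 `TrainingArguments` and the `Trainer` (reusing `tokenizer`, `tokenized_dataset`, `label_names`, and `compute_metrics` from the sketches above; exact argument names may vary slightly between Transformers versions):

```python
from transformers import (AutoModelForTokenClassification, DataCollatorForTokenClassification,
                          TrainingArguments, Trainer)

model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(label_names)
)

args = TrainingArguments(
    output_dir="bert-finetuned-ner",
    num_train_epochs=5,
    learning_rate=2e-5,
    weight_decay=0.01,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    evaluation_strategy="epoch",   # report validation metrics once per epoch
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```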

### Training Metrics

| Epoch | Training Loss | Validation Loss | Precision | Recall | F1 Score | Accuracy |
|:-----:|:-------------:|:---------------:|:---------:|:------:|:--------:|:--------:|
| 1 | 0.0507 | 0.1354 | 0.8310 | 0.8518 | 0.8413 | 0.9700 |
| 2 | 0.0292 | 0.1598 | 0.8331 | 0.8433 | 0.8382 | 0.9684 |
| 3 | 0.0172 | 0.1565 | 0.8392 | 0.8550 | 0.8470 | 0.9705 |
| 4 | 0.0136 | 0.1812 | 0.8456 | 0.8534 | 0.8495 | 0.9698 |
| 5 | 0.0088 | 0.1861 | 0.8395 | 0.8543 | 0.8468 | 0.9699 |

## Authors

Made by:

- Paul Rodrigo Rojas Guerrero
- Jose Luis Hincapie Bucheli
- Sebastián Idrobo Avirama

With help from: