
Model Card for roberta-base-multinerd

The RoBERTa-base model was fine-tuned on 50% of the English-only training split of the MultiNERD dataset and later evaluated on the full test split of the same.
The fine-tuning script can be fetched from fintuning.py.

Various other models were tested on the same selection of the dataset, and the best checkpoint was uploaded. A detailed configuration summary can be found in the Appendix section of the report.

Model Details

Model Description

Head over to the GitHub repo for all the scripts used to fine-tune and evaluate the token-classification model. The model is ready to use on Kaggle too!

  • Developed by: Jayant Yadav

Uses

Token classification of the following entity types is possible:

| Class | Description | Examples |
|---|---|---|
| PER (person) | People | Ray Charles, Jessica Alba, Leonardo DiCaprio, Roger Federer, Anna Massey. |
| ORG (organization) | Associations, companies, agencies, institutions, nationalities and religious or political groups | University of Edinburgh, San Francisco Giants, Google, Democratic Party. |
| LOC (location) | Physical locations (e.g. mountains, bodies of water), geopolitical entities (e.g. cities, states), and facilities (e.g. bridges, buildings, airports). | Rome, Lake Paiku, Chrysler Building, Mount Rushmore, Mississippi River. |
| ANIM (animal) | Breeds of dogs, cats and other animals, including their scientific names. | Maine Coon, African Wild Dog, Great White Shark, New Zealand Bellbird. |
| BIO (biological) | Genus of fungus, bacteria and protoctists, families of viruses, and other biological entities. | Herpes Simplex Virus, Escherichia Coli, Salmonella, Bacillus Anthracis. |
| CEL (celestial) | Planets, stars, asteroids, comets, nebulae, galaxies and other astronomical objects. | Sun, Neptune, Asteroid 187 Lamberta, Proxima Centauri, V838 Monocerotis. |
| DIS (disease) | Physical, mental, infectious, non-infectious, deficiency, inherited, degenerative, social and self-inflicted diseases. | Alzheimer’s Disease, Cystic Fibrosis, Dilated Cardiomyopathy, Arthritis. |
| EVE (event) | Sport events, battles, wars and other events. | American Civil War, 2003 Wimbledon Championships, Cannes Film Festival. |
| FOOD (food) | Foods and drinks. | Carbonara, Sangiovese, Cheddar Beer Fondue, Pizza Margherita. |
| INST (instrument) | Technological instruments, mechanical instruments, musical instruments, and other tools. | Spitzer Space Telescope, Commodore 64, Skype, Apple Watch, Fender Stratocaster. |
| MEDIA (media) | Titles of films, books, magazines, songs and albums, fictional characters and languages. | Forbes, American Psycho, Kiss Me Once, Twin Peaks, Disney Adventures. |
| PLANT (plant) | Types of trees, flowers, and other plants, including their scientific names. | Salix, Quercus Petraea, Douglas Fir, Forsythia, Artemisia Maritima. |
| MYTH (mythological) | Mythological and religious entities. | Apollo, Persephone, Aphrodite, Saint Peter, Pope Gregory I, Hercules. |
| TIME (time) | Specific and well-defined time intervals, such as eras, historical periods, centuries, years and important days. No months and days of the week. | Renaissance, Middle Ages, Christmas, Great Depression, 17th Century, 2012. |
| VEHI (vehicle) | Cars, motorcycles and other vehicles. | Ferrari Testarossa, Suzuki Jimny, Honda CR-X, Boeing 747, Fairey Fulmar. |
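
The exact label names exposed by the checkpoint can be inspected from the model configuration. The snippet below is a minimal sketch; it assumes the labels follow the usual BIO tagging scheme (e.g. B-PER, I-PER, O), which you can confirm from the printed mapping:

from transformers import AutoConfig

# Load only the configuration to inspect the checkpoint's label set
config = AutoConfig.from_pretrained("jayant-yadav/roberta-base-multinerd")

# id2label maps class indices to tag names (assumed BIO-style, e.g. B-PER / I-PER / O)
print(config.id2label)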

Bias, Risks, and Limitations

The model was trained only on the English split of the MultiNERD dataset; therefore, it will not perform well on other languages.

How to Get Started with the Model

Use the code below to get started with the model:

from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# Load the fine-tuned tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("jayant-yadav/roberta-base-multinerd")
model = AutoModelForTokenClassification.from_pretrained("jayant-yadav/roberta-base-multinerd")

# Build a token-classification (NER) pipeline
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Wolfgang and I live in Berlin"

# Run inference and print the predicted entities
ner_results = nlp(example)
print(ner_results)
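
By default the pipeline returns one prediction per sub-word token. If you prefer whole entity spans, the pipeline also accepts an aggregation_strategy argument in recent transformers releases; for example:

# Optionally merge sub-word tokens into complete entity spans
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(nlp("My name is Wolfgang and I live in Berlin"))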

Training Details

Training Data

50% of the train split of the MultiNERD dataset was used to fine-tune the model.

Training Procedure

Preprocessing

The dataset was filtered to keep only the English examples: train_dataset = train_dataset.filter(lambda x: x['lang'] == 'en')
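
For reference, a minimal preprocessing sketch along these lines, assuming the dataset is loaded from the Hub as Babelscape/multinerd and that the 50% subsample is taken from a shuffled English train split (the exact selection in the original script may differ):

from datasets import load_dataset

# Load the MultiNERD dataset (assumed Hub id: Babelscape/multinerd)
dataset = load_dataset("Babelscape/multinerd")

# Keep only the English examples
train_dataset = dataset["train"].filter(lambda x: x["lang"] == "en")

# Take 50% of the shuffled English training split, as described above
train_dataset = train_dataset.shuffle(seed=42)
train_dataset = train_dataset.select(range(len(train_dataset) // 2))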

Training Hyperparameters

The following hyperparameters were used during training:

learning_rate: 5e-05
train_batch_size: 32
eval_batch_size: 32
seed: 42
optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
lr_scheduler_type: linear
lr_scheduler_warmup_ratio: 0.1
num_epochs: 1
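
A minimal TrainingArguments sketch mirroring the values above; the original fintuning.py script may differ in details such as the output directory, logging and evaluation settings (the optimizer betas and epsilon listed above are the transformers defaults):

from transformers import TrainingArguments

# Hyperparameters mirroring the list above; output_dir is a placeholder
training_args = TrainingArguments(
    output_dir="roberta-base-multinerd",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
)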

Evaluation

Evaluation was performed on 50% of the evaluation split of the MultiNERD dataset.

Testing Data & Metrics

Testing Data

Tested on the full test split of the MultiNERD dataset.

Metrics

Model versions and checkpoints were evaluated using F1, precision and recall.
For this, the seqeval metric was used: metric = load_metric("seqeval").
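
For reference, a minimal compute_metrics sketch around seqeval that can be passed to a Trainer; it assumes label_list holds the string tags in label-id order, and the original evaluation script may differ in detail:

import numpy as np
from datasets import load_metric

metric = load_metric("seqeval")

# Placeholder tag list; use the dataset's real tag list in label-id order
label_list = ["O", "B-PER", "I-PER"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Drop special tokens (label id -100) and map ids back to tag strings
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
    }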

Results

| Entity | Precision | Recall | F1 score | Support |
|---|---|---|---|---|
| ANIM | 0.71 | 0.77 | 0.739 | 1604 |
| BIO | 0.5 | 0.125 | 0.2 | 8 |
| CEL | 0.738 | 0.756 | 0.746 | 41 |
| DIS | 0.737 | 0.772 | 0.754 | 759 |
| EVE | 0.952 | 0.968 | 0.960 | 352 |
| FOOD | 0.679 | 0.545 | 0.605 | 566 |
| INST | 0.75 | 0.75 | 0.75 | 12 |
| LOC | 0.994 | 0.991 | 0.993 | 12024 |
| MEDIA | 0.940 | 0.969 | 0.954 | 458 |
| ORG | 0.977 | 0.981 | 0.979 | 3309 |
| PER | 0.992 | 0.995 | 0.993 | 5265 |
| PLANT | 0.617 | 0.730 | 0.669 | 894 |
| MYTH | 0.647 | 0.687 | 0.666 | 32 |
| TIME | 0.825 | 0.820 | 0.822 | 289 |
| VEHI | 0.812 | 0.812 | 0.812 | 32 |
| Overall | 0.939 | 0.947 | 0.943 | |

Technical Specifications

Model Architecture and Objective

Follows the same architecture and objective as RoBERTa-base, with a token-classification head on top.

Compute Infrastructure

Hardware

Kaggle - GPU T4x2
Google Colab - GPU T4x1

Software

pandas==1.5.3
numpy==1.23.5
seqeval==1.2.2
datasets==2.15.0
huggingface_hub==0.19.4
transformers[torch]==4.35.2
evaluate==0.4.1
matplotlib==3.7.1
collections
torch==2.0.0

Model Card Contact

jayant-yadav
