
SpanMarker with roberta-base on my-data

This is a SpanMarker model for Named Entity Recognition. It uses roberta-base as its underlying encoder.

Model Details

Model Description

  • Model Type: SpanMarker
  • Encoder: roberta-base
  • Maximum Sequence Length: 256 tokens
  • Maximum Entity Length: 8 words
  • Language: en
  • License: cc-by-sa-4.0

Model Sources

Model Labels

| Label | Examples |
|:------|:---------|
| Data | "Depth time - series", "an overall mitochondrial", "defect" |
| Material | "the subject 's fibroblasts", "COXI , COXII and COXIII subunits", "cross - shore measurement locations" |
| Method | "in vitro", "EFSA", "an approximation" |
| Process | "a significant reduction of synthesis", "translation", "intake" |

Evaluation

Metrics

| Label | Precision | Recall | F1 |
|:---------|:----------|:-------|:-------|
| all | 0.6935 | 0.6732 | 0.6832 |
| Data | 0.6348 | 0.5979 | 0.6158 |
| Material | 0.7688 | 0.7612 | 0.7650 |
| Method | 0.4286 | 0.4500 | 0.4390 |
| Process | 0.6985 | 0.6780 | 0.6881 |
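As a quick sanity check, each F1 value in the table follows from its precision and recall columns via the harmonic mean. A minimal sketch using the "all" row:

```python
# F1 is the harmonic mean of precision and recall: F1 = 2PR / (P + R)
precision, recall = 0.6935, 0.6732
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # → 0.6832, matching the "all" row above
```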

Uses

Direct Use for Inference

```python
from span_marker import SpanMarkerModel

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("zhang19991111/roberta-base-spanmarker-STEM-NER")
# Run inference
entities = model.predict("In situ Peak Force Tapping AFM was employed for determining morphology and nano - mechanical properties of the surface layer .")
```
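`predict` returns a list of dictionaries, one per detected span. The sketch below groups a toy result by label; the exact keys shown ("span", "label", "score") follow SpanMarker's typical output format and the example entities are illustrative, not actual model output:

```python
from collections import defaultdict

# Toy predictions in the shape SpanMarker typically returns (assumed keys)
entities = [
    {"span": "Peak Force Tapping AFM", "label": "Method", "score": 0.91},
    {"span": "morphology", "label": "Data", "score": 0.78},
    {"span": "the surface layer", "label": "Material", "score": 0.85},
]

# Group the detected spans by their predicted label
by_label = defaultdict(list)
for ent in entities:
    by_label[ent["label"]].append(ent["span"])

print(dict(by_label))
```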

Downstream Use

You can finetune this model on your own dataset.

```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("zhang19991111/roberta-base-spanmarker-STEM-NER")

# Specify a dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # For example CoNLL2003

# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
    model=model,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("span_marker_model_id-finetuned")
```

Training Details

Training Set Metrics

| Training set | Min | Median | Max |
|:----------------------|:----|:--------|:----|
| Sentence length | 3 | 25.6049 | 106 |
| Entities per sentence | 0 | 5.2439 | 22 |
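These statistics are simple per-sentence aggregates over the tokenized training set. A minimal sketch of how the sentence-length row could be computed (the two sentences here are toy data, not the actual my-data corpus):

```python
from statistics import median

# Toy tokenized sentences in the "tokens" / "ner_tags" format used for training
sentences = [
    {"tokens": ["Depth", "time", "-", "series"], "ner_tags": [1, 1, 1, 1]},
    {"tokens": ["Translation", "was", "reduced", "in", "vitro", "."], "ner_tags": [1, 0, 0, 2, 2, 0]},
]

# Min / median / max sentence length, as reported in the table above
lengths = [len(s["tokens"]) for s in sentences]
print(min(lengths), median(lengths), max(lengths))  # → 4 5.0 6
```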

Training Hyperparameters

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10
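With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate ramps from 0 to 5e-05 over the first 10% of training steps, then decays linearly to 0. A minimal sketch of that schedule in plain Python (the total step count of 1500 is illustrative, chosen to match the ~150 steps/epoch implied by the training results below):

```python
def linear_lr(step, total_steps, base_lr=5e-05, warmup_ratio=0.1):
    """Linear warmup followed by linear decay, as configured above."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Warmup phase: ramp linearly from 0 up to base_lr
        return base_lr * step / max(1, warmup_steps)
    # Decay phase: ramp linearly from base_lr down to 0 at the last step
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr(150, 1500))   # → 5e-05 (peak, at the end of warmup)
print(linear_lr(1500, 1500))  # → 0.0 (final step)
```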

Training Results

| Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|:-------|:-----|:----------------|:---------------------|:------------------|:--------------|:--------------------|
| 2.0134 | 300 | 0.0540 | 0.6882 | 0.5687 | 0.6228 | 0.7743 |
| 4.0268 | 600 | 0.0546 | 0.6854 | 0.6737 | 0.6795 | 0.8092 |
| 6.0403 | 900 | 0.0599 | 0.6941 | 0.6927 | 0.6934 | 0.8039 |
| 8.0537 | 1200 | 0.0697 | 0.7096 | 0.6947 | 0.7020 | 0.8190 |

Framework Versions

  • Python: 3.10.12
  • SpanMarker: 1.5.0
  • Transformers: 4.36.2
  • PyTorch: 2.0.1+cu118
  • Datasets: 2.16.1
  • Tokenizers: 0.15.0

Citation

BibTeX

@software{Aarsen_SpanMarker,
    author = {Aarsen, Tom},
    license = {Apache-2.0},
    title = {{SpanMarker for Named Entity Recognition}},
    url = {https://github.com/tomaarsen/SpanMarkerNER}
}

Model tree for zhang19991111/roberta-base-spanmarker-STEM-NER
