Fine-Tuned Named Entity Recognition (NER) Model - SK_Morph_BLM (NER Tags)

Model Overview

This model is a fine-tuned version of the SK_Morph_BLM model for Named Entity Recognition (NER) in Slovak; tokenization is handled by the accompanying SKMorfoTokenizer. For this task, we used the manually annotated WikiGoldSK dataset, built from 412 articles of the Slovak Wikipedia. The dataset is annotated with four main entity categories: Person (PER), Location (LOC), Organization (ORG), and Miscellaneous (MISC).

NER Tags

Each token in the dataset is annotated with one of the following NER tags (a label-mapping sketch follows the list):

  • O (0): Regular text (not an entity)
  • B-PER (1): Beginning of a person entity
  • I-PER (2): Continuation of a person entity
  • B-LOC (3): Beginning of a location entity
  • I-LOC (4): Continuation of a location entity
  • B-ORG (5): Beginning of an organization entity
  • I-ORG (6): Continuation of an organization entity
  • B-MISC (7): Beginning of a miscellaneous entity
  • I-MISC (8): Continuation of a miscellaneous entity
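
For reference, these tags map to label ids as shown below; a minimal sketch in Python, assumed to mirror the id2label/label2id mappings stored in the model config:

# BIO tag inventory as id/label mappings (assumed to mirror the model config)
id2label = {
    0: "O",
    1: "B-PER", 2: "I-PER",
    3: "B-LOC", 4: "I-LOC",
    5: "B-ORG", 6: "I-ORG",
    7: "B-MISC", 8: "I-MISC",
}
label2id = {label: idx for idx, label in id2label.items()}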

Dataset Details

The WikiGoldSK dataset, which contains a total of 6,633 sequences, was adapted for this NER task. The dataset was originally split into training, validation, and test sets, but for our research, we combined all parts and evaluated the model using stratified 10-fold cross-validation. Each token in the text, including words and punctuation, was annotated with the appropriate NER tag.
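
The card does not state the stratification key used to balance the folds; below is a minimal sketch of the fold construction, where `sequences`, `tag_seqs`, and the per-sequence key are assumptions for illustration:

from sklearn.model_selection import StratifiedKFold

# sequences: list of token sequences; tag_seqs: matching list of NER tag sequences
# Hypothetical stratification key: the set of entity types a sequence contains
def strat_key(tags):
    kinds = {t.split("-")[-1] for t in tags if t != "O"}
    return "|".join(sorted(kinds)) or "O"

keys = [strat_key(tags) for tags in tag_seqs]
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for fold, (train_idx, eval_idx) in enumerate(skf.split(sequences, keys)):
    ...  # fine-tune on sequences[i] for i in train_idx; evaluate on eval_idx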

Fine-Tuning Hyperparameters

The following hyperparameters were used during the fine-tuning process (an illustrative TrainingArguments sketch follows the list):

  • Learning Rate: 3e-05
  • Training Batch Size: 64
  • Evaluation Batch Size: 64
  • Seed: 42
  • Optimizer: Adam (default)
  • Number of Epochs: 10
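
These settings map onto Hugging Face TrainingArguments roughly as follows (an illustrative sketch; the output directory and anything not in the list above are assumptions):

from transformers import TrainingArguments

# Settings from the list above; "Adam (default)" is read as the Trainer's
# default AdamW optimizer, and the output directory is a placeholder.
training_args = TrainingArguments(
    output_dir="./SK_Morph_BLM-ner",  # hypothetical path
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=10,
)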

Model Performance

The model was evaluated using stratified 10-fold cross-validation, achieving a median weighted F1-score of 0.9605 across the ten folds.
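
The reported figure corresponds to the median of per-fold weighted F1-scores; a sketch with scikit-learn, where `fold_results` is an assumed container of per-fold gold/predicted tag ids:

from statistics import median
from sklearn.metrics import f1_score

# fold_results: assumed list of ten (y_true, y_pred) pairs of tag ids,
# with padded/special positions already filtered out
fold_scores = [
    f1_score(y_true, y_pred, average="weighted")
    for y_true, y_pred in fold_results
]
print(f"Median weighted F1: {median(fold_scores):.4f}")  # card reports 0.9605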

Model Usage

This model is suitable for NER tasks on Slovak text. It is designed for applications requiring accurate identification and categorization of named entities across a variety of Slovak texts.

Example Usage

Below is an example of how to use the fine-tuned SK_Morph_BLM-ner model in a Python script:

import json
import sys

import torch
from huggingface_hub import hf_hub_download, snapshot_download
from transformers import RobertaForTokenClassification

class TokenClassifier:
    def __init__(self, model, tokenizer):
        # The label count (9 BIO tags) is read from the checkpoint config
        self.model = RobertaForTokenClassification.from_pretrained(model)

        # Download the custom tokenizer package and make it importable
        repo_path = snapshot_download(repo_id=tokenizer)
        sys.path.append(repo_path)

        from SKMT_lib_v2.SKMT_BPE import SKMorfoTokenizer
        self.tokenizer = SKMorfoTokenizer()
        
        byte_utf8_mapping_path = hf_hub_download(repo_id=tokenizer, filename="byte_utf8_mapping.json")
        with open(byte_utf8_mapping_path, "r", encoding="utf-8") as f:
            self.byte_utf8_mapping = json.load(f)

    def decode(self, tokens):
        # Map byte-level token pieces back to UTF-8 and turn "Ġ" markers into spaces
        decoded_tokens = []
        for token in tokens:
            for k, v in self.byte_utf8_mapping.items():
                if k in token:
                    token = token.replace(k, v)
                token = token.replace("Ġ"," ")
            decoded_tokens.append(token)
        return decoded_tokens

    def tokenize_text(self, text):
        # Lowercase the input to match the model's (lowercased) training data
        encoded_text = self.tokenizer.tokenize(text.lower(), max_length=256, return_tensors='pt', return_subword=False)
        return encoded_text

    def classify_tokens(self, text):
        encoded_text = self.tokenize_text(text)
        tokens = self.tokenizer.convert_list_ids_to_tokens(encoded_text['input_ids'].squeeze().tolist())
        
        with torch.no_grad():
            output = self.model(**encoded_text)
            logits = output.logits
            predictions = torch.argmax(logits, dim=-1)
            
            # Keep only positions selected by the attention mask
            active_loss = encoded_text['attention_mask'].view(-1) == 1
            active_logits = logits.view(-1, self.model.config.num_labels)[active_loss]
            active_predictions = predictions.view(-1)[active_loss]

            probabilities = torch.softmax(active_logits, dim=-1)
            
            results = []
            for token, pred, prob in zip(self.decode(tokens), active_predictions.tolist(), probabilities.tolist()):
                if token not in ['<s>', '</s>', '<pad>']:
                    result = f"Token: {token: <10}  NER tag: ({self.model.config.id2label[pred]} = {max(prob):.4f})"
                    results.append(result)
                    
        return results

# Instantiate the NER classifier with the specified tokenizer and model
classifier = TokenClassifier(tokenizer="daviddrzik/SK_Morph_BLM", model="daviddrzik/SK_Morph_BLM-ner")

# Tokenize the input text
text_to_classify = "Dávid Držík je interný doktorand na Fakulte prírodných vied a informatiky UKF v Nitre na Slovensku."

# Classify the NER tags of the tokenized text
classification_results = classifier.classify_tokens(text_to_classify)
print(f"============= NER Token Classification =============")
print("Text to classify:", text_to_classify)
for classification_result in classification_results:
    print(classification_result)

Example Output

Here is the output when running the above example:

============= NER Token Classification =============
Text to classify: Dávid Držík je interný doktorand na Fakulte prírodných vied a informatiky UKF v Nitre na Slovensku.
Token:  dávid      NER tag: (B-PER = 0.9924)
Token:  drž        NER tag: (I-PER = 0.9040)
Token: ík          NER tag: (I-PER = 0.7020)
Token:  je         NER tag: (O = 0.9985)
Token:  intern     NER tag: (O = 0.9978)
Token: ý           NER tag: (O = 0.9976)
Token:  doktorand  NER tag: (O = 0.9986)
Token:  na         NER tag: (O = 0.9989)
Token:  fakulte    NER tag: (B-ORG = 0.9857)
Token:  prírodných  NER tag: (I-ORG = 0.9585)
Token:  vied       NER tag: (I-ORG = 0.9905)
Token:  a          NER tag: (I-ORG = 0.9607)
Token:  informatiky  NER tag: (I-ORG = 0.9773)
Token:  uk         NER tag: (I-ORG = 0.9490)
Token: f           NER tag: (I-ORG = 0.9946)
Token:  v          NER tag: (I-ORG = 0.9865)
Token:  nitre      NER tag: (B-LOC = 0.6015)
Token:  na         NER tag: (O = 0.9555)
Token:  slovensku  NER tag: (B-LOC = 0.9661)
Token: .           NER tag: (O = 0.9972)
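
The output above is per subword token. To recover word-level entities, consecutive B-/I- tagged tokens can be merged into spans; a minimal sketch with a hypothetical merge_entities helper, assuming (token, tag) pairs as decoded above:

# Hypothetical helper: merge (token, tag) pairs into entity spans.
# Assumes tokens carry their leading space already decoded (" dávid", "ík", ...).
def merge_entities(token_tag_pairs):
    spans, current = [], None
    for token, tag in token_tag_pairs:
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = [token, tag[2:]]
        elif tag.startswith("I-") and current and tag[2:] == current[1]:
            current[0] += token
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(text.strip(), label) for text, label in spans]

pairs = [(" dávid", "B-PER"), (" drž", "I-PER"), ("ík", "I-PER"), (" je", "O")]
print(merge_entities(pairs))  # [('dávid držík', 'PER')]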