---
license: apache-2.0
datasets:
- eriktks/conll2003
language:
- en
base_model:
- chandar-lab/NeoBERT
tags:
- ner
---

# ✨ NeoBERT for NER

This repository hosts a NeoBERT model that was fine-tuned on the CoNLL-2003 NER dataset.

Please note the following caveats:

* ⚠️ This is work in progress: hyper-parameter changes or bug fixes to the implemented `NeoBERTForTokenClassification` class may still occur.
* ⚠️ Don't expect BERT-like performance yet; more experiments are needed.

## 📝 Implementation

A custom `NeoBERTForTokenClassification` class was implemented to conduct experiments with Transformers.
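
The general pattern is a linear classification head on top of the NeoBERT encoder. Below is a minimal sketch of such a class, not the actual implementation (the backbone call signature and the dict return layout are assumptions):

```python
# Minimal sketch of a token classification head on top of NeoBERT.
# The backbone interface and its output layout are assumptions;
# the real NeoBERTForTokenClassification may differ in detail.
import torch
import torch.nn as nn


class NeoBERTForTokenClassification(nn.Module):
    def __init__(self, backbone: nn.Module, hidden_size: int, num_labels: int,
                 dropout: float = 0.1):
        super().__init__()
        self.backbone = backbone  # pretrained NeoBERT encoder
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None, labels=None):
        # Per-token hidden states of shape (batch, seq_len, hidden_size).
        hidden_states = self.backbone(input_ids, attention_mask=attention_mask)[0]
        logits = self.classifier(self.dropout(hidden_states))
        loss = None
        if labels is not None:
            # CrossEntropyLoss ignores positions labeled -100
            # (padding and non-first sub-word tokens).
            loss = nn.CrossEntropyLoss()(logits.view(-1, logits.size(-1)),
                                         labels.view(-1))
        return {"loss": loss, "logits": logits}
```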

All experiments currently use Transformers version `4.50.0.dev0` together with a recent build of `xFormers`, as NeoBERT depends on it for the `SwiGLU` implementation.
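
For reference, `SwiGLU` is the SiLU-gated feed-forward variant `(silu(x @ W1) * (x @ W3)) @ W2`. xFormers ships a fused kernel for it; a plain-PyTorch sketch of the same computation (projection names are illustrative, not taken from the NeoBERT code):

```python
# Plain-PyTorch sketch of a SwiGLU feed-forward block. xFormers provides a
# fused, memory-efficient version of this computation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SwiGLU(nn.Module):
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden_dim, bias=False)  # gate projection
        self.w3 = nn.Linear(dim, hidden_dim, bias=False)  # value projection
        self.w2 = nn.Linear(hidden_dim, dim, bias=False)  # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SiLU-gated linear unit: silu(x W1) * (x W3), projected back to dim.
        return self.w2(F.silu(self.w1(x)) * self.w3(x))
```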

The following command (based on the [PyTorch Token Classification example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification)) can be used for fine-tuning; note that `--model_name_or_path` points to a local checkout of NeoBERT here:

```bash
python3 run_ner.py \
  --model_name_or_path /home/stefan/Repositories/NeoBERT \
  --dataset_name conll2003 \
  --output_dir ./neobert-conll2003-lr1e-05-e10-bs16-1 \
  --seed 1 \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 16 \
  --num_train_epochs 10 \
  --learning_rate 1e-05 \
  --eval_strategy epoch \
  --save_strategy epoch \
  --overwrite_output_dir \
  --trust_remote_code True \
  --load_best_model_at_end \
  --metric_for_best_model "eval_f1" \
  --greater_is_better True
```
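
After fine-tuning, the model can be loaded like any other Transformers token classification model. A minimal inference sketch, assuming the repository registers the custom class for auto loading; `<this-repository-id>` is a placeholder:

```python
# Minimal inference sketch; replace the placeholder with this repository's
# id on the Hugging Face Hub.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "<this-repository-id>"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(
    model_id,
    trust_remote_code=True,  # required for the custom NeoBERT model class
)

ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(ner("George Washington went to Washington."))
```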

## 📊 Performance

A very basic hyper-parameter search was performed over five different seeds; the table reports the micro F1-score on the CoNLL-2003 development set, averaged in the last column:

| Configuration         | Run 1     | Run 2 | Run 3 | Run 4 | Run 5 | Avg.  |
|:--------------------- | ---------:| -----:| -----:| -----:| -----:| -----:|
| `bs=16,e=10,lr=1e-05` | **95.71** | 95.42 | 95.53 | 95.56 | 95.43 | 95.53 |
| `bs=16,e=10,lr=2e-05` | 95.25     | 95.33 | 95.28 | 95.35 | 95.26 | 95.29 |
| `bs=16,e=10,lr=3e-05` | 94.98     | 95.22 | 94.86 | 94.72 | 94.93 | 94.94 |
| `bs=16,e=10,lr=4e-05` | 94.61     | 94.39 | 94.57 | 94.65 | 94.87 | 94.61 |
| `bs=16,e=10,lr=5e-05` | 93.82     | 93.94 | 94.36 | 91.14 | 94.38 | 94.15 |

The performance of the currently uploaded model is marked in bold.
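
The `eval_f1` metric selected via `--metric_for_best_model` is the entity-level micro F1 computed with `seqeval` in `run_ner.py`. A minimal sketch of that computation:

```python
# Entity-level micro F1 as computed by seqeval: an entity counts as correct
# only if both its type and its full span match exactly.
from seqeval.metrics import f1_score

y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "O"]]

print(f1_score(y_true, y_pred))  # 0.8: precision 2/2, recall 2/3
```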