base_model:
- chandar-lab/NeoBERT
tags:
- ner
---

# ✨ NeoBERT for NER

This repository hosts a NeoBERT model that was fine-tuned on the CoNLL-2003 NER dataset.
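CoNLL-2003 annotates entities in the BIO scheme (with PER, ORG, LOC and MISC types). As a quick illustration of the label format the model predicts, here is a toy helper that groups BIO tags into entity spans (a sketch for illustration only, not part of this repository):

```python
# Toy sketch: group BIO-tagged tokens into (label, text) entity spans.
# The example sentence and tags are illustrative.
def bio_to_spans(tokens, tags):
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A "B-" tag always starts a new entity.
            if current:
                spans.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            # An "I-" tag continues the current entity of the same type.
            current[1].append(token)
        else:
            # "O" (or an inconsistent "I-") closes any open entity.
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, " ".join(words)) for label, words in spans]

tokens = ["EU", "rejects", "German", "call"]
tags = ["B-ORG", "O", "B-MISC", "O"]
print(bio_to_spans(tokens, tags))  # → [('ORG', 'EU'), ('MISC', 'German')]
```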

Please note the following caveats:

* ⚠️ Work in progress: hyper-parameter changes or bug fixes for the implemented `NeoBERTForTokenClassification` class may still occur.
* ⚠️ Don't expect BERT-like performance at the moment; more experiments are needed.

## 📝 Implementation

A custom `NeoBERTForTokenClassification` class was implemented to conduct experiments with Transformers.

All experiments currently use Transformers version `4.50.0.dev0` together with a recent build of `xFormers`, as NeoBERT depends on it for the `SwiGLU` implementation.
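A possible environment setup matching the requirements above (a Transformers dev build plus a recent xFormers build); the exact install commands are an assumption, not taken from this repository:

```shell
# Assumed setup sketch: install a Transformers dev build and xFormers.
# These commands are illustrative only; pin versions as needed.
pip install git+https://github.com/huggingface/transformers.git
pip install xformers
```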

The following command (based on the [PyTorch Token Classification example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification)) can be used for fine-tuning:

```bash
python3 run_ner.py \
  --model_name_or_path /home/stefan/Repositories/NeoBERT \
  --dataset_name conll2003 \
  --output_dir ./neobert-conll2003-lr1e-05-e10-bs16-1 \
  --seed 1 \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 16 \
  --num_train_epochs 10 \
  --learning_rate 1e-05 \
  --eval_strategy epoch \
  --save_strategy epoch \
  --overwrite_output_dir \
  --trust_remote_code True \
  --load_best_model_at_end \
  --metric_for_best_model "eval_f1" \
  --greater_is_better True
```
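The `--output_dir` in the command above encodes the hyper-parameters and the seed. A small helper showing that naming scheme (inferred from the example directory name, not an official convention):

```python
# Build the output directory name used in the fine-tuning command above.
# The lr/e/bs/seed scheme is inferred from "neobert-conll2003-lr1e-05-e10-bs16-1".
def output_dir(lr: str, epochs: int, batch_size: int, seed: int) -> str:
    return f"./neobert-conll2003-lr{lr}-e{epochs}-bs{batch_size}-{seed}"

print(output_dir("1e-05", 10, 16, 1))  # → ./neobert-conll2003-lr1e-05-e10-bs16-1
```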

## 📊 Performance

A very basic hyper-parameter search was performed for five different seeds, with the averaged micro F1-score reported on the development set of CoNLL-2003:

| Configuration         | Run 1     | Run 2 | Run 3 | Run 4 | Run 5 | Avg.  |
|:----------------------|----------:|------:|------:|------:|------:|------:|
| `bs=16,e=10,lr=1e-05` | **95.71** | 95.42 | 95.53 | 95.56 | 95.43 | 95.53 |
| `bs=16,e=10,lr=2e-05` | 95.25     | 95.33 | 95.28 | 95.35 | 95.26 | 95.29 |
| `bs=16,e=10,lr=3e-05` | 94.98     | 95.22 | 94.86 | 94.72 | 94.93 | 94.94 |
| `bs=16,e=10,lr=4e-05` | 94.61     | 94.39 | 94.57 | 94.65 | 94.87 | 94.61 |
| `bs=16,e=10,lr=5e-05` | 93.82     | 93.94 | 94.36 | 91.14 | 94.38 | 94.15 |

The performance of the currently uploaded model is marked in bold.
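The `Avg.` column is the plain arithmetic mean over the five runs; for example, for the `lr=1e-05` configuration:

```python
# Reproduce the Avg. value for the lr=1e-05 row of the table above:
# mean of the five per-seed micro F1-scores, rounded to two decimals.
f1_per_seed = [95.71, 95.42, 95.53, 95.56, 95.43]
avg = round(sum(f1_per_seed) / len(f1_per_seed), 2)
print(avg)  # → 95.53
```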