philschmid (HF staff) committed cc95e1a (1 parent: b9b7d39)

Update README.md

Files changed (1): README.md (+31 -13)

README.md CHANGED
@@ -39,25 +39,30 @@ should probably proofread and complete it, then remove this comment. -->
 
 # distilroberta-base-ner-wikiann-conll2003-3-class
 
-This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the wikiann-conll2003 dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.0520
-- Precision: 0.9625
-- Recall: 0.9667
-- F1: 0.9646
-- Accuracy: 0.9914
+This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the wikiann and conll2003 datasets. It uses the label classes of wikiann:
+
+O (0), B-PER (1), I-PER (2), B-ORG (3), I-ORG (4), B-LOC (5), I-LOC (6).
+
+eval F1-Score: **96.25** (merged dataset)
+test F1-Score: **92.41** (merged dataset)
 
-## Model description
+## Model Usage
 
-More information needed
+```python
+from transformers import AutoTokenizer, AutoModelForTokenClassification
+from transformers import pipeline
 
-## Intended uses & limitations
+tokenizer = AutoTokenizer.from_pretrained("philschmid/distilroberta-base-ner-wikiann-conll2003-3-class")
+model = AutoModelForTokenClassification.from_pretrained("philschmid/distilroberta-base-ner-wikiann-conll2003-3-class")
 
-More information needed
+nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)
+example = "My name is Philipp and I live in Germany"
 
-## Training and evaluation data
-
-More information needed
+nlp(example)
+```
 
 ## Training procedure
@@ -75,6 +80,19 @@ The following hyperparameters were used during training:
 
 ### Training results
 
+It achieves the following results on the evaluation set:
+- Loss: 0.0520
+- Precision: 0.9625
+- Recall: 0.9667
+- F1: 0.9646
+- Accuracy: 0.9914
+
+It achieves the following results on the test set:
+- Loss: 0.141
+- Precision: 0.917
+- Recall: 0.9313
+- F1: 0.9241
+- Accuracy: 0.9807
 
 ### Framework versions
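
The `grouped_entities=True` flag in the usage snippet above makes the pipeline merge token-level BIO tags into whole entity spans. As a rough illustration of that merging, here is a minimal plain-Python sketch using the seven classes listed in the card; the tokens and label ids below are invented for the example, not real model output:

```python
# Hypothetical sketch of how grouped_entities=True merges token-level BIO tags
# into entity spans. id2label mirrors the seven classes from the model card;
# the example tokens/labels are made up for illustration.

id2label = {
    0: "O",
    1: "B-PER", 2: "I-PER",
    3: "B-ORG", 4: "I-ORG",
    5: "B-LOC", 6: "I-LOC",
}

def group_entities(tokens, label_ids):
    """Merge consecutive B-/I- tags of the same type into (type, text) spans."""
    entities = []
    current_type, current_tokens = None, []
    for token, label_id in zip(tokens, label_ids):
        label = id2label[label_id]
        if label.startswith("B-"):
            # A B- tag always starts a new entity, closing any open one.
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = label[2:], [token]
        elif label.startswith("I-") and current_type == label[2:]:
            # An I- tag of the same type continues the open entity.
            current_tokens.append(token)
        else:
            # "O", or an I- tag that does not continue the open entity.
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type:
        entities.append((current_type, " ".join(current_tokens)))
    return entities

tokens = "My name is Philipp and I live in Germany".split()
labels = [0, 0, 0, 1, 0, 0, 0, 0, 5]  # B-PER on "Philipp", B-LOC on "Germany"
print(group_entities(tokens, labels))  # → [('PER', 'Philipp'), ('LOC', 'Germany')]
```

The real pipeline additionally returns scores and character offsets, and newer transformers versions replace `grouped_entities` with `aggregation_strategy`, but the span-merging idea is the same.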