| Feature | Description |
| --- | --- |
| **Name** | `en_scispaCy_aaa_repair_type_classification` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.7.4,<3.8.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `attribute_ruler`, `lemmatizer`, `parser`, `ner`, `textcat_multilabel` |
| **Components** | `tok2vec`, `tagger`, `attribute_ruler`, `lemmatizer`, `parser`, `ner`, `textcat_multilabel` |
| **Vectors** | 4,087,446 keys, 50,000 unique vectors (200 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |
### Label Scheme

101 labels for 4 components:
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ``` `` ``` |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `acomp`, `advcl`, `advmod`, `amod`, `amod@nmod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `cc:preconj`, `ccomp`, `compound`, `compound:prt`, `conj`, `cop`, `csubj`, `dative`, `dep`, `det`, `det:predet`, `dobj`, `expl`, `intj`, `mark`, `meta`, `mwe`, `neg`, `nmod`, `nmod:npmod`, `nmod:poss`, `nmod:tmod`, `nsubj`, `nsubjpass`, `nummod`, `parataxis`, `pcomp`, `pobj`, `preconj`, `predet`, `prep`, `punct`, `quantmod`, `xcomp` |
| **`ner`** | `ENTITY` |
| **`textcat_multilabel`** | `Primary AAA Repair`, `Revision AAA Repair`, `Non-AAA` |
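The three `textcat_multilabel` labels above are scored independently for each document. A minimal usage sketch, using a hypothetical score dict in place of a real model's output (the package name below assumes this pipeline is installed locally):

```python
# Loading the real pipeline would look like:
#   import spacy
#   nlp = spacy.load("en_scispaCy_aaa_repair_type_classification")
#   cats = nlp("...operative note text...").cats

# Hypothetical score dict standing in for doc.cats:
cats = {
    "Primary AAA Repair": 0.91,
    "Revision AAA Repair": 0.07,
    "Non-AAA": 0.02,
}

# A multilabel textcat thresholds each label independently (0.5 is
# spaCy's default), so zero, one, or several labels can be assigned.
THRESHOLD = 0.5
predicted = [label for label, score in cats.items() if score >= THRESHOLD]
print(predicted)
```

Because the component is multilabel rather than exclusive, the scores are not expected to sum to 1, and a document can match none of the labels.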
### Accuracy

| Type | Score |
| --- | --- |
| `TAG_ACC` | 0.00 |
| `LEMMA_ACC` | 0.00 |
| `DEP_UAS` | 0.00 |
| `DEP_LAS` | 0.00 |
| `DEP_LAS_PER_TYPE` | 0.00 |
| `SENTS_P` | 0.00 |
| `SENTS_R` | 0.00 |
| `SENTS_F` | 0.00 |
| `ENTS_F` | 0.00 |
| `ENTS_P` | 0.00 |
| `ENTS_R` | 0.00 |
| `ENTS_PER_TYPE` | 0.00 |
| `CATS_SCORE` | 97.93 |
| `CATS_MICRO_P` | 92.39 |
| `CATS_MICRO_R` | 93.41 |
| `CATS_MICRO_F` | 92.90 |
| `CATS_MACRO_P` | 90.15 |
| `CATS_MACRO_R` | 93.13 |
| `CATS_MACRO_F` | 91.46 |
| `CATS_MACRO_AUC` | 97.93 |
| `TOK2VEC_LOSS` | 307.11 |
| `TEXTCAT_MULTILABEL_LOSS` | 100.03 |
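As a sanity check on the table, the micro-averaged F-score is the harmonic mean of micro precision and recall; the macro F, by contrast, averages per-class F-scores, so it need not equal the harmonic mean of `CATS_MACRO_P` and `CATS_MACRO_R`:

```python
# Micro precision/recall taken from the accuracy table above.
micro_p, micro_r = 92.39, 93.41

# Micro F is the harmonic mean of micro precision and recall.
micro_f = 2 * micro_p * micro_r / (micro_p + micro_r)
print(round(micro_f, 2))  # matches CATS_MICRO_F (92.90)
```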
### Evaluation results

- NER Precision (self-reported): 0.000
- NER Recall (self-reported): 0.000
- NER F-Score (self-reported): 0.000
- TAG (XPOS) Accuracy (self-reported): 0.000
- Lemma Accuracy (self-reported): 0.000
- Unlabeled Attachment Score (UAS) (self-reported): 0.000
- Labeled Attachment Score (LAS) (self-reported): 0.000
- Sentences F-Score (self-reported): 0.000