
Details: https://spacy.io/models/ja#ja_core_news_lg

Japanese pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, senter, ner, attribute_ruler.
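
A minimal usage sketch, assuming the package is already installed in the current environment (e.g. via `python -m spacy download ja_core_news_lg`); the sample sentence is illustrative only:

```python
import spacy

# Load the full pipeline; this assumes ja_core_news_lg (and its SudachiPy
# tokenizer dependencies) is installed in the current environment.
nlp = spacy.load("ja_core_news_lg")

# Illustrative sentence: "Apple announced a new smartphone in Japan."
doc = nlp("アップルは日本で新しいスマートフォンを発表した。")

# Coarse POS tags (morphologizer) and dependency labels (parser).
for token in doc:
    print(token.text, token.pos_, token.dep_)

# Named entities (ner component).
for ent in doc.ents:
    print(ent.text, ent.label_)
```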

| Feature | Description |
| --- | --- |
| Name | ja_core_news_lg |
| Version | 3.7.0 |
| spaCy | >=3.7.0,<3.8.0 |
| Default Pipeline | tok2vec, morphologizer, parser, attribute_ruler, ner |
| Components | tok2vec, morphologizer, parser, senter, attribute_ruler, ner |
| Vectors | 480443 keys, 480443 unique vectors (300 dimensions) |
| Sources | UD Japanese GSD v2.8 (Omura, Mai; Miyao, Yusuke; Kanayama, Hiroshi; Matsuda, Hiroshi; Wakasa, Aya; Yamashita, Kayo; Asahara, Masayuki; Tanaka, Takaaki; Murawaki, Yugo; Matsumoto, Yuji; Mori, Shinsuke; Uematsu, Sumire; McDonald, Ryan; Nivre, Joakim; Zeman, Daniel)<br />UD Japanese GSD v2.8 NER (Megagon Labs Tokyo)<br />chiVe: Japanese Word Embedding with Sudachi & NWJC (chive-1.1-mc90-500k) (Works Applications) |
| License | CC BY-SA 4.0 |
| Author | Explosion |
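
The Vectors row lists 300-dimensional static vectors derived from chiVe; a small sketch of looking them up via the vocabulary, with arbitrarily chosen example words:

```python
import spacy

nlp = spacy.load("ja_core_news_lg")

# Lexeme lookups against the static vector table (words chosen for illustration).
inu = nlp.vocab["犬"]   # "dog"
neko = nlp.vocab["猫"]  # "cat"

print(inu.has_vector)        # True if the key is among the 480443 vector entries
print(inu.vector.shape)      # (300,), matching the dimensions listed above
print(inu.similarity(neko))  # cosine similarity between the two lexemes
```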

Label Scheme

65 labels for 3 components

| Component | Labels |
| --- | --- |
| morphologizer | POS=NOUN, POS=ADP, POS=VERB, POS=SCONJ, POS=AUX, POS=PUNCT, POS=PART, POS=DET, POS=NUM, POS=ADV, POS=PRON, POS=ADJ, POS=PROPN, POS=CCONJ, POS=SYM, POS=NOUN\|Polarity=Neg, POS=AUX\|Polarity=Neg, POS=SPACE, POS=INTJ, POS=SCONJ\|Polarity=Neg |
| parser | ROOT, acl, advcl, advmod, amod, aux, case, cc, ccomp, compound, cop, csubj, dep, det, dislocated, fixed, mark, nmod, nsubj, nummod, obj, obl, punct |
| ner | CARDINAL, DATE, EVENT, FAC, GPE, LANGUAGE, LAW, LOC, MONEY, MOVEMENT, NORP, ORDINAL, ORG, PERCENT, PERSON, PET_NAME, PHONE, PRODUCT, QUANTITY, TIME, TITLE_AFFIX, WORK_OF_ART |
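
The same label inventory can be read off a loaded pipeline; a short sketch using the component names listed above:

```python
import spacy

nlp = spacy.load("ja_core_news_lg")

# Each trained component exposes its label set via its `labels` property.
for name in ("morphologizer", "parser", "ner"):
    pipe = nlp.get_pipe(name)
    print(name, len(pipe.labels), pipe.labels[:5])
```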

Accuracy

| Type | Score |
| --- | --- |
| TOKEN_ACC | 99.37 |
| TOKEN_P | 97.64 |
| TOKEN_R | 97.88 |
| TOKEN_F | 97.76 |
| POS_ACC | 97.42 |
| MORPH_ACC | 0.00 |
| MORPH_MICRO_P | 34.01 |
| MORPH_MICRO_R | 98.04 |
| MORPH_MICRO_F | 50.51 |
| SENTS_P | 95.56 |
| SENTS_R | 97.63 |
| SENTS_F | 96.59 |
| DEP_UAS | 92.12 |
| DEP_LAS | 90.90 |
| TAG_ACC | 97.13 |
| LEMMA_ACC | 96.70 |
| ENTS_P | 73.88 |
| ENTS_R | 68.68 |
| ENTS_F | 71.19 |
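
Scores of this kind can be reproduced with spaCy's evaluation API on gold-annotated data; a minimal sketch with a made-up annotation (the figures above come from the model's own held-out evaluation data, not from this toy example):

```python
import spacy
from spacy.training import Example

nlp = spacy.load("ja_core_news_lg")

# Made-up gold annotation for illustration only.
text = "東京は日本の首都です。"  # "Tokyo is the capital of Japan."
gold = {"entities": [(0, 2, "GPE"), (3, 5, "GPE")]}
example = Example.from_dict(nlp.make_doc(text), gold)

# Language.evaluate runs the pipeline and scores it against the gold annotations.
scores = nlp.evaluate([example])
print(scores["ents_p"], scores["ents_r"], scores["ents_f"])
```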