KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation
Tags: Token Classification · Transformers · PyTorch · Literary Chinese · roberta · classical chinese · literary chinese · ancient chinese · sentence segmentation · Inference Endpoints
License: apache-2.0
roberta-classical-chinese-large-sentence-segmentation / special_tokens_map.json
KoichiYasuoka · initial release · 41bb348 · over 3 years ago
{
  "unk_token": "[UNK]",
  "sep_token": "[SEP]",
  "pad_token": "[PAD]",
  "cls_token": "[CLS]",
  "mask_token": "[MASK]"
}
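
For context, this file maps the tokenizer's special-token roles ([UNK], [SEP], [PAD], [CLS], [MASK]) to their string forms, and is read automatically when the tokenizer is loaded. A minimal sketch, assuming the `transformers` library is installed and the model ID from this page is reachable; the sample sentence is only illustrative:

```python
# Minimal sketch: load the tokenizer for this repository and inspect the
# special tokens declared in special_tokens_map.json.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation"
)

# Mirrors the JSON above: unk/sep/pad/cls/mask tokens.
print(tokenizer.special_tokens_map)

# Special tokens wrap encoded input, e.g. [CLS] ... [SEP].
# The Classical Chinese sentence below is just an illustrative example.
encoded = tokenizer("子曰學而時習之不亦說乎")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```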