Spaces: Sleeping
ybbwcwaps committed
Commit d0226b2 • 1 Parent(s): 05c7e0c
some bert
Browse files
- FakeVD/Models/bert-base-chinese/.gitattributes +10 -0
- FakeVD/Models/bert-base-chinese/README.md +75 -0
- FakeVD/Models/bert-base-chinese/config.json +25 -0
- FakeVD/Models/bert-base-chinese/tokenizer.json +0 -0
- FakeVD/Models/bert-base-chinese/tokenizer_config.json +1 -0
- FakeVD/Models/bert-base-chinese/vocab.txt +0 -0
FakeVD/Models/bert-base-chinese/.gitattributes
ADDED
@@ -0,0 +1,10 @@
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tar.gz filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ model.safetensors filter=lfs diff=lfs merge=lfs -text
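These attribute rules route the repository's large binary artifacts (weights, checkpoints, exported formats) through Git LFS, so only small pointer files live in the Git history. As a minimal sketch, assuming the files are hosted on the Hugging Face Hub, `huggingface_hub` resolves such LFS pointers transparently when downloading; the repo id and filename below are illustrative, not taken from this commit:

```python
# Minimal sketch: fetch one LFS-tracked file from a Hub repository.
# repo_id and filename are illustrative, not part of this commit.
from huggingface_hub import hf_hub_download

# Returns a local cache path to the resolved binary, not the LFS pointer file.
weights_path = hf_hub_download(repo_id="bert-base-chinese",
                               filename="pytorch_model.bin")
print(weights_path)
```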
FakeVD/Models/bert-base-chinese/README.md
ADDED
@@ -0,0 +1,75 @@
+ ---
+ language: zh
+ ---
+
+ # Bert-base-chinese
+
+ ## Table of Contents
+ - [Model Details](#model-details)
+ - [Uses](#uses)
+ - [Risks, Limitations and Biases](#risks-limitations-and-biases)
+ - [Training](#training)
+ - [Evaluation](#evaluation)
+ - [How to Get Started With the Model](#how-to-get-started-with-the-model)
+
+
+ ## Model Details
+
+ ### Model Description
+
+ This model has been pre-trained for Chinese; training and random input masking were applied independently to word pieces (as in the original BERT paper).
+
+ - **Developed by:** HuggingFace team
+ - **Model Type:** Fill-Mask
+ - **Language(s):** Chinese
+ - **License:** [More Information needed]
+ - **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model.
+
+ ### Model Sources
+ - **Paper:** [BERT](https://arxiv.org/abs/1810.04805)
+
+ ## Uses
+
+ #### Direct Use
+
+ This model can be used for masked language modeling.
+
+
+ ## Risks, Limitations and Biases
+ **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
+
+
+ ## Training
+
+ #### Training Procedure
+ * **type_vocab_size:** 2
+ * **vocab_size:** 21128
+ * **num_hidden_layers:** 12
+
+ #### Training Data
+ [More Information Needed]
+
+ ## Evaluation
+
+ #### Results
+
+ [More Information Needed]
+
+
+ ## How to Get Started With the Model
+ ```python
+ from transformers import AutoTokenizer, AutoModelForMaskedLM
+
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
+
+ model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")
+ ```
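The card's Direct Use and How to Get Started sections stop at loading the tokenizer and model. A minimal end-to-end fill-mask sketch, assuming the standard `transformers` pipeline API; the example sentence is illustrative and not part of the model card:

```python
# Minimal sketch extending the model card's snippet to an actual prediction.
# The example sentence is illustrative, not from the model card.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-chinese")

# The model ranks candidate tokens for the [MASK] position.
for prediction in fill_mask("巴黎是[MASK]国的首都。"):
    print(prediction["token_str"], round(prediction["score"], 4))
```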
FakeVD/Models/bert-base-chinese/config.json
ADDED
@@ -0,0 +1,25 @@
+ {
+   "architectures": [
+     "BertForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "directionality": "bidi",
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "pooler_fc_size": 768,
+   "pooler_num_attention_heads": 12,
+   "pooler_num_fc_layers": 3,
+   "pooler_size_per_head": 128,
+   "pooler_type": "first_token_transform",
+   "type_vocab_size": 2,
+   "vocab_size": 21128
+ }
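This config.json pins the standard BERT-base geometry: 12 layers, 12 attention heads, hidden size 768, and a 21,128-entry Chinese vocabulary. A minimal sketch, assuming the checkpoint is loaded by name from the Hub (the local path FakeVD/Models/bert-base-chinese should behave the same way):

```python
# Sketch: read the committed config and build the architecture it describes.
from transformers import AutoConfig, AutoModelForMaskedLM

config = AutoConfig.from_pretrained("bert-base-chinese")
print(config.num_hidden_layers,    # 12
      config.num_attention_heads,  # 12
      config.hidden_size,          # 768
      config.vocab_size)           # 21128

# from_config builds BertForMaskedLM with randomly initialized weights;
# from_pretrained would be required to load the actual checkpoint.
model = AutoModelForMaskedLM.from_config(config)
```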
FakeVD/Models/bert-base-chinese/tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
FakeVD/Models/bert-base-chinese/tokenizer_config.json
ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "model_max_length": 512}
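The tokenizer_config.json keeps case intact (do_lower_case: false) and caps inputs at 512 tokens, matching max_position_embeddings in config.json. A minimal sketch of the resulting behaviour; the sample string is illustrative:

```python
# Sketch: the tokenizer preserves case and enforces a 512-token ceiling.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
print(tokenizer.model_max_length)  # 512, from model_max_length above

# Chinese text is split into per-character word pieces from vocab.txt.
print(tokenizer.tokenize("自然语言处理"))  # e.g. ['自', '然', '语', '言', '处', '理']
```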
FakeVD/Models/bert-base-chinese/vocab.txt
ADDED
The diff for this file is too large to render.
See raw diff