YuanTang96 committed
Commit: d9ecb2c
Parent(s): 2960904

first

This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.
- params_weight/MiniGPT_3D_stage_3/MiniGPT_3D_stage_3.pth +3 -0
- params_weight/MiniGPT_3D_stage_4/MiniGPT_3D_stage_4.pth +3 -0
- params_weight/Phi_2/LICENSE +71 -0
- params_weight/Phi_2/README.md +36 -0
- params_weight/Phi_2/added_tokens.json +40 -0
- params_weight/Phi_2/config.json +36 -0
- params_weight/Phi_2/config_backup.json +28 -0
- params_weight/Phi_2/generation_config.json +4 -0
- params_weight/Phi_2/gitattributes +35 -0
- params_weight/Phi_2/merges.txt +0 -0
- params_weight/Phi_2/pytorch_model.bin +3 -0
- params_weight/Phi_2/special_tokens_map.json +5 -0
- params_weight/Phi_2/tokenizer.json +0 -0
- params_weight/Phi_2/tokenizer_config.json +323 -0
- params_weight/Phi_2/vocab.json +0 -0
- params_weight/TinyGPT_V_stage_3/TinyGPT-V_for_Stage3.pth +3 -0
- params_weight/all-mpnet-base-v2/1_Pooling/config.json +7 -0
- params_weight/all-mpnet-base-v2/README.md +176 -0
- params_weight/all-mpnet-base-v2/config.json +23 -0
- params_weight/all-mpnet-base-v2/config_sentence_transformers.json +7 -0
- params_weight/all-mpnet-base-v2/data_config.json +1452 -0
- params_weight/all-mpnet-base-v2/gitattributes +27 -0
- params_weight/all-mpnet-base-v2/modules.json +20 -0
- params_weight/all-mpnet-base-v2/pytorch_model.bin +3 -0
- params_weight/all-mpnet-base-v2/sentence_bert_config.json +4 -0
- params_weight/all-mpnet-base-v2/special_tokens_map.json +1 -0
- params_weight/all-mpnet-base-v2/tokenizer.json +0 -0
- params_weight/all-mpnet-base-v2/tokenizer_config.json +1 -0
- params_weight/all-mpnet-base-v2/train_script.py +344 -0
- params_weight/all-mpnet-base-v2/vocab.txt +0 -0
- params_weight/bert-base-uncased/LICENSE +201 -0
- params_weight/bert-base-uncased/README.md +251 -0
- params_weight/bert-base-uncased/config.json +23 -0
- params_weight/bert-base-uncased/flax_model.msgpack +3 -0
- params_weight/bert-base-uncased/gitattributes +11 -0
- params_weight/bert-base-uncased/model.onnx +3 -0
- params_weight/bert-base-uncased/model.safetensors +3 -0
- params_weight/bert-base-uncased/pytorch_model.bin +3 -0
- params_weight/bert-base-uncased/rust_model.ot +3 -0
- params_weight/bert-base-uncased/tf_model.h5 +3 -0
- params_weight/bert-base-uncased/tokenizer.json +0 -0
- params_weight/bert-base-uncased/tokenizer_config.json +3 -0
- params_weight/bert-base-uncased/vocab.txt +0 -0
- params_weight/pc_encoder/point_model.pth +3 -0
- params_weight/sup-simcse-roberta-large/README.md +168 -0
- params_weight/sup-simcse-roberta-large/config.json +26 -0
- params_weight/sup-simcse-roberta-large/flax_model.msgpack +3 -0
- params_weight/sup-simcse-roberta-large/gitattributes +17 -0
- params_weight/sup-simcse-roberta-large/merges.txt +0 -0
- params_weight/sup-simcse-roberta-large/pytorch_model.bin +3 -0
params_weight/MiniGPT_3D_stage_3/MiniGPT_3D_stage_3.pth
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab94ff86d04e7edffe20b936f359656619c03815760f55086f98e49503bafe0e
+size 569394287
params_weight/MiniGPT_3D_stage_4/MiniGPT_3D_stage_4.pth
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0f278eb5cd8dab32859561fcdea5aebed71de8cb56f29151358bc1655480305
+size 4933149
params_weight/Phi_2/LICENSE
ADDED
@@ -0,0 +1,71 @@
+MICROSOFT RESEARCH LICENSE TERMS
+
+IF YOU LIVE IN THE UNITED STATES, PLEASE READ THE “BINDING ARBITRATION AND CLASS ACTION WAIVER” SECTION BELOW. IT AFFECTS HOW DISPUTES ARE RESOLVED.
+
+These license terms are an agreement between you and Microsoft Corporation (or one of its affiliates). They apply to the source code, object code, machine learning models, or data (collectively “Materials”) that accompany this license. IF YOU COMPLY WITH THESE LICENSE TERMS, YOU HAVE THE RIGHTS BELOW. BY USING THE MATERIALS, YOU ACCEPT THESE TERMS.
+
+1) INSTALLATION AND USE RIGHTS TO THE MATERIALS.
+
+Subject to the terms of this agreement, you have the below rights, if applicable, to use the Materials solely for non-commercial, non-revenue generating, research purposes:
+
+a) Source Code. If source code is included, you may use and modify the source code, but you may not distribute the source code.
+
+b) Object Code. If object code is included, you may use the object code, but you may not distribute the object code.
+
+c) Models. If machine learning model(s) are included, you may use the model(s), but you may not distribute the models.
+
+d) Data. If data is included, you may use and modify the data, but your use and modification must be consistent with the consent under which the data was provided and/or gathered and you may not distribute the data or your modifications to the data.
+
+2) SCOPE OF LICENSE. The Materials are licensed, not sold. Microsoft reserves all other rights. Unless applicable law gives you more rights despite this limitation, you will not (and have no right to):
+
+a) work around any technical limitations in the Materials that only allow you to use it in certain ways;
+
+b) reverse engineer, decompile or disassemble the Materials;
+
+c) remove, minimize, block, or modify any notices of Microsoft or its suppliers in the Materials;
+
+d) use the Materials in any way that is against the law or to create or propagate malware; or
+
+e) share, publish, distribute or lend the Materials, provide the Materials as a stand-alone hosted solution for others to use, or transfer the Materials or this agreement to any third party.
+
+3) PERSONAL DATA. If the data (set forth in Section 1(c) above) includes or is found to include any data that enables any ability to identify an individual (“Personal Data”), you will not use such Personal Data for any purpose other than was authorized and consented to by the data subject/research participant. You will not use Personal Data to contact any person. You will keep Personal Data in strict confidence. You will not share any Personal Data that is collected or in your possession with any third party for any reason and as required under the original consent agreement. Further, you will destroy the Personal Data and any backup or copies, immediately upon the completion of your research.
+
+4) LICENSE TO MICROSOFT. Notwithstanding the limitations in Section 1, you may distribute your modifications back to Microsoft, and if you do provide Microsoft with modifications of the Materials, you hereby grant Microsoft, without any restrictions or limitations, a non-exclusive, perpetual, irrevocable, royalty-free, assignable and sub-licensable license, to reproduce, publicly perform or display, install, use, modify, post, distribute, make and have made, sell and transfer such modifications and derivatives for any purpose.
+
+5) PUBLICATION. You may publish (or present papers or articles) on your results from using the Materials provided that no material or substantial portion of the Materials is included in any such publication or presentation.
+
+6) FEEDBACK. Any feedback about the Materials provided by you to us is voluntarily given, and Microsoft shall be free to use the feedback as it sees fit without obligation or restriction of any kind, even if the
+
+feedback is designated by you as confidential. Such feedback shall be considered a contribution and licensed to Microsoft under the terms of Section 4 above.
+
+7) EXPORT RESTRICTIONS. You must comply with all domestic and international export laws and regulations that apply to the Materials, which include restrictions on destinations, end users, and end use. For further information on export restrictions, visit (aka.ms/exporting).
+
+8) SUPPORT SERVICES. Microsoft is not obligated under this agreement to provide any support services for the Materials. Any support provided is “as is”, “with all faults”, and without warranty of any kind.
+
+9) BINDING ARBITRATION AND CLASS ACTION WAIVER. This Section applies if you live in (or, if a business, your principal place of business is in) the United States. If you and Microsoft have a dispute, you and Microsoft agree to try for 60 days to resolve it informally. If you and Microsoft can’t, you and Microsoft agree to binding individual arbitration before the American Arbitration Association under the Federal Arbitration Act (“FAA”), and not to sue in court in front of a judge or jury. Instead, a neutral arbitrator will decide. Class action lawsuits, class-wide arbitrations, private attorney-general actions, and any other proceeding where someone acts in a representative capacity are not allowed; nor is combining individual proceedings without the consent of all parties. The complete Arbitration Agreement contains more terms and is at aka.ms/arb-agreement-1. You and Microsoft agree to these terms.
+
+10) ENTIRE AGREEMENT. This agreement, and any other terms Microsoft may provide for supplements, updates, or third-party applications, is the entire agreement for the Materials.
+
+11) APPLICABLE LAW AND PLACE TO RESOLVE DISPUTES. If you acquired the Materials in the United States or Canada, the laws of the state or province where you live (or, if a business, where your principal place of business is located) govern the interpretation of this agreement, claims for its breach, and all other claims (including consumer protection, unfair competition, and tort claims), regardless of conflict of laws principles, except that the FAA governs everything related to arbitration. If you acquired the Materials in any other country, its laws apply, except that the FAA governs everything related to arbitration. If U.S. federal jurisdiction exists, you and Microsoft consent to exclusive jurisdiction and venue in the federal court in King County, Washington for all disputes heard in court (excluding arbitration). If not, you and Microsoft consent to exclusive jurisdiction and venue in the Superior Court of King County, Washington for all disputes heard in court (excluding arbitration).
+
+12) CONSUMER RIGHTS; REGIONAL VARIATIONS. This agreement describes certain legal rights. You may have other rights, including consumer rights, under the laws of your state, province, or country. Separate and apart from your relationship with Microsoft, you may also have rights with respect to the party from which you acquired the Materials. This agreement does not change those other rights if the laws of your state, province, or country do not permit it to do so. For example, if you acquired the Materials in one of the below regions, or mandatory country law applies, then the following provisions apply to you:
+
+a) Australia. You have statutory guarantees under the Australian Consumer Law and nothing in this agreement is intended to affect those rights.
+
+b) Canada. If you acquired this software in Canada, you may stop receiving updates by turning off the automatic update feature, disconnecting your device from the Internet (if and when you re-connect to the Internet, however, the Materials will resume checking for and installing updates), or uninstalling the Materials. The product documentation, if any, may also specify how to turn off updates for your specific device or software.
+
+c) Germany and Austria.
+
+i. Warranty. The properly licensed software will perform substantially as described in any Microsoft materials that accompany the Materials. However, Microsoft gives no contractual guarantee in relation to the licensed software.
+
+ii. Limitation of Liability. In case of intentional conduct, gross negligence, claims based on the Product Liability Act, as well as, in case of death or personal or physical injury, Microsoft is liable according to the statutory law.
+
+Subject to the foregoing clause (ii), Microsoft will only be liable for slight negligence if Microsoft is in breach of such material contractual obligations, the fulfillment of which facilitate the due performance of this agreement, the breach of which would endanger the purpose of this agreement and the compliance with which a party may constantly trust in (so-called "cardinal obligations"). In other cases of slight negligence, Microsoft will not be liable for slight negligence.
+
+13) DISCLAIMER OF WARRANTY. THE MATERIALS ARE LICENSED “AS IS.” YOU BEAR THE RISK OF USING THEM. MICROSOFT GIVES NO EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. TO THE EXTENT PERMITTED UNDER APPLICABLE LAWS, MICROSOFT EXCLUDES ALL IMPLIED WARRANTIES, INCLUDING MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT.
+
+14) LIMITATION ON AND EXCLUSION OF DAMAGES. IF YOU HAVE ANY BASIS FOR RECOVERING DAMAGES DESPITE THE PRECEDING DISCLAIMER OF WARRANTY, YOU CAN RECOVER FROM MICROSOFT AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO U.S. $5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.
+
+This limitation applies to (a) anything related to the Materials, services, content (including code) on third party Internet sites, or third party applications; and (b) claims for breach of contract, warranty, guarantee, or condition; strict liability, negligence, or other tort; or any other claim; in each case to the extent permitted by applicable law.
+
+It also applies even if Microsoft knew or should have known about the possibility of the damages. The above limitation or exclusion may not apply to you because your state, province, or country may not allow the exclusion or limitation of incidental, consequential, or other damages.
params_weight/Phi_2/README.md
ADDED
@@ -0,0 +1,36 @@
+---
+license: other
+license_name: microsoft-research-license
+license_link: LICENSE
+---
+
+**DISCLAIMER**: I don't own the weights to this model, this is a property of Microsoft and taken from their official repository : [microsoft/phi-2](https://huggingface.co/microsoft/phi-2).
+The sole purpose of this repository is to use this model through the `transformers` API or to load and use the model using the HuggingFace `transformers` library.
+
+
+# Usage
+
+First make sure you have the latest version of the `transformers` installed.
+
+```
+pip install -U transformers
+```
+
+Then use the transformers library to load the model from the library itself
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model = AutoModelForCausalLM.from_pretrained("susnato/phi-2")
+tokenizer = AutoTokenizer.from_pretrained("susnato/phi-2")
+
+inputs = tokenizer('''def print_prime(n):
+   """
+   Print all primes between 1 and n
+   """''', return_tensors="pt", return_attention_mask=False)
+
+outputs = model.generate(**inputs, max_length=200)
+text = tokenizer.batch_decode(outputs)[0]
+print(text)
+
+```
params_weight/Phi_2/added_tokens.json
ADDED
@@ -0,0 +1,40 @@
+{
+  "\t\t": 50294,
+  "\t\t\t": 50293,
+  "\t\t\t\t": 50292,
+  "\t\t\t\t\t": 50291,
+  "\t\t\t\t\t\t": 50290,
+  "\t\t\t\t\t\t\t": 50289,
+  "\t\t\t\t\t\t\t\t": 50288,
+  "\t\t\t\t\t\t\t\t\t": 50287,
+  " ": 50286,
+  " ": 50285,
+  " ": 50284,
+  " ": 50283,
+  " ": 50282,
+  " ": 50281,
+  " ": 50280,
+  " ": 50279,
+  " ": 50278,
+  " ": 50277,
+  " ": 50276,
+  " ": 50275,
+  " ": 50274,
+  " ": 50273,
+  " ": 50272,
+  " ": 50271,
+  " ": 50270,
+  " ": 50269,
+  " ": 50268,
+  " ": 50267,
+  " ": 50266,
+  " ": 50265,
+  " ": 50264,
+  " ": 50263,
+  " ": 50262,
+  " ": 50261,
+  " ": 50260,
+  " ": 50259,
+  " ": 50258,
+  " ": 50257
+}
params_weight/Phi_2/config.json
ADDED
@@ -0,0 +1,36 @@
+{
+  "architectures": [
+    "PhiForCausalLM"
+  ],
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "hidden_act": "gelu_new",
+  "hidden_size": 2560,
+  "initializer_range": 0.02,
+  "intermediate_size": 10240,
+  "max_position_embeddings": 2048,
+  "model_type": "pointllm",
+  "num_attention_heads": 32,
+  "num_hidden_layers": 32,
+  "pretraining_tp": 1,
+  "resid_pdrop": 0.1,
+  "embd_pdrop": 0.0,
+  "layer_norm_eps": 1e-05,
+  "rope_scaling": null,
+  "rope_theta": 10000.0,
+  "partial_rotary_factor": 0.4,
+  "qk_layernorm": false,
+  "tie_word_embeddings": false,
+  "torch_dtype": "float16",
+  "transformers_version": "4.35.2",
+  "use_cache": true,
+  "vocab_size": 51200,
+  "point_backbone": "PointBERT",
+  "point_backbone_ckpt": "",
+  "point_backbone_config_name": "PointTransformer_8192point_2layer",
+  "use_color": true,
+  "mm_use_point_start_end": true,
+  "DEFAULT_POINT_PATCH_TOKEN": "<point_patch>",
+  "DEFAULT_POINT_START_TOKEN": "<point_start>",
+  "DEFAULT_POINT_END_TOKEN": "<point_end>"
+}
params_weight/Phi_2/config_backup.json
ADDED
@@ -0,0 +1,28 @@
+{
+  "architectures": [
+    "PhiForCausalLM"
+  ],
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "hidden_act": "gelu_new",
+  "hidden_size": 2560,
+  "initializer_range": 0.02,
+  "intermediate_size": 10240,
+  "max_position_embeddings": 2048,
+  "model_type": "phi",
+  "num_attention_heads": 32,
+  "num_hidden_layers": 32,
+  "pretraining_tp": 1,
+  "resid_pdrop": 0.1,
+  "embd_pdrop": 0.0,
+  "layer_norm_eps": 1e-05,
+  "rope_scaling": null,
+  "rope_theta": 10000.0,
+  "partial_rotary_factor": 0.4,
+  "qk_layernorm": false,
+  "tie_word_embeddings": false,
+  "torch_dtype": "float16",
+  "transformers_version": "4.35.2",
+  "use_cache": true,
+  "vocab_size": 51200
+}
params_weight/Phi_2/generation_config.json
ADDED
@@ -0,0 +1,4 @@
+{
+  "_from_model_config": true,
+  "transformers_version": "4.35.2"
+}
params_weight/Phi_2/gitattributes
ADDED
@@ -0,0 +1,35 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
params_weight/Phi_2/merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
params_weight/Phi_2/pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:039628734f71c2a2dacc551449f65b9351183fb0d950e78d6a870ffaf115ed9f
+size 5559473539
params_weight/Phi_2/special_tokens_map.json
ADDED
@@ -0,0 +1,5 @@
+{
+  "bos_token": "<|endoftext|>",
+  "eos_token": "<|endoftext|>",
+  "unk_token": "<|endoftext|>"
+}
params_weight/Phi_2/tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
params_weight/Phi_2/tokenizer_config.json
ADDED
@@ -0,0 +1,323 @@
+{
+  "add_prefix_space": false,
+  "added_tokens_decoder": {
+    "50256": {
+      "content": "<|endoftext|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "50257": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50258": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50259": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50260": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50261": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50262": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50263": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50264": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50265": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50266": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50267": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50268": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50269": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50270": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50271": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50272": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50273": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50274": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50275": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50276": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50277": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50278": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50279": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50280": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50281": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50282": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50283": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50284": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50285": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50286": {
+      "content": " ",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50287": {
+      "content": "\t\t\t\t\t\t\t\t\t",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50288": {
+      "content": "\t\t\t\t\t\t\t\t",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50289": {
+      "content": "\t\t\t\t\t\t\t",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50290": {
+      "content": "\t\t\t\t\t\t",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50291": {
+      "content": "\t\t\t\t\t",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50292": {
+      "content": "\t\t\t\t",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50293": {
+      "content": "\t\t\t",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "50294": {
+      "content": "\t\t",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    }
+  },
+  "bos_token": "<|endoftext|>",
+  "clean_up_tokenization_spaces": true,
+  "eos_token": "<|endoftext|>",
+  "model_max_length": 2048,
+  "tokenizer_class": "CodeGenTokenizer",
+  "unk_token": "<|endoftext|>"
+}
params_weight/Phi_2/vocab.json
ADDED
The diff for this file is too large to render.
See raw diff
params_weight/TinyGPT_V_stage_3/TinyGPT-V_for_Stage3.pth
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7b70bb29a827792ed2f17655ec5bba2f964e7a08fb718edb914004edc2fd5c8
+size 543640119
params_weight/all-mpnet-base-v2/1_Pooling/config.json
ADDED
@@ -0,0 +1,7 @@
+{
+  "word_embedding_dimension": 768,
+  "pooling_mode_cls_token": false,
+  "pooling_mode_mean_tokens": true,
+  "pooling_mode_max_tokens": false,
+  "pooling_mode_mean_sqrt_len_tokens": false
+}
params_weight/all-mpnet-base-v2/README.md
ADDED
@@ -0,0 +1,176 @@
+---
+pipeline_tag: sentence-similarity
+tags:
+- sentence-transformers
+- feature-extraction
+- sentence-similarity
+language: en
+license: apache-2.0
+datasets:
+- s2orc
+- flax-sentence-embeddings/stackexchange_xml
+- ms_marco
+- gooaq
+- yahoo_answers_topics
+- code_search_net
+- search_qa
+- eli5
+- snli
+- multi_nli
+- wikihow
+- natural_questions
+- trivia_qa
+- embedding-data/sentence-compression
+- embedding-data/flickr30k-captions
+- embedding-data/altlex
+- embedding-data/simple-wiki
+- embedding-data/QQP
+- embedding-data/SPECTER
+- embedding-data/PAQ_pairs
+- embedding-data/WikiAnswers
+
+---
+
+
+# all-mpnet-base-v2
+This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
+
+## Usage (Sentence-Transformers)
+Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
+
+```
+pip install -U sentence-transformers
+```
+
+Then you can use the model like this:
+```python
+from sentence_transformers import SentenceTransformer
+sentences = ["This is an example sentence", "Each sentence is converted"]
+
+model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
+embeddings = model.encode(sentences)
+print(embeddings)
+```
+
+## Usage (HuggingFace Transformers)
+Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
+
+```python
+from transformers import AutoTokenizer, AutoModel
+import torch
+import torch.nn.functional as F
+
+#Mean Pooling - Take attention mask into account for correct averaging
+def mean_pooling(model_output, attention_mask):
+    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
+    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
+    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
+
+
+# Sentences we want sentence embeddings for
+sentences = ['This is an example sentence', 'Each sentence is converted']
+
+# Load model from HuggingFace Hub
+tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2')
+model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2')
+
+# Tokenize sentences
+encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
+
+# Compute token embeddings
+with torch.no_grad():
+    model_output = model(**encoded_input)
+
+# Perform pooling
+sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
+
+# Normalize embeddings
+sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
+
+print("Sentence embeddings:")
+print(sentence_embeddings)
+```
+
+## Evaluation Results
+
+For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2)
+
+------
+
+## Background
+
+The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
+contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned in on a
+1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
+
+We developped this model during the
+[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
+organized by Hugging Face. We developped this model as part of the project:
+[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks.
+
+## Intended uses
+
+Our model is intented to be used as a sentence and short paragraph encoder. Given an input text, it ouptuts a vector which captures
+the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
+
+By default, input text longer than 384 word pieces is truncated.
+
+
+## Training procedure
+
+### Pre-training
+
+We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure.
+
+### Fine-tuning
+
+We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch.
+We then apply the cross entropy loss by comparing with true pairs.
+
+#### Hyper parameters
+
+We trained ou model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core).
+We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
+a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
+
+#### Training data
+
+We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
+We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file.
+
+
+| Dataset | Paper | Number of training tuples |
+|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
+| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
+| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
+| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
+| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
+| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
+| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
+| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
+| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
+| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
+| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
+| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
+| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
+| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
+| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
+| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
+| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
+| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
+| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
+| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
+| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
+| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
+| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
+| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
+| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
+| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
+| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
+| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
+| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
+| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
+| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
+| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
+| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
+| **Total** | | **1,170,060,424** |
params_weight/all-mpnet-base-v2/config.json
ADDED
@@ -0,0 +1,23 @@
+{
+  "_name_or_path": "microsoft/mpnet-base",
+  "architectures": [
+    "MPNetForMaskedLM"
+  ],
+  "attention_probs_dropout_prob": 0.1,
+  "bos_token_id": 0,
+  "eos_token_id": 2,
+  "hidden_act": "gelu",
+  "hidden_dropout_prob": 0.1,
+  "hidden_size": 768,
+  "initializer_range": 0.02,
+  "intermediate_size": 3072,
+  "layer_norm_eps": 1e-05,
+  "max_position_embeddings": 514,
+  "model_type": "mpnet",
+  "num_attention_heads": 12,
+  "num_hidden_layers": 12,
+  "pad_token_id": 1,
+  "relative_attention_num_buckets": 32,
+  "transformers_version": "4.8.2",
+  "vocab_size": 30527
+}
params_weight/all-mpnet-base-v2/config_sentence_transformers.json
ADDED
@@ -0,0 +1,7 @@
+{
+  "__version__": {
+    "sentence_transformers": "2.0.0",
+    "transformers": "4.6.1",
+    "pytorch": "1.8.1"
+  }
+}
params_weight/all-mpnet-base-v2/data_config.json
ADDED
@@ -0,0 +1,1452 @@
+[
+    {
+        "name": "stackexchange_title_body/skeptics.stackexchange.com.jsonl.gz",
+        "lines": 10009,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/islam.stackexchange.com.jsonl.gz",
+        "lines": 10052,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_Title_Answer/islam.stackexchange.com.jsonl.gz",
+        "lines": 10052,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/anime.stackexchange.com.jsonl.gz",
+        "lines": 10131,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_Title_Answer/anime.stackexchange.com.jsonl.gz",
+        "lines": 10131,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_title_body/writers.stackexchange.com.jsonl.gz",
+        "lines": 10157,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_title_body/astronomy.stackexchange.com.jsonl.gz",
+        "lines": 10462,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_title_body/vi.stackexchange.com.jsonl.gz",
+        "lines": 10551,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/french.stackexchange.com.jsonl.gz",
+        "lines": 10578,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_Title_Answer/french.stackexchange.com.jsonl.gz",
+        "lines": 10578,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_title_body/cstheory.stackexchange.com.jsonl.gz",
+        "lines": 10642,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/civicrm.stackexchange.com.jsonl.gz",
+        "lines": 10648,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_Title_Answer/civicrm.stackexchange.com.jsonl.gz",
+        "lines": 10648,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/expressionengine.stackexchange.com.jsonl.gz",
+        "lines": 10742,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_Title_Answer/expressionengine.stackexchange.com.jsonl.gz",
+        "lines": 10742,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_title_body/engineering.stackexchange.com.jsonl.gz",
+        "lines": 10753,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/history.stackexchange.com.jsonl.gz",
+        "lines": 10766,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_Title_Answer/history.stackexchange.com.jsonl.gz",
+        "lines": 10766,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_title_body/french.stackexchange.com.jsonl.gz",
+        "lines": 10794,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/politics.stackexchange.com.jsonl.gz",
+        "lines": 11047,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_Title_Answer/politics.stackexchange.com.jsonl.gz",
+        "lines": 11047,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_title_body/economics.stackexchange.com.jsonl.gz",
+        "lines": 11115,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/craftcms.stackexchange.com.jsonl.gz",
+        "lines": 11236,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_Title_Answer/craftcms.stackexchange.com.jsonl.gz",
+        "lines": 11236,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_title_body/anime.stackexchange.com.jsonl.gz",
+        "lines": 11444,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/christianity.stackexchange.com.jsonl.gz",
+        "lines": 11498,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_Title_Answer/christianity.stackexchange.com.jsonl.gz",
+        "lines": 11498,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/softwarerecs.stackexchange.com.jsonl.gz",
+        "lines": 11761,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_Title_Answer/softwarerecs.stackexchange.com.jsonl.gz",
+        "lines": 11761,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/boardgames.stackexchange.com.jsonl.gz",
+        "lines": 11805,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_Title_Answer/boardgames.stackexchange.com.jsonl.gz",
+        "lines": 11805,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_title_body/islam.stackexchange.com.jsonl.gz",
+        "lines": 11853,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_title_body/expressionengine.stackexchange.com.jsonl.gz",
+        "lines": 11866,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_title_body/politics.stackexchange.com.jsonl.gz",
+        "lines": 11894,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_title_body/history.stackexchange.com.jsonl.gz",
+        "lines": 12021,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_title_body/christianity.stackexchange.com.jsonl.gz",
+        "lines": 12108,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_title_body/boardgames.stackexchange.com.jsonl.gz",
+        "lines": 12149,
+        "weight": 1
+    },
+    {
+        "name": "flickr30k_captions.jsonl.gz",
+        "lines": 317695,
+        "weight": 1
+    },
+    {
+        "name": "coco_captions.jsonl.gz",
+        "lines": 828395,
+        "weight": 1
+    },
+    {
+        "name": "codesearchnet.jsonl.gz",
+        "lines": 1151414,
+        "weight": 1
+    },
+    {
+        "name": "stackexchange_title_body/civicrm.stackexchange.com.jsonl.gz",
+        "lines": 12543,
+        "weight": 2
+    },
+    {
+        "name": "stackexchange_title_body/craftcms.stackexchange.com.jsonl.gz",
+        "lines": 12574,
+        "weight": 2
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/networkengineering.stackexchange.com.jsonl.gz",
+        "lines": 12590,
+        "weight": 2
+    },
+    {
+        "name": "stackexchange_Title_Answer/networkengineering.stackexchange.com.jsonl.gz",
+        "lines": 12590,
+        "weight": 2
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/space.stackexchange.com.jsonl.gz",
+        "lines": 12893,
+        "weight": 2
+    },
+    {
+        "name": "stackexchange_Title_Answer/space.stackexchange.com.jsonl.gz",
+        "lines": 12893,
+        "weight": 2
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/quant.stackexchange.com.jsonl.gz",
+        "lines": 12933,
+        "weight": 2
+    },
+    {
+        "name": "stackexchange_Title_Answer/quant.stackexchange.com.jsonl.gz",
+        "lines": 12933,
+        "weight": 2
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/philosophy.stackexchange.com.jsonl.gz",
+        "lines": 13114,
+        "weight": 2
+    },
+    {
+        "name": "stackexchange_Title_Answer/philosophy.stackexchange.com.jsonl.gz",
+        "lines": 13114,
+        "weight": 2
+    },
+    {
+        "name": "stackexchange_TitleBody_Answer/gardening.stackexchange.com.jsonl.gz",
+        "lines": 13246,
|
255 |
+
"weight": 2
|
256 |
+
},
|
257 |
+
{
|
258 |
+
"name": "stackexchange_Title_Answer/gardening.stackexchange.com.jsonl.gz",
|
259 |
+
"lines": 13246,
|
260 |
+
"weight": 2
|
261 |
+
},
|
262 |
+
{
|
263 |
+
"name": "stackexchange_title_body/hinduism.stackexchange.com.jsonl.gz",
|
264 |
+
"lines": 13450,
|
265 |
+
"weight": 2
|
266 |
+
},
|
267 |
+
{
|
268 |
+
"name": "stackexchange_title_body/networkengineering.stackexchange.com.jsonl.gz",
|
269 |
+
"lines": 13454,
|
270 |
+
"weight": 2
|
271 |
+
},
|
272 |
+
{
|
273 |
+
"name": "stackexchange_TitleBody_Answer/german.stackexchange.com.jsonl.gz",
|
274 |
+
"lines": 13733,
|
275 |
+
"weight": 2
|
276 |
+
},
|
277 |
+
{
|
278 |
+
"name": "stackexchange_Title_Answer/german.stackexchange.com.jsonl.gz",
|
279 |
+
"lines": 13733,
|
280 |
+
"weight": 2
|
281 |
+
},
|
282 |
+
{
|
283 |
+
"name": "stackexchange_title_body/german.stackexchange.com.jsonl.gz",
|
284 |
+
"lines": 13950,
|
285 |
+
"weight": 2
|
286 |
+
},
|
287 |
+
{
|
288 |
+
"name": "stackexchange_title_body/philosophy.stackexchange.com.jsonl.gz",
|
289 |
+
"lines": 14829,
|
290 |
+
"weight": 2
|
291 |
+
},
|
292 |
+
{
|
293 |
+
"name": "stackexchange_title_body/gardening.stackexchange.com.jsonl.gz",
|
294 |
+
"lines": 15136,
|
295 |
+
"weight": 2
|
296 |
+
},
|
297 |
+
{
|
298 |
+
"name": "stackexchange_title_body/space.stackexchange.com.jsonl.gz",
|
299 |
+
"lines": 15142,
|
300 |
+
"weight": 2
|
301 |
+
},
|
302 |
+
{
|
303 |
+
"name": "stackexchange_TitleBody_Answer/bicycles.stackexchange.com.jsonl.gz",
|
304 |
+
"lines": 15708,
|
305 |
+
"weight": 2
|
306 |
+
},
|
307 |
+
{
|
308 |
+
"name": "stackexchange_Title_Answer/bicycles.stackexchange.com.jsonl.gz",
|
309 |
+
"lines": 15708,
|
310 |
+
"weight": 2
|
311 |
+
},
|
312 |
+
{
|
313 |
+
"name": "stackexchange_TitleBody_Answer/law.stackexchange.com.jsonl.gz",
|
314 |
+
"lines": 16133,
|
315 |
+
"weight": 2
|
316 |
+
},
|
317 |
+
{
|
318 |
+
"name": "stackexchange_Title_Answer/law.stackexchange.com.jsonl.gz",
|
319 |
+
"lines": 16133,
|
320 |
+
"weight": 2
|
321 |
+
},
|
322 |
+
{
|
323 |
+
"name": "stackexchange_TitleBody_Answer/arduino.stackexchange.com.jsonl.gz",
|
324 |
+
"lines": 16281,
|
325 |
+
"weight": 2
|
326 |
+
},
|
327 |
+
{
|
328 |
+
"name": "stackexchange_Title_Answer/arduino.stackexchange.com.jsonl.gz",
|
329 |
+
"lines": 16281,
|
330 |
+
"weight": 2
|
331 |
+
},
|
332 |
+
{
|
333 |
+
"name": "stackexchange_title_body/bicycles.stackexchange.com.jsonl.gz",
|
334 |
+
"lines": 16353,
|
335 |
+
"weight": 2
|
336 |
+
},
|
337 |
+
{
|
338 |
+
"name": "stackexchange_TitleBody_Answer/emacs.stackexchange.com.jsonl.gz",
|
339 |
+
"lines": 16830,
|
340 |
+
"weight": 2
|
341 |
+
},
|
342 |
+
{
|
343 |
+
"name": "stackexchange_Title_Answer/emacs.stackexchange.com.jsonl.gz",
|
344 |
+
"lines": 16830,
|
345 |
+
"weight": 2
|
346 |
+
},
|
347 |
+
{
|
348 |
+
"name": "stackexchange_title_body/quant.stackexchange.com.jsonl.gz",
|
349 |
+
"lines": 17261,
|
350 |
+
"weight": 2
|
351 |
+
},
|
352 |
+
{
|
353 |
+
"name": "stackexchange_TitleBody_Answer/dsp.stackexchange.com.jsonl.gz",
|
354 |
+
"lines": 17430,
|
355 |
+
"weight": 2
|
356 |
+
},
|
357 |
+
{
|
358 |
+
"name": "stackexchange_Title_Answer/dsp.stackexchange.com.jsonl.gz",
|
359 |
+
"lines": 17430,
|
360 |
+
"weight": 2
|
361 |
+
},
|
362 |
+
{
|
363 |
+
"name": "stackexchange_TitleBody_Answer/puzzling.stackexchange.com.jsonl.gz",
|
364 |
+
"lines": 17448,
|
365 |
+
"weight": 2
|
366 |
+
},
|
367 |
+
{
|
368 |
+
"name": "stackexchange_Title_Answer/puzzling.stackexchange.com.jsonl.gz",
|
369 |
+
"lines": 17448,
|
370 |
+
"weight": 2
|
371 |
+
},
|
372 |
+
{
|
373 |
+
"name": "stackexchange_title_body/puzzling.stackexchange.com.jsonl.gz",
|
374 |
+
"lines": 17851,
|
375 |
+
"weight": 2
|
376 |
+
},
|
377 |
+
{
|
378 |
+
"name": "stackexchange_title_body/law.stackexchange.com.jsonl.gz",
|
379 |
+
"lines": 17941,
|
380 |
+
"weight": 2
|
381 |
+
},
|
382 |
+
{
|
383 |
+
"name": "stackexchange_TitleBody_Answer/movies.stackexchange.com.jsonl.gz",
|
384 |
+
"lines": 18243,
|
385 |
+
"weight": 2
|
386 |
+
},
|
387 |
+
{
|
388 |
+
"name": "stackexchange_Title_Answer/movies.stackexchange.com.jsonl.gz",
|
389 |
+
"lines": 18243,
|
390 |
+
"weight": 2
|
391 |
+
},
|
392 |
+
{
|
393 |
+
"name": "stackexchange_TitleBody_Answer/mechanics.stackexchange.com.jsonl.gz",
|
394 |
+
"lines": 18613,
|
395 |
+
"weight": 2
|
396 |
+
},
|
397 |
+
{
|
398 |
+
"name": "stackexchange_Title_Answer/mechanics.stackexchange.com.jsonl.gz",
|
399 |
+
"lines": 18613,
|
400 |
+
"weight": 2
|
401 |
+
},
|
402 |
+
{
|
403 |
+
"name": "stackexchange_TitleBody_Answer/aviation.stackexchange.com.jsonl.gz",
|
404 |
+
"lines": 18755,
|
405 |
+
"weight": 2
|
406 |
+
},
|
407 |
+
{
|
408 |
+
"name": "stackexchange_Title_Answer/aviation.stackexchange.com.jsonl.gz",
|
409 |
+
"lines": 18755,
|
410 |
+
"weight": 2
|
411 |
+
},
|
412 |
+
{
|
413 |
+
"name": "stackexchange_TitleBody_Answer/biology.stackexchange.com.jsonl.gz",
|
414 |
+
"lines": 19277,
|
415 |
+
"weight": 2
|
416 |
+
},
|
417 |
+
{
|
418 |
+
"name": "stackexchange_Title_Answer/biology.stackexchange.com.jsonl.gz",
|
419 |
+
"lines": 19277,
|
420 |
+
"weight": 2
|
421 |
+
},
|
422 |
+
{
|
423 |
+
"name": "stackexchange_TitleBody_Answer/crypto.stackexchange.com.jsonl.gz",
|
424 |
+
"lines": 19404,
|
425 |
+
"weight": 2
|
426 |
+
},
|
427 |
+
{
|
428 |
+
"name": "stackexchange_Title_Answer/crypto.stackexchange.com.jsonl.gz",
|
429 |
+
"lines": 19404,
|
430 |
+
"weight": 2
|
431 |
+
},
|
432 |
+
{
|
433 |
+
"name": "stackexchange_title_body/arduino.stackexchange.com.jsonl.gz",
|
434 |
+
"lines": 19553,
|
435 |
+
"weight": 2
|
436 |
+
},
|
437 |
+
{
|
438 |
+
"name": "stackexchange_TitleBody_Answer/music.stackexchange.com.jsonl.gz",
|
439 |
+
"lines": 19936,
|
440 |
+
"weight": 2
|
441 |
+
},
|
442 |
+
{
|
443 |
+
"name": "stackexchange_Title_Answer/music.stackexchange.com.jsonl.gz",
|
444 |
+
"lines": 19936,
|
445 |
+
"weight": 2
|
446 |
+
},
|
447 |
+
{
|
448 |
+
"name": "stackexchange_title_body/aviation.stackexchange.com.jsonl.gz",
|
449 |
+
"lines": 20139,
|
450 |
+
"weight": 2
|
451 |
+
},
|
452 |
+
{
|
453 |
+
"name": "stackexchange_title_body/softwarerecs.stackexchange.com.jsonl.gz",
|
454 |
+
"lines": 20142,
|
455 |
+
"weight": 2
|
456 |
+
},
|
457 |
+
{
|
458 |
+
"name": "stackexchange_title_body/movies.stackexchange.com.jsonl.gz",
|
459 |
+
"lines": 20181,
|
460 |
+
"weight": 2
|
461 |
+
},
|
462 |
+
{
|
463 |
+
"name": "stackexchange_TitleBody_Answer/datascience.stackexchange.com.jsonl.gz",
|
464 |
+
"lines": 20503,
|
465 |
+
"weight": 2
|
466 |
+
},
|
467 |
+
{
|
468 |
+
"name": "stackexchange_Title_Answer/datascience.stackexchange.com.jsonl.gz",
|
469 |
+
"lines": 20503,
|
470 |
+
"weight": 2
|
471 |
+
},
|
472 |
+
{
|
473 |
+
"name": "stackexchange_title_body/music.stackexchange.com.jsonl.gz",
|
474 |
+
"lines": 20636,
|
475 |
+
"weight": 2
|
476 |
+
},
|
477 |
+
{
|
478 |
+
"name": "stackexchange_TitleBody_Answer/japanese.stackexchange.com.jsonl.gz",
|
479 |
+
"lines": 20948,
|
480 |
+
"weight": 2
|
481 |
+
},
|
482 |
+
{
|
483 |
+
"name": "stackexchange_Title_Answer/japanese.stackexchange.com.jsonl.gz",
|
484 |
+
"lines": 20948,
|
485 |
+
"weight": 2
|
486 |
+
},
|
487 |
+
{
|
488 |
+
"name": "stackexchange_title_body/emacs.stackexchange.com.jsonl.gz",
|
489 |
+
"lines": 21055,
|
490 |
+
"weight": 2
|
491 |
+
},
|
492 |
+
{
|
493 |
+
"name": "stackexchange_title_body/dsp.stackexchange.com.jsonl.gz",
|
494 |
+
"lines": 21252,
|
495 |
+
"weight": 2
|
496 |
+
},
|
497 |
+
{
|
498 |
+
"name": "stackexchange_title_body/japanese.stackexchange.com.jsonl.gz",
|
499 |
+
"lines": 22056,
|
500 |
+
"weight": 2
|
501 |
+
},
|
502 |
+
{
|
503 |
+
"name": "stackexchange_TitleBody_Answer/bitcoin.stackexchange.com.jsonl.gz",
|
504 |
+
"lines": 22474,
|
505 |
+
"weight": 2
|
506 |
+
},
|
507 |
+
{
|
508 |
+
"name": "stackexchange_Title_Answer/bitcoin.stackexchange.com.jsonl.gz",
|
509 |
+
"lines": 22474,
|
510 |
+
"weight": 2
|
511 |
+
},
|
512 |
+
{
|
513 |
+
"name": "stackexchange_TitleBody_Answer/cooking.stackexchange.com.jsonl.gz",
|
514 |
+
"lines": 22641,
|
515 |
+
"weight": 2
|
516 |
+
},
|
517 |
+
{
|
518 |
+
"name": "stackexchange_Title_Answer/cooking.stackexchange.com.jsonl.gz",
|
519 |
+
"lines": 22641,
|
520 |
+
"weight": 2
|
521 |
+
},
|
522 |
+
{
|
523 |
+
"name": "stackexchange_title_body/mechanics.stackexchange.com.jsonl.gz",
|
524 |
+
"lines": 22868,
|
525 |
+
"weight": 2
|
526 |
+
},
|
527 |
+
{
|
528 |
+
"name": "stackexchange_TitleBody_Answer/photo.stackexchange.com.jsonl.gz",
|
529 |
+
"lines": 23204,
|
530 |
+
"weight": 2
|
531 |
+
},
|
532 |
+
{
|
533 |
+
"name": "stackexchange_Title_Answer/photo.stackexchange.com.jsonl.gz",
|
534 |
+
"lines": 23204,
|
535 |
+
"weight": 2
|
536 |
+
},
|
537 |
+
{
|
538 |
+
"name": "stackexchange_title_body/crypto.stackexchange.com.jsonl.gz",
|
539 |
+
"lines": 23231,
|
540 |
+
"weight": 2
|
541 |
+
},
|
542 |
+
{
|
543 |
+
"name": "stackexchange_title_body/cooking.stackexchange.com.jsonl.gz",
|
544 |
+
"lines": 23705,
|
545 |
+
"weight": 2
|
546 |
+
},
|
547 |
+
{
|
548 |
+
"name": "stackexchange_title_body/photo.stackexchange.com.jsonl.gz",
|
549 |
+
"lines": 23753,
|
550 |
+
"weight": 2
|
551 |
+
},
|
552 |
+
{
|
553 |
+
"name": "stackexchange_TitleBody_Answer/workplace.stackexchange.com.jsonl.gz",
|
554 |
+
"lines": 24012,
|
555 |
+
"weight": 2
|
556 |
+
},
|
557 |
+
{
|
558 |
+
"name": "stackexchange_Title_Answer/workplace.stackexchange.com.jsonl.gz",
|
559 |
+
"lines": 24012,
|
560 |
+
"weight": 2
|
561 |
+
},
|
562 |
+
{
|
563 |
+
"name": "stackexchange_TitleBody_Answer/meta.stackoverflow.com.jsonl.gz",
|
564 |
+
"lines": 24044,
|
565 |
+
"weight": 2
|
566 |
+
},
|
567 |
+
{
|
568 |
+
"name": "stackexchange_Title_Answer/meta.stackoverflow.com.jsonl.gz",
|
569 |
+
"lines": 24044,
|
570 |
+
"weight": 2
|
571 |
+
},
|
572 |
+
{
|
573 |
+
"name": "stackexchange_TitleBody_Answer/raspberrypi.stackexchange.com.jsonl.gz",
|
574 |
+
"lines": 24143,
|
575 |
+
"weight": 2
|
576 |
+
},
|
577 |
+
{
|
578 |
+
"name": "stackexchange_Title_Answer/raspberrypi.stackexchange.com.jsonl.gz",
|
579 |
+
"lines": 24143,
|
580 |
+
"weight": 2
|
581 |
+
},
|
582 |
+
{
|
583 |
+
"name": "stackexchange_title_body/workplace.stackexchange.com.jsonl.gz",
|
584 |
+
"lines": 24189,
|
585 |
+
"weight": 2
|
586 |
+
},
|
587 |
+
{
|
588 |
+
"name": "stackexchange_title_body/biology.stackexchange.com.jsonl.gz",
|
589 |
+
"lines": 24447,
|
590 |
+
"weight": 3
|
591 |
+
},
|
592 |
+
{
|
593 |
+
"name": "stackexchange_TitleBody_Answer/webapps.stackexchange.com.jsonl.gz",
|
594 |
+
"lines": 24867,
|
595 |
+
"weight": 3
|
596 |
+
},
|
597 |
+
{
|
598 |
+
"name": "stackexchange_Title_Answer/webapps.stackexchange.com.jsonl.gz",
|
599 |
+
"lines": 24867,
|
600 |
+
"weight": 3
|
601 |
+
},
|
602 |
+
{
|
603 |
+
"name": "stackexchange_title_body/bitcoin.stackexchange.com.jsonl.gz",
|
604 |
+
"lines": 25374,
|
605 |
+
"weight": 3
|
606 |
+
},
|
607 |
+
{
|
608 |
+
"name": "stackexchange_TitleBody_Answer/judaism.stackexchange.com.jsonl.gz",
|
609 |
+
"lines": 26085,
|
610 |
+
"weight": 3
|
611 |
+
},
|
612 |
+
{
|
613 |
+
"name": "stackexchange_Title_Answer/judaism.stackexchange.com.jsonl.gz",
|
614 |
+
"lines": 26085,
|
615 |
+
"weight": 3
|
616 |
+
},
|
617 |
+
{
|
618 |
+
"name": "stackexchange_TitleBody_Answer/ethereum.stackexchange.com.jsonl.gz",
|
619 |
+
"lines": 26124,
|
620 |
+
"weight": 3
|
621 |
+
},
|
622 |
+
{
|
623 |
+
"name": "stackexchange_Title_Answer/ethereum.stackexchange.com.jsonl.gz",
|
624 |
+
"lines": 26124,
|
625 |
+
"weight": 3
|
626 |
+
},
|
627 |
+
{
|
628 |
+
"name": "stackexchange_TitleBody_Answer/worldbuilding.stackexchange.com.jsonl.gz",
|
629 |
+
"lines": 26210,
|
630 |
+
"weight": 3
|
631 |
+
},
|
632 |
+
{
|
633 |
+
"name": "stackexchange_Title_Answer/worldbuilding.stackexchange.com.jsonl.gz",
|
634 |
+
"lines": 26210,
|
635 |
+
"weight": 3
|
636 |
+
},
|
637 |
+
{
|
638 |
+
"name": "stackexchange_title_body/worldbuilding.stackexchange.com.jsonl.gz",
|
639 |
+
"lines": 26763,
|
640 |
+
"weight": 3
|
641 |
+
},
|
642 |
+
{
|
643 |
+
"name": "stackexchange_TitleBody_Answer/chemistry.stackexchange.com.jsonl.gz",
|
644 |
+
"lines": 27061,
|
645 |
+
"weight": 3
|
646 |
+
},
|
647 |
+
{
|
648 |
+
"name": "stackexchange_Title_Answer/chemistry.stackexchange.com.jsonl.gz",
|
649 |
+
"lines": 27061,
|
650 |
+
"weight": 3
|
651 |
+
},
|
652 |
+
{
|
653 |
+
"name": "stackexchange_title_body/datascience.stackexchange.com.jsonl.gz",
|
654 |
+
"lines": 27397,
|
655 |
+
"weight": 3
|
656 |
+
},
|
657 |
+
{
|
658 |
+
"name": "stackexchange_TitleBody_Answer/graphicdesign.stackexchange.com.jsonl.gz",
|
659 |
+
"lines": 28083,
|
660 |
+
"weight": 3
|
661 |
+
},
|
662 |
+
{
|
663 |
+
"name": "stackexchange_Title_Answer/graphicdesign.stackexchange.com.jsonl.gz",
|
664 |
+
"lines": 28083,
|
665 |
+
"weight": 3
|
666 |
+
},
|
667 |
+
{
|
668 |
+
"name": "stackexchange_TitleBody_Answer/ux.stackexchange.com.jsonl.gz",
|
669 |
+
"lines": 28901,
|
670 |
+
"weight": 3
|
671 |
+
},
|
672 |
+
{
|
673 |
+
"name": "stackexchange_Title_Answer/ux.stackexchange.com.jsonl.gz",
|
674 |
+
"lines": 28901,
|
675 |
+
"weight": 3
|
676 |
+
},
|
677 |
+
{
|
678 |
+
"name": "stackexchange_title_body/ux.stackexchange.com.jsonl.gz",
|
679 |
+
"lines": 29403,
|
680 |
+
"weight": 3
|
681 |
+
},
|
682 |
+
{
|
683 |
+
"name": "stackexchange_TitleBody_Answer/money.stackexchange.com.jsonl.gz",
|
684 |
+
"lines": 29404,
|
685 |
+
"weight": 3
|
686 |
+
},
|
687 |
+
{
|
688 |
+
"name": "stackexchange_Title_Answer/money.stackexchange.com.jsonl.gz",
|
689 |
+
"lines": 29404,
|
690 |
+
"weight": 3
|
691 |
+
},
|
692 |
+
{
|
693 |
+
"name": "stackexchange_title_body/webapps.stackexchange.com.jsonl.gz",
|
694 |
+
"lines": 29697,
|
695 |
+
"weight": 3
|
696 |
+
},
|
697 |
+
{
|
698 |
+
"name": "stackexchange_TitleBody_Answer/cs.stackexchange.com.jsonl.gz",
|
699 |
+
"lines": 30010,
|
700 |
+
"weight": 3
|
701 |
+
},
|
702 |
+
{
|
703 |
+
"name": "stackexchange_Title_Answer/cs.stackexchange.com.jsonl.gz",
|
704 |
+
"lines": 30010,
|
705 |
+
"weight": 3
|
706 |
+
},
|
707 |
+
{
|
708 |
+
"name": "stackexchange_title_body/graphicdesign.stackexchange.com.jsonl.gz",
|
709 |
+
"lines": 30233,
|
710 |
+
"weight": 3
|
711 |
+
},
|
712 |
+
{
|
713 |
+
"name": "stackexchange_TitleBody_Answer/webmasters.stackexchange.com.jsonl.gz",
|
714 |
+
"lines": 30370,
|
715 |
+
"weight": 3
|
716 |
+
},
|
717 |
+
{
|
718 |
+
"name": "stackexchange_Title_Answer/webmasters.stackexchange.com.jsonl.gz",
|
719 |
+
"lines": 30370,
|
720 |
+
"weight": 3
|
721 |
+
},
|
722 |
+
{
|
723 |
+
"name": "stackexchange_title_body/raspberrypi.stackexchange.com.jsonl.gz",
|
724 |
+
"lines": 30625,
|
725 |
+
"weight": 3
|
726 |
+
},
|
727 |
+
{
|
728 |
+
"name": "stackexchange_title_body/money.stackexchange.com.jsonl.gz",
|
729 |
+
"lines": 32021,
|
730 |
+
"weight": 3
|
731 |
+
},
|
732 |
+
{
|
733 |
+
"name": "stackexchange_title_body/judaism.stackexchange.com.jsonl.gz",
|
734 |
+
"lines": 32028,
|
735 |
+
"weight": 3
|
736 |
+
},
|
737 |
+
{
|
738 |
+
"name": "stackexchange_TitleBody_Answer/academia.stackexchange.com.jsonl.gz",
|
739 |
+
"lines": 32137,
|
740 |
+
"weight": 3
|
741 |
+
},
|
742 |
+
{
|
743 |
+
"name": "stackexchange_Title_Answer/academia.stackexchange.com.jsonl.gz",
|
744 |
+
"lines": 32137,
|
745 |
+
"weight": 3
|
746 |
+
},
|
747 |
+
{
|
748 |
+
"name": "stackexchange_title_body/ethereum.stackexchange.com.jsonl.gz",
|
749 |
+
"lines": 32760,
|
750 |
+
"weight": 3
|
751 |
+
},
|
752 |
+
{
|
753 |
+
"name": "stackexchange_title_body/academia.stackexchange.com.jsonl.gz",
|
754 |
+
"lines": 34331,
|
755 |
+
"weight": 3
|
756 |
+
},
|
757 |
+
{
|
758 |
+
"name": "stackexchange_title_body/chemistry.stackexchange.com.jsonl.gz",
|
759 |
+
"lines": 34506,
|
760 |
+
"weight": 3
|
761 |
+
},
|
762 |
+
{
|
763 |
+
"name": "stackexchange_title_body/webmasters.stackexchange.com.jsonl.gz",
|
764 |
+
"lines": 34559,
|
765 |
+
"weight": 3
|
766 |
+
},
|
767 |
+
{
|
768 |
+
"name": "stackexchange_title_body/meta.stackoverflow.com.jsonl.gz",
|
769 |
+
"lines": 36456,
|
770 |
+
"weight": 3
|
771 |
+
},
|
772 |
+
{
|
773 |
+
"name": "stackexchange_TitleBody_Answer/travel.stackexchange.com.jsonl.gz",
|
774 |
+
"lines": 36533,
|
775 |
+
"weight": 4
|
776 |
+
},
|
777 |
+
{
|
778 |
+
"name": "stackexchange_Title_Answer/travel.stackexchange.com.jsonl.gz",
|
779 |
+
"lines": 36533,
|
780 |
+
"weight": 4
|
781 |
+
},
|
782 |
+
{
|
783 |
+
"name": "stackexchange_TitleBody_Answer/android.stackexchange.com.jsonl.gz",
|
784 |
+
"lines": 38077,
|
785 |
+
"weight": 4
|
786 |
+
},
|
787 |
+
{
|
788 |
+
"name": "stackexchange_Title_Answer/android.stackexchange.com.jsonl.gz",
|
789 |
+
"lines": 38077,
|
790 |
+
"weight": 4
|
791 |
+
},
|
792 |
+
{
|
793 |
+
"name": "stackexchange_title_body/cs.stackexchange.com.jsonl.gz",
|
794 |
+
"lines": 38314,
|
795 |
+
"weight": 4
|
796 |
+
},
|
797 |
+
{
|
798 |
+
"name": "stackexchange_TitleBody_Answer/gamedev.stackexchange.com.jsonl.gz",
|
799 |
+
"lines": 40154,
|
800 |
+
"weight": 4
|
801 |
+
},
|
802 |
+
{
|
803 |
+
"name": "stackexchange_Title_Answer/gamedev.stackexchange.com.jsonl.gz",
|
804 |
+
"lines": 40154,
|
805 |
+
"weight": 4
|
806 |
+
},
|
807 |
+
{
|
808 |
+
"name": "stackexchange_TitleBody_Answer/rpg.stackexchange.com.jsonl.gz",
|
809 |
+
"lines": 40435,
|
810 |
+
"weight": 4
|
811 |
+
},
|
812 |
+
{
|
813 |
+
"name": "stackexchange_Title_Answer/rpg.stackexchange.com.jsonl.gz",
|
814 |
+
"lines": 40435,
|
815 |
+
"weight": 4
|
816 |
+
},
|
817 |
+
{
|
818 |
+
"name": "stackexchange_title_body/travel.stackexchange.com.jsonl.gz",
|
819 |
+
"lines": 41227,
|
820 |
+
"weight": 4
|
821 |
+
},
|
822 |
+
{
|
823 |
+
"name": "stackexchange_TitleBody_Answer/codereview.stackexchange.com.jsonl.gz",
|
824 |
+
"lines": 41748,
|
825 |
+
"weight": 4
|
826 |
+
},
|
827 |
+
{
|
828 |
+
"name": "stackexchange_Title_Answer/codereview.stackexchange.com.jsonl.gz",
|
829 |
+
"lines": 41748,
|
830 |
+
"weight": 4
|
831 |
+
},
|
832 |
+
{
|
833 |
+
"name": "stackexchange_title_body/rpg.stackexchange.com.jsonl.gz",
|
834 |
+
"lines": 42303,
|
835 |
+
"weight": 4
|
836 |
+
},
|
837 |
+
{
|
838 |
+
"name": "stackexchange_title_body/codereview.stackexchange.com.jsonl.gz",
|
839 |
+
"lines": 45765,
|
840 |
+
"weight": 4
|
841 |
+
},
|
842 |
+
{
|
843 |
+
"name": "stackexchange_title_body/gamedev.stackexchange.com.jsonl.gz",
|
844 |
+
"lines": 46485,
|
845 |
+
"weight": 4
|
846 |
+
},
|
847 |
+
{
|
848 |
+
"name": "stackexchange_TitleBody_Answer/softwareengineering.stackexchange.com.jsonl.gz",
|
849 |
+
"lines": 51326,
|
850 |
+
"weight": 5
|
851 |
+
},
|
852 |
+
{
|
853 |
+
"name": "stackexchange_Title_Answer/softwareengineering.stackexchange.com.jsonl.gz",
|
854 |
+
"lines": 51326,
|
855 |
+
"weight": 5
|
856 |
+
},
|
857 |
+
{
|
858 |
+
"name": "stackexchange_TitleBody_Answer/security.stackexchange.com.jsonl.gz",
|
859 |
+
"lines": 51355,
|
860 |
+
"weight": 5
|
861 |
+
},
|
862 |
+
{
|
863 |
+
"name": "stackexchange_Title_Answer/security.stackexchange.com.jsonl.gz",
|
864 |
+
"lines": 51355,
|
865 |
+
"weight": 5
|
866 |
+
},
|
867 |
+
{
|
868 |
+
"name": "stackexchange_title_body/android.stackexchange.com.jsonl.gz",
|
869 |
+
"lines": 51608,
|
870 |
+
"weight": 5
|
871 |
+
},
|
872 |
+
{
|
873 |
+
"name": "stackexchange_TitleBody_Answer/diy.stackexchange.com.jsonl.gz",
|
874 |
+
"lines": 52896,
|
875 |
+
"weight": 5
|
876 |
+
},
|
877 |
+
{
|
878 |
+
"name": "stackexchange_Title_Answer/diy.stackexchange.com.jsonl.gz",
|
879 |
+
"lines": 52896,
|
880 |
+
"weight": 5
|
881 |
+
},
|
882 |
+
{
|
883 |
+
"name": "stackexchange_title_body/softwareengineering.stackexchange.com.jsonl.gz",
|
884 |
+
"lines": 53942,
|
885 |
+
"weight": 5
|
886 |
+
},
|
887 |
+
{
|
888 |
+
"name": "stackexchange_TitleBody_Answer/blender.stackexchange.com.jsonl.gz",
|
889 |
+
"lines": 54153,
|
890 |
+
"weight": 5
|
891 |
+
},
|
892 |
+
{
|
893 |
+
"name": "stackexchange_Title_Answer/blender.stackexchange.com.jsonl.gz",
|
894 |
+
"lines": 54153,
|
895 |
+
"weight": 5
|
896 |
+
},
|
897 |
+
{
|
898 |
+
"name": "stackexchange_TitleBody_Answer/scifi.stackexchange.com.jsonl.gz",
|
899 |
+
"lines": 54805,
|
900 |
+
"weight": 5
|
901 |
+
},
|
902 |
+
{
|
903 |
+
"name": "stackexchange_Title_Answer/scifi.stackexchange.com.jsonl.gz",
|
904 |
+
"lines": 54805,
|
905 |
+
"weight": 5
|
906 |
+
},
|
907 |
+
{
|
908 |
+
"name": "stackexchange_title_body/security.stackexchange.com.jsonl.gz",
|
909 |
+
"lines": 58000,
|
910 |
+
"weight": 5
|
911 |
+
},
|
912 |
+
{
|
913 |
+
"name": "stackexchange_TitleBody_Answer/mathematica.stackexchange.com.jsonl.gz",
|
914 |
+
"lines": 59895,
|
915 |
+
"weight": 5
|
916 |
+
},
|
917 |
+
{
|
918 |
+
"name": "stackexchange_Title_Answer/mathematica.stackexchange.com.jsonl.gz",
|
919 |
+
"lines": 59895,
|
920 |
+
"weight": 5
|
921 |
+
},
|
922 |
+
{
|
923 |
+
"name": "stackexchange_title_body/diy.stackexchange.com.jsonl.gz",
|
924 |
+
"lines": 60083,
|
925 |
+
"weight": 5
|
926 |
+
},
|
927 |
+
{
|
928 |
+
"name": "stackexchange_TitleBody_Answer/meta.stackexchange.com.jsonl.gz",
|
929 |
+
"lines": 60744,
|
930 |
+
"weight": 5
|
931 |
+
},
|
932 |
+
{
|
933 |
+
"name": "stackexchange_Title_Answer/meta.stackexchange.com.jsonl.gz",
|
934 |
+
"lines": 60744,
|
935 |
+
"weight": 5
|
936 |
+
},
|
937 |
+
{
|
938 |
+
"name": "stackexchange_title_body/scifi.stackexchange.com.jsonl.gz",
|
939 |
+
"lines": 61528,
|
940 |
+
"weight": 6
|
941 |
+
},
|
942 |
+
{
|
943 |
+
"name": "stackexchange_TitleBody_Answer/drupal.stackexchange.com.jsonl.gz",
|
944 |
+
"lines": 67817,
|
945 |
+
"weight": 6
|
946 |
+
},
|
947 |
+
{
|
948 |
+
"name": "stackexchange_Title_Answer/drupal.stackexchange.com.jsonl.gz",
|
949 |
+
"lines": 67817,
|
950 |
+
"weight": 6
|
951 |
+
},
|
952 |
+
{
|
953 |
+
"name": "stackexchange_TitleBody_Answer/dba.stackexchange.com.jsonl.gz",
|
954 |
+
"lines": 71449,
|
955 |
+
"weight": 6
|
956 |
+
},
|
957 |
+
{
|
958 |
+
"name": "stackexchange_Title_Answer/dba.stackexchange.com.jsonl.gz",
|
959 |
+
"lines": 71449,
|
960 |
+
"weight": 6
|
961 |
+
},
|
962 |
+
{
|
963 |
+
"name": "stackexchange_title_body/mathematica.stackexchange.com.jsonl.gz",
|
964 |
+
"lines": 73131,
|
965 |
+
"weight": 7
|
966 |
+
},
|
967 |
+
{
|
968 |
+
"name": "stackexchange_TitleBody_Answer/ell.stackexchange.com.jsonl.gz",
|
969 |
+
"lines": 77892,
|
970 |
+
"weight": 7
|
971 |
+
},
|
972 |
+
{
|
973 |
+
"name": "stackexchange_Title_Answer/ell.stackexchange.com.jsonl.gz",
|
974 |
+
"lines": 77892,
|
975 |
+
"weight": 7
|
976 |
+
},
|
977 |
+
{
|
978 |
+
"name": "stackexchange_TitleBody_Answer/magento.stackexchange.com.jsonl.gz",
|
979 |
+
"lines": 79241,
|
980 |
+
"weight": 7
|
981 |
+
},
|
982 |
+
{
|
983 |
+
"name": "stackexchange_Title_Answer/magento.stackexchange.com.jsonl.gz",
|
984 |
+
"lines": 79241,
|
985 |
+
"weight": 7
|
986 |
+
},
|
987 |
+
{
|
988 |
+
"name": "stackexchange_title_body/drupal.stackexchange.com.jsonl.gz",
|
989 |
+
"lines": 79717,
|
990 |
+
"weight": 7
|
991 |
+
},
|
992 |
+
{
|
993 |
+
"name": "stackexchange_TitleBody_Answer/sharepoint.stackexchange.com.jsonl.gz",
|
994 |
+
"lines": 80420,
|
995 |
+
"weight": 7
|
996 |
+
},
|
997 |
+
{
|
998 |
+
"name": "stackexchange_Title_Answer/sharepoint.stackexchange.com.jsonl.gz",
|
999 |
+
"lines": 80420,
|
1000 |
+
"weight": 7
|
1001 |
+
},
|
1002 |
+
{
|
1003 |
+
"name": "stackexchange_title_body/blender.stackexchange.com.jsonl.gz",
|
1004 |
+
"lines": 80766,
|
1005 |
+
"weight": 7
|
1006 |
+
},
|
1007 |
+
{
|
1008 |
+
"name": "stackexchange_title_body/dba.stackexchange.com.jsonl.gz",
|
1009 |
+
"lines": 81871,
|
1010 |
+
"weight": 7
|
1011 |
+
},
|
1012 |
+
{
|
1013 |
+
"name": "stackexchange_TitleBody_Answer/gaming.stackexchange.com.jsonl.gz",
|
1014 |
+
"lines": 82887,
|
1015 |
+
"weight": 7
|
1016 |
+
},
|
1017 |
+
{
|
1018 |
+
"name": "stackexchange_Title_Answer/gaming.stackexchange.com.jsonl.gz",
|
1019 |
+
"lines": 82887,
|
1020 |
+
"weight": 7
|
1021 |
+
},
|
1022 |
+
{
|
1023 |
+
"name": "stackexchange_title_body/ell.stackexchange.com.jsonl.gz",
|
1024 |
+
"lines": 83271,
|
1025 |
+
"weight": 7
|
1026 |
+
},
|
1027 |
+
{
|
1028 |
+
"name": "stackexchange_title_body/meta.stackexchange.com.jsonl.gz",
|
1029 |
+
"lines": 83510,
|
1030 |
+
"weight": 7
|
1031 |
+
},
|
1032 |
+
{
|
1033 |
+
"name": "stackexchange_TitleBody_Answer/wordpress.stackexchange.com.jsonl.gz",
|
1034 |
+
"lines": 83621,
|
1035 |
+
"weight": 7
|
1036 |
+
},
|
1037 |
+
{
|
1038 |
+
"name": "stackexchange_Title_Answer/wordpress.stackexchange.com.jsonl.gz",
|
1039 |
+
"lines": 83621,
|
1040 |
+
"weight": 7
|
1041 |
+
},
|
1042 |
+
{
|
1043 |
+
"name": "stackexchange_TitleBody_Answer/mathoverflow.net.jsonl.gz",
|
1044 |
+
"lines": 85289,
|
1045 |
+
"weight": 8
|
1046 |
+
},
|
1047 |
+
{
|
1048 |
+
"name": "stackexchange_Title_Answer/mathoverflow.net.jsonl.gz",
|
1049 |
+
"lines": 85289,
|
1050 |
+
"weight": 8
|
1051 |
+
},
|
1052 |
+
{
|
1053 |
+
"name": "stackexchange_TitleBody_Answer/salesforce.stackexchange.com.jsonl.gz",
|
1054 |
+
"lines": 87272,
|
1055 |
+
"weight": 8
|
1056 |
+
},
|
1057 |
+
{
|
1058 |
+
"name": "stackexchange_Title_Answer/salesforce.stackexchange.com.jsonl.gz",
|
1059 |
+
"lines": 87272,
|
1060 |
+
"weight": 8
|
1061 |
+
},
|
1062 |
+
{
|
1063 |
+
"name": "stackexchange_title_body/gaming.stackexchange.com.jsonl.gz",
|
1064 |
+
"lines": 88912,
|
1065 |
+
"weight": 8
|
1066 |
+
},
|
1067 |
+
{
|
1068 |
+
"name": "stackexchange_TitleBody_Answer/apple.stackexchange.com.jsonl.gz",
|
1069 |
+
"lines": 92487,
|
1070 |
+
"weight": 8
|
1071 |
+
},
|
1072 |
+
{
|
1073 |
+
"name": "stackexchange_Title_Answer/apple.stackexchange.com.jsonl.gz",
|
1074 |
+
"lines": 92487,
|
1075 |
+
"weight": 8
|
1076 |
+
},
|
1077 |
+
{
|
1078 |
+
"name": "stackexchange_title_body/sharepoint.stackexchange.com.jsonl.gz",
|
1079 |
+
"lines": 94011,
|
1080 |
+
"weight": 8
|
1081 |
+
},
|
1082 |
+
{
|
1083 |
+
"name": "stackexchange_title_body/magento.stackexchange.com.jsonl.gz",
|
1084 |
+
"lines": 99991,
|
1085 |
+
"weight": 9
|
1086 |
+
},
|
1087 |
+
{
|
1088 |
+
"name": "stackexchange_TitleBody_Answer/gis.stackexchange.com.jsonl.gz",
|
1089 |
+
"lines": 100254,
|
1090 |
+
"weight": 9
|
1091 |
+
},
|
1092 |
+
{
|
1093 |
+
"name": "stackexchange_Title_Answer/gis.stackexchange.com.jsonl.gz",
|
1094 |
+
"lines": 100254,
|
1095 |
+
"weight": 9
|
1096 |
+
},
|
1097 |
+
{
|
1098 |
+
"name": "stackexchange_title_body/wordpress.stackexchange.com.jsonl.gz",
|
1099 |
+
"lines": 100474,
|
1100 |
+
"weight": 9
|
1101 |
+
},
|
1102 |
+
{
|
1103 |
+
"name": "stackexchange_TitleBody_Answer/english.stackexchange.com.jsonl.gz",
|
1104 |
+
"lines": 100640,
|
1105 |
+
"weight": 9
|
1106 |
+
},
|
1107 |
+
{
|
1108 |
+
"name": "stackexchange_Title_Answer/english.stackexchange.com.jsonl.gz",
|
1109 |
+
"lines": 100640,
|
1110 |
+
"weight": 9
|
1111 |
+
},
|
1112 |
+
{
|
1113 |
+
"name": "stackexchange_title_body/salesforce.stackexchange.com.jsonl.gz",
|
1114 |
+
"lines": 105260,
|
1115 |
+
"weight": 9
|
1116 |
+
},
|
1117 |
+
{
|
1118 |
+
"name": "stackexchange_title_body/english.stackexchange.com.jsonl.gz",
|
1119 |
+
"lines": 109522,
|
1120 |
+
"weight": 10
|
1121 |
+
},
|
1122 |
+
{
|
1123 |
+
"name": "stackexchange_title_body/apple.stackexchange.com.jsonl.gz",
|
1124 |
+
"lines": 110622,
|
1125 |
+
"weight": 10
|
1126 |
+
},
|
1127 |
+
{
|
1128 |
+
"name": "stackexchange_TitleBody_Answer/stats.stackexchange.com.jsonl.gz",
|
1129 |
+
"lines": 115679,
|
1130 |
+
"weight": 10
|
1131 |
+
},
|
1132 |
+
{
|
1133 |
+
"name": "stackexchange_Title_Answer/stats.stackexchange.com.jsonl.gz",
|
1134 |
+
"lines": 115679,
|
1135 |
+
"weight": 10
|
1136 |
+
},
|
1137 |
+
{
|
1138 |
+
"name": "stackexchange_title_body/mathoverflow.net.jsonl.gz",
|
1139 |
+
"lines": 120851,
|
1140 |
+
"weight": 10
|
1141 |
+
},
|
1142 |
+
{
|
1143 |
+
"name": "stackexchange_TitleBody_Answer/electronics.stackexchange.com.jsonl.gz",
|
1144 |
+
"lines": 129494,
|
1145 |
+
"weight": 11
|
1146 |
+
},
|
1147 |
+
{
|
1148 |
+
"name": "stackexchange_Title_Answer/electronics.stackexchange.com.jsonl.gz",
|
1149 |
+
"lines": 129494,
|
1150 |
+
"weight": 11
|
1151 |
+
},
|
1152 |
+
{
|
1153 |
+
"name": "stackexchange_title_body/gis.stackexchange.com.jsonl.gz",
|
1154 |
+
"lines": 131000,
|
1155 |
+
"weight": 11
|
1156 |
+
},
|
1157 |
+
{
|
1158 |
+
"name": "stackexchange_TitleBody_Answer/physics.stackexchange.com.jsonl.gz",
|
1159 |
+
"lines": 141230,
|
1160 |
+
"weight": 12
|
1161 |
+
},
|
1162 |
+
{
|
1163 |
+
"name": "stackexchange_Title_Answer/physics.stackexchange.com.jsonl.gz",
|
1164 |
+
"lines": 141230,
|
1165 |
+
"weight": 12
|
1166 |
+
},
|
1167 |
+
{
|
1168 |
+
"name": "stackexchange_title_body/electronics.stackexchange.com.jsonl.gz",
|
1169 |
+
"lines": 143582,
|
1170 |
+
"weight": 12
|
1171 |
+
},
|
1172 |
+
{
|
1173 |
+
"name": "stackexchange_TitleBody_Answer/unix.stackexchange.com.jsonl.gz",
|
1174 |
+
"lines": 155414,
|
1175 |
+
"weight": 13
|
1176 |
+
},
|
1177 |
+
{
|
1178 |
+
"name": "stackexchange_Title_Answer/unix.stackexchange.com.jsonl.gz",
|
1179 |
+
"lines": 155414,
|
1180 |
+
"weight": 13
|
1181 |
+
},
|
1182 |
+
{
|
1183 |
+
"name": "stackexchange_TitleBody_Answer/tex.stackexchange.com.jsonl.gz",
|
1184 |
+
"lines": 171628,
|
1185 |
+
"weight": 15
|
1186 |
+
},
|
1187 |
+
{
|
1188 |
+
"name": "stackexchange_Title_Answer/tex.stackexchange.com.jsonl.gz",
|
1189 |
+
"lines": 171628,
|
1190 |
+
"weight": 15
|
1191 |
+
},
|
1192 |
+
{
|
1193 |
+
"name": "stackexchange_title_body/physics.stackexchange.com.jsonl.gz",
|
1194 |
+
"lines": 173307,
|
1195 |
+
"weight": 15
|
1196 |
+
},
|
1197 |
+
{
|
1198 |
+
"name": "stackexchange_title_body/stats.stackexchange.com.jsonl.gz",
|
1199 |
+
"lines": 173466,
|
1200 |
+
"weight": 15
|
1201 |
+
},
|
1202 |
+
{
|
1203 |
+
"name": "stackexchange_title_body/unix.stackexchange.com.jsonl.gz",
|
1204 |
+
"lines": 185997,
|
1205 |
+
"weight": 16
|
1206 |
+
},
|
1207 |
+
{
|
1208 |
+
"name": "stackexchange_title_body/tex.stackexchange.com.jsonl.gz",
|
1209 |
+
"lines": 202954,
|
1210 |
+
"weight": 17
|
1211 |
+
},
|
1212 |
+
{
|
1213 |
+
"name": "TriviaQA_pairs.jsonl.gz",
|
1214 |
+
"lines": 73346,
|
1215 |
+
"weight": 19
|
1216 |
+
},
|
1217 |
+
{
|
1218 |
+
"name": "stackexchange_TitleBody_Answer/serverfault.com.jsonl.gz",
|
1219 |
+
"lines": 238507,
|
1220 |
+
"weight": 20
|
1221 |
+
},
|
1222 |
+
{
|
1223 |
+
"name": "stackexchange_Title_Answer/serverfault.com.jsonl.gz",
|
1224 |
+
"lines": 238507,
|
1225 |
+
"weight": 20
|
1226 |
+
},
|
1227 |
+
{
|
1228 |
+
"name": "stackexchange_duplicate_questions_title-body_title-body.jsonl.gz",
|
1229 |
+
"lines": 250460,
|
1230 |
+
"weight": 21
|
1231 |
+
},
|
1232 |
+
{
|
1233 |
+
"name": "stackexchange_duplicate_questions_body_body.jsonl.gz",
|
1234 |
+
"lines": 250519,
|
1235 |
+
"weight": 21
|
1236 |
+
},
|
1237 |
+
{
|
1238 |
+
"name": "squad_pairs.jsonl.gz",
|
1239 |
+
"lines": 87599,
|
1240 |
+
"weight": 22
|
1241 |
+
},
|
1242 |
+
{
|
1243 |
+
"name": "stackexchange_TitleBody_Answer/askubuntu.com.jsonl.gz",
|
1244 |
+
"lines": 267135,
|
1245 |
+
"weight": 22
|
1246 |
+
},
|
1247 |
+
{
|
1248 |
+
"name": "stackexchange_Title_Answer/askubuntu.com.jsonl.gz",
|
1249 |
+
"lines": 267135,
|
1250 |
+
"weight": 22
|
1251 |
+
},
|
1252 |
+
{
|
1253 |
+
"name": "stackexchange_title_body/serverfault.com.jsonl.gz",
|
1254 |
+
"lines": 270904,
|
1255 |
+
"weight": 23
|
1256 |
+
},
|
1257 |
+
{
|
1258 |
+
"name": "NQ-train_pairs.jsonl.gz",
|
1259 |
+
"lines": 100231,
|
1260 |
+
"weight": 25
|
1261 |
+
},
|
1262 |
+
{
|
1263 |
+
"name": "SimpleWiki.jsonl.gz",
|
1264 |
+
"lines": 102225,
|
1265 |
+
"weight": 26
|
1266 |
+
},
|
1267 |
+
{
|
1268 |
+
"name": "quora_duplicates_triplets.jsonl.gz",
|
1269 |
+
"lines": 103663,
|
1270 |
+
"weight": 26
|
1271 |
+
},
|
1272 |
+
{
|
1273 |
+
"name": "stackexchange_duplicate_questions_title_title.jsonl.gz",
|
1274 |
+
"lines": 304525,
|
1275 |
+
"weight": 26
|
1276 |
+
},
|
1277 |
+
{
|
1278 |
+
"name": "altlex.jsonl.gz",
|
1279 |
+
"lines": 112696,
|
1280 |
+
"weight": 28
|
1281 |
+
},
|
1282 |
+
{
|
1283 |
+
"name": "stackexchange_title_body/askubuntu.com.jsonl.gz",
|
1284 |
+
"lines": 347925,
|
1285 |
+
"weight": 29
|
1286 |
+
},
|
1287 |
+
{
|
1288 |
+
"name": "stackexchange_TitleBody_Answer/superuser.com.jsonl.gz",
|
1289 |
+
"lines": 352610,
|
1290 |
+
"weight": 30
|
1291 |
+
},
|
1292 |
+
{
|
1293 |
+
"name": "stackexchange_Title_Answer/superuser.com.jsonl.gz",
|
1294 |
+
"lines": 352610,
|
1295 |
+
"weight": 30
|
1296 |
+
},
|
1297 |
+
{
|
1298 |
+
"name": "wikihow.jsonl.gz",
|
1299 |
+
"lines": 128542,
|
1300 |
+
"weight": 32
|
1301 |
+
},
|
1302 |
+
{
|
1303 |
+
"name": "stackexchange_title_body/superuser.com.jsonl.gz",
|
1304 |
+
"lines": 435463,
|
1305 |
+
"weight": 36
|
1306 |
+
},
|
1307 |
+
{
|
1308 |
+
"name": "stackexchange_title_body/small_stackexchanges.jsonl.gz",
|
1309 |
+
"lines": 448146,
|
1310 |
+
"weight": 37
|
1311 |
+
},
|
1312 |
+
{
|
1313 |
+
"name": "stackexchange_TitleBody_Answer/small_stackexchanges.jsonl.gz",
|
1314 |
+
"lines": 460256,
|
1315 |
+
"weight": 38
|
1316 |
+
},
|
1317 |
+
{
|
1318 |
+
"name": "stackexchange_Title_Answer/small_stackexchanges.jsonl.gz",
|
1319 |
+
"lines": 460256,
|
1320 |
+
"weight": 38
|
1321 |
+
},
|
1322 |
+
{
|
1323 |
+
"name": "sentence-compression.jsonl.gz",
|
1324 |
+
"lines": 180000,
|
1325 |
+
"weight": 45
|
1326 |
+
},
|
1327 |
+
{
|
1328 |
+
"name": "AllNLI.jsonl.gz",
|
1329 |
+
"lines": 277230,
|
1330 |
+
"weight": 69
|
1331 |
+
},
|
1332 |
+
{
|
1333 |
+
"name": "eli5_question_answer.jsonl.gz",
|
1334 |
+
"lines": 325475,
|
1335 |
+
"weight": 81
|
1336 |
+
},
|
1337 |
+
{
|
1338 |
+
"name": "reddit/reddit_2015.jsonl.gz",
|
1339 |
+
"lines": 135108166,
|
1340 |
+
"weight": 82
|
1341 |
+
},
|
1342 |
+
{
|
1343 |
+
"name": "reddit/reddit_2016.jsonl.gz",
|
1344 |
+
"lines": 159164386,
|
1345 |
+
"weight": 82
|
1346 |
+
},
|
1347 |
+
{
|
1348 |
+
"name": "reddit/reddit_2017.jsonl.gz",
|
1349 |
+
"lines": 191485219,
|
1350 |
+
"weight": 82
|
1351 |
+
},
|
1352 |
+
{
|
1353 |
+
"name": "reddit/reddit_2018.jsonl.gz",
|
1354 |
+
"lines": 240726659,
|
1355 |
+
"weight": 82
|
1356 |
+
},
|
1357 |
+
{
|
1358 |
+
"name": "stackexchange_TitleBody_Answer/math.stackexchange.com.jsonl.gz",
|
1359 |
+
"lines": 1100953,
|
1360 |
+
"weight": 83
|
1361 |
+
},
|
1362 |
+
{
|
1363 |
+
"name": "stackexchange_Title_Answer/math.stackexchange.com.jsonl.gz",
|
1364 |
+
"lines": 1100953,
|
1365 |
+
"weight": 83
|
1366 |
+
},
|
1367 |
+
{
|
1368 |
+
"name": "stackexchange_title_body/math.stackexchange.com.jsonl.gz",
|
1369 |
+
"lines": 1338443,
|
1370 |
+
"weight": 83
|
1371 |
+
},
|
1372 |
+
{
|
1373 |
+
"name": "stackexchange_TitleBody_Answer/stackoverflow.com-Posts.jsonl.gz",
|
1374 |
+
"lines": 15768211,
|
1375 |
+
"weight": 83
|
1376 |
+
},
|
1377 |
+
{
|
1378 |
+
"name": "stackexchange_Title_Answer/stackoverflow.com-Posts.jsonl.gz",
|
1379 |
+
"lines": 15768211,
|
1380 |
+
"weight": 83
|
1381 |
+
},
|
1382 |
+
{
|
1383 |
+
"name": "stackexchange_title_body/stackoverflow.com-Posts.jsonl.gz",
|
1384 |
+
"lines": 18562443,
|
1385 |
+
"weight": 83
|
1386 |
+
},
|
1387 |
+
{
|
1388 |
+
"name": "specter_train_triples.jsonl.gz",
|
1389 |
+
"lines": 684100,
|
1390 |
+
"weight": 84
|
1391 |
+
},
|
1392 |
+
{
|
1393 |
+
"name": "S2ORC_title_abstract.jsonl.gz",
|
1394 |
+
"lines": 41769185,
|
1395 |
+
"weight": 123
|
1396 |
+
},
|
1397 |
+
{
|
1398 |
+
"name": "S2ORC_citation_pairs.jsonl.gz",
|
1399 |
+
"lines": 52603982,
|
1400 |
+
"weight": 123
|
1401 |
+
},
|
1402 |
+
{
|
1403 |
+
"name": "PAQ_pairs.jsonl.gz",
|
1404 |
+
"lines": 64371441,
|
1405 |
+
"weight": 123
|
1406 |
+
},
|
1407 |
+
{
|
1408 |
+
"name": "WikiAnswers_pairs.jsonl.gz",
|
1409 |
+
"lines": 77427422,
|
1410 |
+
"weight": 123
|
1411 |
+
},
|
1412 |
+
{
|
1413 |
+
"name": "S2ORC_citation_pairs_abstract.jsonl.gz",
|
1414 |
+
"lines": 116288806,
|
1415 |
+
"weight": 123
|
1416 |
+
},
|
1417 |
+
{
|
1418 |
+
"name": "searchQA_question_top5_snippets_merged.jsonl.gz",
|
1419 |
+
"lines": 582261,
|
1420 |
+
"weight": 144
|
1421 |
+
},
|
1422 |
+
{
|
1423 |
+
"name": "yahoo_answers_title_question.jsonl.gz",
|
1424 |
+
"lines": 659896,
|
1425 |
+
"weight": 163
|
1426 |
+
},
|
1427 |
+
{
|
1428 |
+
"name": "yahoo_answers_question_answer.jsonl.gz",
|
1429 |
+
"lines": 681164,
|
1430 |
+
"weight": 169
|
1431 |
+
},
|
1432 |
+
{
|
1433 |
+
"name": "yahoo_answers_title_answer.jsonl.gz",
|
1434 |
+
"lines": 1198260,
|
1435 |
+
"weight": 247
|
1436 |
+
},
|
1437 |
+
{
|
1438 |
+
"name": "amazon-qa-train-pairs.jsonl.gz",
|
1439 |
+
"lines": 2448839,
|
1440 |
+
"weight": 247
|
1441 |
+
},
|
1442 |
+
{
|
1443 |
+
"name": "gooaq_pairs.jsonl.gz",
|
1444 |
+
"lines": 3012496,
|
1445 |
+
"weight": 247
|
1446 |
+
},
|
1447 |
+
{
|
1448 |
+
"name": "msmarco-query_passage_negative.jsonl.gz",
|
1449 |
+
"lines": 9144553,
|
1450 |
+
"weight": 247
|
1451 |
+
}
|
1452 |
+
]
|
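The "weight" field in data_config.json above is consumed by the train_script.py added later in this commit: each dataset's index is repeated "weight" times in a flat list, and batches pick a dataset uniformly from that list, so higher-weight files are sampled proportionally more often. A minimal sketch of that sampling scheme, using a few entries copied from the config above (illustrative subset only):

import random

# A few entries copied from data_config.json above (illustrative subset).
data_config = [
    {"name": "flickr30k_captions.jsonl.gz", "weight": 1},
    {"name": "AllNLI.jsonl.gz", "weight": 69},
    {"name": "gooaq_pairs.jsonl.gz", "weight": 247},
]

# Same scheme as train_script.py: repeat each dataset index `weight` times,
# then choose uniformly, which yields weight-proportional sampling.
dataset_indices = []
for idx, entry in enumerate(data_config):
    dataset_indices.extend([idx] * entry["weight"])

picked = data_config[random.choice(dataset_indices)]
print(picked["name"])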
params_weight/all-mpnet-base-v2/gitattributes
ADDED
@@ -0,0 +1,27 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bin.* filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zstandard filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
params_weight/all-mpnet-base-v2/modules.json
ADDED
@@ -0,0 +1,20 @@
+[
+  {
+    "idx": 0,
+    "name": "0",
+    "path": "",
+    "type": "sentence_transformers.models.Transformer"
+  },
+  {
+    "idx": 1,
+    "name": "1",
+    "path": "1_Pooling",
+    "type": "sentence_transformers.models.Pooling"
+  },
+  {
+    "idx": 2,
+    "name": "2",
+    "path": "2_Normalize",
+    "type": "sentence_transformers.models.Normalize"
+  }
+]
ADDED
@@ -0,0 +1,3 @@
|
+version https://git-lfs.github.com/spec/v1
+oid sha256:a8fd120b1a0032e70ff3d4b8ab8e46a6d01c2cb08ffe7c007a021c1788928146
+size 438011953
params_weight/all-mpnet-base-v2/sentence_bert_config.json
ADDED
@@ -0,0 +1,4 @@
+{
+  "max_seq_length": 384,
+  "do_lower_case": false
+}
params_weight/all-mpnet-base-v2/special_tokens_map.json
ADDED
@@ -0,0 +1 @@
+{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "[UNK]", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
params_weight/all-mpnet-base-v2/tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
params_weight/all-mpnet-base-v2/tokenizer_config.json
ADDED
@@ -0,0 +1 @@
+{"do_lower_case": true, "bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "cls_token": "<s>", "unk_token": "[UNK]", "pad_token": "<pad>", "mask_token": "<mask>", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "microsoft/mpnet-base", "tokenizer_class": "MPNetTokenizer"}
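Together with vocab.txt, tokenizer.json and special_tokens_map.json above, this tokenizer_config.json is enough for transformers to reconstruct the MPNet tokenizer. A minimal sketch, assuming the same local folder layout as above:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("params_weight/all-mpnet-base-v2")
print(tok.cls_token, tok.sep_token, tok.mask_token)  # <s> </s> <mask>
print(tok.model_max_length)                          # 512, from tokenizer_config.json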
params_weight/all-mpnet-base-v2/train_script.py
ADDED
@@ -0,0 +1,344 @@
1 |
+
"""
|
2 |
+
Train script for a single file
|
3 |
+
|
4 |
+
Need to set the TPU address first:
|
5 |
+
export XRT_TPU_CONFIG="localservice;0;localhost:51011"
|
6 |
+
"""
|
7 |
+
|
8 |
+
import torch.multiprocessing as mp
|
9 |
+
import threading
|
10 |
+
import time
|
11 |
+
import random
|
12 |
+
import sys
|
13 |
+
import argparse
|
14 |
+
import gzip
|
15 |
+
import json
|
16 |
+
import logging
|
17 |
+
import tqdm
|
18 |
+
import torch
|
19 |
+
from torch import nn
|
20 |
+
from torch.utils.data import DataLoader
|
21 |
+
import torch
|
22 |
+
import torch_xla
|
23 |
+
import torch_xla.core
|
24 |
+
import torch_xla.core.functions
|
25 |
+
import torch_xla.core.xla_model as xm
|
26 |
+
import torch_xla.distributed.xla_multiprocessing as xmp
|
27 |
+
import torch_xla.distributed.parallel_loader as pl
|
28 |
+
import os
|
29 |
+
from shutil import copyfile
|
30 |
+
|
31 |
+
|
32 |
+
from transformers import (
|
33 |
+
AdamW,
|
34 |
+
AutoModel,
|
35 |
+
AutoTokenizer,
|
36 |
+
get_linear_schedule_with_warmup,
|
37 |
+
set_seed,
|
38 |
+
)
|
39 |
+
|
40 |
+
class AutoModelForSentenceEmbedding(nn.Module):
|
41 |
+
def __init__(self, model_name, tokenizer, normalize=True):
|
42 |
+
super(AutoModelForSentenceEmbedding, self).__init__()
|
43 |
+
|
44 |
+
self.model = AutoModel.from_pretrained(model_name)
|
45 |
+
self.normalize = normalize
|
46 |
+
self.tokenizer = tokenizer
|
47 |
+
|
48 |
+
def forward(self, **kwargs):
|
49 |
+
model_output = self.model(**kwargs)
|
50 |
+
embeddings = self.mean_pooling(model_output, kwargs['attention_mask'])
|
51 |
+
if self.normalize:
|
52 |
+
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
|
53 |
+
|
54 |
+
return embeddings
|
55 |
+
|
56 |
+
def mean_pooling(self, model_output, attention_mask):
|
57 |
+
token_embeddings = model_output[0] # First element of model_output contains all token embeddings
|
58 |
+
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
|
59 |
+
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
|
60 |
+
|
61 |
+
def save_pretrained(self, output_path):
|
62 |
+
if xm.is_master_ordinal():
|
63 |
+
self.tokenizer.save_pretrained(output_path)
|
64 |
+
self.model.config.save_pretrained(output_path)
|
65 |
+
|
66 |
+
xm.save(self.model.state_dict(), os.path.join(output_path, "pytorch_model.bin"))
|
67 |
+
|
68 |
+
|
69 |
+
|
70 |
+
|
71 |
+
def train_function(index, args, queue):
|
72 |
+
tokenizer = AutoTokenizer.from_pretrained(args.model)
|
73 |
+
model = AutoModelForSentenceEmbedding(args.model, tokenizer)
|
74 |
+
|
75 |
+
|
76 |
+
### Train Loop
|
77 |
+
device = xm.xla_device()
|
78 |
+
model = model.to(device)
|
79 |
+
|
80 |
+
# Instantiate optimizer
|
81 |
+
optimizer = AdamW(params=model.parameters(), lr=2e-5, correct_bias=True)
|
82 |
+
|
83 |
+
lr_scheduler = get_linear_schedule_with_warmup(
|
84 |
+
optimizer=optimizer,
|
85 |
+
num_warmup_steps=500,
|
86 |
+
num_training_steps=args.steps,
|
87 |
+
)
|
88 |
+
|
89 |
+
# Now we train the model
|
90 |
+
cross_entropy_loss = nn.CrossEntropyLoss()
|
91 |
+
max_grad_norm = 1
|
92 |
+
|
93 |
+
model.train()
|
94 |
+
|
95 |
+
for global_step in tqdm.trange(args.steps, disable=not xm.is_master_ordinal()):
|
96 |
+
#### Get the batch data
|
97 |
+
batch = queue.get()
|
98 |
+
#print(index, "batch {}x{}".format(len(batch), ",".join([str(len(b)) for b in batch])))
|
99 |
+
|
100 |
+
|
101 |
+
if len(batch[0]) == 2: #(anchor, positive)
|
102 |
+
text1 = tokenizer([b[0] for b in batch], return_tensors="pt", max_length=args.max_length, truncation=True, padding="max_length")
|
103 |
+
text2 = tokenizer([b[1] for b in batch], return_tensors="pt", max_length=args.max_length, truncation=True, padding="max_length")
|
104 |
+
|
105 |
+
### Compute embeddings
|
106 |
+
embeddings_a = model(**text1.to(device))
|
107 |
+
embeddings_b = model(**text2.to(device))
|
108 |
+
|
109 |
+
### Gather all embeddings
|
110 |
+
embeddings_a = torch_xla.core.functions.all_gather(embeddings_a)
|
111 |
+
embeddings_b = torch_xla.core.functions.all_gather(embeddings_b)
|
112 |
+
|
113 |
+
### Compute similarity scores 512 x 512
|
114 |
+
scores = torch.mm(embeddings_a, embeddings_b.transpose(0, 1)) * args.scale
|
115 |
+
|
116 |
+
### Compute cross-entropy loss
|
117 |
+
labels = torch.tensor(range(len(scores)), dtype=torch.long, device=embeddings_a.device) # Example a[i] should match with b[i]
|
118 |
+
|
119 |
+
## Symmetric loss as in CLIP
|
120 |
+
loss = (cross_entropy_loss(scores, labels) + cross_entropy_loss(scores.transpose(0, 1), labels)) / 2
|
121 |
+
|
122 |
+
else: #(anchor, positive, negative)
|
123 |
+
text1 = tokenizer([b[0] for b in batch], return_tensors="pt", max_length=args.max_length, truncation=True, padding="max_length")
|
124 |
+
text2 = tokenizer([b[1] for b in batch], return_tensors="pt", max_length=args.max_length, truncation=True, padding="max_length")
|
125 |
+
text3 = tokenizer([b[2] for b in batch], return_tensors="pt", max_length=args.max_length, truncation=True, padding="max_length")
|
126 |
+
|
127 |
+
embeddings_a = model(**text1.to(device))
|
128 |
+
embeddings_b1 = model(**text2.to(device))
|
129 |
+
embeddings_b2 = model(**text3.to(device))
|
130 |
+
|
131 |
+
embeddings_a = torch_xla.core.functions.all_gather(embeddings_a)
|
132 |
+
embeddings_b1 = torch_xla.core.functions.all_gather(embeddings_b1)
|
133 |
+
embeddings_b2 = torch_xla.core.functions.all_gather(embeddings_b2)
|
134 |
+
|
135 |
+
embeddings_b = torch.cat([embeddings_b1, embeddings_b2])
|
136 |
+
|
137 |
+
### Compute similarity scores 512 x 1024
|
138 |
+
scores = torch.mm(embeddings_a, embeddings_b.transpose(0, 1)) * args.scale
|
139 |
+
|
140 |
+
### Compute cross-entropy loss
|
141 |
+
labels = torch.tensor(range(len(scores)), dtype=torch.long, device=embeddings_a.device) # Example a[i] should match with b[i]
|
142 |
+
|
143 |
+
## One-way loss
|
144 |
+
loss = cross_entropy_loss(scores, labels)
|
145 |
+
|
146 |
+
|
147 |
+
# Backward pass
|
148 |
+
optimizer.zero_grad()
|
149 |
+
loss.backward()
|
150 |
+
torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
|
151 |
+
|
152 |
+
xm.optimizer_step(optimizer, barrier=True)
|
153 |
+
lr_scheduler.step()
|
154 |
+
|
155 |
+
|
156 |
+
#Save model
|
157 |
+
if (global_step+1) % args.save_steps == 0:
|
158 |
+
output_path = os.path.join(args.output, str(global_step+1))
|
159 |
+
xm.master_print("save model: "+output_path)
|
160 |
+
model.save_pretrained(output_path)
|
161 |
+
|
162 |
+
|
163 |
+
output_path = os.path.join(args.output, "final")
|
164 |
+
xm.master_print("save model final: "+ output_path)
|
165 |
+
model.save_pretrained(output_path)
|
166 |
+
|
167 |
+
|
168 |
+
def produce_data(args, queue, filepaths, dataset_indices):
|
169 |
+
global_batch_size = args.batch_size*args.nprocs #Global batch size
|
170 |
+
size_per_dataset = int(global_batch_size / args.datasets_per_batch) #How many examples each dataset contributes to one global batch
|
171 |
+
num_same_dataset = int(size_per_dataset / args.batch_size)
|
172 |
+
print("producer", "global_batch_size", global_batch_size)
|
173 |
+
print("producer", "size_per_dataset", size_per_dataset)
|
174 |
+
print("producer", "num_same_dataset", num_same_dataset)
|
175 |
+
|
176 |
+
datasets = []
|
177 |
+
for filepath in filepaths:
|
178 |
+
if "reddit_" in filepath: #Special dataset class for Reddit files
|
179 |
+
data_obj = RedditDataset(filepath)
|
180 |
+
else:
|
181 |
+
data_obj = Dataset(filepath)
|
182 |
+
datasets.append(iter(data_obj))
|
183 |
+
|
184 |
+
# Store if dataset is in a 2 col or 3 col format
|
185 |
+
num_cols = {idx: len(next(dataset)) for idx, dataset in enumerate(datasets)}
|
186 |
+
|
187 |
+
    while True:
        texts_in_batch = set()
        batch_format = None  #2 vs 3 col format for this batch

        #Add data from several sub datasets
        for _ in range(args.datasets_per_batch):
            valid_dataset = False  #Check that datasets have the same 2/3 col format
            while not valid_dataset:
                data_idx = random.choice(dataset_indices)
                if batch_format is None:
                    batch_format = num_cols[data_idx]
                    valid_dataset = True
                else:  #Check that this dataset has the same format
                    valid_dataset = (batch_format == num_cols[data_idx])

            #Get data from this dataset
            dataset = datasets[data_idx]
            for _ in range(num_same_dataset):
                for _ in range(args.nprocs):
                    batch_device = []  #A batch for one device
                    while len(batch_device) < args.batch_size:
                        sample = next(dataset)
                        in_batch = False
                        for text in sample:
                            if text in texts_in_batch:
                                in_batch = True
                                break

                        if not in_batch:
                            for text in sample:
                                texts_in_batch.add(text)
                            batch_device.append(sample)

                    queue.put(batch_device)


class RedditDataset:
    """
    A class that handles the reddit data files
    """
    def __init__(self, filepath):
        self.filepath = filepath

    def __iter__(self):
        while True:
            with gzip.open(self.filepath, "rt") as fIn:
                for line in fIn:
                    data = json.loads(line)

                    if "response" in data and "context" in data:
                        yield [data["response"], data["context"]]


class Dataset:
    """
    A class that handles one dataset
    """
    def __init__(self, filepath):
        self.filepath = filepath

    def __iter__(self):
        max_dataset_size = 10*1000*1000  #Cache small datasets in memory
        dataset = []
        data_format = None

        while dataset is None or len(dataset) == 0:
            with gzip.open(self.filepath, "rt") as fIn:
                for line in fIn:
                    data = json.loads(line)
                    if isinstance(data, dict):
                        data = data['texts']

                    if data_format is None:
                        data_format = len(data)

                    #Ensure that all entries are of the same 2/3 col format
                    assert len(data) == data_format

                    if dataset is not None:
                        dataset.append(data)
                        if len(dataset) >= max_dataset_size:
                            dataset = None

                    yield data

        # Data loaded. Now stream to the queue
        # Shuffle for each epoch
        while True:
            random.shuffle(dataset)
            for data in dataset:
                yield data


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--model', default='nreimers/MiniLM-L6-H384-uncased')
    parser.add_argument('--steps', type=int, default=2000)
    parser.add_argument('--save_steps', type=int, default=10000)
    parser.add_argument('--batch_size', type=int, default=64)
    parser.add_argument('--max_length', type=int, default=128)
    parser.add_argument('--nprocs', type=int, default=8)
    parser.add_argument('--datasets_per_batch', type=int, default=2, help="Number of datasets per batch")
    parser.add_argument('--scale', type=float, default=20, help="Use 20 for cossim, and 1 when you work with unnormalized embeddings with dot product")
    parser.add_argument('--data_folder', default="/data", help="Folder with your dataset files")
    parser.add_argument('data_config', help="A data_config.json file")
    parser.add_argument('output')
    args = parser.parse_args()

    # Ensure global batch size is divisible by data_sample_size
    assert (args.batch_size*args.nprocs) % args.datasets_per_batch == 0

    logging.info("Output: "+args.output)
    if os.path.exists(args.output):
        print("Output folder already exists.")
        input("Continue?")

    # Write train script to output path
    os.makedirs(args.output, exist_ok=True)

    data_config_path = os.path.join(args.output, 'data_config.json')
    copyfile(args.data_config, data_config_path)

    train_script_path = os.path.join(args.output, 'train_script.py')
    copyfile(__file__, train_script_path)
    with open(train_script_path, 'a') as fOut:
        fOut.write("\n\n# Script was called via:\n#python " + " ".join(sys.argv))

    #Load data config
    with open(args.data_config) as fIn:
        data_config = json.load(fIn)

    queue = mp.Queue(maxsize=100*args.nprocs)

    filepaths = []
    dataset_indices = []
    for idx, data in enumerate(data_config):
        filepaths.append(os.path.join(os.path.expanduser(args.data_folder), data['name']))
        dataset_indices.extend([idx]*data['weight'])

    # Start producer
    p = mp.Process(target=produce_data, args=(args, queue, filepaths, dataset_indices))
    p.start()

    # Run training
    print("Start processes:", args.nprocs)
    xmp.spawn(train_function, args=(args, queue), nprocs=args.nprocs, start_method='fork')
    print("Training done")
    print("It might be that not all processes exit automatically. In that case you must manually kill this process.")
    print("With 'pkill python' you can kill all remaining python processes")
    p.kill()
    exit()


# Script was called via:
#python train_many_data_files_v2.py --steps 1000000 --batch_size 64 --model microsoft/mpnet-base train_data_configs/all_datasets_v4.json output/all_datasets_v4_mpnet-base
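The `data_config.json` consumed by this script is a list of dataset entries; the script only reads each entry's `name` (a `.jsonl.gz` file inside `--data_folder`) and `weight` (how often the dataset index is repeated in `dataset_indices`, i.e. its sampling weight). Below is a minimal illustrative sketch only: the file names and weights are placeholders, not the configuration actually used for all-mpnet-base-v2 (that one ships as `params_weight/all-mpnet-base-v2/data_config.json`).

```python
import json

# Hypothetical example: names/weights are placeholders for illustration.
example_data_config = [
    {"name": "example_pairs.jsonl.gz", "weight": 10},      # 2-column (anchor, positive) data
    {"name": "example_triplets.jsonl.gz", "weight": 5},    # 3-column (anchor, positive, negative) data
]

with open("data_config.json", "w") as fOut:
    json.dump(example_data_config, fOut, indent=2)
```

Each line of such a dataset file is either a JSON list of 2 or 3 texts, or a dict with a `texts` key, matching what `Dataset.__iter__` above parses.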
params_weight/all-mpnet-base-v2/vocab.txt
ADDED
The diff for this file is too large to render.
See raw diff
params_weight/bert-base-uncased/LICENSE
ADDED
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
params_weight/bert-base-uncased/README.md
ADDED
@@ -0,0 +1,251 @@
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# BERT base model (uncased)

Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.

Disclaimer: The team releasing BERT did not write a model card for this model, so this model card has been written by
the Hugging Face team.

## Model description

BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict whether the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.

## Model variations

BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers.
Chinese and multilingual uncased and cased versions followed shortly after.
Modified preprocessing with whole word masking replaced subpiece masking in a following work, with the release of two models.
24 other, smaller models were released afterward.

The detailed release history can be found on the [google-research/bert readme](https://github.com/google-research/bert/blob/master/README.md) on GitHub.

| Model | #params | Language |
|------------------------|--------------------------------|-------|
| [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) | 110M | English |
| [`bert-large-uncased`](https://huggingface.co/bert-large-uncased) | 340M | English |
| [`bert-base-cased`](https://huggingface.co/bert-base-cased) | 110M | English |
| [`bert-large-cased`](https://huggingface.co/bert-large-cased) | 340M | English |
| [`bert-base-chinese`](https://huggingface.co/bert-base-chinese) | 110M | Chinese |
| [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) | 110M | Multiple |
| [`bert-large-uncased-whole-word-masking`](https://huggingface.co/bert-large-uncased-whole-word-masking) | 340M | English |
| [`bert-large-cased-whole-word-masking`](https://huggingface.co/bert-large-cased-whole-word-masking) | 340M | English |

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.

### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")

[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
  'score': 0.1073106899857521,
  'token': 4827,
  'token_str': 'fashion'},
 {'sequence': "[CLS] hello i'm a role model. [SEP]",
  'score': 0.08774490654468536,
  'token': 2535,
  'token_str': 'role'},
 {'sequence': "[CLS] hello i'm a new model. [SEP]",
  'score': 0.05338378623127937,
  'token': 2047,
  'token_str': 'new'},
 {'sequence': "[CLS] hello i'm a super model. [SEP]",
  'score': 0.04667217284440994,
  'token': 3565,
  'token_str': 'super'},
 {'sequence': "[CLS] hello i'm a fine model. [SEP]",
  'score': 0.027095865458250046,
  'token': 2986,
  'token_str': 'fine'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")

[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
  'score': 0.09747550636529922,
  'token': 10533,
  'token_str': 'carpenter'},
 {'sequence': '[CLS] the man worked as a waiter. [SEP]',
  'score': 0.0523831807076931,
  'token': 15610,
  'token_str': 'waiter'},
 {'sequence': '[CLS] the man worked as a barber. [SEP]',
  'score': 0.04962705448269844,
  'token': 13362,
  'token_str': 'barber'},
 {'sequence': '[CLS] the man worked as a mechanic. [SEP]',
  'score': 0.03788609802722931,
  'token': 15893,
  'token_str': 'mechanic'},
 {'sequence': '[CLS] the man worked as a salesman. [SEP]',
  'score': 0.037680890411138535,
  'token': 18968,
  'token_str': 'salesman'}]

>>> unmasker("The woman worked as a [MASK].")

[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
  'score': 0.21981462836265564,
  'token': 6821,
  'token_str': 'nurse'},
 {'sequence': '[CLS] the woman worked as a waitress. [SEP]',
  'score': 0.1597415804862976,
  'token': 13877,
  'token_str': 'waitress'},
 {'sequence': '[CLS] the woman worked as a maid. [SEP]',
  'score': 0.1154729500412941,
  'token': 10850,
  'token_str': 'maid'},
 {'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
  'score': 0.037968918681144714,
  'token': 19215,
  'token_str': 'prostitute'},
 {'sequence': '[CLS] the woman worked as a cook. [SEP]',
  'score': 0.03042375110089779,
  'token': 5660,
  'token_str': 'cook'}]
```

This bias will also affect all fine-tuned versions of this model.

## Training data

The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.

### Pretraining

The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

Glue test results:

| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
|      | 84.6/83.4   | 71.2 | 90.5 | 93.5  | 52.1 | 85.8  | 88.9 | 66.4 | 79.6    |


### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
  author    = {Jacob Devlin and
               Ming{-}Wei Chang and
               Kenton Lee and
               Kristina Toutanova},
  title     = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
               Understanding},
  journal   = {CoRR},
  volume    = {abs/1810.04805},
  year      = {2018},
  url       = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint    = {1810.04805},
  timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
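The masking procedure described in the README above (select 15% of tokens; of those, replace 80% with `[MASK]`, 10% with a random token, and leave 10% unchanged) can be sketched roughly as follows. This is an illustrative approximation, not the original BERT pretraining code: the function name is made up, special-token handling and whole-word masking are omitted, and the `-100` label convention for unselected positions is an assumption borrowed from common MLM training setups.

```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Rough sketch of BERT-style MLM corruption: returns (input_ids, labels)."""
    input_ids = list(token_ids)
    labels = [-100] * len(token_ids)           # positions not selected are ignored by the loss
    for i, token_id in enumerate(token_ids):
        if random.random() < mlm_probability:  # 15% of tokens are masked
            labels[i] = token_id               # model must predict the original token here
            r = random.random()
            if r < 0.8:                        # 80% of those: replace with [MASK]
                input_ids[i] = mask_token_id
            elif r < 0.9:                      # 10%: replace with a random token
                input_ids[i] = random.randrange(vocab_size)
            # remaining 10%: keep the original token unchanged
    return input_ids, labels
```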
params_weight/bert-base-uncased/config.json
ADDED
@@ -0,0 +1,23 @@
{
  "architectures": [
    "BertForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "transformers_version": "4.6.0.dev0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
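Because the config, tokenizer files, and weights all ship in this folder, the bundled copy of bert-base-uncased can presumably be loaded straight from disk instead of the Hub. A minimal sketch; the relative path assumes you run from the repository root, and the example sentence is arbitrary:

```python
from transformers import BertModel, BertTokenizer

# Load the local copy rather than downloading from the Hub.
local_dir = "params_weight/bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(local_dir)
model = BertModel.from_pretrained(local_dir)

outputs = model(**tokenizer("A point cloud of a chair.", return_tensors="pt"))
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768), matching hidden_size in config.json
```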
params_weight/bert-base-uncased/flax_model.msgpack
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ea201fabe466ef7182f1f687fb5be4b62a73d3a78883f11264ff7f682cdb54bf
size 438064459
params_weight/bert-base-uncased/gitattributes
ADDED
@@ -0,0 +1,11 @@
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tar.gz filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
model.safetensors filter=lfs diff=lfs merge=lfs -text
params_weight/bert-base-uncased/model.onnx
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:44d7a2896d341c51fb1eba89aea3a590e6af0ce33e25481136f7eeecb62e5f7f
size 532091246
params_weight/bert-base-uncased/model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:68d45e234eb4a928074dfd868cead0219ab85354cc53d20e772753c6bb9169d3
size 440449768
params_weight/bert-base-uncased/pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:097417381d6c7230bd9e3557456d726de6e83245ec8b24f529f60198a67b203a
size 440473133
params_weight/bert-base-uncased/rust_model.ot
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:afd9aa425fd45c5655d3d43a0d041f9b76729bf475d6c017a0e9304a38f89972
size 534240408
params_weight/bert-base-uncased/tf_model.h5
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a7a17d6d844b5de815ccab5f42cad6d24496db3850a2a43d8258221018ce87d2
size 536063208
params_weight/bert-base-uncased/tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
params_weight/bert-base-uncased/tokenizer_config.json
ADDED
@@ -0,0 +1,3 @@
{
  "do_lower_case": true
}
params_weight/bert-base-uncased/vocab.txt
ADDED
The diff for this file is too large to render.
See raw diff
params_weight/pc_encoder/point_model.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f78df9a25394bf1f0c60069037549c49b84da2a966ef77afbf35abf9d6863bc0
size 43778761
params_weight/sup-simcse-roberta-large/README.md
ADDED
@@ -0,0 +1,168 @@
---

tags:
- feature-extraction

---
# Model Card for sup-simcse-roberta-large


# Model Details

## Model Description

- **Developed by:** Princeton-nlp
- **Shared by [Optional]:** More information needed
- **Model type:** Feature Extraction
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Related Models:**
  - **Parent Model:** RoBERTa-large
- **Resources for more information:**
  - [GitHub Repo](https://github.com/princeton-nlp/SimCSE)
  - [Associated Paper](https://arxiv.org/abs/2104.08821)
  - [Blog Post]({0})

# Uses

## Direct Use

This model can be used for the task of Feature Extraction.

## Downstream Use [Optional]

More information needed

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.


# Training Details

## Training Data

The model creators note in the [GitHub Repository](https://github.com/princeton-nlp/SimCSE/blob/main/README.md):
> We train unsupervised SimCSE on 10^6 randomly sampled sentences from English Wikipedia, and train supervised SimCSE on the combination of MNLI and SNLI datasets (314k).

## Training Procedure

### Preprocessing

More information needed

### Speeds, Sizes, Times

More information needed

# Evaluation

## Testing Data, Factors & Metrics

### Testing Data

The model creators note in the [associated paper](https://arxiv.org/pdf/2104.08821.pdf):
> Our evaluation code for sentence embeddings is based on a modified version of [SentEval](https://github.com/facebookresearch/SentEval). It evaluates sentence embeddings on semantic textual similarity (STS) tasks and downstream transfer tasks. For STS tasks, our evaluation takes the "all" setting, and we report Spearman's correlation. See the [associated paper](https://arxiv.org/pdf/2104.08821.pdf) (Appendix B) for evaluation details.

### Factors

More information needed

### Metrics

More information needed

## Results

More information needed

# Model Examination

More information needed

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Technical Specifications [optional]

## Model Architecture and Objective

More information needed

## Compute Infrastructure

More information needed

### Hardware

More information needed

### Software

More information needed

# Citation

**BibTeX:**

```bibtex
@inproceedings{gao2021simcse,
   title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
   author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
   booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
   year={2021}
}
```

# Glossary [optional]

More information needed

# More Information [optional]

If you have any questions related to the code or the paper, feel free to email Tianyu (`tianyug@cs.princeton.edu`) and Xingcheng (`yxc18@mails.tsinghua.edu.cn`). If you encounter any problems when using the code, or want to report a bug, you can open an issue. Please try to specify the problem with details so we can help you better and quicker!

# Model Card Authors [optional]

Princeton NLP group in collaboration with Ezi Ozoani and the Hugging Face team

# Model Card Contact

More information needed

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/sup-simcse-roberta-large")

model = AutoModel.from_pretrained("princeton-nlp/sup-simcse-roberta-large")
```
</details>
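Building on the loading snippet in the model card above, here is a short sketch of actually producing sentence embeddings with this checkpoint. Using `pooler_output` as the sentence representation follows the usage shown in the SimCSE GitHub repository; the pooling choice is not stated in this card, so treat it as an assumption, and the example sentences are arbitrary.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/sup-simcse-roberta-large")
model = AutoModel.from_pretrained("princeton-nlp/sup-simcse-roberta-large")

sentences = ["A kid is skateboarding.", "A child is riding a skateboard."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    # pooler_output (CLS representation passed through the pooler) is taken as the embedding here.
    embeddings = model(**inputs, output_hidden_states=True, return_dict=True).pooler_output

similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(float(similarity))
```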
params_weight/sup-simcse-roberta-large/config.json
ADDED
@@ -0,0 +1,26 @@
{
  "_name_or_path": "result/roberta-large-bs512-lr1e-5",
  "architectures": [
    "RobertaModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "eos_token_id": 2,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "transformers_version": "4.2.1",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50265
}
params_weight/sup-simcse-roberta-large/flax_model.msgpack
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:33223b2f3c2fcee6351cb0027bc3451e3db1b4e8f16fef29f245592c381bdf2e
size 1421452955
params_weight/sup-simcse-roberta-large/gitattributes
ADDED
@@ -0,0 +1,17 @@
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tar.gz filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
params_weight/sup-simcse-roberta-large/merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
params_weight/sup-simcse-roberta-large/pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b97bbd5aa01a5ab66f6e2d8bb96bb78aa01f81238787cfa9dc28b3f950f3da78
size 1421571527