doberst committed
Commit 98fbfb0
1 Parent(s): fa78b69

Upload 4 files

Files changed (4)
  1. README.md +115 -0
  2. config.json +27 -0
  3. pytorch_model.bin +3 -0
  4. tokenizer.json +0 -0
README.md CHANGED
@@ -1,3 +1,118 @@
---
license: apache-2.0
---
# Model Card for industry-bert-sec-v0.1

<!-- Provide a quick summary of what the model is/does. -->

industry-bert-sec-v0.1 is part of a series of industry fine-tuned sentence transformer embedding models.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

industry-bert-sec-v0.1 is a BERT-based, 768-dimensional sentence embedding model intended as a drop-in substitute for general-purpose (non-industry-specific) embedding models. It was trained on a wide range of publicly available U.S. Securities and Exchange Commission (SEC) regulatory filings and related documents.

- **Developed by:** llmware
- **Shared by [optional]:** Darren Oberst
- **Model type:** BERT-based, industry-domain fine-tuned sentence transformer architecture
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** bert-base-uncased; fine-tuning methodology described below.

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

This model is intended to be used as a sentence embedding model, specifically for financial services and use cases involving regulatory and financial filing documents.
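
The snippet below is a minimal usage sketch, not an official example: it assumes the repository id is `llmware/industry-bert-sec-v0.1`, that the model loads through the standard Hugging Face `transformers` `AutoModel`/`AutoTokenizer` interface, and that mean pooling over the last hidden state is an acceptable way to derive a single 768-dimensional sentence vector.

```python
# Minimal sketch: embed two sentences and compare them with cosine similarity.
# Repository id and mean-pooling strategy are assumptions, not documented details.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "llmware/industry-bert-sec-v0.1"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

def embed(sentences):
    # Tokenize a batch of sentences with padding and truncation.
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        output = model(**batch)
    # Mean-pool the last hidden state, ignoring padding tokens.
    mask = batch["attention_mask"].unsqueeze(-1).float()
    summed = (output.last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts  # shape: (batch, 768)

vectors = embed([
    "The registrant filed its annual report on Form 10-K.",
    "The company submitted its yearly 10-K filing with the SEC.",
])
similarity = torch.nn.functional.cosine_similarity(vectors[0], vectors[1], dim=0)
print(f"cosine similarity: {similarity.item():.3f}")
```

The same `embed` helper can be applied to document chunks and queries alike, with cosine similarity used for ranking in a retrieval setting.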

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

This model was fine-tuned using a custom self-supervised procedure that combined contrastive techniques with stochastic injection of distortions into the training samples. The methodology was derived from, adapted from, and inspired primarily by the three research papers cited below: TSDAE (Wang et al.), DeCLUTR (Giorgi et al.), and Contrastive Tension (Carlsson et al.).
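
The exact training recipe is not published in this card. Purely as an illustration of the general pattern described above (a stochastically distorted view of a sample paired with a contrastive objective against its clean view), a hypothetical sketch might look like the following; all function names and hyperparameters here are assumptions, not the authors' actual settings.

```python
# Illustrative sketch only: pairs a distorted view of a text with its clean view
# under an in-batch contrastive (InfoNCE-style) objective. Not the actual recipe.
import random
import torch
import torch.nn.functional as F

def distort(tokens, deletion_prob=0.3):
    """Randomly delete tokens, one simple form of stochastic distortion
    (similar in spirit to TSDAE's deletion noise)."""
    kept = [t for t in tokens if random.random() > deletion_prob]
    return kept if kept else tokens[:1]  # never return an empty sequence

def contrastive_loss(clean_emb, distorted_emb, temperature=0.05):
    """Each distorted view should be closest to the embedding of its own clean view."""
    clean = F.normalize(clean_emb, dim=-1)
    distorted = F.normalize(distorted_emb, dim=-1)
    logits = distorted @ clean.T / temperature   # (batch, batch) similarity matrix
    targets = torch.arange(logits.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Usage with any encoder that maps texts to fixed-size vectors:
#   clean_emb     = encoder(batch_of_texts)
#   distorted_emb = encoder([" ".join(distort(t.split())) for t in batch_of_texts])
#   loss          = contrastive_loss(clean_emb, distorted_emb)
```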

## Citation [optional]

The custom training protocol used to train this model was derived from and inspired by the following papers:

@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal = "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}

@inproceedings{giorgi-etal-2021-declutr,
title = {{D}e{CLUTR}: Deep Contrastive Learning for Unsupervised Textual Representations},
author = {Giorgi, John and Nitski, Osvald and Wang, Bo and Bader, Gary},
year = 2021,
month = aug,
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
publisher = {Association for Computational Linguistics},
address = {Online},
pages = {879--895},
doi = {10.18653/v1/2021.acl-long.72},
url = {https://aclanthology.org/2021.acl-long.72}
}

@inproceedings{carlsson-2021-CT,
title = {Semantic Re-tuning with Contrastive Tension},
author = {Carlsson, Fredrik and Gyllensten, Amaru Cuba and Gogoulou, Evangelia and Hellqvist, Erik Ylipää and Sahlgren, Magnus},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2021}
}

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
config.json ADDED
@@ -0,0 +1,27 @@
{
  "model_name": "industry-bert-sec-v0.1",
  "description": "Sentence transformer embedding model finetuned on wide range of public SEC filing and regulatory documents",
  "_name_or_path": "bert-base-uncased",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
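
As a quick sanity check, this configuration can be read back with `transformers` (a sketch, assuming the repository id `llmware/industry-bert-sec-v0.1`); the `hidden_size` of 768 determines the dimensionality of the sentence embeddings, and `max_position_embeddings` caps the input length at 512 tokens.

```python
# Sketch: read the BERT configuration back and confirm the key dimensions.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("llmware/industry-bert-sec-v0.1")  # assumed repo id
print(config.model_type)               # "bert"
print(config.hidden_size)              # 768 -> sentence embedding dimensionality
print(config.max_position_embeddings)  # 512 -> maximum input length in tokens
```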
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a9e8f6a4cf3ec548c258b2718ba53be1f1e396749ea5b0db2402303c0d78edf1
size 438000173
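
This file is a Git LFS pointer rather than the weights themselves; the `oid` and `size` fields make it possible to verify a downloaded copy locally. A small sketch (the local path is an assumption):

```python
# Sketch: verify a downloaded pytorch_model.bin against the LFS pointer fields.
import hashlib
from pathlib import Path

path = Path("pytorch_model.bin")  # assumed local download location
digest = hashlib.sha256(path.read_bytes()).hexdigest()
print(digest == "a9e8f6a4cf3ec548c258b2718ba53be1f1e396749ea5b0db2402303c0d78edf1")
print(path.stat().st_size == 438000173)  # size in bytes from the LFS pointer
```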
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff