doberst committed
Commit 983cda3
1 Parent(s): 836d197

Update README.md

Files changed (1)
  1. README.md +8 -71
README.md CHANGED
@@ -13,46 +13,16 @@ industry-bert-contracts-v0.1 is part of a series of industry-fine-tuned sentence

 <!-- Provide a longer summary of what this model is. -->

- BERT-based 768-parameter drop-in substitute for non-industry-specific embeddings model. This model was trained on a wide range of
- publicly available commercial contracts, including open source contract datasets.
+ industry-bert-contracts-v0.1 is a domain fine-tuned, BERT-based Sentence Transformer model producing 768-dimensional embeddings, intended as a "drop-in"
+ substitute for general-purpose embedding models in contractual and legal domains. This model was trained on a wide range of
+ publicly available commercial contracts, including open source contract datasets.

 - **Developed by:** llmware
- - **Shared by [optional]:** Darren Oberst
 - **Model type:** BERT-based Industry domain fine-tuned Sentence Transformer architecture
 - **Language(s) (NLP):** English
 - **License:** Apache 2.0
 - **Finetuned from model [optional]:** BERT-based model, fine-tuning methodology described below.

- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- This model is intended to be used as a sentence embedding model, specifically for contracts use cases.
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
 [More Information Needed]

 ## Bias, Risks, and Limitations
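
The updated summary positions this as a drop-in sentence embedding model for contract analysis. A minimal usage sketch, assuming the model is published on the Hugging Face Hub as `llmware/industry-bert-contracts-v0.1` and that mean pooling over token embeddings is an appropriate pooling strategy (the card does not specify one):

```python
# Minimal sketch: encode contract sentences into 768-dimensional vectors.
# The repo id and mean pooling are assumptions, not taken from the card.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "llmware/industry-bert-contracts-v0.1"  # assumed Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

sentences = [
    "This Agreement shall commence on the Effective Date.",
    "Either party may terminate this Agreement upon thirty days written notice.",
]

batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_states = model(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean-pool over non-padding tokens to get one 768-dimensional vector per sentence.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_states * mask).sum(dim=1) / mask.sum(dim=1)

# Cosine similarity between the two clause embeddings.
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(embeddings.shape, float(similarity))
```

Mean pooling is a common default for BERT-based sentence transformers; if the model ships with a sentence-transformers config, `SentenceTransformer(model_id)` would handle pooling automatically.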
@@ -65,35 +35,14 @@ This model is intended to be used as a sentence embedding model, specifically fo

 <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

- This model was fine-tuned using a custom self-supervised procedure that combined contrastive techniques with stochastic injections of
- distortions in the samples. The methodology was derived, adapted and inspired primarily from three research papers cited below:
- TSDAE (Reimers), DeClutr (Giorgi), and Contrastive Tension (Carlsson).
+ This model was fine-tuned on a custom dataset using a custom self-supervised procedure that combined contrastive techniques
+ with stochastic injections of distortions into the samples. The methodology was adapted primarily from three research papers
+ cited below: TSDAE (Wang et al.), DeCLUTR (Giorgi et al.), and Contrastive Tension (Carlsson et al.).
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
- ### Model Architecture and Objective
-
- [More Information Needed]


 ## Citation [optional]

- Custom training protocol used to train the model, which was derived and inspired by the following papers:
+ The custom self-supervised training protocol used to train the model was derived from and inspired by the following papers:

 @article{wang-2021-TSDAE,
 title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
@@ -127,21 +76,9 @@ Custom training protocol used to train the model, which was derived and inspired

 <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
-
- ## Model Card Authors [optional]
-
- [More Information Needed]

 ## Model Card Contact

- [More Information Needed]
+ Darren Oberst @ llmware