doberst committed on
Commit
836d197
1 Parent(s): de44e86

Upload README.md

Files changed (1)
  1. README.md +144 -0
README.md CHANGED
@@ -1,3 +1,147 @@
  ---
  license: apache-2.0
  ---
+ # Model Card for industry-bert-contracts-v0.1
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ industry-bert-contracts-v0.1 is part of a series of industry-fine-tuned sentence_transformer embedding models.
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ A BERT-based, 768-dimensional sentence embedding model intended as a drop-in substitute for general-purpose (non-industry-specific)
+ embedding models. This model was trained on a wide range of publicly available commercial contracts, including open-source contract
+ datasets. A minimal encoding sketch appears after the list below.
+
+ - **Developed by:** llmware
+ - **Shared by [optional]:** Darren Oberst
+ - **Model type:** BERT-based, industry-domain fine-tuned Sentence Transformer architecture
+ - **Language(s) (NLP):** English
+ - **License:** Apache 2.0
+ - **Finetuned from model [optional]:** BERT-based model; fine-tuning methodology described below.
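+
+ As a minimal sketch (not an official usage recipe), the snippet below shows one way to obtain 768-dimensional sentence embeddings with the Hugging Face transformers library. The repo id `llmware/industry-bert-contracts-v0.1` is assumed from the model name and developer above, and mean pooling is just one reasonable choice; the released sentence_transformers configuration may pool differently.
+
+ ```python
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+
+ # Repo id assumed from the model name and developer; adjust if the hosted path differs.
+ MODEL_ID = "llmware/industry-bert-contracts-v0.1"
+
+ tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
+ model = AutoModel.from_pretrained(MODEL_ID)
+
+ clauses = [
+     "The Supplier shall indemnify the Buyer against all third-party claims.",
+     "This Agreement may be terminated by either party upon thirty (30) days written notice.",
+ ]
+
+ inputs = tokenizer(clauses, padding=True, truncation=True, return_tensors="pt")
+ with torch.no_grad():
+     token_embeddings = model(**inputs).last_hidden_state  # (batch, seq_len, 768)
+
+ # Mean pooling over non-padding tokens -> one 768-dimensional vector per clause.
+ mask = inputs["attention_mask"].unsqueeze(-1).float()
+ embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
+ print(embeddings.shape)  # torch.Size([2, 768])
+ ```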
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ This model is intended to be used as a sentence embedding model, specifically for contract use cases.
+
+ [More Information Needed]
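+
+ As a rough illustration of this direct use, the sketch below loads the model with the sentence-transformers library and ranks contract clauses against a query by cosine similarity. It assumes the checkpoint loads directly via `SentenceTransformer` under the repo id assumed above; the query and clauses are invented examples.
+
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+
+ # Repo id assumed; adjust to the published checkpoint path.
+ model = SentenceTransformer("llmware/industry-bert-contracts-v0.1")
+
+ query = "What are the termination rights under this agreement?"
+ clauses = [
+     "Either party may terminate this Agreement upon sixty (60) days written notice.",
+     "The Licensee shall pay all fees within thirty (30) days of invoice.",
+     "Confidential Information shall not be disclosed to any third party.",
+ ]
+
+ query_emb = model.encode(query, convert_to_tensor=True)
+ clause_embs = model.encode(clauses, convert_to_tensor=True)
+
+ # Rank clauses by cosine similarity to the query.
+ scores = util.cos_sim(query_emb, clause_embs)[0]
+ for clause, score in sorted(zip(clauses, scores.tolist()), key=lambda x: -x[1]):
+     print(f"{score:.3f}  {clause}")
+ ```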
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ This model was fine-tuned using a custom self-supervised procedure that combined contrastive techniques with stochastic injection of
+ distortions into the training samples. The methodology was derived and adapted from, and primarily inspired by, the three research
+ papers cited below: TSDAE (Wang et al.), DeCLUTR (Giorgi et al.), and Contrastive Tension (Carlsson et al.).
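+
+ The exact training recipe is not reproduced here. As a hypothetical sketch of the general idea only, the code below applies TSDAE-style denoising fine-tuning with the sentence-transformers library, where random token deletions act as the stochastic distortions and the model learns to reconstruct the original sentence; the base checkpoint, corpus, and hyperparameters are placeholders, not the actual configuration.
+
+ ```python
+ from torch.utils.data import DataLoader
+ from sentence_transformers import SentenceTransformer, models, losses, datasets
+
+ # Placeholder base checkpoint and corpus; the actual base model and contract data are not specified here.
+ base = "bert-base-uncased"
+ contract_sentences = [
+     "The parties agree to resolve disputes through binding arbitration.",
+     "Payment is due within forty-five (45) days of receipt of invoice.",
+     # ... in practice, a large corpus of contract sentences
+ ]
+
+ # Build a sentence-transformer: BERT encoder + CLS pooling (as used in the TSDAE reference setup).
+ word_embedding = models.Transformer(base)
+ pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), "cls")
+ model = SentenceTransformer(modules=[word_embedding, pooling])
+
+ # DenoisingAutoEncoderDataset injects noise (token deletion) into each sentence;
+ # the loss trains the encoder to reconstruct the original from the distorted input.
+ train_data = datasets.DenoisingAutoEncoderDataset(contract_sentences)
+ loader = DataLoader(train_data, batch_size=8, shuffle=True)
+ loss = losses.DenoisingAutoEncoderLoss(model, decoder_name_or_path=base, tie_encoder_decoder=True)
+
+ model.fit(train_objectives=[(loader, loss)], epochs=1, weight_decay=0, scheduler="constantlr",
+           optimizer_params={"lr": 3e-5}, show_progress_bar=True)
+ ```
+
+ In the actual procedure, this kind of denoising objective would be combined with contrastive objectives in the spirit of DeCLUTR and Contrastive Tension, as noted above.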
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+
+ ## Citation [optional]
+
+ A custom training protocol was used to train the model, derived from and inspired by the following papers:
+
+ @article{wang-2021-TSDAE,
+     title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
+     author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
+     journal = "arXiv preprint arXiv:2104.06979",
+     month = "4",
+     year = "2021",
+     url = "https://arxiv.org/abs/2104.06979",
+ }
+
+ @inproceedings{giorgi-etal-2021-declutr,
+     title = {{D}e{CLUTR}: Deep Contrastive Learning for Unsupervised Textual Representations},
+     author = {Giorgi, John and Nitski, Osvald and Wang, Bo and Bader, Gary},
+     year = 2021,
+     month = aug,
+     booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
+     publisher = {Association for Computational Linguistics},
+     address = {Online},
+     pages = {879--895},
+     doi = {10.18653/v1/2021.acl-long.72},
+     url = {https://aclanthology.org/2021.acl-long.72}
+ }
+
+ @inproceedings{Carlsson-2021-CT,
+     title = {Semantic Re-tuning with Contrastive Tension},
+     author = {Fredrik Carlsson and Amaru Cuba Gyllensten and Evangelia Gogoulou and Erik Ylipää Hellqvist and Magnus Sahlgren},
+     booktitle = {International Conference on Learning Representations (ICLR)},
+     year = {2021},
+     month = {January}
+ }
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+
+