doberst committed on
Commit
b93baf4
1 Parent(s): a8fac66

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED

@@ -7,8 +7,6 @@ license: apache-2.0
 
 industry-bert-insurance-v0.1 is part of a series of industry-fine-tuned sentence_transformer embedding models.
 
-## Model Details
-
 ### Model Description
 
 <!-- Provide a longer summary of what this model is. -->
@@ -24,15 +22,17 @@ substitute for embeddings in the insurance industry domain. This model was tra
 
 ## Model Use
 
-from transformers import AutoTokenizer, AutoModel
+from transformers import AutoTokenizer, AutoModel
+
+tokenizer = AutoTokenizer.from_pretrained("llmware/industry-bert-insurance-v0.1")
+
+model = AutoModel.from_pretrained("llmware/industry-bert-insurance-v0.1")
 
-tokenizer = AutoTokenizer.from_pretrained("llmware/industry-bert-insurance-v0.1")
-model = AutoModel.from_pretrained("llmware/industry-bert-insurance-v0.1")
 
 
 ## Bias, Risks, and Limitations
 
-This is a semantic embedding model, fine-tuned on public domain SEC filings and regulatory documents. Results may vary if used outside of this
+This is a semantic embedding model, fine-tuned on public domain documents about the insurance industry. Results may vary if used outside of this
 domain, and like any embedding model, there is always the potential for anomalies in the vector embedding space. No specific safeguards have
 put in place for safety or mitigate potential bias in the dataset.
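The snippet this commit adds to the README stops after loading the tokenizer and model. To turn the model's per-token outputs into one vector per sentence, embedding models of this kind are typically mean-pooled over the attention mask. A minimal sketch of that pooling step; the `mean_pool` helper and the dummy tensors are illustrative (they stand in for `model(**inputs).last_hidden_state` and `inputs["attention_mask"]`, avoiding a model download), not part of the model card:

```python
import torch

def mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).type_as(last_hidden_state)  # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(dim=1)                  # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)                        # avoid divide-by-zero
    return summed / counts

# Dummy stand-ins: a batch of 2 sequences, 5 tokens each, BERT-sized hidden dim.
hidden = torch.randn(2, 5, 768)
mask = torch.tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])

embeddings = mean_pool(hidden, mask)
print(embeddings.shape)  # torch.Size([2, 768])
```

With the real model, `hidden` would come from `model(**tokenizer(texts, padding=True, return_tensors="pt")).last_hidden_state`, and the resulting rows can be compared with cosine similarity.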