doberst committed
Commit: c2782a8
1 Parent(s): e4c8c29

Update README.md

Files changed (1):
  1. README.md +1 -8

README.md CHANGED
@@ -25,14 +25,6 @@ BLING models are fine-tuned with distilled high-quality custom instruct datasets
 - **License:** Apache 2.0
 - **Finetuned from model [optional]:** EleutherAI/Pythia-1b-deduped
 
-### Model Sources [optional]
-
-<!-- Provide the basic links for the model. -->
-
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]
-
 ## Uses
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
@@ -94,6 +86,7 @@ mitigate potential bias and safety. We would strongly discourage any use of B
 The fastest way to get started with BLING is through direct import in transformers:
 
 model = AutoModelForCausalLM.from_pretrained("llmware/bling-1b-0.1")
+
 tokenizer = AutoTokenizer.from_pretrained("llmware/bling-1b-0.1")
 
 The BLING model was fine-tuned with a simple "<human> and <bot> wrapper", so to get the best results, wrap inference entries as:
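The loading and prompt-wrapping steps described in the card can be sketched as a short script. The exact whitespace of the `<human>`/`<bot>` wrapper, the helper names (`wrap_prompt`, `bling_answer`), and the generation parameters are illustrative assumptions, not taken from the card:

```python
def wrap_prompt(context: str, question: str) -> str:
    # <human>/<bot> wrapper BLING was fine-tuned with;
    # exact spacing/newlines are an assumption.
    return f"<human>: {context}\n{question}\n<bot>:"


def bling_answer(context: str, question: str, max_new_tokens: int = 100) -> str:
    # Imported lazily so wrap_prompt() is usable without transformers installed;
    # the first call downloads the llmware/bling-1b-0.1 weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("llmware/bling-1b-0.1")
    model = AutoModelForCausalLM.from_pretrained("llmware/bling-1b-0.1")

    inputs = tokenizer(wrap_prompt(context, question), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the tokens generated after the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

`bling_answer` passes a context passage and a question through the wrapper in one call; keeping the wrapper in its own function makes it easy to reuse for batch inference.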
 