doberst committed on
Commit 278c92b
1 Parent(s): e3952fa

Update README.md

Files changed (1)
  1. README.md +29 -10
README.md CHANGED
@@ -76,14 +76,14 @@ Any model can provide inaccurate or incomplete information, and should be used i
 
 The fastest way to get started with BLING is through direct import in transformers:
 
- from transformers import AutoTokenizer, AutoModelForCausalLM
- tokenizer = AutoTokenizer.from_pretrained("llmware/bling-falcon-1b-0.1")
- model = AutoModelForCausalLM.from_pretrained("llmware/bling-falcon-1b-0.1")
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ tokenizer = AutoTokenizer.from_pretrained("llmware/bling-falcon-1b-0.1")
+ model = AutoModelForCausalLM.from_pretrained("llmware/bling-falcon-1b-0.1")
 
 
 The BLING model was fine-tuned with a simple "\<human>" and "\<bot>" wrapper, so to get the best results, wrap inference entries as:
 
- full_prompt = "\<human>\: " + my_prompt + "\n" + "\<bot>\:"
+ full_prompt = "\<human>\: " + my_prompt + "\n" + "\<bot>\:"
 
 The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:
 
@@ -92,9 +92,33 @@ The BLING model was fine-tuned with closed-context samples, which assume general
 
 To get the best results, package "my_prompt" as follows:
 
- my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
+ my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
 
 
+ If you are using a HuggingFace generation script:
+
+ # prepare prompt packaging used in fine-tuning process
+ new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"
+
+ inputs = tokenizer(new_prompt, return_tensors="pt")
+ start_of_output = len(inputs.input_ids[0])
+
+ # temperature: set at 0.3 for consistency of output
+ # max_new_tokens: set at 100 - may prematurely stop a few of the summaries
+
+ outputs = model.generate(
+     inputs.input_ids.to(device),
+     eos_token_id=tokenizer.eos_token_id,
+     pad_token_id=tokenizer.eos_token_id,
+     do_sample=True,
+     temperature=0.3,
+     max_new_tokens=100,
+ )
+
+ output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
+
+
+
 ## Citation [optional]
 
 This BLING model was built on top of a Falcon model base - for more information about the Falcon model, please see the paper referenced below:
@@ -113,8 +137,3 @@ This BLING model was built on top of a Falcon model base - for more information
 ## Model Card Contact
 
 Darren Oberst & llmware team
-
- Please reach out anytime if you are interested in this project!
-
-
-
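
As a concrete illustration of the packaging convention in the hunks above, a closed-context prompt for this model is a text passage, a newline, and then the question or instruction, all wrapped in the <human>/<bot> markers used in fine-tuning. A minimal sketch - the passage and question below are invented placeholders, not part of the model card:

    # sketch: packaging a closed-context prompt as described in the model card
    # the passage and question below are illustrative placeholders

    text_passage = (
        "The invoice total is $12,500, due within 30 days of receipt. "
        "Late payments accrue interest at 1.5% per month."
    )
    question = "What is the total amount of the invoice?"

    # sub-part 1 (text passage) + newline + sub-part 2 (question/instruction)
    my_prompt = text_passage + "\n" + question

    # wrap with the <human>/<bot> markers used in fine-tuning
    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"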
 
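The generation snippet added in this commit assumes that "tokenizer", "model", "entries" (a dict with "context" and "query" keys), and "device" are already defined in the surrounding script. A self-contained sketch under those assumptions, reusing the commit's generation settings (do_sample with temperature 0.3, max_new_tokens 100); the sample context and query are placeholders:

    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_name = "llmware/bling-falcon-1b-0.1"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # "device" is not defined in the snippet itself - assume GPU if available, else CPU
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)

    # placeholder "entries" dict - in practice this comes from your own documents
    entries = {
        "context": "The agreement renews automatically on January 1 unless either party gives 60 days notice.",
        "query": "When does the agreement renew?",
    }

    # prompt packaging used in the fine-tuning process
    new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

    inputs = tokenizer(new_prompt, return_tensors="pt")
    start_of_output = len(inputs.input_ids[0])

    outputs = model.generate(
        inputs.input_ids.to(device),
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        temperature=0.3,
        max_new_tokens=100,
    )

    # decode only the newly generated tokens, dropping the prompt
    output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
    print(output_only)

Keeping the temperature low trades a little output diversity for more consistent answers on closed-context questions, which matches the "consistency of output" comment in the added snippet.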