doberst committed · Commit 3574eed · 1 Parent(s): d95d779

Update README.md

Files changed (1): README.md +23 -0
README.md CHANGED
@@ -97,6 +97,29 @@ To get the best results, package "my_prompt" as follows:
 my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
 
 
+If you are using a HuggingFace generation script:
+
+    # prepare prompt packaging used in fine-tuning process
+    new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"
+
+    inputs = tokenizer(new_prompt, return_tensors="pt")
+    start_of_output = len(inputs.input_ids[0])
+
+    # temperature: set at 0.3 for consistency of output
+    # max_new_tokens: set at 100 - may prematurely stop a few of the summaries
+
+    outputs = model.generate(
+        inputs.input_ids.to(device),
+        eos_token_id=tokenizer.eos_token_id,
+        pad_token_id=tokenizer.eos_token_id,
+        do_sample=True,
+        temperature=0.3,
+        max_new_tokens=100,
+    )
+
+    output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
+
+
 ## Citations
 
 This model has been fine-tuned on the base StableLM-3B-4E1T model from StabilityAI. For more information about this base model, please see the citation below:
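Since the diff only shows fragments of the generation script, here is a minimal, self-contained sketch of the two pieces of logic it relies on: the `<human>:`/`<bot>:` prompt packaging, and slicing the generated sequence at the prompt length so that only new tokens are decoded. The sample `entries` values and the stand-in token ids are illustrative assumptions; the tokenizer and model calls are simulated with plain strings and lists so the sketch runs without downloading the model.

```python
# Sketch of the prompt packaging and output-slicing logic from the diff above.
# The sample `entries` values and the fake token ids below are illustrative
# assumptions; no model or tokenizer is loaded.

def package_prompt(entries):
    # prompt packaging used in the fine-tuning process:
    # "<human>: " + context + "\n" + query + "\n" + "<bot>:"
    return "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

entries = {
    "context": "StableLM-3B-4E1T is a 3B parameter base model.",
    "query": "How many parameters does the model have?",
}
new_prompt = package_prompt(entries)
print(new_prompt.startswith("<human>: "))  # True
print(new_prompt.endswith("<bot>:"))       # True

# model.generate() returns the prompt tokens followed by the new tokens,
# so the script records the prompt length first and decodes only the tail.
input_ids = [101, 102, 103]            # stand-in for inputs.input_ids[0]
generated = input_ids + [201, 202]     # stand-in for outputs[0]
start_of_output = len(input_ids)
output_only_ids = generated[start_of_output:]
print(output_only_ids)                 # [201, 202]
```

The same slicing works on the real tensors, since `outputs[0]` is indexed exactly like a list of token ids.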