Tags: Text Generation · Transformers · PyTorch · Safetensors · French · English · bloom · Inference Endpoints · text-generation-inference
Cyrile committed · Commit 812313c · 1 Parent(s): f57be65

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -71,7 +71,7 @@ We observe that across all models, the performance gain is logarithmic in relati
 How to use bloomz-560m-sft-chat
 -------------------------------
 
-There are no specific instructions for using these models in a normal causal inference context. However, to leverage the chatbot capability of the model, an individual's prompt should be preceded by the EOS token ({</s>}), and the generated part should be preceded by the BOS token ({<s>}). The structure takes the following form:
+There are no specific instructions for using these models in a normal causal inference context. However, to leverage the chatbot capability of the model, an individual's prompt should be preceded by the EOS token (</s>), and the generated part should be preceded by the BOS token (<s>). The structure takes the following form:
 ```
 </s>[human prompt 1]<s>[bot answer 1]</s>[human prompt 2]<s>
 ```
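The prompt structure described in the diff can be assembled programmatically. A minimal sketch — the helper name `build_chat_prompt` is an illustrative assumption, not something provided by the repository:

```python
def build_chat_prompt(turns, eos="</s>", bos="<s>"):
    """Format a conversation for the bloomz-560m-sft-chat prompt scheme.

    Each human prompt is preceded by the EOS token and each bot answer by
    the BOS token. `turns` is a list of (human, bot) pairs; pass None as
    the bot answer of the final pair so the string ends with BOS, cueing
    the model to generate the next answer.
    """
    parts = []
    for human, bot in turns:
        parts.append(eos + human)        # EOS before each human prompt
        parts.append(bos + (bot or ""))  # BOS before each (possibly pending) answer
    return "".join(parts)


# Example: a two-turn conversation awaiting the second answer.
prompt = build_chat_prompt([
    ("human prompt 1", "bot answer 1"),
    ("human prompt 2", None),
])
# prompt == "</s>human prompt 1<s>bot answer 1</s>human prompt 2<s>"
```

The resulting string can then be passed to an ordinary `transformers` text-generation pipeline; `<s>` and `</s>` are BLOOM's default BOS and EOS tokens, so no custom tokenizer configuration should be needed.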