xwinxu committed
Commit 3e296ea
1 Parent(s): 06d7ae2

Upload README.md with huggingface_hub

Files changed (1): README.md +13 -2
README.md CHANGED
@@ -26,8 +26,19 @@ This repo contains the model checkpoints for:
 - optimized with the loss <b>SFT+DPO</b>
 - aligned using the SHP, Anthropic HH and Open Assistant datasets.
 
-To prompt archangel models, ensure that the format is consistent with that of TuluV2, i.e. `"<s>\n<|user|>\n" + <prompt> + "\n<|assistant|>\n</s>"`.
-Note that the BOS / EOS tokens should be excluded if automatically added by your tokenizer during batch collation.
+To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
+For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
+The human should speak first:
+```
+<|user|>
+Hi! I'm looking for a cake recipe.
+<|assistant|>
+What kind of cake?
+<|user|>
+Chocolate cake.
+<|assistant|>
+```
+Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
 
 Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.
 
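As a quick illustration of the prompt format described in the new README text, here is a minimal sketch using the Hugging Face `transformers` library. The checkpoint ID is a placeholder assumption, not necessarily an exact Archangel model name; substitute the checkpoint you actually want to query.

```python
# Minimal sketch of prompting an Archangel model in the TuluV2 format.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint ID (assumption) -- replace with the real model repo.
checkpoint = "ContextualAI/archangel_sft-dpo_llama7b"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# The human speaks first; end with the assistant tag so the model completes its turn.
prompt = "<|user|>\nHi! I'm looking for a cake recipe.\n<|assistant|>\n"

# Per the README, the tokenizer adds the BOS token itself, so the prompt string
# starts directly with <|user|>, and no EOS token is appended to the prompt.
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```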