bpucla committed on
Commit 5358809
1 Parent(s): 6ecb6b8

Update README.md

Files changed (1): README.md +6 -6
README.md CHANGED
@@ -1,10 +1,10 @@
 ---
-license: cc-by-nc-nd-3.0
+license: llama3
 ---
-# SFR-Iterative-DPO-Llama-3-8B-R
+# Llama-3-8B-SFR-Iterative-DPO-R

 ## Introduction
-We release a state-of-the-art instruct model of its class, **SFR-Iterative-DPO-LLaMA-3-8B-R**.
+We release a state-of-the-art instruct model of its class, **Llama-3-8B-SFR-Iterative-DPO-R**.
 On all three widely-used instruct model benchmarks: **Alpaca-Eval-V2**, **MT-Bench**, **Chat-Arena-Hard**, our model outperforms all models of similar size (e.g., LLaMA-3-8B-it), most large open-sourced models (e.g., Mixtral-8x7B-it),
 and strong proprietary models (e.g., GPT-3.5-turbo-0613). The model is trained with open-sourced datasets without any additional human-/GPT4-labeling.
@@ -66,8 +66,8 @@ from transformers import AutoModelForCausalLM, AutoTokenizer

 device = "cuda"

-model = AutoModelForCausalLM.from_pretrained("Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R")
-tokenizer = AutoTokenizer.from_pretrained("Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R")
+model = AutoModelForCausalLM.from_pretrained("Salesforce/Llama-3-8B-SFR-Iterative-DPO-R")
+tokenizer = AutoTokenizer.from_pretrained("Salesforce/Llama-3-8B-SFR-Iterative-DPO-R")

 messages = [
     {"role": "user", "content": "I'm trying to teach myself to have nicer handwriting. Can you help?"},
@@ -85,7 +85,7 @@ print(model_outputs[0])


 ## Limitations
-SFR-Iterative-DPO-LLaMA-3-8B-R is a research model developed as part of our RLHF initiative at Salesforce.
+Llama-3-8B-SFR-Iterative-DPO-R is a research model developed as part of our RLHF initiative at Salesforce.
 While safety and ethical considerations are integral to our alignment process,
 there remains the possibility that the model could generate offensive or unethical content, particularly under adversarial conditions.
 We are committed to continuous improvement in our models to minimize such risks and encourage responsible usage.
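For context, the snippet touched by this diff is only a fragment of the README's usage section (the generation call itself sits outside the changed hunks). A minimal sketch of the end-to-end inference flow the renamed snippet implies might look like the following. The repo id `Salesforce/Llama-3-8B-SFR-Iterative-DPO-R` comes from the diff; the use of `apply_chat_template` and the sampling settings are illustrative assumptions, not values shown in the hunks.

```python
def generate_reply(messages,
                   model_id="Salesforce/Llama-3-8B-SFR-Iterative-DPO-R",
                   device="cuda",
                   max_new_tokens=256):
    """Generate one assistant reply with the renamed model.

    The repo id default is taken from the diff above; the generation
    parameters below are assumptions for illustration only.
    """
    # Imported lazily so the sketch can be read/defined without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

    # Render the chat turns with the tokenizer's built-in chat template
    # and append the assistant header so the model starts a new reply.
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(device)

    output_ids = model.generate(
        input_ids, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    # Decode only the newly generated tokens, dropping the prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )

# Same message payload as in the diff's context lines.
messages = [
    {"role": "user",
     "content": "I'm trying to teach myself to have nicer handwriting. Can you help?"},
]
```

Calling `generate_reply(messages)` then mirrors the README's snippet end to end; on a machine without a GPU, pass `device="cpu"`.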