Syed-Hasan-8503 committed on
Commit 2309c24 • 1 Parent(s): cd2e519

Update README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -7,8 +7,8 @@ datasets:
 
  # Phi-2-ORPO
 
- Phi-2-ORPO is a finetuned version of **[microsoft/phi-2](https://huggingface.co/microsoft/phi-2)** on the **[argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k)**
- preference dataset using **Odds Ratio Preference Optimization (ORPO)**.
+ **phi2-pro** is a fine-tuned version of **[microsoft/phi-2](https://huggingface.co/microsoft/phi-2)** on the **[argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k)**
+ preference dataset using **Odds Ratio Preference Optimization (ORPO)**. The model has been trained for 1 epoch.
 
  ## LazyORPO
 
@@ -20,10 +20,10 @@ process much easier. Based on [ORPO paper](https://colab.research.google.com/cor
  Odds Ratio Preference Optimization (ORPO) proposes a new method to train LLMs by combining SFT and alignment into a new objective (loss function), achieving state-of-the-art results.
  Some highlights of this technique are:
 
- 🧠 Reference model-free → memory friendly
- 🔄 Replaces SFT+DPO/PPO with a single method (ORPO)
- 🏆 ORPO outperforms SFT and SFT+DPO on Phi-2, Llama 2, and Mistral
- 📊 Mistral ORPO achieves 12.20% on AlpacaEval 2.0, 66.19% on IFEval, and 7.32 on MT-Bench, outperforming Hugging Face Zephyr Beta
+ * 🧠 Reference model-free → memory friendly
+ * 🔄 Replaces SFT+DPO/PPO with a single method (ORPO)
+ * 🏆 ORPO outperforms SFT and SFT+DPO on Phi-2, Llama 2, and Mistral
+ * 📊 Mistral ORPO achieves 12.20% on AlpacaEval 2.0, 66.19% on IFEval, and 7.32 on MT-Bench, outperforming Hugging Face Zephyr Beta
 
 
  #### Usage
@@ -39,7 +39,7 @@ tokenizer = AutoTokenizer.from_pretrained("abideen/phi2-pro", trust_remote_code=
 
  inputs = tokenizer('''def print_prime(n):
     """
-    Print all primes between 1 and n
+    Write a detailed analogy between mathematics and a lighthouse.
     """''', return_tensors="pt", return_attention_mask=False)
 
  outputs = model.generate(**inputs, max_length=200)
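
For context on the objective the highlights describe: ORPO adds a log-odds-ratio penalty, comparing the chosen and rejected responses, on top of the usual SFT (next-token) loss on the chosen response. Below is a minimal sketch of that objective as described in the ORPO paper; the function name, argument names, and the lambda value are illustrative and are not taken from this model's training code.

```python
import torch
import torch.nn.functional as F

def orpo_loss(nll_chosen, chosen_logps, rejected_logps, lam=0.1):
    """Sketch of the ORPO objective: SFT loss on the chosen response plus a
    log-odds-ratio term that favors chosen over rejected completions.
    No frozen reference model is needed, hence the memory savings.

    nll_chosen:      per-example NLL of the chosen response (the SFT term)
    chosen_logps:    length-normalized log P(chosen | prompt) per example
    rejected_logps:  length-normalized log P(rejected | prompt) per example
    lam:             weight of the odds-ratio term (hyperparameter, illustrative)
    """
    # log(odds) = log p - log(1 - p), computed in log space for stability
    log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
    log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))
    # Penalize the model when the rejected response has higher odds than the chosen one
    l_or = -F.logsigmoid(log_odds_chosen - log_odds_rejected)
    return (nll_chosen + lam * l_or).mean()
```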
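
The usage snippet in the diff above is only partially visible: the imports and model-loading lines fall outside the hunks, apart from the tokenizer call in the hunk header. Here is a minimal end-to-end sketch, assuming the standard transformers API and the abideen/phi2-pro checkpoint named in the hunk header; the dtype choice and the final decode/print step are assumptions, not part of the original card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Checkpoint id taken from the hunk header above; loading options are assumptions.
model = AutoModelForCausalLM.from_pretrained("abideen/phi2-pro", torch_dtype=torch.float32, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("abideen/phi2-pro", trust_remote_code=True)

inputs = tokenizer('''def print_prime(n):
   """
   Write a detailed analogy between mathematics and a lighthouse.
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
print(tokenizer.batch_decode(outputs)[0])
```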