Syed-Hasan-8503 committed 2309c24 (parent: cd2e519): Update README.md

README.md CHANGED
@@ -7,8 +7,8 @@ datasets:

# Phi-2-ORPO

**phi2-pro** is a fine-tuned version of **[microsoft/phi-2](https://huggingface.co/microsoft/phi-2)** on the **[argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k)** preference dataset, using **Odds Ratio Preference Optimization (ORPO)**. The model has been trained for 1 epoch.
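To get a feel for what a preference example in this dataset looks like, you can inspect it with the `datasets` library. This is a minimal sketch, not part of the original card, and it assumes dpo-mix-7k exposes the usual `chosen`/`rejected` columns:

```python
from datasets import load_dataset

# Load the preference data the card says phi2-pro was fine-tuned on.
ds = load_dataset("argilla/dpo-mix-7k", split="train")

example = ds[0]
print(example["chosen"])    # preferred response (assumed column name)
print(example["rejected"])  # dispreferred response (assumed column name)
```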
## LazyORPO
@@ -20,10 +20,10 @@ process much easier. Based on [ORPO paper](https://colab.research.google.com/cor

Odds Ratio Preference Optimization (ORPO) proposes a new method to train LLMs by combining SFT and alignment into a new objective (loss function), achieving state-of-the-art results. Some highlights of this technique are listed below, followed by a rough sketch of the objective:

* Reference model-free → memory friendly
* Replaces SFT+DPO/PPO with a single method (ORPO)
* ORPO outperforms SFT and SFT+DPO on Phi-2, Llama 2, and Mistral
* Mistral ORPO achieves 12.20% on AlpacaEval2.0, 66.19% on IFEval, and 7.32 on MT-Bench, outperforming Hugging Face Zephyr Beta
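The card does not spell the objective out, so here is a minimal PyTorch sketch of the combined loss following the ORPO paper's formulation. It is an illustration, not code from this repository; the function name, the `beta` weight, and the use of average per-token log-probabilities are assumptions:

```python
import torch
import torch.nn.functional as F

def orpo_loss(chosen_logps, rejected_logps, sft_nll, beta=0.1):
    # chosen_logps / rejected_logps: average per-token log-probabilities the model
    # assigns to the preferred and dispreferred responses (both negative).
    # sft_nll: the ordinary SFT negative log-likelihood on the preferred response.

    # log-odds of a response, log(p / (1 - p)), computed in log space
    log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
    log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))

    # odds-ratio term: reward the chosen response having higher odds than the rejected one
    odds_ratio_term = F.logsigmoid(log_odds_chosen - log_odds_rejected)

    # single combined objective: SFT loss plus a weighted preference penalty
    return sft_nll - beta * odds_ratio_term.mean()
```

Because the penalty is computed from the policy's own log-probabilities, no frozen reference model is needed, which is where the memory savings in the first bullet come from.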
#### Usage
@@ -39,7 +39,7 @@ tokenizer = AutoTokenizer.from_pretrained("abideen/phi2-pro", trust_remote_code=

inputs = tokenizer('''def print_prime(n):
   """
   Write a detailed analogy between mathematics and a lighthouse.
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
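The snippet stops at `generate`; to actually see the completion you would decode the returned token ids. A minimal continuation (assumed, not part of the original card):

```python
text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(text)
```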