jpacifico committed
Commit de0be44 (parent: b133074)

Update README.md

Files changed (1): README.md (+1 / -1)
README.md CHANGED
@@ -13,7 +13,7 @@ pipeline_tag: text-generation
 
 ### Chocolatine-78B-Instruct-DPO-v1.3
 
-DPO fine-tuned of [dfurman/CalmeRys-78B-Orpo-v0.1](https://huggingface.co/dfurman/CalmeRys-78B-Orpo-v0.1) itself based on multiple fine tunings. Initialy based on foundation model [Qwen/Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct)
+DPO fine-tune of [dfurman/CalmeRys-78B-Orpo-v0.1](https://huggingface.co/dfurman/CalmeRys-78B-Orpo-v0.1), itself based on multiple fine-tunings; initially based on the foundation model [Qwen/Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct)
 using the [jpacifico/french-orca-dpo-pairs-revised](https://huggingface.co/datasets/jpacifico/french-orca-dpo-pairs-revised) RLHF dataset.
 
 My goal here is to verify whether the French DPO fine-tuning I developed for my Chocolatine model series can be applied with equal performance to model sizes > 70B params,
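
For context, a minimal sketch of what the DPO fine-tuning step described in this README might look like with Hugging Face TRL. The base model and dataset IDs are the ones named above; the hyperparameters, dataset column handling, and exact TRL trainer arguments are assumptions for illustration, not the actual Chocolatine training recipe.

```python
# Hypothetical sketch of a DPO run from the base model and dataset named in the README.
# Hyperparameters and dataset schema handling are assumptions, not the published recipe.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "dfurman/CalmeRys-78B-Orpo-v0.1"               # base model named in the README
dataset_id = "jpacifico/french-orca-dpo-pairs-revised"   # preference dataset named in the README

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# DPO expects prompt / chosen / rejected columns; adapt if the dataset uses a different schema.
train_ds = load_dataset(dataset_id, split="train")

config = DPOConfig(
    output_dir="chocolatine-78b-dpo",
    beta=0.1,                          # assumed DPO temperature
    per_device_train_batch_size=1,     # a 78B model needs heavy sharding/offloading in practice
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    learning_rate=5e-6,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_ds,
    processing_class=tokenizer,        # older TRL versions take tokenizer= instead
)
trainer.train()
```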