mlabonne committed on
Commit
df26768
1 Parent(s): 86fc1af

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -15,7 +15,7 @@ datasets:
 
 This is a DPO fine-tuned version of [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) using the [chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) preference dataset. It improves the performance of the model on Nous benchmark suite (waiting for the results on the Open LLM Benchmark).
 
-You can try it out in this [Space](https://huggingface.co/spaces/mlabonne/NeuralHermes-2.5-Mistral-7B-laser-GGUF-Chat) (GGUF Q4_K_M).
+You can try it out in this [Space](https://huggingface.co/spaces/mlabonne/NeuralMarcoro14-7B-GGUF-Chat) (GGUF Q4_K_M).
 
 ## ⚡ Quantized models
 